Hacker News

Darkbloom – Private inference on idle Macs

256 points by twapi today at 4:06 AM | 133 comments

Comments

kennywinker today at 4:51 AM

I have a hard time believing their numbers. If you can pay off a mac mini in 2-4 months, and make $1-2k profit every month after that, why wouldn’t their business model just be buying mac minis?

show 8 replies
tgma today at 5:54 AM

I installed this so you don't have to. It feels a bit quirky and not super polished: it fails to download the image model, and the audio/TTS model fails to load.

In 15 minutes of serving Gemma, I got precisely zero actual inference requests, and a bunch of health checks and two attestations.

At the moment they don't have enough sustained demand to justify the earning estimates.

show 4 replies
haspok today at 10:24 AM

I'm getting strong SETI@home vibes from 25 years ago, except of course this is not for the greater good of humanity but a for-profit project.

The problem is that, from a technical point of view, what kind of made sense back then (most people ran desktops, fans were always on, energy saving was minimal) makes little sense today: even if your laptop has no fan, do you really want it constantly generating heat?

I definitely want my laptops to be cool, quiet and idle most of the time.

show 1 reply
nl today at 4:48 AM

They use the TEE to check that the model and code are untampered with. That's a good, valid approach and should work (I've done similar things on AWS with their TEEs).

The key question here is how they avoid the outside computer being able to view the memory of the internal process:

> An in-process inference design that embeds the inference engine directly in a hardened process, eliminating all inter-process communication channels that could be observed, with optional hypervisor memory isolation that extends protection from software-enforced to hardware-enforced via ARM Stage 2 page tables at zero performance cost.[1]

I was under the impression this wasn't possible if you are using the GPU. I could be misled on this though.

[1] https://github.com/Layr-Labs/d-inference/blob/master/papers/...
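For intuition, the attestation half of what the paper describes boils down to measuring the model and code artifacts and refusing to serve (or to send requests) if the measurement doesn't match a published value. A minimal sketch of that idea, with hypothetical file names and using a plain SHA-256 digest as the "measurement" (a real TEE would sign this inside the enclave):

```python
# Sketch of the attestation idea: measure the model weights and serving
# code into one digest, then accept a provider only if the digest
# matches the expected published value. File names are hypothetical.
import hashlib


def measure(paths):
    """Hash the model + code artifacts into a single measurement."""
    h = hashlib.sha256()
    for p in sorted(paths):  # sort so the measurement is order-independent
        with open(p, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
    return h.hexdigest()


def verify(paths, expected_digest):
    """Accept the artifacts only if their measurement matches."""
    return measure(paths) == expected_digest
```

This covers tamper-detection of artifacts at rest; it says nothing about the harder problem nl raises, which is keeping the process's memory unobservable while the GPU is in use.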

show 4 replies
gleenn today at 8:09 AM

You have to install their MDM device management software on your computer. Basically, that computer is theirs now. So don't plan on just handing over your laptop temporarily unless you don't mind some company completely owning your box. It still might be a valid use for slightly old laptops lying around, but beware of sharing this computer with your daily activities, e.g. if you regularly use a bank in a browser on it. MDM means they get "can swap out your SSL certs"-level access to the machine; please correct me if I'm wrong.

show 3 replies
heddycrow today at 10:19 AM

I think it’s important that systems like this exist, but getting them off the ground is non-trivial.

We’ve been building something similar for image/video models for the past few months, and it’s made me think distribution might be the real bottleneck.

It’s proving difficult to get enough early usage to reach the point where the system becomes more interesting on its own.

Curious how others have approached that bootstrap problem. Thanks in advance.

ramoz today at 5:06 AM

Unfortunately, verifiable privacy is not physically possible on MacBooks of today. Don't let a nice presentation fool you.

Apple Silicon has a Secure Enclave, but not a public SGX/TDX/SEV-style enclave for arbitrary code, so these claims are about OS hardening, not verifiable confidential execution.

It would be nice if it were possible. There's a lot of cool innovations possible beyond privacy.

show 3 replies
pants2 today at 5:00 AM

Cool idea. Just some back-of-the-envelope math here (not trusting what's on their site):

My M5 Pro can generate 130 tok/s (4 streams) on Gemma 4 26B. Darkbloom's pricing is $0.20 per Mtok output.

That's about $2.24/day or $67/mo revenue if it's fully utilized 24/7.

Now assuming 50W sustained load, that's about 36 kWh/mo; at ~$0.25/kWh, that's approx. $9/mo in costs.

Could be good for lunch money every once in a while! Around $700/yr.
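Re-running the back-of-the-envelope numbers above as a sketch (these are the comment's own figures, not Darkbloom's): 130 tok/s, $0.20 per Mtok output, 50 W sustained, ~$0.25/kWh:

```python
# Back-of-the-envelope check of the numbers in the comment above.
tok_per_s = 130          # claimed sustained throughput (4 streams)
price_per_mtok = 0.20    # $ per million output tokens
watts = 50               # assumed sustained power draw
kwh_price = 0.25         # $ per kWh

revenue_day = tok_per_s * 86_400 / 1e6 * price_per_mtok   # ~ $2.24/day
revenue_month = revenue_day * 30                          # ~ $67/mo
energy_kwh_month = watts / 1000 * 24 * 30                 # 36 kWh/mo
cost_month = energy_kwh_month * kwh_price                 # $9/mo
profit_year = (revenue_month - cost_month) * 12           # ~ $700/yr
```

All three headline figures ($2.24/day, ~$67/mo, ~$700/yr) check out, under the big assumption of 100% utilization 24/7.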

show 9 replies
NiloCK today at 8:34 AM

Interesting to see an offering with this heritage [1] proposing flat earnings rates for inference operators here, rather than trying to sell a dynamic marketplace where operators compete on price in real-time.

Right now the dashboards show 78 providers online, but someone in-thread here said that they spun one up and got no requests. Surely someone would be willing to beat the posted rate and swallow up the demand?

I expect this is a migration target, but a tactical omission from the V1 comms, both for legitimate legibility reasons ("I can sell x for y" is easier to parse than "I can participate in a marketplace") and slightly illegitimate legibility reasons (obscuring a likely future price collapse).

Still - neat project that I hope does well.

[1] Layer Labs, formerly EigenLayer, is a company built around a protocol to abstract and recycle economic security guarantees from Ethereum proof of stake.

show 1 reply
miki123211 today at 9:17 AM

> Operators cannot observe inference data.

Is there some actual cryptography behind this, or just fundamentally-breakable DRM and vibes?

Jn2G3Np8 today at 8:53 AM

Love the concept, with some similarity to Folding@home, though with more personal gain.

But trying it out, it still needs work: I couldn't download a model successfully (and their list of nodes at https://console.darkbloom.dev/providers suggests this is typical).

And as a cursory user, it took me some digging to find out that to cash out you need a Solana address (providers > earnings).

TuringNYC today at 4:53 AM

I'd love a way to do this locally -- pool all the PCs in our own office for in-office pools of compute. Any suggestions from anyone? We currently run ollama but manually manage the pools.

show 3 replies
puttycat today at 9:46 AM

> Every request is end-to-end encrypted

Afaik you will need to decrypt the data the moment it needs to be fed into the model.

How do they do this then?

show 1 reply
stuxnet79 today at 5:30 AM

So basically ... Pied Piper.

show 1 reply
pants2 today at 5:05 AM

You might not even know it as a user but the payment/distribution here is all built on crypto+stablecoins. This is a great use case for it.

show 1 reply
subpixel today at 9:57 AM

Why isn’t a MacBook Air M5 on the hardware list?

show 2 replies
0xbadcafebee today at 6:18 AM

I'm not sure how the economics works out. Pricing for AI inference is based on supply/demand/scarcity. If your hardware is scarce, that means low supply; combine with high demand, it's now valuable. But what happens if you enable every spare Mac on the planet to join the game? Now your supply is high, which means now it's less valuable. So if this becomes really popular, you don't make much money. But if it doesn't become somewhat popular, you don't get any requests, and don't make money. The only way they could ensure a good return would be to first make it popular, then artificially lower the number of hosts.

WatchDog today at 7:33 AM

I installed two models, but it just always reports:

    Available models (2):
    CohereLabs/cohere-transcribe-03-2026 (4.6 GB)
    flux_2_klein_9b_q8p.ckpt (20.2 GB)
    ...
    Advertising 0 model(s) (only loaded models)

Also the benchmark just doesn't work.

Interesting idea, but needs some work.

v9v today at 8:37 AM

They could consider registering as a provider on something like OpenRouter if they aren't getting enough inference requests on their own site.

dr_kiszonka today at 5:18 AM

"These are estimates only. We do not guarantee any specific utilization or earnings. Actual earnings depend on network demand, model popularity, your provider reputation score, and how many other providers are serving the same model.

When your Mac is idle (no inference requests), it consumes minimal power — you don't lose significant money waiting for requests. The electricity costs shown only apply during active inference.

Text models typically see the highest and most consistent demand. Image generation and transcription requests are bursty — high volume during peaks, quiet otherwise."

BingBingBap today at 5:03 AM

Generate images requested by randoms on the internet on your hardware.

What could possibly go wrong?

utkarsh_apoorva today at 6:26 AM

Like the concept. This is not a business - should be an open source GitHub repo maybe.

They lost me with just one piece of microcopy: "start earning". Huge red flag.

show 1 reply
amdivia today at 6:47 AM

Until we have breakthroughs in homomorphic encryption compute, I won't trust such privacy claims

woadwarrior01 today at 7:33 AM

I won't install some random untrusted binary off of some website. I downloaded it and did some cursory analysis instead.

Got the latest v0.3.8 version from the list here: https://api.darkbloom.dev/v1/releases/latest

Three binaries and a Python file:

darkbloom (Rust)

eigeninference-enclave (Swift)

ffmpeg (from Homebrew, lol)

stt_server.py (a simple FastAPI speech-to-text server using mlx_audio).

The good parts: All three binaries are signed with a valid Apple Developer ID and have Hardened runtime enabled.

Bad parts: Binaries aren't notarized. Enrolls the device for remote MDM using micromdm. Downloads and installs a complete Python runtime from Cloudflare R2 (Supply chain risk). PT_DENY_ATTACH to make debugging harder. Collects device serial numbers.

TL;DR: No, not touching that.

gndp today at 6:07 AM

They are almost claiming FHE. Isn't it just a matter of building the right tool to read the generated tokens from RAM before they get encrypted for transfer? How is this fundamentally different from chutes?

jboggan today at 6:00 AM

Is this named after the 2011 split album with Grimes and d'Eon?

grvbck today at 9:42 AM

Broken calculator or am I missing something here?

  Macbook Air M2  8GB   12h/day -> $647/month

  Mac Mini M4     32GB  12h/day -> $290/month

I mean, I'd be happy to buy a few used M2 Airs with minimal specs and start printing money, but…
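
A quick sanity check of the calculator's M2 Air figure, assuming the $0.20/Mtok output rate quoted elsewhere in this thread:

```python
# What sustained throughput would $647/month at 12 h/day imply,
# at an assumed $0.20 per million output tokens?
earnings_month = 647.0
price_per_mtok = 0.20
seconds_month = 12 * 3600 * 30                     # 12 h/day for 30 days

tokens_needed = earnings_month / price_per_mtok * 1e6
implied_tok_per_s = tokens_needed / seconds_month  # ~ 2,500 tok/s
```

Roughly 2,500 tok/s of sustained output is orders of magnitude beyond what an 8 GB M2 Air can serve, which supports the "broken calculator" reading.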
egorfine today at 8:31 AM

I really want this to succeed

koliber today at 5:59 AM

Apple should build this, and start giving away free Macs subsidized by idle usage.

resonanormal today at 6:07 AM

I could imagine this working for the openclaw community if the price is right

chaoz_ today at 4:51 AM

That solution actually makes great sense. So Apple won in some strange way again?

I guess there are limitations on the size of the models, but if top-tier models get democratized this way, I don't see a reason not to use this API. The only thing that comes to mind is data privacy concerns.

I think batch-evals for non-sensitive data have great PMF here.

show 2 replies
bentt today at 4:52 AM

I thought this was Apple’s plan all along. How is this not already their thing?

DeathArrow today at 4:44 AM

Why only Macs? If we think of all PCs and mobile phones running idle, the potential is much larger.

show 3 replies
dcreater today at 5:20 AM

I can't buy credits; it says the page could not load.

rvz today at 4:46 AM

Should have called it “Inferanet” with this idea.

Anyway, this looks like a great idea and might have a chance at solving the economics of running nodes for cheap inference and getting paid for it.

jaylane today at 6:33 AM

The latest (v0.3.8) tar doesn't contain the image-bank or gRPCServerCLI dependencies, so the installer fails.

jiusanzhou today at 7:04 AM

[dead]

0xelpabl0 today at 6:09 AM

[dead]

eddie-wang today at 7:44 AM

[dead]

jstlykdat today at 7:10 AM

[dead]