Who needs a new app, just use DoorDash…
> Waymo is paying DoorDash gig workers to close its robotaxi doors
> The Alphabet-owned self-driving car company confirmed on Thursday that it's running a pilot in Atlanta to compensate delivery drivers for closing Waymo doors that are left ajar. DoorDash drivers are notified when a Waymo in the area has an open door so the vehicles can quickly get back on the road, the company said.
https://www.cnbc.com/amp/2026/02/12/waymo-is-paying-doordash...
The founder is a friend of mine, so maybe I'm biased, but I'm surprised Wired doesn't get how network effects and adoption curves work. At the very least, it seems strange to publish this about a project someone built in a weekend, a few weekends ago, and is now trying to make a go of. Give him a couple of months to improve the flow on the bot side and the general discoverability of the platform for agents at large. Maybe I'm a bit grumpy because it's my buddy, but this article kinda rubs me the wrong way. :\
Applying for the bounty to deliver flowers and then simply not doing it seems like bad faith on the author's part, done just to get that headline.
Note how the number advertising how many bots actually use RentAHuman has vanished from their website. Instead we now get the number of bounties: 1/40th as many as registered humans. And just scrolling through them, maybe a quarter of the bounties aren't bounties at all, but more humans offering services.
It's a service that is clearly a lot more appealing to humans than to agents
Tangent
I saw a video recently where Google has people walking around carrying these backpacks (a lidar/camera setup) to map places cars can't reach. I think that's pretty interesting; it could also yield data for humanoid robots: walking through crowds, navigating alleys.
I wonder if jobs like those could be posted here: a "walk through this neighborhood and film it" kind of thing.
This is post-AGI.
RentAHuman.
What boring misanthropy.
It's work. You're hiring qualified people for qualified work. You're not "renting a human," which is just chattel slavery dressed up as an abstraction. So is it really a surprise the author got nothing out of it?
The article basically describes a user signing up and finding the platform empty, apart from marketing ploys designed by humans.
It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it as though it were in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? Of course not.
It could still be dangerous. But the whole "alignment" angle is just a naked ploy for raising billions and amping up the importance and seriousness of the issue. It's fake. And every "concerning" study, once read carefully, is basically prompting the LLM with a sci-fi scenario and acting surprised when it produces a dramatic, sci-fi-like response.
The first time I came across this phenomenon was when someone posted, years ago, about two AIs that had "developed their own language" to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key trying to communicate while an adversary AI tried to intercept, and to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
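For anyone unfamiliar with the term: "private-key encryption" just means both parties share a secret key the adversary doesn't have. A minimal sketch (a toy XOR one-time pad, not the neural setup from the study, and the names Alice/Bob/Eve are the usual illustrative placeholders):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # XOR-ing twice with the same key recovers the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # secret shared by Alice and Bob

ciphertext = xor_bytes(message, key)      # all that Eve intercepts
recovered = xor_bytes(ciphertext, key)    # Bob decrypts with the shared key

assert recovered == message
```

With a truly random key used once, the ciphertext alone tells Eve nothing, which is exactly why an adversary in that setup is expected to fail.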