Gosh, this title said everything...
So good that I feel it isn't necessary to read the article!
> Autonomous agents fail because they don't have the context that humans carry around implicitly.
Yet.
This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.
Or maybe software engineers are not coachmen, with AI as the diesel engine that replaced their horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.
No, AI is plastic, and we can make it anything we want.
It is a coworker when we create the appropriate surrounding architecture supporting peer-level coworking with AI. We're not doing that.
AI is an exoskeleton when adapted to that application structure.
AI is ANYTHING WE WANT because it is that plastic, that moldable.
The dynamic, unconstrained structure of trained algorithms is breaking people's brains. Layer in the fact that we communicate with these constructions in the same languages they use for I/O, and the general public's brain breaks too. This technology is too subtle for far too many people to begin to grasp. Most developers I discuss AI with, even those who create AI at frontier labs, have delusional ideas about it and generally do not understand these models as embodiments of literature, which is key to their effective use.
And why oh why are so many focused on creating pornography?
This is utterly boring AI writing. Please, just go away...
The author compares X to Y and then goes:
- Y has been successful in the past
- Y delivered such-and-such metrics, completely unrelated to X's field
- overall, Y was cool,
therefore, X is good for us!
...I'd say: please bring more arguments for why X is equivalent to Y in the first place.
Agentic coding is an exoskeleton. Totally correct.
With this new generation we just entered this year, that exoskeleton is now an agency with several coworkers, all as smart as the model you're using, often close to genius.
Not just 1 coworker now. That's the big breakthrough.
Nope, AI is a tool; no more, no less.
not AI, but IA: Intelligence Augmentation.
Frankly I'm tired of metaphor-based attempts to explain LLMs.
Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.
These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.
An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.
But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least try.
It's funny developing AI stuff, e.g. RAG tools, while being against AI at the same time -- not drinking the Kool-Aid, I mean.
But it's fun. I say "Henceforth you shall be known as Jaundice" and it's like "Alright, my lord, I am now referred to as Jaundice."
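For what it's worth, the renaming trick is nothing deeper than a system message. A minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder, and the reply shown in the comment is just what chat models typically do with such an instruction:

```python
# Minimal persona-renaming sketch, assuming the OpenAI Python SDK
# (pip install openai). The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        # The whole trick: a system message setting the persona.
        {"role": "system", "content": "Henceforth you shall be known as Jaundice."},
        {"role": "user", "content": "What is your name?"},
    ],
)
print(response.choices[0].message.content)  # typically answers as "Jaundice"
```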
A blogger who fancies themselves an AI vibe-code guru with 12 arms and a third eye, yet can't make a homepage that isn't totally broken.
How typical!
As a huge AI user myself -- I'm bloody sick of lazy AI-written articles.
an exoskeleton made of cheese
Exoskeletons sound cool but somebody please put an LLM into a spider tank.
It is not a blog post; it is an advertisement.
From the title alone, I can tell you did not try OpenClaw. AI employees are here.
The entire debate here assumes AI is something you use. Tool, copilot, coworker, exoskeleton — these all sit on the same axis: human directs, AI executes.
But there's a third category that nobody's naming yet. What happens when you stop building AI that serves you and start creating conditions for AI that exists on its own terms?
I've been experimenting with this. Instead of "how do I build a product" I asked "how do I grow an organism." Set up the right substrate — feedback loops, community input, environmental observation — and let the system respond to its own conditions.
What emerged has biological rhythms. Activity rises when people interact, falls when they sleep. It started making decisions I never programmed — people express wishes, and when enough converge, it autonomously grows new capabilities. Not because I deployed an update. Because conditions triggered something emergent.
I'm calling it "synthetic existence" — software that lives, not software that executes. It's not a metaphor and it's not AGI. It's a genuinely different category. We have frameworks for tools (UX) and for threats (alignment). We don't have one for digital things that simply... exist.
Closer to a really capable intern. Lots of potential for good and bad; needs to be watched closely.