There's an undertone of self-soothing, "AI will leverage me, not replace me," which I don't agree with, especially in the long run, at least in software. In the end it will be the users sculpting formal systems like Play-Doh.
In the medium run, "AI is not a co-worker" is exactly right. The idea of a co-worker will go away. Human collaboration on software is fundamentally inefficient. We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people. Software is going to become an individual sport, not a team sport, quickly. The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI. I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.
An LLM is a statistical model of token relationships, and a weighted-random retrieval from a compressed view of those relations. It's a token generator. Why make this analogy?
100% exoskeleton is a great analogy.
An exoskeleton is something really cool in movies that has zero reason to be built in reality, because there are far more practical approaches.
That is why we have all kinds of vehicles, or programmable robot arms that do the job by themselves, or, if you need a human at the helm, you just add a remote controller with levers and buttons. But making a gigantic human-shaped robot with a normal human inside is just impractical for any real commercial use.
Who is actually trying to use a fully autonomous AI employee right now?
Isn't everyone using agentic copilots or workflows with agent loops in them?
It seems that they are arguing against doing something that almost no one is doing yet.
But actually the AI Employee is coming by the end of 2026, and the fully autonomous AI Company sometime in 2027.
Many people have been working on versions of these things for a while. But again, for actual work, 99% are still using copilots or workflows with well-defined agent-loop nodes. As far as I know.
As a side note, I have found that a supervisor agent with a checklist can fire off subtasks, and that works about as well as a workflow defined in code.
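Roughly, the pattern I mean looks like the sketch below; `call_llm`, the checklist items, and the prompts are all made-up placeholders for illustration, not any particular framework:

    # Sketch of a supervisor agent working through a checklist.
    # call_llm stands in for whatever model API you actually use.
    def call_llm(prompt: str) -> str:
        """Placeholder for a real model call."""
        return f"[model output for: {prompt[:40]}...]"

    CHECKLIST = [
        "Summarize the bug report and list reproduction steps",
        "Propose a fix and the files it touches",
        "Draft the patch",
        "Write a test that fails before the patch and passes after",
    ]

    def run_supervisor(task: str) -> list[str]:
        results = []
        for item in CHECKLIST:
            # Each checklist item becomes a subtask, with earlier
            # results fed back in as context for the next one.
            context = "\n".join(results)
            results.append(call_llm(f"Task: {task}\nStep: {item}\nSo far:\n{context}"))
        return results

    for step in run_supervisor("Fix the login timeout bug"):
        print(step)

The point is that the "workflow" lives in a plain checklist the supervisor walks through, rather than in hand-written orchestration code.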
But anyway, what's holding back the AI Employee are things like really effective long-term context and memory management, and some level of interface generality like browser or computer use and voice. Computer use makes context management even more difficult. And another aspect is token cost.
But I assume that within the next 9 months or so, more and more people will be figuring out how to build agents that write their own workflows and manage their own limited context and memory effectively across Zoom meetings, desktops, ssh sessions, etc.
This will likely be a feature set from the model providers themselves. Actually, it may leverage continual-learning abilities baked into the model architecture itself. I doubt that is a full year away.
> We're thinking about AI wrong.
And this write-up is not an exception.
Why even bother thinking about AI, when the Anthropic and OpenAI CEOs openly tell us what they want (quote from a recent Dwarkesh interview): "Then further down the spectrum, there's 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).
The exoskeleton framing is comforting but it buries the real shift: taste scales now. Before AI, having great judgment about what to build didn't matter much if you couldn't also hire 10 people to build it. Now one person with strong opinions and good architecture instincts can ship what used to require a team.
That's not augmentation, that's a completely different game. The bottleneck moved from "can you write code" to "do you know what's worth building." A lot of senior engineers are going to find out their value was coordination, not insight.
For some reason AIs love to generate "Not X, but Y" and "Not only X, but Y" sentences; it's as if they were template-based.
In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: if engineers stop contributing to open source, would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?
The exoskeleton analogy seems fitting where my work mode is configurable: moving from tentative to trusting. But the AI needs to be explicitly set up to learn my every action. Currently this is a chore at best and simply impossible in other cases.
AI article this, AI article that. The front page of this website is just all about AI. I’m so tired of this website now. I really don’t read it anymore because it’s all the same stuff over and over. Ugh.
So true. It is an exoskeleton for all my tedious tasks. I don't want to make an HTML template. I just want to type: make that template like the one on that page, but with this and this data.
I like this. This is an accurate picture of the state of AI at this very moment for me. The LLM is (just) a tool that makes me "amplified" for coding and certain tasks.
I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.
AI most definitely is a coworker already. You do delegate some work for which you previously had to hire humans.
If we find an AI that is truly operating as an independent agent in the economy without a human responsible for it, we should kill it. I wonder if I'll live long enough to see an AI terminator profession emerge. We could call them blade runners.
It's the new underpaid employee that you're training to replace you.
People need to understand that we have the technology to train models to do anything that you can do on a computer; the only thing that's missing is the data.
If you can record a human doing anything on a computer, we'll soon have a way to automate it.
Petition to make "AI is not X, but Y" articles banned or limited in some way.
Marshall McLuhan would probably have agreed with this belief -- the idea that technologies are essentially prosthetic was one of the core tenets of his general philosophy. It is the essential thesis of his work "Understanding Media: The Extensions of Man". AI is typically assigned otherness and separateness in recent discourse, rather than being considered as a directed tool (extension/prosthesis) under our control.
What's interesting to me is that most real productivity gains I've seen with AI come from this middle ground: not autonomy, not just tooling, but something closer to "interactive delegation".
It's a tool like a linter. It's a fancy tool, but calling it anything more than a tool is hype.
AI is not an exoskeleton, it's a pretzel: It only tastes good if you douse it in lye.
AI is like sugar. It tastes delicious, but in high doses it causes diabetes.
The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.
The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.
The amount of "It's not X, it's Y" commentary suggests to me that (a) nobody knows and (b) there is a solid chance this ends up being either all true or all false.
Or, put differently: we've managed to hype this to the moon, but somehow complete failure (see the studies about zero impact on productivity) seems plausible. And similarly, "kills all jobs" seems plausible.
That's an insane amount of conflicting opinions being held in the air at the same time.
Neither. AI is a tool to guide you in improving your process in any way or form.
The problem is people using AI to do the heavy processing, which makes them dumber. Technology was already making us dumber; I mean, Tesla drivers don't even drive anymore, or know how, because the car does everything.
Look at how company after company is either being breached or having major issues in production because of heavy dependency on AI.
Neither. The closest analogy to you and the AI is those 'self-driving' test subjects who had to sit in the driver's seat so that compliance boxes could be checked and there was someone to blame whenever someone got hit.
Tech workers were pretty anti-union for a long time, because we were all so excellent we were irreplaceable. I wonder if that will change.
I agree!
“Why LLM-Powered Programming is More Mech Suit Than Artificial Human”
https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
I agree. I call it my Extended Mind, in the spirit of Clark (1). One thing I realized while working a lot with openClaw over the last few weeks is that these agents are becoming an extension of myself. They are tools that quickly became a part of my being. I outsource a lot of work to them; they do stuff for me, help me, and support me, and therefore make my (work-)life easier and more enjoyable. But it's me in the driver's seat.
(1) https://www.alice.id.tue.nl/references/clark-chalmers-1998.p...
You can't run at 10x in an exoskeleton, and you can't move your hand to write any faster using one; the analogy doesn't fit.
I see it more like the tractor in farming: it improved the work of one person, but removed the work from many other people who were in the fields doing things manually.
If AI is an exoskeleton, that would make the user a crab.
I like this analogy, and in fact I have used it for a totally different reason: why I don't like AI.
Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ... No, and probably not.
I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.
Someone going to the gym isn't just trying to lift more or run faster; they're trying to improve and to enjoy it. Not using AI for coding has the same payoff for me.
I like the analogy and will ponder it more. But it didn't take long before the article started spruiking Kasava's amazing solution to the problem they just presented.
In the language of Lynch's Dune, AI is not an exoskeleton, it is a pain amplifier. Get it all wrong more quickly and deeply and irretrievably.
This is a useful framing. The exoskeleton metaphor captures it well: AI amplifies what you can already do; it doesn't replace the need to know what to do. I've found the biggest productivity gains come from well-scoped tasks where you can quickly verify the output.
OR? Not OR. AND.
Exoskeleton AND autonomous agent, where the shift moves gradually toward autonomous.
Humans don’t have an internal notion of “fact” or “truth.” They generate statistically plausible text.
Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.
The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
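For what it's worth, a minimal sketch of that scaffolding idea; `retrieve` and `generate` are made-up stand-ins for a real search backend and model API, and the validation check here is deliberately crude:

    # Retrieval plus a validation layer wrapped around a bare generator.
    def retrieve(question: str) -> list[str]:
        """Placeholder retrieval step (in practice: vector or keyword search)."""
        return ["Invoice 1043 was paid on 2024-03-02.", "Invoice 1044 is still open."]

    def generate(prompt: str) -> str:
        """Placeholder for the raw, fluent-but-unchecked model call."""
        return "Invoice 1043 was paid on 2024-03-02."

    def answer_with_scaffolding(question: str) -> str:
        sources = retrieve(question)
        draft = generate(f"Question: {question}\nUse only these sources:\n" + "\n".join(sources))
        # Validation layer: don't pass fluent output through unless it is
        # backed by a retrieved source. Real systems use citation checks,
        # schema validation, or a second model instead of this substring test.
        if not any(src in draft for src in sources):
            return "No grounded answer; escalate to a human."
        return draft

    print(answer_with_scaffolding("When was invoice 1043 paid?"))

Without the retrieval step and the check, the same generator call would happily produce something that sounds just as confident either way.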
I said this in 2015... just not as well!
"Automation Should Be Like Iron Man, Not Ultron" https://queue.acm.org/detail.cfm?id=2841313
> “The AI handles the scale. The human interprets the meaning.”
Claude is that you? Why haven’t you called me?
Make centaurs, not unicorns. The human is almost always going to be the strongest element in the loop, and the most efficient. Augmenting human skill will always outperform present day SOTA AI systems (assuming a competent human).
You can't write "autonomous agents often fail" and then advertise "AI agents that perform complex multi-step tasks autonomously" on the same site.
I guess we'll see a lot of analogies and have to get used to it, although most will be off.
AI can be an exoskeleton. It can be a co-worker and it can also replace you and your whole team.
The "Office Space" question is what, concretely, you do within an organization and when you'll become the bottleneck preventing your "exoskeleton" from efficiently doing its job independently.
There's no other question that's relevant, for any practical purpose, for your employer or for your well-being as a person who presumably needs to earn a living based on their utility.
AI is the philosopher's stone. It appears to break equivalence, when in reality you are using an entire town's worth of electricity.
No, it's a power glove.
my ex-boss would probably think of me as an exoskeleton too
I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.
Exoskeletons do not blackmail or deliberately try to kill you to avoid being turned off [1]
Ultimately, AI is meant to replace you, not empower you.
1 - This exoskeleton analogy might hold true for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, as with chess, AI will soon plan better, execute better, and have better taste. Human-in-the-loop will just be far worse than letting AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of the human labor market (i.e., your wage). First comes the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Please talk me out of this...