To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend, helping me answer questions, translate to English, draft explanations, and write emails requesting time extensions from the Jugendamt caseworker.
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
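For anyone unfamiliar with the technique mentioned above, here is a minimal sketch of the core Pratt-parsing loop in Python. Everything here (token names, binding powers, the tiny expression grammar) is illustrative only and not from any actual language project:

```python
# Minimal Pratt ("top-down operator precedence") parser for arithmetic.
# The key idea: each operator has a binding power, and parse() only
# consumes operators that bind more tightly than its caller allows.

import re

TOKEN = re.compile(r"\s*(\d+|[+\-*/()])")

def tokenize(src):
    return TOKEN.findall(src) + ["<end>"]

# Binding powers: higher binds tighter. Unknown tokens (')' and '<end>')
# default to 0, which stops the loop.
BP = {"+": 10, "-": 10, "*": 20, "/": 20}

def parse(tokens, min_bp=0):
    tok = tokens.pop(0)
    if tok == "(":
        left = parse(tokens, 0)
        tokens.pop(0)               # consume ')'
    elif tok == "-":                # prefix minus binds tighter than * and /
        left = ("neg", parse(tokens, 30))
    else:
        left = int(tok)
    # Core Pratt loop: keep absorbing operators that out-bind min_bp.
    while BP.get(tokens[0], 0) > min_bp:
        op = tokens.pop(0)
        right = parse(tokens, BP[op])
        left = (op, left, right)
    return left

def evaluate(node):
    if isinstance(node, int):
        return node
    if node[0] == "neg":
        return -evaluate(node[1])
    op, a, b = node
    a, b = evaluate(a), evaluate(b)
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
```

The appeal for syntax extensibility is that new operators are just new entries in the binding-power table plus a handler, so user code can register them at runtime without touching a monolithic grammar.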
My daily todos are now being handled by NanoClaw.
These are already real products; it's not mere hype. There's simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
When I look at LLMs as an interface, I'm reminded of back when speech-to-text first became mainstream. So many promises about how this is the interface for how we'll talk to computers forevermore.
Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful; errors in speech-to-text are fundamentally an unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).
I see the parallel in how LLM hallucinations are fundamentally an unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.
The hype around AI is admittedly annoying - especially from the Wall St crowd who don't know how to pronounce 'Nvidia' correctly, and who haven't managed to internalize the fact that the chatbots they use hallucinate.
It really is 'different', though, in the same way the Internet was.
It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
> Blockchain... NFTs

> The problem is, the same dudes who were pumped for all of that bollocks now won't stop wanging on about Artificial Intelligence.
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?
By the looks of it, 2026 might be the year where reality and fiction will finally collide with AI and we'll be able to see if all the hype was warranted.
But like all the previous hype cycles, most of the people who were the loudest won't admit they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.
This just sounds like the "nothing ever happens" theorem slightly rephrased, which Scott Alexander refuted well here: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
I'm doing enterprise coding tasks in 3 days that used to take a month of whole-team coordination, from mockups through development and testing. It's all test-driven development, codex 5.3, and a small team of two people who know how to hold it right orchestrating the agents. There's no reason not to work this way. The sociotechnical engineering aspects of this change are fascinating and rewarding to solve.
>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.
Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.
LLMs have not radically transformed the world yet because the number of people capable of solving problems by typing into a blinking cursor on a blank screen is actually quite small. Take that subset of the population and reduce it to those that can effectively write communicative prose, and it's even smaller still.
It's just an interface problem. The VT100 didn't change the world overnight either.
Author forgot Segway. Remember when it was going to fundamentally change humanity?
The post nicely lists a bunch of failed hyped tech:
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
...conveniently doesn't list a bunch of hyped tech that hasn't failed:
> microchips, PCs, the internet, ecommerce, cloud, EVs, 5G
...and presents this as evidence that the current hyped tech (AI) will fail:
> Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.
When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.
But wait! If you actually read to the end, there's a plot twist!
> The ideology of "winner takes all" is unsustainable and not supported by reality.
Who said anything about winner takes all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?
At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't do such non-sequiturs and then expect to be taken seriously.
But of course not. This is not an argument. This is preaching to the choir.
Preach, brother, preach.
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
I've never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is Quantum Computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That's why he thinks all these unrelated things are the same.
I always figured AI would be a big deal from childhood onwards and wrote about it for my college entrance exam in 1980 or so. That doesn't apply to any of
>3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX
It's quite a different thing, more on the level of the evolution of life on earth and quite unlike all that junk.
When non-programmers make sweeping statements about LLMs.
Deep disconnect from reality.
For me, this captures it:
"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technologies in use.
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
- Terry Pratchett's Faust Eric"
I get that everyone has a strong opinion on whats-going-to-happen-with-AI, but I really think nobody knows.
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
I got my first tech job in 2001. I've been doing this a while and ridden all the waves.
There are two kinds of waves. The ones that don't require collective belief in them to succeed, and those that do.
The latter kind includes crypto and social media. The former includes mobile... and AI.
If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world. People would see my level of output and be utterly shocked at how I can do so much so quickly. It doesn't matter if others don't use AI for me to appreciate AI. In fact, the more other people don't use AI, the better it works out for me.
I'm sympathetic to people who feel like they are against it on principle because scummy influencers are talking about it, but I don't think they're doing themselves any favors.
to me, ai seems likely to become a new user interface, just like gui did from cli.
abstract away a lot of the mechanics of working with data/information.
helpful, when literacy seems to be trending in a downward direction.
Perhaps this is the failure to understand the distinction between a technology and a meta-technology. Upgrading the factory that builds the robots is much different than upgrading the robots.
Everything is the same until it's not; good luck predicting when "until it's not" is on the horizon, though. Isn't technology innovation a power-law thing? Everything hums along fairly regularly and then, out of the blue, there's a massive impact. Personally, I think AI has made a pretty large impact on software dev and the overall tech industry, but I don't see AGI any time soon (and that hype has died down), and therefore I don't see the economics working out. The coding tools, API integrations, and chatbots are great, but I don't see them producing the returns required to keep companies like OpenAI running unless OpenAI takes all the customers and all the ad clicks from everyone else (Anthropic, Alphabet, X, Amazon, Meta, even Microsoft). I just don't see that happening.
Said elsewhere on this post... "AI is a bubble!" "AI will change everything!"
Is just propaganda...
"Iran is 2 weeks from a nuclear weapon" / "We obliterated Iran's nuclear dreams"

"Russia is fighting with shovels" / "Russia is on the verge of swarming Europe"
What would Joost Meerloo say about it, I wonder.
Use the reader view button.
A very cynical article.
Actually, IT IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with Watt's steam engine.

Why isn't he talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.

All of them were bubbles at the time, and they changed the world forever. AI is changing the world AND it is a bubble.

AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.
This Andrew Klavan interview on AI is worth your time, if not an independent submission:
https://www.youtube.com/watch?v=SZFhFGpDWGw
"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College, both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."
Honestly, the remixes this generation suck compared to priors.
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different," as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifts that convinced our bosses their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent, with all the same problems as before.
I truly hate how stupidly people with money actually behave.
This lazy kind of post annoys me because it groups any of us who say this technology is profoundly different in with all the town criers who have said this kind of thing before, even if we have never said it before and were even skeptical of past declarations.
Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.
Lazy.
I enjoyed Dave Cridland's comment more than the article. The article is dismissive of AI and other technologies in an unsubstantiated way.
New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.
LLMs really are a marvel; GPT-2 actually inspired me to go back to college (not directly, rather I needed to understand how it worked).
I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is -- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it, but Andrej Karpathy on Twitter -- he's a worthless follow now. I can't blame them, but I am daily exasperated by media figures who can't help but go along with what they hear prominent individuals in the field say.
If I were a junior now, and less confident, I would be abandoning my career in this climate.
LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.
This is the only reason why I cannot wait for a pop, and pray to God that it comes sooner rather than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.
I don't want to be reminded daily of the disgusting reality of unbridled capitalism.
What is the point being made here? Some past technologies were overhyped, therefore AI is overhyped? Well, some past consumer technologies did change the world (smartphones, texting, video streaming, dating apps, online shopping, etc), so where's the argument that AI doesn't belong to this second group?
Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.
For all of those, there is a Gartner hype cycle. The thing that matters is, when it comes out the back end, is it 1M, 1B, or 6B people using it?

For all the things you listed, fewer than 1,000 people are using them. With AI, we're clearly not finished with the Gartner hype cycle, but the back end is going to be over a billion users.
If you can't distinguish the actual utility and progress of AI from its annoying hype-men, then it's hard to take your dismissal of AI seriously.
Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".
Blatant strawman.
Nuclear weapons - this time is different
Internet - this time is different
iPhone - this time is different
this just looks like someone hearing about tons of hyped things from people across the internet (which almost by definition, is full of false signals and grifters), imagining they are coming from the same person, then arguing with how wrong that person always is. how is that interesting?
I invested in Tesla extremely early (2011) because electric cars, if built correctly, would obviously make great cars, and Elon was one of the few people I actually thought had a shot at doing it.
I was right that blockchain was BS and all the "not sure about Bitcoin, but blockchain will be big" people were idiots.
I've been right for the last couple of years on AI, and that people were vastly underestimating its coding potential. And I put my money where my mouth was here. In 2021, after GPT-3 came out, I decided almost immediately that I needed to invest a significant amount of my net worth in Google, simply as a hedge against AI destroying knowledge-work jobs. Which at the time I thought was probably going to happen around 2030, not realising how far LLMs could go with reasoning.
I'm not particularly intelligent ("only" top 1-2% IQ), but my ability to predict the future is very good. If you have a skill you're unusually good at, you might relate to how strange it is that other people find the thing you find kinda easy so hard. For me, that's predicting things, and computers.
Since I was a young teen I have been worrying about AI. Most of my IRL best friends I have made from talking about AI risk in 2010s when I was studying AI.
Admittedly, I got some of the details wrong back then. In 2010 I thought a lot of manual labour jobs would be automated first – warehouse work, mail, taxis, buses, trains, etc. I worried primarily about the economic and political ramifications, and much less about the ASI scenario (at least in this half of the century). But I still think I got the general timeframes and direction right. This was the decade I was concerned about.
I'm so scared right now... My whole life I've had nightmares about AI. I know there are some people who talk about AI as an existential risk, but it feels like they don't internalise it like I do. They're not prepping like me, not that you really can prep for what's coming. If they're concerned, why don't they have nightmares of the omnipresent AI which you can't outthink or punch to protect those you love? AI is so powerful in the scariest ways: super viruses, mass surveillance and control, mind reading, unimaginable sci-fi weapons. It's like a horror story, but suddenly real.
I am an OG AI doomer, but until the last few months I've at least always had some doubt in my mind about whether I'm right, perhaps not about the risk of AI broadly, but about whether we'd actually be able to develop highly capable AIs while I still have a lot of my life ahead of me.
In my opinion this time is different, and what I've been worrying about for the last couple of decades is now here.
We are collectively the indigenous peoples of America and the Europeans have just arrived in the new world. The risk vectors are now endless and how this all plays out is hard to know exactly. What we do know is that the majority of ways this will play out are bad, and some are incomprehensibly bad. Some may achieve status and wealth in the near-term, but longer-term we're all dead, or worse.
I always worry these comments make me sound like a lunatic, I think I am, but I hope I am. I hope you will all forgive me, but I just need to shout about this tonight while I still can. We need to stop this insanity. Data centers need to be nuked. You may doubt me now, but in time you will understand. Hopefully I won't be around to say I told you so. Please make the best of the time we have left.
I hoped the article would be a meta-discussion of "time" and perhaps relativity or some other phenomenon. Sigh, it's an investment thesis saying "This Time is Different" is a risky bet.
I feel like someone is in a bubble of Crypto-bros. That does not instill confidence.
I would suggest editing the title to "This Time is Different". I think that captures the essence much better.
Love the Sir Terry reference.
Title got mangled somehow, the original title is "This time is different".
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
Agreed, these things all failed to live up to the hype.
But these didn't:
Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...
So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.