In the past couple of days I've become less skeptical of the capabilities of LLMs and more alarmed by them, contra the author. I think if we as a society continue to accept the development of LLMs, and the control of them by the major AI companies, there will be massively negative repercussions. I don't mean repercussions like "a rogue AI will destroy humanity" per se, but these things could cause massive social upheaval, widespread damage to mental health and cognition, and so on. If you see LLMs as powerful but not dangerous, I don't think you're being honest with yourself.
I see them as powerful and dangerous. The goal for decades now has been to reduce the human population to 500 million, and all human technology has been covertly pushed toward this end. If we suddenly have a technology that renders white collar workers useless, we will get to that number faster than expected.
There are some good things here:
First, we currently have four frontier labs, plus a bunch of second-tier ones following. The fact that we don't have just OpenAI, or just Anthropic, or just Google is good in the general sense, I would say. The four labs racing each other and trading SotA status every few weeks is good for the end consumer. They keep each other honest and keep prices down. Imagine if Anthropic could charge $60/MTok, or OpenAI $120/MTok, for their GPT-4-class models. They can't, in good part because of the competition.
Second, there's a bunch of labs and companies that have released, and are continuing to release, open models. That's as close to "intelligence on tap" as you can get, and those models are only ~6-12 months behind the SotA models, depending on your use case. Even though the labs have largely different incentives for doing so, a lot of them are still releasing open models. Hopefully that continues to hold. So not all control will be in the hands of big tech, even if the "best" will still be theirs. At some point "good enough" is fine.
There's also the geopolitical angle. So far we've seen the EU jumping the gun on regulation, and we're kinda sorta paying for it: everyone is still confused about what can or cannot be done in the EU. The US seems to be waiting to see what happens, and China will do whatever they do. The worst thing that could happen is the big players (Anthropic is the main driver) pushing for regulatory capture at some point. That would really suck. Thankfully, at the moment there's this lingering thinking of "if we do it, the others won't, and we'll be on the back foot". Hopefully that holds, at least until the "good enough" models from above are out :)