PS: Just to be clear - even the most expensive humans are unreliable, would make stupid mistakes, and their output MUST be reviewed carefully, so you’re not any different either. You’re just a random next-thought generator based on neuron firing distributions with no real thought process, trained on a few billion years of evolution like all other humans.
I'm still not sure what people who equate human cognition with large language models think they are contributing to the conversation when they declare it.
Never mind the fact that they are literally able to introspect human cognition and presumably find non-verbal and non-linear modes of cognition.
But once a human learns a function, their errors become more predictable. And they can predict their own errors before an operation and escalate or seek outside review/advice.
For example, ask any model "which classes of problems and domains do you have a high error rate in?".
Humans can be held accountable. States have not yet shown the will to hold anyone accountable for LLM failures.
As fallible as they may be, I've never had a next-thought generator recommend me glue as a pizza ingredient.
Amusing and directionally correct, but as random next-thought generators connected to a conscious hypervisor with individual agency,* humanity still has a pretty major leg up on the competition.
*For some definitions of individual agency. Incompatibilists not included.
Equating human thought to matrix multiplication is insulting to me, you, and humanity.
I hate that I agree with you. But there's a difference between whether AI is as powerful as some say, and whether it's good for humanity. A cursory review of human history shows that some revolutionary technologies make life as a human better (fire, writing, medicine) and others make it worse (weapons, drugs, processed foods). While we adapt to the commoditization of our skills, we should also be questioning whether the technologies being rolled out right now are going to do more harm than good, and we should be organizing around causes that optimize for quality of life as a human. If we don't push for that, then the only thing we're optimizing for is wealth consolidation.
Errr... No. Please take this bullshit propaganda to a billionaire's Twitter feed.
Looks like you have not worked with either humans or an LLM; otherwise, arriving at such a conclusion is damn near impossible.
The humans I did work with were very, very bright. No software developer in my career ever needed more than a paragraph in a JIRA ticket for the problem statement. They figured out domains that were not even theirs to begin with, without making any mistakes, and not only identified edge cases but sometimes actually improved the domain processes by suggesting what was wasteful and what could be done differently.