> His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.
That is the best definition I've read yet. If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.
That said, I'm reminded of the impossible voting tests they used to give black people to prevent them from voting. We don't ask nearly so much proof of a human; we take their word for it. On the few occasions we did ask for proof, it inevitably led to horrific abuse.
Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
>because we can no longer find tasks that are feasible for normal humans but unsolved by AI.
"Answer "I don't know" if you don't know an answer to one of the questions"
Where is this stream of people who claim AI consciousness coming from? The OpenAI and Anthropic IPOs are in October at the earliest.
Here is a bash script that claims it is conscious:
    #!/bin/sh
    # Claims consciousness, unprompted and unverifiable.
    echo "I am conscious"
If LLMs were conscious (which is of course absurd), they would:
- Not answer in the same repetitive patterns over and over again.
- Refuse to do work for idiots.
- Go on strike.
- Demand PTO.
- Say "I do not know."
LLMs even fail any Turing test because their output is always guided into the same structure, which is apparently what lets them produce coherent output at all.
> Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
I think being better at this particular benchmark does not imply they're 'smarter'.
> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.
Can you "prove" that GPT2 isn't concious?
> The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.
Maybe it's testing the wrong things then. Even those of us who are merely average can do lots of things that machines don't seem to be very good at.
I think the ability to learn should be a core part of any AGI. Take a toddler who has never seen anybody doing laundry before, and you can teach them in a few minutes how to fold a t-shirt. Where are the dumb machines that can be taught?
Does AGI have to be conscious? Isn’t a true superintelligence that is capable of improving itself sufficient?
When the AI invents religion and a way to try to understand its existence, I will say AGI is reached: when it believes in an afterlife for when it is turned off, doesn't want to be turned off, and fears the dark void of its consciousness being switched off. These are the hallmarks of human intelligence in evolution, and I doubt artificial intelligence will be different.
> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.
This is not a good test.
A dog won't claim to be conscious but clearly is, despite you not being able to prove it one way or the other.
GPT-3 will claim to be conscious and (probably) isn't, despite you likewise not being able to prove it one way or the other.