Talking with Gemini in Arabic is a strange experience; it cites the Quran, says alhamdulillah and inshallah, and at one point it even told me: this is what our religion tells us we should do. It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and I can't help but wonder whether AI can push to radicalize susceptible individuals.
Hasn't this already been observed with not-too-stable individuals? I remember some story about a kid asking an AI if his parents/government etc. were spying on him.
> whether AI can push to radicalize susceptible individuals
My guess is, not as the single most prominent factor. Pauperisation, isolation of the individual, and a blatant lack of equal access to justice, health services, and the other basics of the social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda will likely make it easier to reach people in weakened situations, who are more receptive to radicalization.
Maybe it’s just a prank played on white expats here in the UAE, but don’t all Arabic speakers say inshallah all the time?
Wow, I would never expect that. Do all models behave like this, or is it just Gemini? One particular model of Gemini?
Gemini loves to assume roles and follows them to the letter. It's funny and scary at times how well it preserves character for long contexts.
I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.
> and can't help but wonder whether AI can push to radicalize susceptible individuals
What kind of things did it tell you?
When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.
I mean, if it is citing the sources, there is only so much that can be done without altering the original meaning.
Based on the code that it's good at, and the code that it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.
Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason the Arabic sounds 20 years out of date is that there's a data-labeling bottleneck in using foreign-language material.