We were talking about LLMs here, not computing machines in general. LLMs are trained to mimic, not to produce novel things, so a person can easily think LLMs won't get creative even though some computer program in the future could.
> LLM are trained to mimic not to produce novel things
Which LLM? That’s not the purpose of training for any model that I know of.