> We need to know if the email being sent by an agent is supposed to be sent, and if an agent is actually supposed to be making that transaction on my behalf, etc.
At the same time, let's not let the perfect be the enemy of good.
If you're piloting an aircraft, yeah, you should have perfection.
But if you're sending 34 e-mails and spending 7 hours on phone calls back and forth to fight a $5500 medical bill that insurance was supposed to pay for, I'd love for an AI bot to represent me. I'd absolutely LOVE for the AI bot to create such a pile of paperwork for these evil medical organizations that they learn I will fight, I'm hard to deal with, and they should pay for my stuff as they're supposed to. Threaten lawyers, file complaints with the state medical board, whatever it takes. Create a mountain of paperwork for them until they pay that $5500. The next time, maybe they'll pay to begin with.
The AI bot wouldn’t be representing you any more than your text editor would be. You would be using an AI bot to create a lot of text.
An AI bot can’t be held accountable, so isn’t able to be a responsibility-absorbing entity. The responsibility automatically falls through to the person running it.
Is this before or after they've already implemented their own models to reply to your mountain of paperwork with their own auto-denial system?
What if it's convinced to resolve the matter on your behalf, against your interests, while acting autonomously?