Hacker News

iamjake648 · yesterday at 9:42 PM · 9 replies

I hear this a lot, and I'm genuinely curious why you think it might take more energy to be on alert for tricky situations. Wouldn't you already be doing that for your own manual driving?


Replies

lateforwork · yesterday at 9:53 PM

Think about a junior coworker you offloaded some of your tasks to. It turns out the coworker frequently makes mistakes. At some point you're going to say, "it's easier to just do this myself." Especially if a single mistake can cost you your life!

californical · yesterday at 9:49 PM

I'm guessing that predicting the failure modes of a computer is more taxing than letting your brain's pattern recognition pick out what it needs to react to.

If you're driving, your brain can automatically prioritize the importance of the things you see. But a computer fails in different ways than a human does, so you lose all of that automatic prioritization.

jerlam · yesterday at 10:26 PM

It's not just "tricky situations", sometimes FSD will do things that no normal driver would ever do, and it will do them inconsistently. Sometimes it's brilliant and sometimes it's drunk.

rented_mule · yesterday at 9:47 PM

It's easier to predict, understand, and react to your own driving behavior.

whiplash451 · yesterday at 9:47 PM

Because constantly switching between full attention and the degraded attention that FSD promises is more tiring than staying at full attention continuously.

michaelt · yesterday at 11:14 PM

This is a subject that has been studied quite a bit, as there are a bunch of jobs where people have to monitor for rare emergencies, and react fast if an emergency should arise. Things like pilots on flights with autopilot; lifeguards watching for swimmers in distress; CCTV monitoring; operating airport X-ray machines, and so on.

One such study is "Performance consequences of automation-induced 'complacency'" (Parasuraman, Molloy & Singh, 1993) https://www.pacdeff.com/pdfs/Automation%20Induced%20Complace...

Previous studies had found that a human and a computer performed markedly better than either a human alone or a computer alone - but in those studies failures were quite common, so they didn't give the humans time to get bored or distracted.

When researchers got test subjects to perform a simulated flying task, monitoring a system with 99%+ reliability, they found the humans were proportionally much worse at stepping in than they were on less reliable systems.

Swimming pool lifeguards will often change posts every 15-20 minutes and get a 10-15 minute break every hour, to keep things interesting enough that they can pay attention. Good luck getting drivers to do that.
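The complacency effect described above can be sketched with a toy simulation. To be clear: this is not the study's method, and the attention-decay numbers are invented purely for illustration — the only grounded idea is that vigilance erodes during long runs of automation success:

```python
import random

def miss_rate(reliability, n_trials=100_000, decay=0.01, seed=0):
    """Toy complacency model: the operator's chance of catching a
    failure decays with each consecutive automation success, and
    resets after a failure re-engages them. All parameters here
    (decay rate, attention floor) are invented for illustration."""
    rng = random.Random(seed)
    attention = 1.0
    failures = caught = 0
    for _ in range(n_trials):
        if rng.random() < reliability:
            # Automation worked; vigilance drifts down toward a floor.
            attention = max(0.2, attention - decay)
        else:
            # Automation failed; does the operator catch it?
            failures += 1
            if rng.random() < attention:
                caught += 1
            attention = 1.0  # a visible failure restores full attention
    return 1 - caught / failures
```

Under this (entirely assumed) model, `miss_rate(0.80)` comes out far lower than `miss_rate(0.99)`: frequent failures keep attention topped up, while a 99%-reliable system gives vigilance long stretches in which to decay — directionally the same pattern the Parasuraman study reports.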

burnte · yesterday at 9:59 PM

This is the real trick with 95% or 99% accuracy: if you never know when that 1% incident will occur, you ALWAYS have to watch for it. And eventually we'll have to live with the fact that it'll never hit 100% accuracy, just as human driving isn't 100% accurate today.
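The "you always have to watch for it" point follows from simple probability. With a hypothetical 1% per-trip failure rate (a made-up figure for illustration, not a claim about any real system), the chance of encountering at least one incident grows quickly with repeated trips:

```python
def p_at_least_one_incident(per_trip_failure, trips):
    """Probability of seeing at least one failure across
    independent trips: 1 - P(no failure on every trip)."""
    return 1 - (1 - per_trip_failure) ** trips

# A 99%-reliable trip still makes an incident near-certain over time:
print(p_at_least_one_incident(0.01, 100))   # ≈ 0.634
print(p_at_least_one_incident(0.01, 1000))  # ≈ 0.99996
```

So even if any single trip is overwhelmingly likely to be fine, a regular driver should expect to face the rare failure eventually — which is exactly why the watching never stops.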

Mawr · today at 8:08 AM

There's no way to model what a "tricky situation" may be to an opaque and ever-changing piece of self-driving software. It may fail in random ways at random times — it's completely, 100% unpredictable.

Therefore, you have to be 100% ready at all times to react in case anything that's possible happens.

Sounds way more tiring than just driving yourself and only having to account for the known, relatively easy to model human failure modes.

ImPostingOnHN · yesterday at 10:45 PM

I know my normal, non-self-driving car won't randomly slam on the brakes or swerve into a median. Even if I take my hands off the wheel, I know it will keep going straight-ish for a second or two.

A "self-driving" Tesla is an adversary you need to supervise to make sure it doesn't take actions you wouldn't expect of a normal car.

As other posters have pointed out, it's like running an LLM with `--dangerously-skip-permissions`: I wouldn't `rm -rf /` my computer (or in the case of tesla, my life), but an AI might.