Hacker News

kypro · yesterday at 9:44 PM

I wasn't arguing that Tesla is ahead of Waymo, nor do I think it is. All I was arguing was that it makes sense, from the perspective of a consumer automobile maker, not to use lidar.

I don't think Tesla is that far behind Waymo, though, given that Waymo has had a significant head start, has always been a taxi-first product, and is using significantly more expensive tech than Tesla is.

Additionally, it's not as if this is a lidar-versus-cameras debate. Waymo also uses and needs cameras for self-driving for the reasons I mentioned; it just supplements its robotaxis with lidar for accuracy and redundancy.

My guess is that Tesla will experiment with lidar on its robotaxis this year, because the design decisions for a robotaxi should differ from those for a consumer automobile. But I could be wrong, because if Tesla wants FSD to work well on visually appealing and affordable consumer vehicles, then it will probably have to solve some of the additional challenges of a camera-only FSD system anyway. I think it will depend on how much Elon decides Tesla needs to pivot into robotaxis.

Either way, what isn't debatable is that you can't drive with lidar alone. If the weather is so bad that cameras are useless, then Waymos are useless too.


Replies

DoctorOetker · today at 5:24 AM

What causes LiDAR to fail harder than normal cameras in bad weather conditions? I understand that normal LiDAR algorithms assume a direct path from light source to object to sensor pixel, while mist scatters part of the light, but it would seem like this could be addressed in the pixel depth estimation algorithm that combines the complex amplitudes at the different LiDAR modulation frequencies.
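
To make that concrete, here is a toy sketch (purely my own assumptions about pulse shapes, amplitudes and fog density, not any real LiDAR pipeline) of how fog backscatter can pull a naive strongest-return range estimate away from the actual target:

    import jax.numpy as jnp

    # Toy model: a pulsed LiDAR return in fog is the sum of a weak target
    # echo and backscatter from droplets at many nearer ranges, so a naive
    # "strongest return" estimator can report the fog, not the car ahead.
    c = 3.0e8                                  # speed of light, m/s
    t = jnp.linspace(0.0, 1.0e-6, 2000)        # receive window, seconds

    def pulse(t, t0, width=4e-9):
        """Gaussian return pulse centred at round-trip time t0."""
        return jnp.exp(-0.5 * ((t - t0) / width) ** 2)

    target_range = 60.0                        # metres to the car ahead (assumed)
    tof = 2.0 * target_range / c               # round-trip time of flight

    clear = pulse(t, tof)                      # clear weather: one clean echo

    # Fog: the target echo is attenuated, backscatter is spread over 5-40 m.
    fog_ranges = jnp.linspace(5.0, 40.0, 50)
    backscatter = sum(0.08 * pulse(t, 2.0 * r / c, width=8e-9) for r in fog_ranges)
    foggy = 0.15 * pulse(t, tof) + backscatter

    def naive_range(signal):
        """Range implied by the strongest return (simple peak detection)."""
        return float(c * t[jnp.argmax(signal)] / 2.0)

    print("clear-weather estimate:", naive_range(clear), "m")   # ~60 m
    print("foggy estimate:        ", naive_range(foggy), "m")   # pulled into the fog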

I understand that with a small lens a falling droplet can obstruct the view behind it, while a larger lens can more easily see past the droplet.

I seldom see discussion of the exact failure modes for specific weather conditions. Even if larger lenses are selected, the light source should use similarly large optics. Independently modulating multiple light sources could also dramatically increase the information gained from each individual LiDAR sensor.
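
As a sketch of what I mean by independent modulation (the codes, delays and gains here are made up for illustration): give each emitter its own pseudo-random code and separate the echoes at the receiver by correlation:

    import jax
    import jax.numpy as jnp

    # Toy sketch: two emitters keyed with independent +/-1 pseudo-random
    # codes; one receiver separates their echoes by correlating against
    # each code.  All delays, gains and code lengths are assumptions.
    n_chips = 512
    k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
    code_a = jnp.sign(jax.random.normal(k1, (n_chips,)))
    code_b = jnp.sign(jax.random.normal(k2, (n_chips,)))

    def delayed(code, d):
        """Echo of `code` arriving d chips late."""
        return jnp.concatenate([jnp.zeros(d), code])[:n_chips]

    # The photodiode sees the sum of both echoes plus noise.
    delay_a, delay_b = 37, 122                 # unknown to the receiver
    rx = (1.0 * delayed(code_a, delay_a)
          + 0.6 * delayed(code_b, delay_b)
          + 0.2 * jax.random.normal(k3, (n_chips,)))

    def estimate_delay(rx, code):
        """Correlate the received signal against one emitter's code."""
        corr = jnp.array([jnp.dot(rx, delayed(code, d)) for d in range(n_chips)])
        return int(jnp.argmax(corr))

    print("emitter A delay:", estimate_delay(rx, code_a))   # ~37
    print("emitter B delay:", estimate_delay(rx, code_b))   # ~122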

Do self-driving camera systems (conventional and LiDAR) use fixed or variable lens tilt? Normal cameras have the focal plane perpendicular to the viewing direction, but for roads it might be more interesting to have a large swath of the horizontal road surface in focus. Even just one front-facing camera with the road plane in focus could prove highly beneficial.
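
For scale, under the usual Scheimpflug/hinge rule the plane of sharp focus pivots about a line a distance J = f / sin(alpha) below the lens, so putting the road surface itself in focus means setting J equal to the camera height. The focal length and mounting height below are assumptions, not anyone's real spec, but they show the required tilt is tiny:

    import jax.numpy as jnp

    # Hinge rule for a tilted lens: plane of sharp focus hinges at
    # J = f / sin(alpha) below the lens; set J to the camera height
    # to bring the road surface into focus.  Numbers are assumed.
    f = 0.006     # 6 mm focal length (typical automotive camera, assumed)
    h = 1.4       # camera height above the road, metres (assumed)

    alpha = jnp.arcsin(f / h)
    print("required lens tilt:", float(jnp.degrees(alpha)), "degrees")   # ~0.25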

To a certain extent an FSD system predicts the best course of action. When different candidate courses of action have similar logits of expected fitness, we can speak of doubt. With RMAD (reverse-mode automatic differentiation) we can figure out which features, which facets of the input, or which part of the view is causing the doubt.
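
A minimal sketch of what I mean, with a stand-in toy planner rather than anything resembling a production network: define doubt as the (negated) margin between the top two action logits and let reverse-mode AD attribute it back to pixels:

    import jax
    import jax.numpy as jnp

    # "Doubt" = how close the top two action logits are; reverse-mode
    # autodiff then gives d(doubt)/d(pixel), a saliency map of which parts
    # of the view are responsible.  `policy_net` is a toy stand-in.
    def policy_net(params, image):
        """Toy planner: flattened image -> logits over candidate actions."""
        w, b = params
        return image.reshape(-1) @ w + b               # shape (n_actions,)

    def doubt(params, image):
        logits = policy_net(params, image)
        top2 = jax.lax.top_k(logits, 2)[0]
        return -(top2[0] - top2[1])                    # small margin = high doubt

    doubt_saliency = jax.grad(doubt, argnums=1)        # gradient w.r.t. the image

    # Tiny fake example: 3 candidate actions, an 8x8 "image".
    w = jax.random.normal(jax.random.PRNGKey(0), (64, 3)) * 0.1
    b = jnp.zeros(3)
    image = jax.random.normal(jax.random.PRNGKey(1), (8, 8))

    saliency = jnp.abs(doubt_saliency((w, b), image))
    print("most doubt-inducing pixel:",
          jnp.unravel_index(jnp.argmax(saliency), saliency.shape))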

A camera has motion blur (unless you can strobe the illumination source, but in daytime the sun is very hard to outshine), so it would seem like an interesting experiment to:

1. identify in real time which doubts have the most significant influence on the determination of the best course of action

2. have a camera that can track an object to eliminate motion blur but still enjoy optimal lighting (under the sun, or at night), just like our eyes can rotate

3. rerun the best-course-of-action prediction and feed this information back to the company, so it can figure out the cost-benefit of adding a free-tracking camera dedicated to eliminating doubts caused by motion blur.