Cameras alone can handle the vast majority of nominal driving scenarios, but progress slows dramatically in the long tail of safety‑critical edge cases. Many of these cases involve degraded or ambiguous perception, which is where multi‑modal sensing, such as combining cameras with lidar, can reduce uncertainty. In adverse weather like fog or heavy rain, that reduction in uncertainty can translate directly into safer behavior, such as earlier and more confident emergency braking, even when no single sensor performs perfectly on its own.
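
To make the uncertainty argument concrete, here is a minimal sketch of inverse‑variance fusion of two independent, unbiased Gaussian range estimates. Everything specific in it is a hypothetical assumption rather than any real vehicle's logic: the sensor noise levels, the 40 m stopping distance, the 90 % confidence requirement, and the decision rule itself, which commits to emergency braking only once the fused belief is confident the obstacle is within stopping distance (a rule chosen to avoid false‑positive hard braking).

```python
from statistics import NormalDist

def fused_sigma(sig_a: float, sig_b: float) -> float:
    """Std dev after inverse-variance fusion of two independent,
    unbiased Gaussian estimates; always below min(sig_a, sig_b)."""
    return (1.0 / (1.0 / sig_a**2 + 1.0 / sig_b**2)) ** 0.5

# Hypothetical fog scenario: both sensors are assumed unbiased but
# noisy, with the camera's range estimate degrading more than lidar's.
SIG_CAM, SIG_LIDAR = 8.0, 3.0   # range-estimate std dev, metres (assumed)
STOP_DIST = 40.0                # assumed required stopping distance, metres
CONFIDENCE = 0.9                # assumed confidence needed to commit to braking

z = NormalDist().inv_cdf(CONFIDENCE)  # ~1.28 for 90 %

# Braking fires once P(obstacle <= STOP_DIST) >= CONFIDENCE, i.e. once the
# estimated range drops below STOP_DIST - z * sigma. A smaller sigma thus
# fires the trigger at a greater range -- earlier in the approach.
for name, sig in [("camera only", SIG_CAM),
                  ("lidar only", SIG_LIDAR),
                  ("camera+lidar", fused_sigma(SIG_CAM, SIG_LIDAR))]:
    trigger = STOP_DIST - z * sig
    print(f"{name:13s} sigma={sig:4.1f} m -> commits to braking at {trigger:4.1f} m")
```

Under these assumed numbers, the camera‑only belief commits at roughly 29.7 m, lidar alone at 36.2 m, and the fused belief at 36.4 m: because fusion shrinks the variance below that of either sensor, the confidence threshold is crossed at a greater distance, which is exactly the "earlier and more confident braking" effect described above.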