Sensor technology fusion for safe autonomous driving

Insights

  • Autonomous car accidents nearly doubled to 544 in 2024 from 288 in 2023. In the US, the NHTSA reported 22 to 81 crashes by self-driving cars each month in 2024.
  • Self-driving cars carry a number of sensors, including light detection and ranging (LiDAR), radar, and cameras, each with its own pros and cons.
  • An analysis in California revealed that nearly 26% of disengagements (instances when a self-driving vehicle stops operating on its own) were due to sensor-related software or hardware issues.
  • A multi-sensor strategy that integrates LiDAR arrays with camera systems is a sensible approach: it addresses the weaknesses of each technology while leveraging their complementary strengths.
  • Integrating a range of sensor inputs makes fusion technologies essential: data must be combined on a common processing engine so the vehicle can make smarter decisions for safer navigation.

Autonomous cars are increasingly being seen on roads: In the US, for example, Waymo’s deployments in Phoenix and Los Angeles crossed 5 million autonomous trips in 2024 and are set to exceed 20 million trips in 2025 with a fleet of 1,500 vehicles. In China, Baidu runs 500 robotaxis in Wuhan, planned to break even in 2024, and aims to expand to 1,000 robotaxis in the city. In 2025, it partnered with Uber to provide riders the option of choosing a driverless vehicle.

The main challenges in scaling robotaxi operations are safety, compliance with local regulations, and profitable business models. Safety is a key concern: According to one report, in 2021, self-driving cars averaged 9.1 accidents per million miles, compared with 4.1 accidents per million miles for traditional vehicles.

More recently, self-driving car accidents nearly doubled to 544 in 2024 from 288 in 2023. In the US, the National Highway Traffic Safety Administration reported 22 to 81 crashes by self-driving cars each month in 2024. In the first few months of 2025, fully autonomous vehicles reported more accidents than driver-assisted vehicles, though exact figures have not yet been published. Driverless car crashes have trended upward as the number of such vehicles and trips grows, and the technology is not yet fully mature for all road conditions.

Self-driving cars carry a number of sensors, including light detection and ranging (LiDAR), as well as radar, cameras, and ultrasound sensors, to discern the environment and feed data to the vehicle as it drives. One reason for autonomous car accidents is the malfunctioning of these sensors. An analysis in California revealed that nearly 26% of disengagements — when a self-driving vehicle stops operating on its own because of a technical failure or safety issue — were due to sensor-related software or hardware issues.

LiDAR systems for 3D point cloud

More than 22% of carmakers have integrated LiDAR into their autonomous driving systems as of 2025, with adoption continuing to rise. However, LiDAR poses two key challenges. First, its performance can be adversely affected by poor weather conditions such as fog, rain, or snow, as well as by highly reflective surfaces that disrupt laser signals. Second, cost: a high-end LiDAR unit can cost up to $75,000, making it the most expensive component in autonomous driving, although advances in solid-state and chip-based LiDAR technologies are contributing to a gradual cost reduction.

Demanding specifications for range, resolution, application requirements, reliability, integration, maintenance, and adaptability keep LiDAR costs high. The cost of a LiDAR system varies with the level of autonomy, from a few hundred dollars for entry-level autonomous vehicles to more than $1,000 for mid-range (L2/L3) vehicles, and a few thousand dollars for L4 vehicles.

SAE International categorizes autonomous driving levels as follows:

  • Level 0 (L0): No autonomy.
  • Level 1 (L1): Driver assistance features.
  • Level 2 (L2): Partial automation of steering and acceleration/braking, but the driver remains engaged.
  • Level 3 (L3): Conditional automation without constant driver monitoring, but the driver must be available when needed.
  • Level 4 (L4): High automation for all features under certain conditions, with driver intervention when required.
  • Level 5 (L5): Full automation for all features under all conditions.

Despite its higher cost, LiDAR offers many benefits over alternative vision technologies such as computer vision with cameras. Chief among these is precise mapping: LiDAR systems generate high-resolution 3D point clouds that map the X, Y, and Z coordinates of every visible detail in the landscape, creating a dynamic, precise spatial model of objects on the road. This precision is critical for robust environmental assessment and accurate obstacle detection. Even so, not all manufacturers deploy LiDAR in their autonomous vehicles, due to its cost.
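As an illustration of how such a point cloud is formed, each LiDAR return (a measured range at a known azimuth and elevation angle) can be converted to X, Y, and Z coordinates. The sketch below assumes a simple spherical-to-Cartesian model; real LiDAR drivers also correct for sensor-specific offsets and timing.

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range plus beam angles) into Cartesian
    X (forward), Y (left), Z (up) coordinates in the sensor frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A full scan simply applies this to every return, yielding the 3D point cloud.
points = [lidar_return_to_xyz(10.0, az, 0.0) for az in range(0, 360, 90)]
```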

Vision systems for visual detail

Vision technologies are a cheaper alternative to LiDAR in autonomous driving and are primarily adopted for scene understanding, object detection, lane detection, and perception assessment. They provide the vehicle with a real-time assessment of the road situation. However, vision-based systems face challenges in poor lighting and weather conditions and in complex environments. Another challenge is data privacy, which restricts the use of certain vision-based features in autonomous vehicles, such as facial identification.

The potential for capturing personal data necessitates rigorous privacy and data security measures and strict compliance with regulatory standards, particularly in densely populated urban environments. LiDAR, on the other hand, does not collect personal data in the form of photos of faces, license plates, and other personal attributes.

Vision systems also demand multiple cameras to understand the surrounding environment, which in turn require significant computing power to process the gigabytes of data they capture. They rely on graphics processing units (GPUs) to process the collected video and images and make real-time decisions for safer navigation. A camera system, though cheaper to implement, demands high-end computing to make the right decisions in time. The cost of a camera for computer vision varies widely, from as low as $20 for a simple camera to $500 for high-resolution versions, with between eight and 12 cameras required per autonomous vehicle.

Integration of sensor inputs

A multi-sensor strategy that integrates 2D/3D LiDAR arrays with camera systems is a sensible approach: it addresses the challenges of relying on either LiDAR or vision technologies alone while leveraging their complementary benefits. However, integrating input from a variety of sensors requires fusion algorithms and high-performance computing platforms.
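As a toy illustration of one such fusion algorithm, the sketch below performs a simple late fusion: per-object confidence scores from hypothetical camera and LiDAR pipelines are combined into a single score before a driving decision is made. The weights and threshold are illustrative assumptions, not production values.

```python
def fuse_detections(camera_conf, lidar_conf, w_camera=0.4, w_lidar=0.6):
    """Combine per-object confidence scores from two sensor pipelines.
    The weights reflect relative trust in each sensor (assumed values)."""
    return w_camera * camera_conf + w_lidar * lidar_conf

def is_obstacle(camera_conf, lidar_conf, threshold=0.5):
    """Declare an obstacle only when the fused score clears the threshold,
    so a spurious reading from a single sensor cannot trigger hard braking."""
    return fuse_detections(camera_conf, lidar_conf) >= threshold
```

For example, a pedestrian seen strongly by both sensors (`is_obstacle(0.9, 0.8)`) is flagged, while a camera-only glare artifact (`is_obstacle(0.9, 0.0)`) is not.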

Ongoing technological advancements coupled with economies of scale and the adoption of over-the-air (OTA) software updates are expected to progressively mitigate sensing costs. OTA is a transformative aspect of autonomous or software-defined vehicles, with the capability for remote upgrades and continuous improvement. This will facilitate broader implementation of integrated sensor suites across various vehicle segments, thereby accelerating the move toward full autonomy.

By 2030, around 1 million cars with L4 autonomy are expected on the road, and by 2035, 4% of new cars sold globally are estimated to be at L4, offering high automation under certain conditions with driver intervention when required. The automotive market is gearing up to develop a hybrid approach that combines LiDAR and vision technologies, making autonomous driving more reliable, safer, and economical enough for mass adoption (Figure 1).

Figure 1. Sensor input fusion in autonomous vehicles and its key benefits

Source: Infosys

Integrating a range of sensor inputs makes fusion technologies essential: data from all sensors must be harnessed and combined on a common processing engine to make smarter decisions for safer navigation. Cost, power optimization, computing optimization, and standardization are the key drivers of wider adoption.
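One classic way a common processing engine combines redundant measurements is inverse-variance weighting, the principle behind Kalman-filter updates: the more certain a sensor is, the more its reading counts. The sketch below fuses hypothetical LiDAR and radar range estimates of the same object; the variance values are assumptions for illustration.

```python
def fuse_ranges(r_lidar, var_lidar, r_radar, var_radar):
    """Minimum-variance fusion of two independent range estimates.
    Each estimate is weighted by the inverse of its measurement variance;
    the fused variance is always smaller than either input variance."""
    w_lidar = 1.0 / var_lidar
    w_radar = 1.0 / var_radar
    fused = (w_lidar * r_lidar + w_radar * r_radar) / (w_lidar + w_radar)
    fused_var = 1.0 / (w_lidar + w_radar)
    return fused, fused_var

# LiDAR says 50.0 m (low variance), radar says 52.0 m (higher variance):
# the fused estimate lands closer to the more confident sensor.
```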

Extrinsic calibration is the process of determining the position and orientation of LiDAR and camera sensors, relative to the reference coordinate frame of the car, so that the sensors can accurately estimate the position of objects. Distortion correction fixes any errors in the position of objects due to factors such as lens distortions or vehicle movement.
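As a simplified illustration of what an extrinsic calibration provides, the sketch below applies a hypothetical yaw rotation and translation to express a LiDAR point in the camera's coordinate frame. A real calibration estimates a full 3x3 rotation matrix and 3D translation from calibration targets; a yaw-only rotation keeps the example short.

```python
import math

def lidar_to_camera(point, yaw_deg, translation):
    """Transform a LiDAR-frame point (x, y, z) into the camera frame using
    a yaw rotation and a translation (both assumed calibration outputs)."""
    x, y, z = point
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    xr = c * x - s * y   # rotate about the vertical (z) axis
    yr = s * x + c * y
    tx, ty, tz = translation
    return (xr + tx, yr + ty, z + tz)
```

With no rotation between the frames, the transform reduces to shifting each point by the mounting offset between the two sensors.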

Automating extrinsic calibration and distortion correction, when integrated with the vehicle’s operating system, helps eliminate false positives or erroneous hazard detection. This improves the accuracy of object detection and prevents unnecessary actions, such as sudden braking.

Investments in sensors

It is important for carmakers to invest in sensors and their integration, and to diversify their supply base to mitigate risks such as a repeat of the semiconductor shortage during the pandemic or the more recent rare earth metal shortage for motors.

  • Carmakers should develop a strategy and the necessary architecture to integrate multiple-sensor inputs, utilizing the strength of each type of sensor.
  • They must also invest in research into cheaper, more accurate, and more effective sensor technology, while diversifying their supplier base for these components. This will prevent dependence on a few regions or suppliers and help avoid production stoppages during supply chain disruptions.
