A typical autonomous vehicle sensor stack comprises four types of sensors: GPS, radar, cameras, and LiDAR. The last of these is one of the most essential – if not the most essential – sensors for Level 4 autonomous driving.
Radar can see through dust and in the dark but lacks resolution. Cameras provide long range and high resolution but can only capture a two-dimensional projection of a three-dimensional world.
LiDAR, on the other hand, sends laser signals into free space and captures each return individually, so it can sense all three spatial dimensions of the environment and record them as points. As a result, LiDAR provides more than a picture of the road: the thousands of individual points in its point cloud tell you exactly where objects are, how big they are, and how fast and in what pattern they are moving.
AV technology relies heavily on LiDAR data to make critical driving decisions, which means LiDAR sensors need to reach strict resolution requirements (or high point density) to empower the vehicle’s perception algorithm to make safe, accurate driving decisions.
So, the question is: how much resolution – how many points – is enough to safely reach L3 and L4 autonomous driving?
LiDAR’s resolution is limited by the number of points sent out in each frame, which at a given frame rate (say, 10 Hz) is ultimately set by the system’s points-per-second (PPS) capability. This differs from a standard camera, which captures external light one full frame at a time, with each frame consisting of many pixels.
The first, seemingly obvious solution would be to add enough PPS to achieve sufficient point density across the entire field-of-view (FoV), making it irrelevant where a critical object lands in the FoV: there would always be enough points to detect and classify it fast enough.
However, quick calculations allow us to estimate how many PPS would be needed for a typical long range LiDAR to achieve this. With a FoV of 120° x 25° and a desired resolution of 0.05° x 0.05° at the minimum required frame rate of 10Hz, a long range LiDAR would need to produce roughly 12 million PPS to achieve the desired resolution.
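The back-of-the-envelope calculation above can be reproduced in a few lines (a sketch – the function name is ours; the FoV, resolution, and frame-rate figures are the ones quoted in the text):

```python
def required_pps(h_fov_deg, v_fov_deg, h_res_deg, v_res_deg, frame_rate_hz):
    """Points per second needed for uniform resolution across the full FoV."""
    points_per_frame = (h_fov_deg / h_res_deg) * (v_fov_deg / v_res_deg)
    return points_per_frame * frame_rate_hz

# 120° x 25° FoV at 0.05° x 0.05° resolution and 10 Hz
pps = required_pps(120, 25, 0.05, 0.05, 10)  # 12,000,000 points per second
```

Each frame alone needs 2,400 × 500 = 1.2 million points; at 10 frames per second that multiplies out to 12 million PPS.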
Hitting 12 million PPS, however, is virtually impossible for today’s long-range LiDAR architectures.
Since increasing PPS enough to cover every object within the entire FoV is not feasible, there is only one other way to achieve the point density needed for safe self-driving – allocating points only where they are needed, when they are needed.
With “on-the-fly” foveation, the Baraja Spectrum-Scan™ platform can instantly adjust its scanning patterns and increase the resolution around critical objects, such as pedestrians and other vehicles, to accurately track them as they move – without sacrificing reliability or increasing cost.
On-the-fly foveation is the ability to focus resolution on specific objects in the FoV and dynamically change the point density and scan pattern in microseconds. In other words, it’s the capacity to choose when and where to increase point density so the perception algorithm can make safe driving decisions.
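The core idea – spending a fixed per-frame point budget unevenly across the FoV – can be sketched as follows. This is a hypothetical toy model of foveation, not Baraja’s actual interface; the function and region names are ours:

```python
def allocate_budget(points_per_frame, regions):
    """Split a fixed per-frame point budget across FoV regions.

    regions: list of (name, area_sq_deg, weight) tuples; a higher weight
    means proportionally more points per square degree in that region.
    """
    total = sum(area * weight for _, area, weight in regions)
    return {
        name: {
            "points": round(points_per_frame * area * weight / total),
            "density_pts_per_sq_deg": points_per_frame * weight / total,
        }
        for name, area, weight in regions
    }

# A 1.2M-point frame: the wide 120° x 25° background, plus a small
# 2° x 2° region of interest around a pedestrian at 50x the density.
plan = allocate_budget(1_200_000, [
    ("background", 120 * 25 - 4, 1),
    ("pedestrian_roi", 4, 50),
])
```

The same total budget is preserved; the region of interest simply trades a small drop in background density for a large boost where it matters.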
Most long-range LiDAR architectures are not capable of implementing this approach. Some are simply unable to dynamically change point density in the FoV, as is the case for fixed laser arrays. Others, such as galvo- and MEMS-based scanners, may be able to generate a single band of increased density within the FoV, but doing so adds vulnerabilities to their already delicate fast-moving scanning mechanisms, making them less suitable for automotive applications.
Baraja Spectrum-Scan™ LiDAR, thanks to its fully solid-state vertical scan system, is the only LiDAR sensor capable of dynamic, fully-configurable foveation. Instead of relying on fragile rotating lasers and oscillating mirrors, Spectrum-Scan™ technology uses wavelengths and non-moving prism-like optics to direct laser beams wherever they’re needed, the moment they’re needed.
Each point in a vertical slice of the FoV is addressed by a specific wavelength from the laser. Thus, to sense the next point in a given vertical scan line, the Spectrum-Scan™ LiDAR simply retunes its fast, tunable laser to the next wavelength. It repeats this retuning for each point down to the bottom of the vertical FoV, then restarts the top-down scan for the next vertical scan line.
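The wavelength-addressing scheme can be illustrated with a short sketch. The actual wavelength-to-angle mapping of Spectrum-Scan™ hardware is not public, so a linear mapping over the telecom C-band is assumed here purely for illustration:

```python
def wavelength_for_angle(angle_deg, v_fov_deg=25.0,
                         lambda_min_nm=1530.0, lambda_max_nm=1565.0):
    """Map a vertical angle (0 = top of the FoV) to a laser wavelength.

    Assumes a linear mapping across an illustrative 1530-1565 nm band.
    """
    frac = angle_deg / v_fov_deg
    return lambda_min_nm + frac * (lambda_max_nm - lambda_min_nm)

# One top-down vertical scan line at 0.05° spacing: each point is
# addressed purely by retuning the laser, with no moving parts.
line = [wavelength_for_angle(i * 0.05) for i in range(501)]  # 0° to 25°
```

Because "steering" is just a change of laser wavelength, skipping points or revisiting a band of angles is as cheap as any other tuning step – which is what makes arbitrary vertical scan patterns possible.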
The big resolution advantage of this approach is twofold: wavelength tuning is electronic, so scan patterns can change within microseconds, and every point in the vertical FoV can be addressed independently of the others. Both of these advantages come with the added benefit that no other performance trade-offs are needed to achieve customized scan patterns and on-the-fly pattern switching.
With Spectrum-Scan™ solid-state vertical steering, you get reliability and resolution without adding cost or sacrificing other features. It doesn’t matter whether you have eight points or 2,000 points per vertical line, or whether your vertical scan is homogeneous or disparate: to achieve the desired resolution, the perception algorithm simply has to tell the Baraja LiDAR which wavelength comes next.
Baraja’s Spectrum-Scan™ LiDAR also uses a robust and reliable slow-axis scan mechanism that is fully independent of the vertical scan, enabling foveation in both the horizontal and vertical directions. Independent horizontal and vertical foveation lets you create multiple regions of higher and lower density within the same frame. We refer to this as 2D-foveation.
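A toy model makes the idea of 2D-foveation concrete: with independent horizontal and vertical steering, several rectangular high-density regions can coexist in a single frame. The grid, cell sizes, and density values below are illustrative, not a real scan configuration:

```python
def density_map(h_bins, v_bins, base, rois):
    """Build a per-cell point-density map over a discretized FoV.

    rois: list of (h0, h1, v0, v1, density) cell-index rectangles,
    each a region of interest scanned at a higher density than `base`.
    """
    grid = [[base] * h_bins for _ in range(v_bins)]
    for h0, h1, v0, v1, d in rois:
        for v in range(v0, v1):
            for h in range(h0, h1):
                grid[v][h] = d
    return grid

# Two focus areas in the same frame: a cyclist on the left,
# another vehicle on the right, uniform low density elsewhere.
frame = density_map(120, 25, base=1,
                    rois=[(10, 20, 5, 15, 8), (90, 110, 10, 20, 4)])
```

A single-axis scanner could at best raise one full-width band; the independent axes are what allow disjoint rectangles like these.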
When an autonomous vehicle enabled by Baraja Spectrum-Scan™ technology is started, the LiDAR will initialize in a standard, wide FoV with a uniform point distribution. However, as the vehicle maneuvers through an urban area and the perception algorithm begins to detect obstacles or potential hazards, such as kids playing on the side of the road or a cyclist waiting to cross the street, the Spectrum-Scan™ LiDAR will begin to redistribute its points based on commands from the perception algorithm. These pattern changes add focus and resolution to those critical objects and follow them as they move around.
Then, if a new critical condition were to appear, such as a child jumping onto the road from between cars, the perception system can detect the immediate danger, and within a fraction of a frame, move more points to that area. This split-second adjustment will allow the vehicle to make a rapid assessment and decide whether it should steer or brake. The LiDAR will still maintain points across the entire FoV and may even maintain several other high-density focus areas, such as the group of kids still playing across the street.
In other scenarios, such as on a motorway, foveation may occur more gradually. As speed increases, so does the required effective range, which in turn demands higher point density. In motorway and high-speed scenarios, Spectrum-Scan™ sensors can increase point density along the horizon and on other vehicles, improving their ability to classify objects further and further away and spotting potential hazards with enough time for the vehicle to respond safely and calmly.
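A rough geometry sketch shows why higher angular density extends effective classification range: the number of returns landing on a target shrinks with distance, so a denser scan keeps the point count above a classifier’s threshold further out. The target size and resolution values here are our own illustrative assumptions:

```python
import math

def points_on_target(width_m, height_m, range_m, h_res_deg, v_res_deg):
    """Approximate LiDAR returns landing on a flat target at a given range."""
    h_ang = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    v_ang = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    return (h_ang / h_res_deg) * (v_ang / v_res_deg)

# A 0.5 m x 1.8 m pedestrian-sized target at 200 m:
coarse = points_on_target(0.5, 1.8, 200, 0.1, 0.1)    # uniform wide scan
fine = points_on_target(0.5, 1.8, 200, 0.04, 0.02)    # foveated region
```

Tightening the resolution from 0.1° x 0.1° to 0.04° x 0.02° puts 12.5 times as many points on the same distant target – the difference between a few stray returns and a classifiable shape.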
With on-the-fly 2D-foveation, AVs can push the foveation limit by building very narrow scan patterns in specific regions of interest, boosting the resolution to values as fine as 0.04° x 0.02° – up to roughly 1,200 points per square degree.
Configuring a self-driving car is easy. Configuring a self-driving car that can safely and correctly respond to limitless scenario possibilities in fractions of a second is hard. Baraja’s Spectrum-Scan™ technology enables on-the-fly foveation, providing automakers with the resolution, range and reliability they need to safely put L3 and L4 AVs on the road. To read more about Spectrum-Scan™ technology, download the whitepaper here.
If you’re looking for a high-quality LiDAR that can power your perception stack, contact us today.