From touchless taps to AR/VR headsets, robots and self-driving vehicles – products that “see” the world around them are becoming increasingly common.
Camera sensors are the natural first choice for products that “see”. However, a camera alone captures only a 2D image, while the world it observes is three dimensional. To work, most of these applications also need to know depth. For example: touchless taps need to know the distance of a hand, robots and autonomous vehicles need to measure the distance to obstacles to avoid collisions, and Augmented Reality experiences need a 3D map of the real-world scene.
Depth sensing techniques directly measure the distance from the sensor to surrounding objects. Where needed, this can be combined with 2D image data to build up a full 3D map of the scene, and a six-degree-of-freedom position within it, using techniques such as Simultaneous Localization and Mapping (SLAM).
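As a rough illustration of how per-pixel depth measurements become 3D geometry, the sketch below back-projects a depth map into a point cloud using the standard pinhole camera model. The function name and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not taken from any particular sensor:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) to an N x 3 point cloud.

    Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth reading (z == 0).
    return points[points[:, 2] > 0]

# Toy example: a flat wall 2 m away seen by a hypothetical 4x4 sensor.
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

A SLAM system would then align many such point clouds over time, simultaneously estimating the sensor's pose and refining the map.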