ams OSRAM’s IP and technology expertise are pioneering the evolution of 3D sensing, accelerating time-to-market for differentiating mobile applications such as AR/VR and autofocus.
Trusted partner for 3D Sensing
How can mobile manufacturers get to market more simply and quickly with differentiating 3D applications like augmented reality?
Combining state-of-the-art Vertical Cavity Surface Emitting Laser (VCSEL) and Single-Photon Avalanche Diode (SPAD) technology with unique optical design and packaging expertise, ams OSRAM provides best-in-class, reliable performance for the most demanding high-volume mobile platforms. ams OSRAM technology supports all 3D sensing approaches (Time-of-Flight, Active Stereo Vision, Structured Light), and the portfolio includes components, modules and system reference designs.
A 1D (or single zone) Time-of-Flight sensor is optimized for presence detection, and is part of the overall ams OSRAM solution for 3D sensing. Applications like face recognition, augmented reality, 3D object scanning and 3D image rendering, as well as other industrial and automotive applications, also benefit from the use of ams OSRAM 3D sensing solutions.
Our 3D direct Time-of-Flight (dToF) portfolio offers complete 3D sensing solutions for mobile devices, from entry-level to flagship. These high-performance, low-power (<300 mW) dToF sensing systems are optimized for augmented reality, camera enhancement and camera autofocus assistance in challenging lighting conditions. ams OSRAM also carries its cutting-edge 3D sensing solutions and expertise beyond mobile, into computing, robotics, and access and payment applications.
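The underlying dToF principle can be illustrated with a short sketch: the sensor measures the round-trip time of an emitted laser pulse and converts it to distance using the speed of light. The function and constant names below are illustrative only, not part of any ams OSRAM API.

```python
# Illustrative sketch of the direct time-of-flight (dToF) principle.
# Names here are hypothetical, not an ams OSRAM interface.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def dtof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured photon round-trip time to target distance in meters."""
    # The pulse travels to the target and back, so halve the total path length.
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# A return pulse detected 20 ns after emission corresponds to roughly 3 m.
print(round(dtof_distance_m(20e-9), 2))
```

The tight timing requirement this implies (sub-nanosecond resolution for centimeter accuracy) is why SPAD detectors, which time-stamp individual photon arrivals, are well suited to dToF sensing.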
3D vision system
3D Vision System Flood NIR
Near-infrared image, acquired with the assistance of a PMSIL Plus flood illuminator. These images are used for 2D image analysis, which is then fused with the 3D image data.
Near-infrared image, acquired with the assistance of a dot projector. The dot pattern enhances contrast in the image to provide an excellent depth map in all lighting conditions, even with low-contrast objects in the scene.
ams OSRAM face recognition uses depth map information of the user’s face. A facial depth map is a set of thousands of coordinates mapped in three-dimensional space describing the contours of the surface of the user’s head relative to a single point of view in front of the user. This depth map may be compared with a reference depth map of the user’s face to authenticate the user. The reconstructed 3D image is handed over to the application software for further use-case-specific processing (e.g. face recognition).
3D direct time-of-flight for mobile augmented reality
ams OSRAM provides a complete technology stack – from optical sensing through to scene reconstruction and integration with the RGB camera – for world-facing 3D dToF sensing in mobile devices, achieving longer range and lower power consumption than other implementations.
By integrating ams OSRAM’s 3D optical sensing solutions with software for simultaneous localization and mapping (SLAM) and 3D image processing, the system gives manufacturers the option to implement AR functions quickly and simply on mobile devices. The high-performance, low-power dToF sensing system also supports 3D environment and object scanning, camera image enhancement, and camera autofocus assistance in dark conditions.
Combining best-in-class technologies, the new 3D dToF system includes a high-power infrared VCSEL array, a dot-pattern optical system, and a high-sensitivity sensor.
Active stereo vision
Stereo-matching algorithms may fail to find correspondences on surfaces with uniform color, low texture and few features. Passive stereo vision systems rely on ambient light to capture images and therefore perform poorly in low-light conditions. For optimal performance, active stereo vision adds a pattern-light projector, such as the ams OSRAM Belago dot projector, to enhance feature extraction and low-light performance. The projected illumination increases light intensity and adds features to the scene, overcoming these typical system limitations.
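Once a projected dot is matched between the two camera views, depth follows from standard stereo triangulation: depth is proportional to focal length times baseline, divided by the pixel disparity between the matched features. The sketch below shows this relationship with hypothetical camera parameters; it is not a description of any specific ams OSRAM system.

```python
def stereo_depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Triangulate depth from the pixel disparity between two matched features.

    focal_px: camera focal length in pixels
    baseline_mm: distance between the two camera centers in millimeters
    disparity_px: horizontal pixel offset of the feature between the two views
    """
    if disparity_px <= 0:
        # No disparity means no valid correspondence (or a point at infinity).
        raise ValueError("no valid correspondence (non-positive disparity)")
    return focal_px * baseline_mm / disparity_px

# Hypothetical camera: 700 px focal length, 40 mm baseline.
# A projected dot matched with a 28 px disparity lies about 1 m away.
print(stereo_depth_mm(700, 40.0, 28))  # prints 1000.0
```

This is why the dot projector matters: without a matchable feature there is no disparity to measure, and the depth at that pixel is simply unknown.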