RoboSense 125-Laser Beam Solid-State LiDAR: RS-LiDAR-M1 Is Officially on Sale Priced at $1,898
World’s First Smart LiDAR Sensor Will Be Demonstrated at CES 2020 With On-Vehicle Public Road Test
RoboSense, the world’s leading autonomous driving LiDAR perception solution provider, announced that the solid-state LiDAR RS-LiDAR-M1Simple (Simple Sensor Version) is now ready for customer delivery, priced at $1,898. The new RS-LiDAR-M1Simple is less than half the size of the previous version, with dimensions of 4.3” x 1.9” x 4.7” (110mm x 50mm x 120mm), and delivers enhanced hardware performance virtually equal to that of the serial production version provided to OEMs. The main body design of this automotive-grade solid-state LiDAR is finalized and ready for shipment.
In addition, RoboSense will demonstrate the world’s first smart solid-state LiDAR, the RS-LiDAR-M1Smart (Smart Sensor Version), at CES 2020 in Las Vegas, Booth 6138, LVCC North Hall from Jan 7-10, 2020 with an on-vehicle public road test. Products will be available for ordering by key customers. The RS-LiDAR-M1Smart main body is embedded with an AI perception algorithm that fully takes advantage of LiDAR’s potential to transform conventional 3D LiDAR sensors to a full data analysis and comprehension system, outputting semantic-level structured environment information in real-time to be used directly for autonomous vehicle decision making.
The RS-LiDAR-M1 family inherits the performance advantages of traditional mechanical LiDAR while also taking into account the requirements of vehicle mass production. The family meets every automotive-grade requirement, including intelligence, low cost, stability, a simplified structure and small size, vehicle body design friendliness, and algorithm-processed semantic-level perception output.
“The RS-LiDAR-M1 is an optimal choice for the serial production of self-driving cars, far superior to mechanical LiDAR. The sooner solid-state LiDAR is used, the sooner production will be accelerated to mass-market levels,” said Mark Qiu, RoboSense COO.
RS-LiDAR-M1 Family Features:
- 125 laser beams with exceptional performance: the RS-LiDAR-M1 has a field of view of 120°*25°, the largest among MEMS solid-state LiDARs released worldwide. RoboSense uses low-cost, automotive-grade, compact 905nm lasers instead of expensive 1550nm lasers, while continuing to push the ranging limit, reaching 150m at a 10% NIST target, also the longest detection range among MEMS solid-state LiDARs. The frame rate of the RS-LiDAR-M1 has been increased to 15Hz, which reduces the point cloud distortion caused by target movement (see the sketch after this list).
- World’s smallest MEMS solid-state LiDAR: the size has been cut in half, to one-tenth that of a conventional 64-beam mechanical LiDAR. The RS-LiDAR-M1 can be easily embedded in the car body while keeping the vehicle’s appearance intact.
- Reduced parts for lower cost, shorter production time, and large-scale production capacity: the unique patented optical module provides high performance, low cost, high stability, manufacturability, and a high degree of integration. The part count has been reduced from hundreds to dozens compared with traditional mechanical LiDARs, greatly reducing cost and shortening production time, a breakthrough in manufacturability. The coin-sized module implements the opto-mechanical system to meet autonomous driving performance and mass-production requirements.
- Modular design: the scalability and layout flexibility of the optical module lay the foundation for subsequent MEMS LiDAR products and support the customization of products for different application cases.
- Stable and reliable: the RS-LiDAR-M1 uses VDA 6.3 as the basis for project management, and the development of every module follows a complete V-model closed loop. RoboSense has fully implemented the IATF 16949 quality management system and the ISO 26262 functional safety standard, combined with ISO 16750 test requirements and other automotive-grade reliability specifications, to verify the RS-LiDAR-M1 series of products. The MEMS mirror is the core component of the RS-LiDAR-M1. In accordance with the AEC-Q100 standard, and taking the characteristics of the MEMS micro-mirror into account, ten verification test groups were designed covering temperature, humidity, packaging process, electromagnetic compatibility, mechanical vibration and shock, lifetime, and more. The cumulative test time across all test samples now exceeds 100,000 hours. The RS-LiDAR-M1 uses 905nm lasers to achieve long-distance detection while meeting Class 1 laser safety. The longest-running prototype has been tested for more than 300 days, and the total road-test mileage exceeds 150,000 kilometers with no degradation found across a variety of testing scenarios.
- All-weather: in Vienna, Austria, the RS-LiDAR-M1 was tested in rain and fog under different light and wind-speed conditions. The test results show that the RS-LiDAR-M1 meets the relevant standards, and the final mass-produced RS-LiDAR-M1 will adapt to all climatic and working conditions.
- Minimal wear and tear: as a solid-state LiDAR, the RS-LiDAR-M1 has minimal wear and tear compared with moving mechanical structures, eliminating potential optoelectronic device failures caused by mechanical rotation. The solid-state design allows a reasonable internal layout, good heat dissipation, and stability, a leap in quality compared with mechanical LiDAR.
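To make the frame-rate point above concrete, here is a minimal back-of-the-envelope sketch (illustrative only, not RoboSense code) that estimates how far a target moving at constant speed travels during a single scan at 10 Hz versus 15 Hz; the shorter scan period is what reduces motion-induced point cloud distortion.

```python
# Back-of-the-envelope sketch (not RoboSense code): how far a target moves
# during one LiDAR scan at different frame rates. A shorter scan period means
# less intra-frame displacement, i.e. less point cloud distortion from motion.

def intra_frame_displacement_m(target_speed_kmh: float, frame_rate_hz: float) -> float:
    """Distance (in meters) a target travels during a single scan."""
    speed_m_s = target_speed_kmh / 3.6
    scan_period_s = 1.0 / frame_rate_hz
    return speed_m_s * scan_period_s

if __name__ == "__main__":
    for speed_kmh in (30, 60, 120):
        d10 = intra_frame_displacement_m(speed_kmh, 10.0)
        d15 = intra_frame_displacement_m(speed_kmh, 15.0)
        print(f"{speed_kmh:>4} km/h: {d10:.2f} m per scan @10 Hz vs {d15:.2f} m @15 Hz")
```

At 60 km/h, for example, a target moves roughly 1.7 m during a 10 Hz scan but only about 1.1 m during a 15 Hz scan, which is the distortion reduction the higher frame rate targets.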
The RS-LiDAR-M1Smart is a comprehensive system combining sensor hardware, an AI point cloud algorithm, and chipsets, providing customers with an end-to-end environment perception solution. RoboSense’s powerful AI perception algorithm enables the sensor to output structured, semantic-level information, with a focus on the perception of moving objects.
The RS-LiDAR-M1Smart Features:
- Adapts to complex traffic conditions.
- Supports multiple driving scenarios.
- Supports dense traffic flow, such as mixed pedestrian and vehicle traffic at intersections during peak hours.
- Comprehensive perception of a wide range of dynamic, static, and background objects.
- Achieves semantic level prediction for 3D point clouds.
- Handles the challenges caused by two-wheel vehicles (motorcycles, bicycles, etc.) and pedestrians who do not follow traffic rules.
- Resolves over-segmentation and under-segmentation through its clustering algorithm; robustness against sparse point clouds ensures the integrity of object detection.
- Outputs two redundant channels of data: the original point cloud and the object list (sketched below). The two channels are redundant so that vehicles receive a wide range of sensing results, covering dynamic and static objects both inside and outside the roadway.
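For illustration only, the sketch below shows one way a consumer of the two redundant channels might represent them in code; the class and field names are hypothetical assumptions and do not reflect RoboSense’s actual SDK or data format.

```python
# Illustrative sketch only: hypothetical container types for the two redundant
# output channels described above (raw point cloud + semantic object list).
# Class and field names are assumptions, not RoboSense's actual SDK or format.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Point:
    x: float          # meters, sensor frame
    y: float
    z: float
    intensity: float  # return strength

@dataclass
class DetectedObject:
    label: str                       # e.g. "pedestrian", "vehicle", "cyclist"
    center: Point                    # object centroid in the sensor frame
    size: Tuple[float, float, float] # (length, width, height) in meters
    velocity: Tuple[float, float, float]  # (vx, vy, vz) in m/s
    is_dynamic: bool                 # moving vs. static/background object

@dataclass
class LidarFrame:
    timestamp_us: int
    # Channel 1: the original point cloud
    point_cloud: List[Point] = field(default_factory=list)
    # Channel 2: the semantic-level object list
    object_list: List[DetectedObject] = field(default_factory=list)
```

Keeping both channels lets downstream software cross-check the semantic object list against the raw points, or fall back to the point cloud for anything the object list does not cover.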