Why Can't Tesla Recognize Obstacles? An Explanation from the Perception Layer

Time: Sep 25, 2019

Author: Peng Fei

Source: Autohome

Recently, at a global smart-car frontier summit, how to ensure the safety of autonomous driving became the focus of attention for experts, scholars, and industry representatives. Although users have already enjoyed some of the convenience that autonomous driving technology brings, the real-world debate over its safety has never stopped, and the series of Tesla accidents around the world is a case in point. Among them, a recent accident in which a Tesla Model 3 with Autopilot engaged rear-ended a stationary trailer has aroused particular concern. Outside observers are puzzled: faced with such a large obstacle on the road ahead, why was the Autopilot assisted-driving system "blind"?

The key information in this article, in 60 seconds:

★ The current camera + millimeter-wave radar combination works like a frog's eye: it struggles to recognize non-standard static objects, and even for dynamic objects the recognition rate outside the vehicle is below 80%.

★ Facing a complex driving environment, the data from any single sensor cannot meet perception needs in every situation. For static-object recognition, fusing millimeter-wave radar, cameras, and LIDAR is the more promising solution.

★ While autonomous driving systems operate, the interaction between human and vehicle is critical: drivers need to strengthen their understanding of the technology to avoid accidents caused by blind faith in self-driving features.

▪ Without recognition training, the car can only panic

To the average person, a large stationary obstacle is one of the easiest things to identify. Why can't Tesla's Autopilot system recognize it? Could it be that all those sensors are effectively useless?

On this question, Li Xiang, CEO of Lixiang (Li Auto), shared his view on Weibo: "The current camera + millimeter-wave radar combination is like a frog's eye: it is acceptable for judging dynamic objects but almost helpless with non-standard static objects. Progress in vision at this level has nearly stagnated; even for dynamic objects, the recognition rate outside the vehicle is below 80%, which is nowhere near usable for real autonomous driving."

If so, what makes static objects hard for autonomous driving systems to identify, and what does this demand of onboard cameras and radar sensors? Autohome put these questions to two experts, Jianmin Gu, Chief Technology Officer of Valeo China, and Haiyi Tang, head of Continental's Advanced Driver Assistance Systems business in China, in exclusive interviews, analyzing the issues from the perception layer, that is, from the sensors' perspective.

First, what is "perception"? In the complete autonomous driving stack (perception layer, decision layer, and execution layer), no area is livelier than perception and decision-making, which is where artificial intelligence makes its presence felt. Perception is the autonomous driving system's ability to collect information about the external environment and make sense of it, the equivalent of a human driver observing the road.

At this stage, the perception layer of autonomous driving is realized through a variety of sensors, chiefly LIDAR, cameras, millimeter-wave radar, and ultrasonic sensors. Each sensor type behaves differently and has its own strengths and weaknesses, and facing a complex driving environment, the data from any single sensor cannot meet perception needs in every situation.

On the difficulty sensors have with static objects, Jianmin Gu said: "A camera needs machine-learning training to recognize objects, but static objects come in enormously varied forms, and without training samples they cannot be identified. Accidents such as rear-ending a fire truck or hitting an odd-shaped barrier were most likely caused by the absence of such samples. In addition, front-facing cameras are sensitive to weather and lighting, and with a fast-moving car they often end up capturing blurred or distorted images of objects."
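
To make the sample-coverage problem concrete, here is a minimal Python sketch (the class list, scores, and threshold are all invented for illustration; this is not Tesla's or Valeo's actual pipeline). A detector can only report classes it was trained on, so an unfamiliar static object either falls below the confidence threshold or is forced into the wrong class:

```python
# Minimal sketch of the sample-coverage problem. The class list, scores,
# and threshold are invented for illustration; a real detector outputs
# per-class confidences from a trained neural network.

CONFIDENCE_THRESHOLD = 0.6  # detections scoring below this are suppressed

def detect(per_class_scores):
    """Return the best-scoring trained class, or None if nothing clears
    the threshold; the detector cannot report a class it never learned."""
    label, score = max(per_class_scores.items(), key=lambda kv: kv[1])
    return label if score >= CONFIDENCE_THRESHOLD else None

# An odd-shaped static obstacle seen from an unusual angle: the network
# spreads low confidence across the classes it knows, so nothing is reported.
odd_static_object = {"car": 0.31, "truck": 0.42, "pedestrian": 0.05, "cyclist": 0.02}
print(detect(odd_static_object))  # -> None: effectively invisible

# A typical lead vehicle scores high on a trained class and is reported.
ordinary_car = {"car": 0.93, "truck": 0.04, "pedestrian": 0.01, "cyclist": 0.01}
print(detect(ordinary_car))       # -> car
```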

"For millimeter wave radar, on the other hand, it is mainly affected by the target's sensitivity to electromagnetic wave reflection; some rubbery static objects reflect poorly and recognition can be difficult. In addition, radar can barely distinguish between gantries, metal signs on the side of the road or stationary cars parked on the road, because the spatial resolution of radar is very poor, in the algorithm can only usually ignore the radar echoes that do not move relative to the road. Otherwise, cars would panic every time they passed a stationary object such as a road sign." Jianmin Gu added.

Haiyi Tang expressed a similar view in his interview with Autohome. In his opinion, the distribution and variety of static objects in real traffic environments are complex, and a single type of sensor can hardly achieve a high recognition rate. Radar's strength is measuring speed and distance, but it offers no advantage with small, weakly reflective static objects. The camera's strength is capturing rich environmental information and classifying targets with machine learning, but its speed and distance measurements fall short of radar's.

Tang further explained that combining radar and camera can reasonably handle the recognition of standard static obstacles. Even so, non-standard objects appearing in the lane, or small static obstacles (10-20 cm), remain a problem: at that size the radar is essentially blind, machine learning cannot identify obstacles it was never trained on, and detecting small static obstacles on the road surface places high demands on the camera's detection range, accuracy, and field of view.
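
Putting the two failure modes together, a hedged sketch of a radar + camera fusion gate (hypothetical logic, not Continental's implementation) shows the blind spot Tang describes: a non-standard static object fails both the camera's class check and the radar's motion check, so the fused system reports nothing at all.

```python
# Hypothetical fusion gate combining the two failure modes sketched above;
# not Continental's implementation.

def fused_obstacle(camera_label, radar_tracked):
    """Confirm an obstacle only if at least one sensor vouches for it."""
    if camera_label is not None:
        return True   # the camera recognized a trained class
    if radar_tracked:
        return True   # radar kept the echo (it moved relative to the road)
    return False      # untrained shape + suppressed echo: nothing reported

# A non-standard static obstacle: no training sample, motionless echo dropped.
print(fused_obstacle(camera_label=None, radar_tracked=False))  # -> False
# An ordinary slower lead car: both sensors agree.
print(fused_obstacle(camera_label="car", radar_tracked=True))  # -> True
```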

▪ Dismissed by Musk, LIDAR still has great potential

Given all this, is there no better solution at present? What do Continental and Valeo offer for recognizing static objects in self-driving cars?

Jianmin Gu said: "Valeo's ScaLa laser scanner (LIDAR) can solve most of these problems, detecting stationary or moving objects very well, day or night."

In fact, the first-generation Valeo ScaLa entered mass production on the new Audi A8 back in November 2017, and according to Jianmin Gu, the second generation is scheduled for mass production in early 2020. For low-speed automatic parking, Valeo uses camera-based computer vision for three-dimensional target detection, fused with other sensors such as ultrasonic sensors or millimeter-wave radar, and this has already shipped on mass-produced models with automatic parking.

At the mention of LIDAR, the industry will surely recall Musk publicly disparaging it this April, bluntly calling LIDAR a "fool's errand" and declaring that anyone relying on it was "doomed." The remark laid bare the technical divide between the LIDAR camp and the camera camp.

On this, one industry analyst commented: "Once you take these technologies from theory into reality, many unknowns cannot be avoided. In theory it may be possible to collect the data with cameras alone, but to be fully confident the system is correct, it is best to bring in other sensing to assist, such as LIDAR."

Viewed objectively, Musk's commitment to an autonomous driving system built from cameras + millimeter-wave radar + an AI chip has its own rationale. In an earlier Autohome interview, a representative of LIDAR maker Velodyne told the editor that the company could understand Tesla's position: as a production carmaker selling "autonomous driving" as its main selling point, Tesla has many cost factors to weigh.

"Tesla CEO Musk

However, autonomous driving concerns vehicle safety and human life. To detect all kinds of static objects with cameras, one would have to build and continuously maintain a vast database of sample features and ensure it covers every target to be identified, which is clearly beyond reach for now. LIDAR, by contrast, does not depend on ambient light: it senses directly and images in three dimensions, making static-object recognition more accurate and reliable. Its cost is still high, but market prices will gradually fall. Fusing LIDAR with radar and cameras can better identify the obstacles Tesla "cannot see" and improve driving safety.
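
Extending the hypothetical fusion gate above with a lidar vote (again purely illustrative, with an invented point-count threshold) shows why three-sensor fusion helps: lidar measures geometry directly, so it can flag the obstacle even when the camera lacks a training sample and the radar echo has been suppressed.

```python
# Extending the hypothetical gate with a lidar geometry check. Lidar needs
# no class label: a dense point cluster inside the driving corridor is
# itself evidence of a physical obstacle, day or night.

def fused_obstacle_with_lidar(camera_label, radar_tracked, lidar_points_in_path):
    if camera_label is not None or radar_tracked:
        return True
    # A purely geometric test, independent of training data and of motion.
    return lidar_points_in_path > 50  # invented cluster-size threshold

# The same non-standard static obstacle as before, now with a lidar return:
print(fused_obstacle_with_lidar(None, False, lidar_points_in_path=400))  # -> True
```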

Looking further ahead, consumers may then ask: if a car is fitted with more LIDARs, millimeter-wave radars, and cameras, will that be enough to achieve driverless operation?

Unfortunately, the answer is still no, because sensors cannot replace the brain: perception cannot replace cognition. We should recognize that no sensor can guarantee completely reliable information in every situation. Put simply, if a child told you he had a superpower and could sense everything within 200 meters, would you call him a seasoned driver? No. The child has never driven a car, and beyond letting him see extra clearly, that superpower teaches him nothing about how to drive.

▪ Assisted driving ≠ autonomous driving: don't gamble with your life

While self-driving technology remains imperfect, how people interact with self-driving cars is extremely important. Drivers need to strengthen their understanding of the technology so that accidents are not caused by blind faith in self-driving features.

Scenes of Tesla owners falling asleep at the wheel are disturbing to watch. Although Tesla has repeatedly reiterated that drivers must keep their hands on the steering wheel and be ready to take over at any time, some owners still insist on doing the opposite, unaware that their car may be "blind" to the obstacles ahead.

Drivers need to be clear about the levels: with an L2 driver-assistance system, the driver must stay attentive, monitor the environment, and control the vehicle together with the system; an L3 self-driving car requires the driver to take over whenever it meets a situation it cannot handle; only in L4 and L5 vehicles can drivers truly free their hands. Given current technology and regulations, however, L4 and above are still a long way from mass production.

Therefore, when drivers first encounter the assistance systems on self-driving-capable cars, an insufficient understanding of "self-driving" can easily lead to trouble. In fact, vehicles with driving-assistance systems generally present important information to drivers in various forms: how to activate the system, how to take over, how warnings are issued, and what they mean. This information lets drivers understand "autonomous driving" quickly and intuitively.

To put it bluntly, you can think of today's vehicles with driving-assistance systems as elementary school students doing homework: on the harder problems, they still need a teacher at their side.

Finally, some friendly advice for users: under current technology and regulations, when assisted driving is enabled, do not take your hands off the steering wheel, and be ready to take over the vehicle at any time. Stay alert to conditions on the road; in strong light or when driving into glare, keep firm control of the steering wheel and speed; when using lane keeping at highway speeds, watch the gap to the car ahead and keep your foot over the brake pedal; maintain a reasonable following distance; and learn from the many accident cases already on record.

Editor's summary:

At present, autonomous driving worldwide is still at the stage of breaking through from L3 to L4, and the autonomous driving functions fitted to today's vehicles can only play an assisting role. Facing a new technology and a new application, users' understanding may not be on the same level as that of the manufacturers and technologists, and users are prone to overestimating what the systems can do. Manufacturers should therefore be all the more cautious and responsible for user safety. At the same time, consumers should bring both emotional enthusiasm and rational judgment to autonomous driving; that is the mindset worth promoting. (Article by Peng Fei, Autohome)