By: Marta Robertson
Image recognition will make our cars safer, more efficient, and more reliable. Learn how image recognition technology is improving.
The image of a self-driving car has been prevalent in sci-fi movies for the last several decades, but reality is only just starting to catch up. There are currently prototypes from Google, Ford, Tesla, General Motors, and Apple, among others. Companies have invested heavily in autonomous-vehicle technology; Uber's self-driving unit alone has been valued at $7.25 billion.
The success of the product is highly dependent on the automation level achieved. These are the five widely-accepted levels:
- Driver assistance: the car offers individual safety features, such as collision warning and blind-spot detection, some of which have now become mandatory.
- Partial automation: the car handles steering and speed together, for example through lane-keeping and adaptive cruise control, while the driver stays fully engaged.
- Conditional automation: the car drives itself in defined conditions, giving the driver a supervisory role while staying ready to take control at all times.
- High automation: the car can handle entire driving tasks, including self-parking and traffic-jam situations, without intervention within its operating domain.
- Full automation: the driver is no longer needed; vehicles handle every situation and communicate with each other on their own.
It follows that moving from each step to the next requires substantial innovation in sensing and control systems.
Some of these cars rely on LiDAR (Light Detection and Ranging), a laser-based technology that builds a 3D map of the environment in a way similar to sonar. It can detect objects, slope changes, street furniture, and more. However, it has no predictive abilities, and it lags slightly because the emitted light must travel back to the receiver before the newly created data points can be evaluated.
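The ranging principle behind LiDAR can be reduced to a single time-of-flight formula: the distance to an object is half the round-trip time of the pulse multiplied by the speed of light. A minimal sketch, with illustrative values:

```python
# Minimal sketch of LiDAR time-of-flight ranging (values are illustrative).
# A pulse is emitted, reflects off an object, and returns; the round-trip
# time t gives the distance as d = c * t / 2.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A return after 200 nanoseconds corresponds to an object roughly 30 m away.
print(round(distance_from_round_trip(200e-9), 1))  # 30.0
```

The halving is the key detail: the pulse covers the distance twice, out and back, which is also why each measurement costs a full round trip and contributes to the lag mentioned above.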
To solve this problem, Elon Musk suggests focusing more on cameras and AI, an idea also picked up by Apple. This means that there will be more pressure on improving the image recognition aspect of autonomous cars.
Computer Vision for Automotive
What if we could have a second pair of eyes to help compensate for the driver's mistakes? This is possible through computer vision (CV), a set of algorithms that strive to mimic human understanding of the surroundings and that enable image-recognition applications in many areas.
As the first step, the CV processing unit identifies the objects with a machine learning algorithm (typically a convolutional network), which has been trained on millions of images from the real-life environment. At this point, the computer assigns tags to each object, like “a car,” “a pedestrian,” “a traffic light,” “street furniture,” or “a cat” and determines their geometric boundaries.
One complication is that a classification network assigns a single label to a whole image. This is solved by moving a sliding window over the image, splitting it into a grid of smaller patches; each grid cell then receives a score for the object it contains.
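The grid-scoring step described above can be sketched in a few lines. Here `classify_cell` is a hypothetical stand-in for the trained convolutional network, and the toy "classifier" simply scores a patch by its mean brightness:

```python
# Sketch of the sliding-window idea: split the image into a grid and score
# each cell independently with a single-object classifier. `classify_cell`
# is a hypothetical stand-in for a trained convolutional network.

from typing import Callable, List

def grid_scores(image: List[List[int]], cell: int,
                classify_cell: Callable[[List[List[int]]], float]) -> List[List[float]]:
    """Return one classifier score per grid cell."""
    rows, cols = len(image), len(image[0])
    scores = []
    for r in range(0, rows, cell):
        row_scores = []
        for c in range(0, cols, cell):
            patch = [line[c:c + cell] for line in image[r:r + cell]]
            row_scores.append(classify_cell(patch))
        scores.append(row_scores)
    return scores

# Toy usage: the "classifier" scores a patch by its mean pixel value.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]
mean = lambda patch: sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
print(grid_scores(img, 2, mean))  # [[0.0, 9.0]]
```

In a real pipeline the patches would be tensors and the classifier a network, but the control flow, one score per cell, is the same.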
The next step is making predictions about the previously identified objects. For example, are the nearby cars continuing their trajectory at a safe distance, or are they moving dangerously close? Are the pedestrians on the sidewalk or crossing the street? This is done through object localization. The difficulty here is that the same object may span multiple grid cells, a challenge solved by merging adjacent cells with a high probability of containing that object into a single detection.
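The merging step above can be sketched as a flood fill over the grid: cells whose probability exceeds a threshold and that touch each other are grouped into one detection, so an object split across several cells is counted once. All values here are illustrative:

```python
# Sketch of merging adjacent high-probability grid cells into one detection,
# so an object split across several cells is counted only once.

def merge_detections(probs, threshold=0.5):
    """Group adjacent above-threshold cells; return one bounding box
    (min_row, min_col, max_row, max_col) per detected object."""
    rows, cols = len(probs), len(probs[0])
    seen, boxes = set(), []
    for r in range(rows):
        for c in range(cols):
            if probs[r][c] < threshold or (r, c) in seen:
                continue
            # Flood fill over 4-connected above-threshold neighbours.
            stack, component = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                component.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and probs[ny][nx] >= threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            ys = [y for y, _ in component]
            xs = [x for _, x in component]
            boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# One "car" spread over two adjacent cells becomes a single box.
grid = [[0.1, 0.9, 0.8],
        [0.2, 0.1, 0.1]]
print(merge_detections(grid))  # [(0, 1, 0, 2)]
```

Production detectors such as YOLO-style networks use a learned variant of this idea (per-cell box regression followed by non-maximum suppression), but the intuition is the same.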
Until the industry gets to the fully autonomous vehicle stage, there are important safety measures to be implemented along the way to prevent crashes with the help of computer vision.
Lane departure warning (LDW) systems detect when a vehicle drifts out of its lane without proper preparation (signalling the turn or ensuring the road is clear). They can trigger a warning for the driver, or even activate automatic braking to help avoid a collision, and they are especially helpful when the driver gets distracted.
A CV system can also monitor the driver's reactions and biometrics (gaze, pulse, etc.) to ensure there is no risk of falling asleep at the wheel, and it can prevent accidents by immobilizing the car if the driver appears to be under the influence.
CV can also help prevent accidents involving pedestrians. When the car identifies a pedestrian, the CV system keeps monitoring their behavior, triggering a warning signal for the driver if the pedestrian does anything unsafe, like crossing the street at a red light or where crossing is not permitted.
A CV system mounted on a car also enhances night vision, helping the driver avoid imminent danger in the blink of an eye by letting them see as well at night as during the day.
Preventing accidents is the top priority of both self-driving cars and the assistance systems found in more conventional cars, but there is another goal: efficiency.
When the majority of vehicles become semi-autonomous or entirely autonomous and capable of communicating with each other, route optimization and better resource allocation will hopefully alleviate urban traffic. For example, by detecting a hazard and transmitting that information to other vehicles, traffic jams caused by accidents could be greatly reduced.
Fuel efficiency is also on the short list of objectives, although the efficiency in question will most likely come from electric motors rather than fossil fuels. Here, image recognition can map the terrain ahead and help the computer decide on the best power strategy, particularly in the case of hills and valleys.
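As a toy illustration of terrain-aware power planning: given the road grade ahead (as estimated from a vision-based terrain map), the controller can back off the motor before a descent and add power before a climb. The grade thresholds and strategy names below are made up for the example:

```python
# Illustrative sketch of terrain-aware power planning: map the upcoming
# road grade (from a vision-based terrain map) to a coarse power strategy.
# Thresholds and labels are illustrative only.

def power_hint(grade_percent: float) -> str:
    """Map the upcoming road grade to a coarse power strategy."""
    if grade_percent <= -2.0:
        return "coast/regen"   # downhill: recover energy instead of braking
    if grade_percent >= 2.0:
        return "boost"         # uphill: add power before losing momentum
    return "steady"

profile = [0.5, 3.0, -4.0]  # grades of the next three road segments
print([power_hint(g) for g in profile])  # ['steady', 'boost', 'coast/regen']
```

A real energy-management controller would optimize over the whole elevation profile rather than segment by segment, but the input, terrain perceived ahead of the vehicle, is the same.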
The ultimate goal of self-driving cars is not only the driver's comfort but also greater road safety and fewer traffic jams. We live in a world with an average of 3,287 deaths a day in car crashes, about 1,000 of them people under 24. Any technology that can cut these numbers down is worth investing in.
About the author
Marta Robertson has over 7 years of IT experience and technical proficiency as a data analyst in ETL, SQL coding, data modeling, and data warehousing, covering business requirements analysis, application design, development, testing, documentation, and reporting, with full-lifecycle implementation of data warehouses and data marts across various industries.
Featured image via Pixabay.