Can Self-Driving Cars Differentiate Between an Object and a Projected Image?


Self-driving vehicle technology has grown by leaps and bounds in recent years. However, like any new technology, it has flaws, and some recently discovered ones have people concerned for their safety. Researchers from Ben-Gurion University in Israel found that semi-autonomous and fully autonomous vehicles could be tricked into stopping by projecting an image of an object or a pedestrian onto the road. The vehicles detected the projection as an actual person or object rather than recognizing it as a "phantom" image. This fuels one of the main concerns about self-driving cars: the possibility that people could hack into the system and take control of the vehicle.

According to the lead author of the study, the companies developing these self-driving cars are not paying enough attention to these technological issues. The fact that these vehicles cannot distinguish between a real and a fake object is a fundamental flaw, not a glitch or a coding error. A similar incident was reported by researchers from the University of South Carolina, who were able to trick a self-driving Tesla into perceiving an object that was actually a projected image.

In these tests, the autonomous vehicles could not tell when something was only a 2D image. In response, the researchers used a neural network to develop a system capable of differentiating between a projected 2D image and an actual person or object, such as another vehicle, a stop sign, or a telephone pole.
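The article does not detail how such a system works, but one intuition behind distinguishing a projection from a real object is that a 2D image projected onto the road lies flat, while a real pedestrian or vehicle protrudes from the roadway and shows depth variation. As a loose illustration only (this is a toy heuristic, not the researchers' neural-network system; the function name, threshold, and sample data are invented), one such cue could be checked like this:

```python
import numpy as np

def looks_flat(depth_patch: np.ndarray, std_threshold: float = 0.05) -> bool:
    """Toy heuristic: a 2D projection on the road surface shows almost no
    depth variation across its bounding box, whereas a real object rises
    off the roadway. Illustrative only; the threshold is invented."""
    return float(np.std(depth_patch)) < std_threshold

# Simulated depth readings (in meters) for a detected region:
# a "real" object's depth varies as the object rises off the road...
real_object = np.linspace(5.0, 8.0, 100).reshape(10, 10)
# ...while a projected "phantom" is flat, so depth is nearly uniform.
phantom = np.full((10, 10), 6.0) + np.random.default_rng(0).normal(0, 0.001, (10, 10))

print(looks_flat(real_object))  # False -> treat as a real obstacle
print(looks_flat(phantom))      # True  -> likely a projection
```

A production system would combine many such cues (lighting, context, surface texture, depth) inside a trained network rather than relying on a single hand-set threshold.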

Impact of Driverless Cars Being Hacked

There are a number of reasons why researchers are concerned about hackers accessing self-driving cars. Even low-tech hacking that affects only a small number of vehicles can have a major impact on a city's traffic, creating hazardous driving conditions. People in the vicinity of the hacked vehicles could be in immediate danger, and the ripple effect could reach drivers well beyond that area, causing major traffic problems.

Expecting car manufacturers to design self-driving cars that are 100 percent protected against hackers may be a tall order, but the vehicles should at least be able to tell the difference between a real object and a projected image. Approximately 40,000 people are killed in accidents involving human-driven cars each year. It is possible that autonomous vehicles may prove safer, and some experts believe that it is more difficult to hack into an autonomous car than a driver-operated one.

Baltimore Car Accident Lawyers at LeViness, Tolzman & Hamilton Represent Victims of Self-Driving Car Accidents

If you were seriously injured in a car accident involving a self-driving car, you are urged to contact the Baltimore car accident lawyers at LeViness, Tolzman & Hamilton as soon as possible. Car manufacturers have a responsibility to ensure that their vehicles' technology is safe and effective and can detect whether an object is real or projected. We will determine who is responsible for your injuries and secure the maximum financial compensation you deserve. To schedule a free consultation, call us today at 800-547-4LAW (4529) or contact us online.

Our offices are located in Baltimore, Columbia, Glen Burnie, and Prince George’s County, allowing us to represent victims in Maryland, including those in Anne Arundel County, Baltimore County, Carroll County, Harford County, Howard County, Montgomery County, Maryland’s Western Counties, Prince George’s County, Queen Anne’s County, Southern Maryland, and the Eastern Shore, as well as the communities of Catonsville, Essex, Halethorpe, Middle River, Rosedale, Gwynn Oak, Brooklandville, Dundalk, Pikesville, Nottingham, Windsor Mill, Lutherville, Timonium, Sparrows Point, Ridgewood, and Elkridge.