The path toward self-driving cars has had more bumps than the average Chicago roadway. The latest reports on Uber’s self-driving program revealed earlier problems that preceded a fatal accident. Uber isn’t the only company pushing for autonomous vehicles; Lyft, Tesla, and even Google have been testing their own. While each company attempts to perfect the technology, many keep running into similar issues with similar results. The challenges include getting autonomous vehicles to react appropriately to the situations drivers face every day and deciding who bears liability.
There have been a number of controversies surrounding autonomous vehicles, as well as features that make conventional cars nearly fully autonomous. From Tesla’s Autopilot issues to Google’s interactions with local communities, self-driving capabilities have drawn much attention. Unfortunately, much of that attention is focused not on what self-driving vehicles could do for us, but on what they have done so far. In the Uber case mentioned earlier, the results were fatal, and who is liable remains unclear.
One of the biggest challenges for autonomous vehicles is something you wouldn’t expect to be a problem. In the case of the self-driving Uber, the vehicle did not recognize a pedestrian crossing the street until just before impact. Investigators found that the person was jaywalking rather than using the crosswalk, and that the software used to detect objects was not programmed to recognize jaywalkers. As an added precaution, Uber places “drivers” behind the wheel for emergency situations, who are required to remain vigilant. The person behind the wheel in this particular case was not vigilant, sparking the question of who is at fault.
Who’s At Fault
Another challenge facing autonomous vehicle technology has been figuring out who is liable. In the Uber case, the state prosecutor declined to press charges against Uber, and the police are looking into charging the “driver” instead. This opens up a few possibilities while sparking new questions. If the company’s technology could not recognize or react to a specific situation, is the company not at fault? Then again, the person behind the wheel, whose task was to remain vigilant, was distracted instead. But if the vehicle drives itself, isn’t that an open invitation to be negligent, since the “driver” is not driving? This case will bring more unwanted attention not only to Uber but to self-driving vehicles in general.
Although the challenges of programming vehicles to react as human drivers do, and of deciding who is at fault, sound like insurmountable barriers, companies push forward. Lyft and Ford are both building autonomous vehicles or partnering with companies that do. It is safe to say that most major transportation and tech companies are working on something, which means we can expect some great technology down the road. Still, even with some very positive feedback from users, the technology has major liability issues.
As stated before, the challenges go beyond recognizing and reacting to situations drivers regularly face. The question of liability will plague autonomous vehicles until real decisions are made. This brings auto insurance into the fold, where companies will have a more difficult time determining who is liable for an accident. Many technicalities will need to be re-examined, and new legislation may be required. What happens when two self-driving cars hit each other, or when a vehicle’s software makes a mistake? Would you have to prove to your auto insurance company that you were being vigilant in order to avoid liability? Too many questions remain, which likely means mass production of self-driving cars won’t happen for a while.