According to The New York Times, auto safety regulators have determined the car manufacturer does not need to issue a recall of the model involved; the vehicle was in computer-assisted mode at the time of the collision.
The announcement was viewed as a significant victory for the manufacturer, although it's not necessarily the last word on whether the vehicles may be defective. Certainly, it could be a setback for individuals with pending litigation, but expert witnesses could reach different conclusions about whether the vehicle was safe and whether a defect existed. It's also possible that even if there was no defect in design, there may have been a defect in manufacturing, which could have affected only the vehicle involved or a small number of vehicles produced at the same time.
The crash that occurred last May garnered international attention and had the potential to sideline the manufacturer's progress toward producing fully autonomous vehicles.
It should also be noted that regulators were careful to say that advanced driver-assistance systems, such as the one involved in this situation, can't be trusted to react properly in every situation that may arise on the roads. What's more, car manufacturers need to be perfectly clear about how these systems operate, what their weaknesses are, and what drivers should expect.
Our Charlotte car accident lawyers know that a failure on this front could be a basis for a claim of a breach of express or implied warranty by the manufacturer. It could also be a basis for a claim of failure to warn. Both of these are product liability claims that seek to hold a manufacturer liable when a product (in this case, a vehicle) fails to perform as promised or reasonably expected.
A representative of the National Highway Traffic Safety Administration told the Times there are a number of scenarios wherein the automatic emergency braking systems on these so-called “self-driving vehicles” aren’t sufficient to protect the vehicle occupants.
That was the case for the 40-year-old Ohio man killed in Florida while his vehicle was in this automatic mode on a state highway. He died when his car crashed into a tractor-trailer; it was later determined that the camera on the decedent's vehicle failed to detect the bright white truck against the bright sky.
The software in the vehicles, labeled by the company as "Autopilot," has proven effective in preventing rear-end crashes and some other potentially perilous situations. However, when another vehicle crosses the car's path (as in the fatal Florida accident last spring), these vehicles are simply not equipped to handle the situation. The NHTSA spokesman stated that such cases are "beyond the performance capabilities" of these systems.
At all times, even when a vehicle is in Autopilot mode, drivers have to remain fully engaged.
The Autopilot system was first introduced in October 2015 and utilizes a combination of radar and cameras to scan the road for other vehicles and obstacles. It has the ability to accelerate, brake, and even pass other cars automatically by tracking lines on the road and staying within those lanes. However, it isn't truly an "autopilot" system, as the name might suggest.
Contact the Carolina injury lawyers at the Lee Law Offices by calling 800-887-1965.
Tesla's Self-Driving System Cleared in Deadly Crash, Jan. 19, 2017, By Neal E. Boudette, The New York Times
More Blog Entries:
Car Accident Lawsuit Involves Sheriff, Qualified Immunity, Jan. 27, 2017, Charlotte Car Accident Lawyers