What are a few deaths if Google & Tesla can make more money?

It is important to determine whether Joshua Brown was watching Harry Potter on a DVD player when he drove into history May 7 as the first motorist to get killed while his Tesla was driving itself.
Tesla spokespeople — of course — are tripping over each other to stress that Autopilot was never intended to let drivers ignore what was going on.
But before we explore that line of defense, let’s recap the accident.
The National Highway Traffic Safety Administration reports that the 40-year-old Brown was traveling down a Florida highway on Autopilot when a semi-truck up ahead made a left-hand turn. Instead of slowing down, the Tesla kept going. In fact, the truck driver said the car was moving at a high rate of speed and switching lanes, leading him to believe the driver might have been suffering a heart attack.
The Tesla Model S ended up driving under the trailer, ripping the top of the car off with such force that it caused the trailer to rock violently. When the truck driver reached the wreckage, he heard a “Harry Potter” movie playing. A DVD player was found in the car.
The Tesla’s camera system could not distinguish the white trailer from the bright sky behind it.
As much as you might have reservations about how Tesla’s success is built on tax credits before and after the sale of its cars, and whether that is a sustainable business model given that tax credits aren’t forever, it is kind of hard to blame Tesla for this one.
But then again I am not a personal injury lawyer, an insurance company, or a grieving relative of the deceased.
Tesla does indeed warn people not to rely 100 percent on Autopilot.
Advocates of autonomous cars love to compare them to commercial jetliners, which have used autopilot systems for years. But the rules are clear: There has to be someone at the controls at all times in case something goes wrong. And autopilot is never used on takeoffs and landings, because so many variables can go south quickly that human judgment is needed to overcome computer analytics that rely not on experience but on programming.
The likes of Google and Tesla have been slamming government agencies for years, arguing that their desire — or, more accurately, their authority — to regulate for the public’s general safety is holding back the deployment of self-driving cars. In the same breath they want to be held harmless if something goes wrong, meaning when a vehicle equipped with their system gets in an accident.
It is clear human error — or human smugness — contributed to the May 7 crash. The driver — if the truck driver’s observations are correct — obviously wasn’t paying attention to what his 4,600-pound weapon was barreling toward.
But then again you say auto pilot and what do some people think?
Back in the 1970s an Iranian college student’s attorney sued — unsuccessfully I might add — a car dealership after the student got into an accident on a Southern California freeway after he put his camper van on cruise control and went into the back to retrieve an item.
His attorney tried to argue the dealership should have known the language barrier would lead the Iranian college student to believe cruise control meant the van would drive on its own.
In the Tesla accident, the driver was no dummy, nor did he have comprehension issues. He was an ex-Navy SEAL who owned his own tech company. He also apparently liked speed, collecting eight speeding tickets over six years.
Technology is not 100 percent foolproof, especially when it comes to the endless variables you can encounter driving anywhere in the United States.
The regulatory slowdown does take that into account. But the real issue driving regulation of driverless vehicles is who assumes responsibility when something goes wrong — the manufacturer or the driver. Google et al. want to be held harmless. It’s new technology, right? Why should they be slowed down by product liability?
But here’s the problem: Even if the buyer of a Tesla signs an ironclad legal document absolving the company of any responsibility in an accident, what about anyone the driver may hit while on Autopilot?
We are told autopilots don’t make mistakes. Tell that to the family and friends of Joshua Brown.
You won’t find any disagreement from this corner that government regulations can be burdensome as well as slow down and even kill innovation. Many safety experts believe driverless technology can save lives. That of course assumes the driver is still engaged in some manner.
That said, the argument by the tech sector and the politicians whose campaigns they fuel that an exception should be made for emerging technology is a throwback to the dark days of the Industrial Revolution.
Emerging technologies were improving standards of living for many, but at the same time the frequent industrial deaths and accidents that maimed workers for life were considered unavoidable collateral damage in the drive for the rich to get richer.
And let’s make no mistake about this. Google, Tesla and all of their kissing cousins are not in this for altruistic reasons. They are no different than old school companies. It’s about the money.
Being held liable when your products might have played a role in causing a death or serious injury impacts the bottom line and reduces the number of mansions and $300,000 sports cars that chief executive officers and their entourages can buy.

This column is the opinion of executive editor Dennis Wyatt and does not necessarily represent the opinion of The Bulletin or Morris Newspaper Corp. of CA. He can be contacted at or 209.249.3519.