
Are self-driving cars ready for prime time? Not hardly

Are self-driving cars ready for prime time? Torque News looks at the issue and finds that, well, maybe, sort of, they aren't, but they may be a few years from now.

With the first fatality linked to an autonomous vehicle this month, the auto industry has paused in its headlong race to see which carmaker or service would be the first on the road with a fully autonomous vehicle. The fatality caused Toyota to halt its autonomous driving tests and development, while Uber froze its self-driving car efforts in place as it helps officials figure out what happened. Other automakers halted or slowed their development programs, as well. It is, in my opinion, a needed break in the race toward self-driving cars.

First self-driving car fatality

Uber was the ride-sharing service involved in the fatality last week, when one of its vehicles, a Volvo XC90 operating in self-driving – autonomous – mode, struck a pedestrian in Tempe, Ariz. There was a "safety" driver behind the wheel who, some sources say, may or may not have been able to stop or steer the test mule in time to save the homeless person crossing the roadway. Video from two cameras on the vehicle shows a person materializing suddenly out of the dark as she walked her bike across the road, while the driver appears surprised by her sudden appearance.

Since the first self-driving car demonstrations were launched about five years ago, there has been only one mode for the auto industry and regulators alike: full speed ahead. However, in all honesty, there are three questions that have never been answered:

  1. Is autonomous technology really needed?
  2. Is autonomous technology safe?
  3. Is autonomous technology ready for prime time?

In my opinion, the answers should be prefaced with "it depends." It depends on whether people are fed up with driving and want their cars to do it for them. It also depends on whether people are ready to wait until the self-driving car concept is fully proven. And it depends on whether cars and technology are so far ahead of us that they can take over from the people behind the wheel.

Is self-driving car technology safe?

It also depends on whether self-driving car technology is safe, no matter what industry or government says about it. Finally, it depends on what you consider "prime time." When I think of prime time, it says to me that the technology under consideration has been adequately vetted, so that vehicle safety in self-driving mode is comparable to or better than safety in human-driven mode.

(Some commentators have pointed out that drivers are often responsible for pedestrian injuries and fatalities, so self-driving cars and related technology have to become real as quickly as possible. Like as not, the pedestrian would not have been killed if the sensors had been working correctly. Indeed, one story pointed out that the pedestrian should not have been killed; the sensors aboard the Uber car apparently never even detected her, though they should have. Self-driving cars won't be ready for the roadways until all sensors and detectors are as foolproof as the headlight.)

So, you might be wondering why I am nattering on about self-driving cars and technology. The answer is simple: the technology is not as safe as some folks would have you believe. For example, the Trump Administration's Department of Transportation (DOT) and its automotive regulatory arm, the National Highway Traffic Safety Administration (NHTSA), are on the way toward eliminating all controls on testing and deploying self-driving cars. They are also urging states to stay out of the way. Safety groups, on the other hand, believe NHTSA and other regulators should be taking the lead in ensuring self-driving vehicles are truly safe, and that any deployment should be delayed until the technology is proven.

Perhaps if the authorities had remained more engaged and vigilant over the last half-decade, we wouldn't be in the position we are in now: the auto industry stuttering over words it doesn't want to hear, such as "safety testing" and "extended vetting programs," while consumer groups rage against the lack of progress. And the City of Tempe wouldn't have buried a homeless person who just happened to be in the wrong place at the wrong time during an Uber test.

Auto industry pushing self-driving technology

Though the auto industry has been touting self-driving cars for at least the last five years and has been pushing to get them onto roads as quickly as possible, there is little indication that any responsible independent, industry, consumer, or regulatory body has done any meaningful testing of self-driving car technology.

Of course, manufacturers do their share of in-house testing to ensure that any technology they place in their vehicles works as it should and doesn't interfere with other systems. But no agency like NHTSA, and no responsible industry or consumer group like the Insurance Institute for Highway Safety (IIHS) or Consumer Reports, has established a comprehensive self-driving car testing program to ensure that the technology is safe. About the only known testing program, in fact, has been conducted by Google, which is certainly not a disinterested party, as it launched the march toward self-driving cars five or six years ago.

Of course, it is likely there are hundreds of pages of anecdotal and hard data stored in auto industry files about issues found during the miles driven by specific brands' test vehicles. However, those files are probably marked SECRET or EYES-ONLY. As far as I know, there have been no hard metrics that prove self-driving cars are ready for prime time. Yes, there are people out there who discuss the safety and convenience offered by self-driving cars, but when you peel away one or two layers, you find that they are likely to be industry spokespeople or representatives of parts manufacturers, the very people who are pushing this technology in the first place.

To be sure, their efforts have yielded improvements such as adaptive cruise control, automatic braking, adaptive headlights, blind-spot warnings, lane-keeping technology, and the like. Those pieces are working, quite well in fact, but they are individual parts or systems of parts, not the whole enchilada. That said, various media outlets have also been touting autonomy, and the auto industry has been beating the drums for self-driving cars as well, often with claims that leave you shaking your head. Indeed, if the auto industry had its way, self-driving vehicles would be on the road tomorrow; it wants us to believe the technology is proven. The reality, at the moment, is very much otherwise; the Tempe fatality is only the latest example. Self-driving cars and the technology behind them just aren't ready yet. There have been incidents across the country, from the Bay Area to Las Vegas and now Tempe, as well as in other cities such as Pittsburgh.

If you think that this isn’t correct, here are a couple of examples from the recent past that point to where we are.

In California, Google has been conducting long-term testing, which it is using to prove the safety and efficiency of its self-driving car concept. I would bet that if you were to ask the Alphabet, Inc. subsidiary about its self-driving development program, it would claim there had been no major issues to derail it. In fact, it would likely affirm the safety of its vehicles and its self-driving car program. But, really, were there no problems? Well, there was that incident in Mountain View in 2016.

Google test vehicle versus a bus

It seems a Lexus RX450h, equipped with Google's autonomous technology and operating in self-driving mode (with a human monitor), pulled into the path of a municipal bus. Apparently, the software directing the Lexus expected the oncoming bus to move aside, probably because the software assumed other drivers would yield to it. The approaching coach just wasn't cooperating, and the RX450h didn't understand why the bus wasn't operating by its "rules." Those rules must have indicated to the Lexus that the coach would move aside or stop, giving the autonomous vehicle the right of way. That didn't happen. Instead, the inevitable occurred: the Lexus started up even though the bus kept on coming, and it struck the bus on the right-hand front side. There were no serious injuries, though some passengers were shaken up.

There was an investigation; as far as I know, the results are not yet public. To its limited credit, Google did take partial responsibility for the crash, but that was it. It left the majority of the responsibility out there, silently blaming the bus for the whole thing. That the self-driver bore more of the blame seems borne out by Google's quick announcement that it had made some serious changes to its software, curing the problem. Google spokesmen said it would never occur again. Well, that's one cure out of a universe of potential problems, and it is nice to know they fixed that one, but what about pedestrian traffic, or a bicycle that is not operating as the self-driving rules expect? Couldn't something similar, or worse, happen again? I think the Tempe fatality answers that question.
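
Whatever the fix was, the failure mode is easy to picture in code. Here is a purely illustrative sketch (the function names and logic are hypothetical stand-ins, not Google's software) contrasting an overconfident right-of-way rule with a defensive one:

```python
# Illustrative only: an overconfident right-of-way rule of the kind the
# bus incident suggests, next to a defensive alternative. All names and
# logic here are hypothetical, not Google's actual code.

def naive_may_proceed(i_have_right_of_way: bool) -> bool:
    # Buggy assumption: having the right of way means the other
    # vehicle will actually yield it.
    return i_have_right_of_way

def defensive_may_proceed(i_have_right_of_way: bool,
                          other_vehicle_yielding: bool) -> bool:
    # Right of way is necessary but not sufficient: confirm the other
    # vehicle is actually slowing or moving aside before pulling out.
    return i_have_right_of_way and other_vehicle_yielding

# The bus scenario: the Lexus believed it had the right of way,
# but the bus was not yielding.
print(naive_may_proceed(True))             # True  -> pulls out, collision
print(defensive_may_proceed(True, False))  # False -> waits for the bus
```

The defensive version treats right of way as necessary but not sufficient; it waits for evidence that the other vehicle is actually yielding, which is what a cautious human driver does.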

Moving on to another issue, a small self-driving bus was involved in a more serious crash with a semi-trailer, which, admittedly, wasn't supposed to be where it was at the time of the accident. The police ruled the truck driver at fault. Even so, the incident illustrates the state of the art in self-driving technology.

Late last year, Las Vegas announced that it was establishing a short loop over which a self-driving shuttle would run, picking up and discharging passengers at various shopping and entertainment venues. It should have been a snap to code, too. After all, it was a fixed route with a low, known target speed. Again, though, the software made assumptions that were not true. Those assumptions seem to have included the idea that there would be no vehicles parked on or near the shuttle's route.

Two vehicles can’t occupy one spot

With a semi parked where it should not have been – on the self-driver's route, trying to occupy the same space as the autonomous vehicle – the results were predictable: there was an accident, and some people were a bit shaken up. The crash, by the way, happened because the autonomous vehicle didn't know what to do in the situation in which it found itself. To me, that sounds like another software limitation. However, there has been no release of final information.

The semi's driver admitted he was parked where he shouldn't have been, but still, the self-driving vehicle should have handled the situation. To my thinking, the programmers should have done more modeling and "what-if" testing before they let the self-driver loose on the public; a sketch of the kind of test I mean follows.
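
To make that concrete, here is a minimal sketch of such a "what-if" test, assuming a toy planner. Every name, action, and distance in it is a hypothetical illustration, not the shuttle's real software:

```python
# Hypothetical "what-if" scenario tests around a toy planner for a
# fixed-route shuttle. Every name, action, and distance here is an
# illustrative stand-in, not the shuttle's real software.

def plan_step(obstacle_on_route: bool, obstacle_distance_m: float) -> str:
    """Toy planner; a naive version might assume the route is always clear."""
    if obstacle_on_route and obstacle_distance_m < 15.0:
        # Stop well short, alert the operator, and wait rather than creep.
        return "STOP_AND_ALERT"
    return "FOLLOW_ROUTE"

def test_semi_parked_on_route():
    # The Las Vegas scenario: a truck occupying part of the shuttle's lane.
    assert plan_step(obstacle_on_route=True, obstacle_distance_m=5.0) == "STOP_AND_ALERT"

def test_clear_loop():
    # Nominal case: nothing on the route, so the shuttle proceeds.
    assert plan_step(obstacle_on_route=False, obstacle_distance_m=100.0) == "FOLLOW_ROUTE"
```

The point isn't these few lines of code; it's that a scenario like "a truck is blocking the route" is cheap to model and test long before passengers ever board.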

Do you see a pattern here? I'll grant you that my database is only two incidents deep, but it is the most detailed information I have from which to draw conclusions. There are just too many similarities between the events to ignore the fact that self-driving cars are not yet ready for public release.

The fatality in Tempe adds another layer of urgency and complexity to the entire picture. Let's look at this from another standpoint. If you are familiar with the author Isaac Asimov and his robot tales, then you know he posited three laws for robots, the first and primary of which was that a robot may not harm a human. The second and third laws backed up the first. Honestly, I think those backing quick deployment of self-driving cars should take a closer look at the author's work. Granted, it was part of a series of famed sci-fi stories, but the robotics laws still have some merit, in my opinion.

That main rule should be a vital part of any attempt to take on self-driving car technology. I know this may sound a bit foolish. However, if you think about it, and if you were to code in the idea that robots or artificial intelligence cannot harm humans and must protect them, then here is what should have happened as the Volvo test car approached a formless object in the dark: it should have assumed the form in the gloom was human, and it should have braked to a halt, shutting down.
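
In software terms, that first law amounts to a conservative default: treat anything the system cannot confidently classify as a person, and brake. Here is a minimal sketch of that policy; the class names, thresholds, and planner commands are all hypothetical illustrations, not anyone's production code:

```python
# A minimal sketch of the "assume the unknown is human" policy described
# above. Class names, thresholds, and planner commands are hypothetical
# illustrations, not anyone's production code.

from dataclasses import dataclass
from enum import Enum, auto

class ObjectClass(Enum):
    HUMAN = auto()
    VEHICLE = auto()
    DEBRIS = auto()
    UNKNOWN = auto()  # low-confidence or unclassifiable detection

@dataclass
class Detection:
    object_class: ObjectClass
    confidence: float  # classifier confidence, 0.0 to 1.0
    distance_m: float  # range to the object, in meters

CONFIDENCE_FLOOR = 0.90      # illustrative tuning value
SAFE_STOP_DISTANCE_M = 30.0  # illustrative tuning value

def plan_response(detection: Detection) -> str:
    """Return a (grossly simplified) planner command for one detection."""
    # First-law bias: anything we are not sure about is treated as human.
    effective = detection.object_class
    if detection.confidence < CONFIDENCE_FLOOR:
        effective = ObjectClass.HUMAN

    if effective in (ObjectClass.HUMAN, ObjectClass.UNKNOWN):
        # Brake to a halt; never try to thread past a possible person.
        return "FULL_BRAKE"
    if detection.distance_m < SAFE_STOP_DISTANCE_M:
        return "SLOW_AND_YIELD"
    return "PROCEED"

# The Tempe scenario as described above: a dim, hard-to-classify shape
# crossing ahead. Under this policy the car brakes instead of guessing.
print(plan_response(Detection(ObjectClass.UNKNOWN, confidence=0.35, distance_m=40.0)))
# -> FULL_BRAKE
```

The design choice worth noting is that low confidence promotes an object to "human" rather than demoting it to "ignorable"; the Tempe video suggests the effective default in that car ran the other way.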

Uber has been testing self-drivers using Volvos

The vehicle in Tempe was one of Uber's self-driving cars; the ride-sharing firm has been testing them for some months now, with Volvo supplying the cars for conversion to autonomous driving. The accident caused Uber to stop its program immediately. The most chilling part of this pedestrian accident was that the self-driving car never slowed.

More important, there was a human control driver behind the wheel, which points to another issue that could have played into this crash: distraction. Looking closely at the driver-facing video, you can see the driver concentrating on something in his lap. At the last moment, he pulls his gaze up and is shocked to see the vehicle about to hit a pedestrian. It may have been too dark on that stretch of road for him to have done much, but still, the driver should not have been doing something else; he should have been concentrating on the road ahead.

Like as not, there won't be much more forthcoming from Tempe or the feds until the crash investigation finishes up. However, things point to a very important issue facing all self-driving car developers: what happens if a major subsystem fails? Indeed, the car never slowed as it approached the night-cloaked pedestrian. It seems key sensors failed at the worst possible moment, with a pedestrian crossing a darkened roadway (radar or lidar sensors should detect pedestrians even on dark roads). That tells me self-driving car technology is not ready for roadways at all.

If a vehicle has major failures, it should warn the driver and simply stop. There is too much at stake to allow self-driving cars to continue on without their "eyes and ears" working properly. Of course, the auto industry wants its products to work correctly at all times; however, things do break, so there must be ways to keep tragedy from happening. There are just too many things that can go wrong. Whether the fatality was the result of poor maintenance or simple sensor failure – no one knows yet – there have to be failsafes ready to take over at all times to ensure safe operation.
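
What might such a failsafe look like? Below is a minimal sketch of a sensor watchdog, assuming hypothetical sensor names and thresholds. Real automotive failsafes are far more involved, but the shape is the same: detect stale or missing data, warn the human, and stop the vehicle:

```python
# A minimal sketch, with hypothetical sensor names and thresholds, of a
# watchdog that refuses to keep driving when critical sensors go silent.

import time

CRITICAL_SENSORS = ("lidar", "radar", "front_camera")
STALE_AFTER_S = 0.5  # illustrative: data older than this counts as a failure

class SensorWatchdog:
    def __init__(self) -> None:
        # Last time each critical sensor delivered a valid reading.
        self._last_ok = {name: 0.0 for name in CRITICAL_SENSORS}

    def report_ok(self, sensor: str) -> None:
        """Each sensor pipeline calls this whenever it produces valid data."""
        self._last_ok[sensor] = time.monotonic()

    def failed_sensors(self) -> list:
        now = time.monotonic()
        return [s for s, t in self._last_ok.items() if now - t > STALE_AFTER_S]

def control_step(watchdog: SensorWatchdog) -> str:
    failed = watchdog.failed_sensors()
    if failed:
        # The car's "eyes and ears" are gone: alert the human and execute
        # a controlled stop instead of continuing at speed.
        print(f"WARNING: sensor failure: {', '.join(failed)} - stopping")
        return "CONTROLLED_STOP"
    return "NORMAL_DRIVING"

# Usage: lidar and radar report, but the front camera never does, so the
# watchdog flags it and the control loop commands a stop.
wd = SensorWatchdog()
wd.report_ok("lidar")
wd.report_ok("radar")
print(control_step(wd))  # -> CONTROLLED_STOP
```

In a real system, the "controlled stop" would itself be a carefully engineered minimal-risk maneuver, but the principle stands: a car that has lost its eyes and ears should not keep driving.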

These three incidents – and there are others across the country – point out something that I think the auto industry would like us not to consider: autonomous driving is, for now, a myth. And that myth points to the main issue self-driving technology faces: it is just not ready for public roads. Too many questions remain unanswered. Until the answers are found – they will be, but it will take time – self-driving technology must remain the subject of research and development. It is far too soon to put it out there.

Sources: Torque News, Car and Driver, MSN

Photo courtesy Motor Trend