Asimov’s Three Laws Of Robotics And AI Autonomous Cars 

Since life-or-death is on the line, it is conceivable that we should consider applying Asimov's three laws of robotics to self-driving cars. (Credit: Getty Images)

By Lance Eliot, the AI Trends Insider 

Advances in Artificial Intelligence (AI) will continue to spur widespread adoption of robots into our everyday lives. Robots that once seemed so expensive that they could only be afforded for heavy-duty manufacturing purposes have gradually come down in cost and equally been reduced in size. You can consider that Roomba vacuum cleaner in your home to be a type of robot, though we still do not have the ever-promised home butler robot that was supposed to take care of our daily routine chores.   

Perhaps one of the most well-known facets about robots is the legendary set of three rules proffered by writer Isaac Asimov. The rules debuted in his 1942 science fiction short story "Runaround" and have seemingly been unstoppable in terms of ongoing interest and embrace.

Here are the three rules that he cleverly devised: 

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

When you read Asimov's remarks about robots, you might want to substitute the overarching moniker of AI for the word "robot." I say this because you are otherwise likely to narrowly interpret his three rules as though they apply only to a robot that happens to look like us, conventionally having legs, arms, a head, a body, and so on.

Not all robots are necessarily so arranged.   

Some of the latest robots look like animals. Perhaps you’ve seen the popular online videos of robots that are four-legged and appear to be a dog or a similar kind of creature. There are even robots that resemble insects. They look kind of creepy but nonetheless are important as a means to figure out how we might utilize robotics in all manner of possibilities.   

A robot doesn’t have to be biologically inspired. A robotic vacuum cleaner does not particularly look like any animal or insect. You can expect that we will have all sorts of robots that look quite unusual and do not appear to be based solely on any living organism.   

Some robots are right in front of our eyes, and yet we do not think of them as robots. One such example is the advent of AI-based true self-driving cars. 

A car that is being driven by an AI system can be said to be a type of robot. The reason you might not think of a self-driving car as a robot is that it does not have that walking-talking robot sitting in the driver’s seat. Instead, the computer system hidden in the underbody or trunk of the car is doing the driving. This seems to escape our attention and thus the vehicle doesn’t readily appear to be a kind of robot, though indeed it is. 

In case you are wondering, there are encouraging efforts underway to create walking-talking robots that would be able to drive a car. Imagine how that would shake up our world.   

Right now, the crafting of a self-driving car involves modifying the car to be self-driving. If we had robots that could walk around, sit down in a car, and drive the vehicle, this would mean that all existing cars could essentially be considered self-driving cars (meaning that they could be driven by such robots, rather than having a human drive the car). Instead of gradually junking conventional cars for the arrival of self-driving cars, there would be no need to devise a wholly-contained self-driving car, and we would rely upon those meandering robots to be our drivers. 

At this time, the fastest or soonest path to having self-driving cars is the build-it-into-the-vehicle approach. Some believe there is a bitter irony in this approach. They contend that these emergent self-driving cars are inevitably going to be usurped by those walking-talking robots. In that sense, the self-driving car of today will become outdated and outmoded, giving way to once again having conventional driving controls so that the vehicle can be driven either by a human or by a driving robot.

As an added twist, there are some who hope we will be so far along in adopting self-driving cars that we will never use independent robots to drive our cars.

Here’s the logic. 

If a robot driver is sitting at the wheel, this suggests that the conventional driving controls are still going to be available inside a car. This also implies that humans will still be able to drive a car, whenever they wish to do so. But the belief is that the AI driving systems, whether built-in or as part of a walking-talking robot, will be better drivers and reduce the incidence of drunk driving and other adverse driving behaviors. In short, a true self-driving car will not have any driving controls, precluding a walking-talking robot from driving (presumably) and precluding (thankfully, some assert) a human from driving.

This leads to the thinking that maybe the world will have completely switched to true self-driving cars and though a walking-talking driving robot might become feasible, things will be so far along that no one will turn back the clock and reintroduce conventional cars. 

That seems somewhat like wishful thinking. One way or another, the central goal seems to be to take the human driver out of the equation. This puts a self-driving car—one that has the AI driving system built-in or a robot driver—into a position to decide life or death.   

If that seems rather doom-and-gloom, consider the moment you put your beloved teenaged newbie driver at the driving controls. The specter of life-or-death suddenly becomes quite pronounced. The teenaged driver usually senses this duty as well.

Since life and death are on the line, here is today's intriguing question: Do Asimov's three rules of robotics apply to AI-based true self-driving cars, and if so, what should be done about it?

Let’s unpack the matter and see. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Asimov’s Laws 

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.

Let's briefly take a look at each of Asimov's three rules and see how they might apply to true self-driving cars. First, there is the rule that a robot, or in this case an AI driving system, shall not injure a human, whether by overt action or by inaction.

That’s a tall order when sitting at the wheel of a car. 

A self-driving car is driving down a street and keenly sensing the surroundings. Unbeknownst to the AI driving system, a small child is standing between two parked cars, hidden from view and hidden from the sensory range and depth of the self-driving car. The AI is driving at the posted speed limit. All of a sudden, the child steps out into the street.   

Some people assume that a self-driving car will never run into anyone since the AI has those state-of-the-art sensory capabilities and won't be a drunk driver. Unfortunately, in the kind of scenario that I've just posited, the self-driving car is going to ram into that child. I say this because the laws of physics are paramount over any dreamy notions of what an AI driving system can do.

If the child has appeared seemingly out of nowhere and is now, say, 15 feet from the moving car, and the self-driving car is going at 30 miles per hour, the stopping distance is around 50 to 75 feet, which means that the child could be struck. No two ways about that.
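To make the physics concrete, here is a minimal back-of-the-envelope sketch of that stopping-distance claim. The reaction latency and deceleration figures are assumptions chosen for illustration, not measured values for any particular vehicle or AI driving system:

```python
# Illustrative stopping-distance arithmetic for the scenario above.
# The latency and deceleration values are assumed for this sketch.

MPH_TO_FPS = 5280 / 3600  # miles per hour to feet per second

def stopping_distance_ft(speed_mph, reaction_s=0.5, decel_g=0.7):
    """Total distance to stop: distance covered while the system
    detects and reacts, plus the braking distance v^2 / (2a)."""
    v = speed_mph * MPH_TO_FPS           # speed in ft/s
    reaction_dist = v * reaction_s       # travel before brakes engage
    decel = decel_g * 32.17              # deceleration in ft/s^2
    braking_dist = v ** 2 / (2 * decel)  # kinematics: v^2 = 2 a d
    return reaction_dist + braking_dist

if __name__ == "__main__":
    d = stopping_distance_ft(30)
    print(f"Stopping from 30 mph: about {d:.0f} feet")  # roughly 65 feet
    # A child only 15 feet ahead cannot be avoided by braking alone.
```

Even with generous assumptions, the total lands squarely in that 50-to-75-foot range, far beyond the 15 feet available.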

And this would mean that the AI driving system has just violated Asimov’s first rule. 

The AI has injured a human being. Keep in mind that I'm stipulating that the AI would indeed invoke the brakes of the self-driving car and do whatever it could to avoid ramming the child. Nonetheless, there is insufficient time and distance for the AI to avoid the collision.

Now that we've shown the impossibility of always strictly abiding by Asimov's first rule, you could at least argue that the AI driving system attempted to obey the rule. By having used the brakes, it would seem that the AI driving system tried to keep from hitting the child, plus the impact might be somewhat less severe if the vehicle was nearly stopped at the time of impact.

What about the other part of the first rule that states there should be no inaction that could lead to the harm of a human? 

One supposes that if the self-driving car did not try to stop, this kind of inaction might fall within that realm, namely once again being unsuccessful at observing the rule. We can add a twist to this. Suppose the AI driving system was able to swerve the car, doing so sufficiently to avoid striking the child, but meanwhile, the self-driving car goes smack dab into a redwood tree. There is a passenger inside the self-driving car and this person gets whiplash due to the crash. 

Okay, the child on the street was saved, but the passenger inside the self-driving car is now injured. You can ponder whether the action to save the child was worth the resulting injury to the passenger. Also, you can contemplate whether the AI failed to take proper action to avoid the injury to the passenger. This kind of ethical dilemma is often depicted via the infamous Trolley Problem, an aspect that I have vehemently argued is very applicable to self-driving cars and deserves much more rapt attention as the advent of self-driving cars continues.

All told, we can agree that the first rule of Asimov’s triad is a helpful aspirational goal for an AI-based true self-driving car, though its fulfillment is going to be pretty tough to achieve and will forever likely remain a conundrum for society to wrestle with.   

The second of Asimov's laws is that the robot, or in this case the AI driving system, is supposed to obey the orders given to it by a human, excluding situations whereby such a human-issued command conflicts with the first rule (i.e., don't harm humans).

This seems straightforward and altogether agreeable. 

Yet, even this rule has its problems.   

I've covered in my columns the story last year of a man who used his car to run over a shooter on a bridge who was randomly shooting and killing people. According to authorities, the driver was heroic in having stopped that shooter.

If Asimov's second law were programmed into the AI driving system of a self-driving car, and a passenger ordered the AI to run over a shooter, presumably the AI would refuse to do so, since the instruction would harm a human. But we know that this was a case that seems to override the convention that you should not use your car to ram into people.

You might complain that this is a rare exception. I concur.    

Furthermore, if we were to open the door to allowing passengers in self-driving cars to tell the AI to run over someone, the resulting chaos and mayhem would be untenable. In short, there is certainly a basis for arguing that the second rule ought to be enforced, even if it means that on those rare occasions it would lead to harm due to inaction. 

The thing is, you don’t have to reach that far beyond the everyday world to find situations that would be nonsensical for an AI driving system to unquestionably obey a passenger. A rider in a self-driving car tells the AI to drive up onto the sidewalk. There are no pedestrians on the sidewalk, thus no one will get hurt.   

I ask you, should the AI driving system obey this human-uttered command?

No, the AI should not, and we are ultimately going to have to cope with what types of utterances from human passengers the AI driving systems will consider, and which commands will be rejected. 
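One way to envision this is a prioritized command filter in which Asimov's ordering is applied before a passenger's order is carried out. The sketch below is purely hypothetical; the Command class and the harm predicates are invented for illustration, and a real AI driving system would need a vastly richer model of harm and legality than a few boolean flags:

```python
# Hypothetical sketch: Asimov's three rules as a prioritized command filter.
from dataclasses import dataclass

@dataclass
class Command:
    text: str
    would_harm_human: bool      # e.g., "run over that person"
    violates_traffic_law: bool  # e.g., "drive up onto the sidewalk"
    endangers_vehicle: bool     # e.g., "drive through that flooded road"

def evaluate(cmd: Command) -> str:
    # First Law: never carry out an order that would injure a human.
    if cmd.would_harm_human:
        return f"REJECT '{cmd.text}': would harm a human (First Law)"
    # Asimov's rules say nothing about legality, so a deployed system
    # needs an extra constraint layer beyond the three laws.
    if cmd.violates_traffic_law:
        return f"REJECT '{cmd.text}': unlawful or unsafe maneuver"
    # Third Law: self-preservation is subordinate to obeying humans.
    if cmd.endangers_vehicle:
        return f"ACCEPT '{cmd.text}' reluctantly: Second Law outranks Third"
    # Second Law: otherwise obey the human's order.
    return f"ACCEPT '{cmd.text}' (Second Law)"

# The sidewalk example from above: no one would be harmed, yet the
# command should still be rejected because it is unlawful and unsafe.
print(evaluate(Command("drive up onto the sidewalk", False, True, False)))
```

Notice that the sidewalk command sails past the First Law entirely, which is exactly why the three rules alone are insufficient as a command policy.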

The third rule that Asimov postulated is that the robot, or in this case the AI driving system, must protect its own existence, doing so as long as the first and second rules are not countermanded.

Should a self-driving car attempt to preserve its existence? 

In a prior column, I mentioned that some believe that self-driving cars will have about a four-year existence, ultimately succumbing to wear-and-tear in just four years of driving. This seems surprising since we expect cars to last much longer, but the difference with self-driving cars is that they will presumably be operating nearly 24×7 and gain a lot more miles than a conventional car (a conventional car sits unused about 95% to 99% of the time).   
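The arithmetic behind that four-year estimate is easy to sketch. The utilization and average-speed figures below are assumptions for illustration, not industry data:

```python
# Back-of-the-envelope mileage comparison behind the four-year estimate.
# Utilization rates and average speed are assumed values, for illustration.

HOURS_PER_YEAR = 24 * 365
AVG_SPEED_MPH = 25  # assumed mixed urban/highway average

conventional_utilization = 0.03  # in use roughly 1% to 5% of the time
self_driving_utilization = 0.75  # assumed near-continuous ride service

conv_miles = HOURS_PER_YEAR * conventional_utilization * AVG_SPEED_MPH
sdc_miles = HOURS_PER_YEAR * self_driving_utilization * AVG_SPEED_MPH

print(f"Conventional car: ~{conv_miles:,.0f} miles/year")  # ~6,600
print(f"Self-driving car: ~{sdc_miles:,.0f} miles/year")   # ~164,000
# Four years at that pace exceeds the lifetime mileage of most vehicles.
```

Under these rough assumptions, a near-continuously operating self-driving car racks up in a single year what a conventional car might accumulate over two decades.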

Okay, so assume that a self-driving car is nearing its useful end. The vehicle is scheduled to drive itself to the junk heap for recycling.   

Is it acceptable that the AI driving system might decide to avoid going to the recycling center and thus try to preserve its existence?   

I suppose if a human told it to go there, the second rule wins out and the self-driving car has to obey. The AI might be tricky and find some sneaky means to abide by the first and second rule, and nonetheless find a bona fide basis to seek its continued existence (I leave this as a mindful exercise for you to mull over).   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

It would seem that Asimov's three rules have to be taken with a grain of salt. The AI driving systems can be devised with those rules as part of the overarching architecture, but the rules are aspirations, not irrefutable and immutable laws.

Perhaps the most important point of this mental workout about Asimov's rules is to shed light on something that few are giving due attention. In the case of AI-based true self-driving cars, there is a lot more to devising and deploying these autonomous vehicles than merely the mechanical facets of driving a car.

Driving a car is a huge ethical dilemma that humans oftentimes take for granted. We need to sort out the reality of how AI driving systems are going to render life-or-death decisions. This must be done before we start flooding our streets with self-driving cars. 

Asimov said it best: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”   

True words that are greatly worth revisiting.  

Copyright 2021 Dr. Lance Eliot. This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

http://ai-selfdriving-cars.libsyn.com/website 

Source: https://www.aitrends.com/ai-insider/asimovs-three-laws-of-robotics-and-ai-autonomous-cars/
