
Cars Careening Out-of-Control In Crash Mode: The Case Of AI Autonomous Cars

The AI of self-driving cars needs a “crash mode” to try to prevent it from going out of control as an accident unfolds. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

Bam!

While innocently sitting at a red light, a car rammed into the rear of my car. I was not expecting it.

Things began to happen so quickly that I barely remember what actually did happen once the crash began.

Within just a few seconds, my car was pushed into the car ahead of me, ripping apart the back and left side of my car. The gas tank ruptured and gasoline leaked onto the ground, my airbag deployed, most of the windows fractured, and bits of glass flew everywhere. Basically, all heck broke loose.

This actually happened some years ago when I was a university professor. I had been driving past an elder care facility on my way to the campus. A car driven by someone quite elderly had come up behind me at the red light and he inadvertently punched the accelerator rather than the brake. His car rammed into my car, and my car rammed into the car ahead of me.

Fortunately, none of us were badly injured, but if you saw a picture of my car after the incident, you’d believe that no one in my car should or could have survived the crash.

My car was totaled.

I think back to that crash and can readily talk about it today, but at the time it was quite a shocker.

Speaking of shock, I am pretty sure that I must have temporarily gone into shock when the crash first started.

I say this because I really do not remember exactly how things went in those few life-threatening seconds. All I can remember is that I kind of “woke up,” consciously realizing that the airbag had deployed and that my windshield was busted; otherwise, I was utterly confused about what was going on. It was as though a magic wand had transformed my setting into some other bizarre world.

As I sat there in the driver’s seat looking stunned, and as I slowly looked around to survey the scene, trying to make sense of what had just occurred, some people from other cars nearby had gotten out of their cars right away and run to my car. With my driver’s side window nearly entirely smashed and gone, they yelled into the car and asked me if I was okay. I looked at them and wasn’t sure that I understood what they were asking me or why they were even talking to me.

It was at that point that I smelled the strong odor of gasoline.

In that same instant, the people standing outside my driver’s side window were yelling for me to get out of the car because of the gasoline that had poured onto the street. I realized later on that these good Samaritans were very brave and generous to have endangered themselves in order to warn me about the dangers that I faced.

Luckily the car door still worked, so I opened it, undid my seatbelt, pushed away the remains of the air bag, shifted my body and my legs to position outside the door, and stepped out of the car.

I nearly collapsed.

It turns out my legs had gone weak in the aftermath of the shock and fright. Several people helped walk and semi-drag me to the curb, away from the car itself. I sat there on the curb, watching as everyone ran around trying to help, and for a moment I felt as though it had all occurred without me being involved at all. I was just a bystander sitting at the curb after a car accident had happened.

When the police and an ambulance showed up, I had regained my composure. I was standing up sturdily now and calmly examining the cars. At first, the police officers and the medical crew doubted that I had been inside the car and certainly doubted that I had been the driver. I had nary a scratch on me. I seemed coherent and able to talk about what had happened.

In fact, and you’ll maybe laugh at this, I was mainly worried that I would be late to teach my class at the college.

I had never been late to any of my lectures.

What would the students do, what would they think?

Of course, I realized later on, after several years of being a professor, the students probably welcomed being able to skip a lecture and would fruitfully use their time for other “academic” purposes.

The main aspect of the incident was that my mind was blurry about those key seconds between getting hit from behind and realizing that I was sitting in my driver’s seat, with glass around me and my airbag in front of me.

I cannot to this day tell you exactly what happened in those precious few seconds.

I am pretty sure that my body was likely a rag doll and merely flopped around as the impact to the car occurred.

Which way was my head facing?

Well, I had been looking straight ahead at the intersection while waiting for a green light, so presumably my head was still pointed in that direction when the initial impact occurred. Where were my arms and hands? I had been lightly holding the steering wheel, and so that’s where my arms and hands were, at least up until the impact. My legs and feet were under the dash and positioned at the pedals, with my right foot on the brake since I was stopped at the red light, at least just before the impact.

I wondered whether there was anything I could have done once the impact began.

Suppose I had been forewarned and told or knew that a car was going to violently ram into the back of my car. Let’s further assume that I didn’t have sufficient time to get out of the way or make any kind of evasive maneuver.

It’s an interesting problem to postulate.

Acting As A Car Crash Begins To Emerge

We usually think about the ways to avoid a car accident.

What about trying to cope with an accident that is emerging, supposing you have a brief chance to take some form of action that might reduce the impact, even if you cannot fully avoid the incident overall?

In this case, if I had some premonition or clue that the accident was going to happen, maybe I could have tried to turn the wheels of the car so that it might move away from the car ahead of me once my car got rammed.

Or, maybe I might have put on the parking brake in hopes it would further keep my car from being pushed by the ramming action.

The medics at the scene told me that I was probably lucky that I did not realize that the ramming was going to occur, since most people tense up.

They said that tensing up is often worse for you when you get into a car accident. According to their medical training and experience, there is a greater chance that when being jarred harshly, jostled, and tossed around, the tightened or tensed muscles of my body would try to fight against the movement, and likely lose, leading to greater physical injury. Instead, by being loose and unknowing, my body was more fluid and accommodated the rapid pushing, shoving, and fierce shaking.

I’d like to put aside the idea that I might have been forewarned, and instead consider a slightly different angle to the incident.

Suppose that my mind had remained clearly alert and available during those few seconds in which the accident evolved. I mentioned earlier that I have no particular recall and those moments are blurry in my mind, but let’s pretend differently.

Pretend that my mind was completely untouched and could act as though it was separate from the severe contortions happening to my physical body.

What then?

Reenactment Of Car Crash Timing

We’ll start the clock at the moment of impact.

The car behind me has just collided into the rear of my car.

This is time zero.

Over the next few seconds, the impact will work its way throughout my car.

You might want to consider this akin to those popular online videos in which things are filmed in slow motion. You know, the videos that show what it looks like in the split seconds of a bullet going through a piece of wood or a watermelon being smashed. Imagine a slow-motion version of my car incident.

We’re now assuming that my mind can undertake whatever kind of thinking might be pertinent to the matter at-hand. Of course, my mind might be thinking about that lecture I was going to give that day, or maybe what I was going to eat for dinner that night. Put those thoughts aside. In this slow-motion version, devote my mind to focusing on the car accident that is happening.

I’d also suggest that we assume that my senses are all in perfect working order too. You might argue that my senses are going to get muddled by the forceful jerking efforts of the car being rammed, which I agree seems likely.

In a moment, I’ll revisit this pretend scenario with that muddling effect on my senses as another variation.

Okay, my mind is fully active, focused on the car incident as the clock starts to tick, and I’ve got control over my sensory faculties, and we’ll include that I have control over my body. This means that I can take whatever kind of driving action that I want to undertake.

Is there anything that I can do to drive the car in those few seconds that might in some manner lessen the impact of the car accident?

Maybe I had reflexively taken my foot off the brakes when the real accident occurred; in this pretend case, we could assert that I am going to keep my foot on the brakes. Perhaps my arms and hands flew off the steering wheel in the real incident.

Let’s pretend that I keep them on the steering wheel.

It’s not evident how much a clear mind and the use of my senses and body would actually have added to my ability to control the car in this particular incident.

One limiting factor is the car and the circumstances of where the car was positioned.

The car was being pushed fiercely from behind. In this case, the brakes weren’t doing much in those split seconds anyway. The fact that there was a car ahead of me pretty much stopped my car from going much further ahead, due to my ramming into it, and I was pinned between two cars now. One car pushing from behind, the other car at a standstill and preventing me from readily driving forward.

The car itself is a limiting factor too, in that the brakes might have been disabled anyway by the impact to the car.

In that case, pushing on the brake pedal might not have had any material effect. Likewise, the steering wheel might not be useful during those few seconds, if the linkages and internal steering controls were damaged or unable to relay my positioning of the steering wheel.

In my case, I’m going to throw in the towel and say that even if my mind had remained clear and available, and even if my senses were continually available and working, and even if my body was functioning so that I could actively and purposely drive the car, there’s not much that could have gone differently to improve what happened during those seconds of impact and reaction.

If you look at different circumstances, the results might come out differently.

Remove the car that was ahead of me.

Pretend I have a straight-ahead path.

Assume too that I can see the intersection and there are no cars in it, meaning that I can use the intersection if I want to do so.

Does this change things?

In theory, depending upon the pace at which my car can accelerate, and depending upon the pace at which the car from behind me is ramming into me, there is some chance that I could have punched down on the accelerator and tried to leap ahead. It would have become a kind of race, starting when the impact began, the zero-clock point that I mentioned earlier. This could potentially have allowed me to lessen the blow from the rear of my car. I might even have accelerated fast enough to escape much of the impact, ending up on the other side of the intersection without much damage to the rear of my car.

I’d bet there are many car accidents wherein if the driver involved could magically have a clear and present mind, and be able to control their car, there is a chance that whatever dire results occurred could have been lessened.

On the news, I recently saw an instance of a driver who veered to avoid hitting something in the street and lost control of the car, which resulted in the car ramming into a parked car, a light post, and a fire hydrant. It sheared off the fire hydrant and sent water shooting into the sky.

How did the driver lose control of the car?

Was it because of the mechanics of the car, or was it because the driver lost their presence of mind and was no longer in command of their mental faculties? It could be that the shock of veering caused the person to mentally go into a blur. This blurred mental state meant that the human was no longer actively driving the car. The car was out-of-control.

There was no driver actively driving the car.

Out-Of-Control Cars

I’m sure you’ve seen lots of news clips and videos of cars that became a kind of mindless projectile.

There was an incident captured on YouTube of a car that swerved to avoid hitting an animal in the street and then smashed through a wood fence, continued onto a farm adjacent to the road, plowed through a bunch of planted vegetables, and finally came to a stop.

Out-of-control car.

Another incident showed a car that didn’t make a left turn very well, veering beyond the confines of the left turn. The car continued to make too large a turn and rammed into a mailbox. This car then rammed into a hot dog vendor stand and ultimately came to a stop once it hit a storefront.

There are plenty of videos of cars that missed a turn and went through a fence into someone’s swimming pool. Having a car fly off a bridge is another example of an out-of-control car.

There are situations whereby an out-of-control car might be due to the car having mechanical problems and there is seemingly nothing that the driver can potentially do. For example, the accelerator pedal getting stuck and refusing to budge, forcing the car into going faster and faster.

This might happen because something, such as a floor mat, has become lodged against the accelerator pedal.

It has also happened as a result of an intrinsic defect in the car design.

Assume that the driver did not cause the accelerator pedal to be jammed downward. In that instance, is the driver now merely a passenger in that there is nothing the driver can do? I’d dare say we would all agree that the driver can still do something. They need to try and steer the car to avoid hitting other cars and other objects. They could try to see if they could dislodge the pedal to curtail the rapid acceleration. They could start honking the horn to try to warn other drivers and pedestrians that the car is a runaway.

And so on.

Not everyone would have the presence of mind to do those things.

If you’ve never had your accelerator pedal get stuck, the odds are that when it does get stuck, you’ll be shocked and unsure of what to do. You might lose your mental presence and become panicked. Even though there are actions you could take, those actions might not come to mind. If they do come to mind, you’d have to remain calm enough to force your body to carry them out.

Have you ever been to a demolition derby or seen one on TV?

At a demolition derby, the cars all try to smash into each other; that’s the purpose of the derby. Usually, the last car still running gets the grand prize. I bring up demolition derbies to point out that those drivers are well-prepared to deal with their cars when the cars are out-of-control.

A driver in one car might get hit from the left side by another car, while being hit from the right side by a third, and at the same time be trying to hit a car ahead of them. The cars are all being pushed and shoved. Drivers in those cars are generally able to keep their mind and wits about them. They are trained for the situation and know what to do, though of course it is somewhat easier when the matter is expected versus when unexpected (in the derby, it is expected that your car is going to be hit and go out-of-control).

One aspect of a car being out-of-control is when the car is sliding or otherwise in a motion that you as a driver did not intentionally seek to have the car do. Have you ever had your car slide on ice or snow?

That’s an example of the car being out-of-control.

Again, how you react as the driver can make a big difference. If you aren’t aware of the sliding action and aren’t prepared to react, or if your mind is muddled, you might not try the usual techniques that are recommended for dealing with a sliding car. You can typically regain control by turning the wheels in the direction of the slide and avoiding jamming on the brakes.

Dealing With Out-Of-Control Cars

In essence, there are actions that you can take to bring the car back into control, or you can take no actions and hope for the best, or you can take misguided actions that cause the car to go into a further out-of-control result.

You need to not only determine the proper course of action, but also try to prevent the situation from getting worse, take into account what your car can and cannot do, consider any damage the car is sustaining and how it will limit what you can do, and weigh a slew of other factors.

Demolition derby drivers are able to do this.

I don’t want to make them out to be super drivers per se. Their cars are usually jiggered in a manner to make things simpler for them. Usually, the cars are stripped of items that can fly around. Cables are reinforced. There aren’t any passengers on-board. Gas tanks get special protections. Plus, the derby typically takes place in a confined area that has no pedestrians, no other obstacles, and it is like a playground in which all you can do is ram into other cars.

That’s a far cry from dealing with a real-world crash mode and having to figure out what to do, while also coping with bystanders and a myriad of other factors. Nonetheless, the derby drivers get a chance to practice dealing with the stresses of being in a car crash and are able to train themselves to keep their mental awareness, enabling them to continue driving the car and maintain control, as much as feasible.

AI Autonomous Cars And Cars Out-Of-Control

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that few automakers and tech firms are considering at this time is the special characteristics of driving a car while it is in crash-mode and how the AI should be skilled to do so.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars below Level 4 and Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a rough code sketch of this processing loop follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
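
For readers who prefer to see the flow as code, here is a minimal, illustrative sketch of that processing loop. All of the class and function names are hypothetical placeholders I’ve made up for the example, not any particular automaker’s architecture; the point is only to show the ordering of the stages listed above.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorReading:
    source: str          # e.g., "camera", "lidar", "radar", "ultrasonic"
    data: Dict           # raw or pre-interpreted measurements

@dataclass
class WorldModel:
    objects: List[Dict] = field(default_factory=list)  # tracked cars, pedestrians, etc.

def collect_and_interpret() -> List[SensorReading]:
    # Stage 1: gather and interpret raw sensor data (stubbed here with an empty list).
    return []

def sensor_fusion(readings: List[SensorReading]) -> Dict:
    # Stage 2: reconcile overlapping or conflicting sensor reports.
    return {"fused_objects": [r.data for r in readings]}

def update_world_model(model: WorldModel, fused: Dict) -> WorldModel:
    # Stage 3: refresh the virtual world model with the fused view.
    model.objects = fused["fused_objects"]
    return model

def plan_actions(model: WorldModel) -> Dict:
    # Stage 4: decide on steering, braking, and acceleration targets.
    return {"steer_deg": 0.0, "brake": 0.0, "throttle": 0.0}

def issue_car_controls(commands: Dict) -> None:
    # Stage 5: send the commands to the drive-by-wire actuators (stubbed).
    print(f"Issuing controls: {commands}")

def driving_cycle(model: WorldModel) -> WorldModel:
    readings = collect_and_interpret()
    fused = sensor_fusion(readings)
    model = update_world_model(model, fused)
    commands = plan_actions(model)
    issue_car_controls(commands)
    return model

if __name__ == "__main__":
    driving_cycle(WorldModel())
```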

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars who continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of out-of-control cars and dealing with them as a driver when the car is seemingly out-of-control, let’s consider how the AI of an AI self-driving car should be coping with such situations.

We’ll start by debunking a popular myth, namely that true Level 4 and Level 5 AI self-driving cars will never get into car accidents, and that therefore the AI does not need to be able to cope with car crashes.

Wrong!

Well of course AI self-driving cars are going to get into car crashes.

It is nonsense and foolhardy to think otherwise.

First, as mentioned earlier, the roadways will have a mixture of human driven cars and AI driven cars.

This mixture is going to produce car crashes, involving a human driven car that crashes into an AI self-driving car, and likely too occasions whereby an AI self-driving car crashes into a human driven car. In some instances, the AI self-driving car might be the instigator of the car crash, while in other cases it is carried into the crash as a cascading consequence.

There will also be instances of AI self-driving cars crashing into other AI self-driving cars, as I’ll describe in a moment.

Recall that in my story about getting hit from behind while sitting at a red light, I was in a conventional car, but for the sake of pretend let’s assume that the car was actually a Level 4 or Level 5 AI self-driving car.

Would it have been able to avoid getting hit?

No.

There was no place to escape to and the car that rammed me from behind did so with almost no warning.

The AI might have detected that the car behind it was suddenly speeding up, and therefore would likely have had a few seconds of heads-up that the crash was going to occur.

But, given that there was a car immediately ahead of my car, and there were other cars also sitting at the intersection and all around my car, the AI would have been boxed-in or essentially surrounded and would have had no opportunity to escape. The crash would have happened.

This is an instance of a human driven car hitting an AI self-driving car.

An AI self-driving car could get hit by another car such as a human driven car and could even get hit by an AI self-driving car.

Suppose the car ahead of me at the intersection was an AI self-driving car and continue assuming that my car was an AI self-driving car.

Once the car behind me has hit my car, it would have forced my AI self-driving car to ram into the car ahead, which we’re pretending was another AI self-driving car.

This is an instance of an AI self-driving car hitting another AI self-driving car.

Sometimes when I mention this possibility, there are those that will say that it is “cheating” to say that one AI self-driving car hit another one in the sense that they were both pinned into a situation whereby the impact was unavoidable.

I then say indeed they were pinned, that’s the case here, but it still doesn’t negate the fact that one AI self-driving car hit another AI self-driving car.

My point is that it can happen.

Use Of V2V For Crash Tip-off

Some say that an AI self-driving car won’t get hit because it will have V2V (vehicle-to-vehicle) electronic communications.

This means that one AI self-driving car can electronically communicate with other AI self-driving cars, perhaps doing so to forewarn that the road ahead has debris on it or maybe that the traffic is snarled.

Okay, let’s assume in my pretend scenario that my AI self-driving car has V2V and the AI self-driving car sitting ahead of me at the intersection has V2V too.

In those few seconds wherein my AI self-driving car realizes it is going to get hit, it quickly sends a V2V broadcast, which the AI self-driving car ahead of me receives and decodes. Based on the electronic message, the AI of the car ahead has to decide whether it believes what it is being told, which in and of itself is a potential question and problem associated with V2V, and if the AI does believe that my car is about to hit it, the next aspect involves figuring out what to do.

The AI self-driving car ahead of me, now forewarned and with presumably a second or two to react, could try to proceed into the intersection to avoid my car hitting it from behind. The AI has to ascertain whether it is worse or not to remain in place and get hit from behind, or potentially try to gun the engine and rush into the intersection. If the intersection has other traffic in it, the idea of rushing ahead is not so attractive, though likewise staying in place is not attractive either.

This highlights the kind of ethical choices that an AI system is going to need to make when driving an AI self-driving car.

It has to weigh the risks of injury, death, and damage from waiting to get hit from behind by my AI self-driving car versus the risks of injury, death, and damage if it attempts to rush into the intersection. There is also the matter of whether the amount of time involved would actually allow the AI self-driving car to rush into the intersection, depending upon the acceleration capability of the AI self-driving car.
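
To make that trade-off concrete, here’s a hedged sketch of how the forewarned car’s AI might compare staying put against rushing ahead. The function names, the harm numbers, and the simple expected-harm comparison are all illustrative assumptions of mine, not a real V2V implementation.

```python
from dataclasses import dataclass

@dataclass
class V2VWarning:
    seconds_to_impact: float   # time until the rear collision is expected
    impact_speed_mps: float    # estimated closing speed of the ramming car

def estimated_harm_if_staying(warning: V2VWarning) -> float:
    # Illustrative assumption: harm grows with the square of impact speed.
    return warning.impact_speed_mps ** 2

def estimated_harm_if_rushing(warning: V2VWarning,
                              intersection_clear: bool,
                              accel_time_needed_s: float) -> float:
    # If the car cannot clear the impact zone in time, rushing gains nothing.
    if accel_time_needed_s > warning.seconds_to_impact:
        return estimated_harm_if_staying(warning)
    # Entering an occupied intersection risks a worse, higher-speed collision.
    return 500.0 if not intersection_clear else 10.0

def choose_response(warning: V2VWarning,
                    intersection_clear: bool,
                    accel_time_needed_s: float) -> str:
    stay = estimated_harm_if_staying(warning)
    rush = estimated_harm_if_rushing(warning, intersection_clear, accel_time_needed_s)
    return "rush_into_intersection" if rush < stay else "brace_in_place"

# Example: warned ~1.5 seconds ahead of a 9 m/s rear impact, clear intersection.
print(choose_response(V2VWarning(1.5, 9.0), intersection_clear=True, accel_time_needed_s=1.2))
```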

More Examples Of AI Driverless Car Crashes

I’ll give you another example of how an AI self-driving car might hit another AI self-driving car.

Suppose two AI self-driving cars are going down a street (hey, this sounds like the start of an AI self-driving car joke, like two people walking into a bar!).

They are following each other at the proper distance, based on their speeds and car lengths, which is how humans are supposed to drive, though I’d wager few humans allow sufficient distance between their cars when driving.

A dog darts from seemingly nowhere into the street. In this case, there was no possibility of detecting the dog prior to it entering the street.

The AI self-driving car that’s ahead of the other AI self-driving car has insufficient distance to come to a stop and avoid hitting the dog. The choices for the AI are to either try to stop and yet know it will ram into the dog, or try to swerve to avoid the dog, but let’s assume there are parked cars and other cars coming down the street too.

This means that the AI will need to decide whether to hit and likely kill the dog or take a chance and swerve into the oncoming lane of traffic and possibly get hit head-on or ram itself into a parked car to try to avoid the dog.

For more about AI self-driving cars and accidents, see: https://www.aitrends.com/selfdrivingcars/accidents-happen-self-driving-cars/

What should the AI do?

The AI is between the proverbial rock and a hard place.

There aren’t any “good” choices to be made here.

The problem is more akin to choosing the least of the worst options. Suppose the AI opts to ram into a parked car, figuring that the parked car has no humans in it, and thus no humans will be put at risk and only property damage will result. This saves the dog, prevents potentially hitting an oncoming car, and perhaps seems to be the least-of-the-worst choices.
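
One simple way to express “least of the worst” is to score each candidate maneuver by its estimated harm and pick the minimum. The sketch below does exactly that; the candidate options, weights, and scores are invented purely for illustration, and a real system would need far richer risk estimates.

```python
# Hypothetical candidate maneuvers with illustrative expected-harm scores
# (higher means worse). The scores are invented for the example only.
candidate_options = {
    "brake_hard_and_hit_dog": {"human_harm": 0.0, "animal_harm": 0.9, "property_damage": 0.1},
    "swerve_into_oncoming_lane": {"human_harm": 0.7, "animal_harm": 0.0, "property_damage": 0.5},
    "swerve_into_parked_car": {"human_harm": 0.1, "animal_harm": 0.0, "property_damage": 0.6},
}

# Illustrative weights expressing that human harm dominates the decision.
weights = {"human_harm": 10.0, "animal_harm": 2.0, "property_damage": 1.0}

def total_cost(scores: dict) -> float:
    return sum(weights[k] * v for k, v in scores.items())

least_worst = min(candidate_options, key=lambda name: total_cost(candidate_options[name]))
print(least_worst)  # with these made-up numbers: swerve_into_parked_car
```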

The AI quickly sends out a V2V to forewarn that it is going to ram into a parked car.

The AI self-driving car coming up behind is given a somewhat sudden heads-up that this action is going to occur.

Can the AI self-driving car stop in time and avoid hitting the AI self-driving car that is going to ram into the parked car?

Maybe yes, maybe not.

We also don’t know if ramming into the parked car will cause the AI self-driving car to bounce back into the street and perhaps make the situation even worse from the perspective of the AI self-driving car coming up behind.

The point of these scenarios is that there will absolutely be car crashes involving AI self-driving cars.

I want to make sure that we all agree with that possibility.

Some might argue that we’ll have fewer car crashes due to the advent of AI self-driving cars, and for that I’d be willing to say it is hopefully the case that we’ll have fewer, but in no manner at all will we have zero instances of car crashes involving AI self-driving cars.

For the false notion of zero fatalities, see my article: https://www.aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/

For more on ethics and AI, see my article: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/

For the need of AI to be a defensive driver, see my article: https://www.aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/

For my article about the human foibles involved in driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

For my article about safety issues of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Missing The Boat On Crash Avoidance

In terms of AI self-driving cars and getting involved in car crashes, most of the automakers and tech firms are focused on avoiding car crashes and not particularly considering what the AI should do once a car crash is imminent or underway.

This is troubling.

If the AI is not intentionally established to have special processes or procedures for dealing with a car accident once underway, it means that the AI self-driving car is essentially going to become out-of-control.

The AI developers are assuming that the AI will be able to handle the self-driving car as if the AI self-driving car is just nonchalantly driving along, but once the car accident starts, all bets are off. The AI self-driving car is going to be likely pushed, shoved, and otherwise taken out of the “comfort zone” in which one assumes the self-driving car is operating most of the time. Assumptions about being able to brake, accelerate, and steer are no longer going to be valid due to the extenuating circumstances involving what is about to happen to the self-driving car.

Some automakers and tech firms aren’t working on this at all, or they are working on it but put it on the back-burner as a so-called edge problem.

Their logic to put this on the back-burner is that they assume the AI self-driving car is highly unlikely to get into a car accident, thus, why worry about it now.

If it is going to only happen once in a blue moon, deal with it later on.

For more about edge problems, see my article: https://www.aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/

Part of the grave concern with this kind of thinking is that it means that when an AI self-driving car does get into a car accident, it will likely do little to try to minimize the impacts and be unable or ill-equipped to find ways to either escape or at least try to “improve” upon a bad situation.

I’ve predicted many times over that when AI self-driving cars get into car crashes, society is going to become hyper-focused on why and how it happened, and the entire future of AI self-driving cars is going to be based on these instances. It becomes the classic “bad apple” that spoils the entire barrel.

I know many AI developers are frustrated that this can occur and feel that it is unfair of society and the media to react in such a manner, but, hey, that’s the way the cookie crumbles.

Generally, the public and the media are not especially forgiving about AI self-driving cars getting involved in car accidents.

Tesla’s Elon Musk has bitterly complained that society over-hypes these instances and should try to balance those instances against the thousands of car accidents with conventional cars, but he’s barking up a rough tree to think that society will be willing to view AI self-driving cars in that kind of context.

Auto makers and tech firms need to be doing as much as they can to cope with not only avoiding car crashes but also being able to have the AI enter into a kind of “crash mode” when a car accident is either imminent or underway.

If the auto makers and tech firms don’t have such a provision in their AI, besides the fact that the AI will be acting in a somewhat willy-nilly manner during a car accident, I would predict that they are going to be faced with some hefty legal bills and potential product liability issues.

Lawyers for those humans that are immersed in a car accident are going to ask tough questions about what the AI did, why it did so, etc.

For my article about product liability and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For my article about the lawsuits over AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/first-salvo-class-action-lawsuits-defective-self-driving-cars/

For the crossing of the Rubicon, see: https://www.aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For the public perception of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

Tick-Tock Goes The Clock

As suggested earlier, let’s put a stopwatch on the car accident and assume that at the initial point of impact we start the clock ticking.

This is almost as though we are able to slow down time and do a slow-motion analysis of a car accident.

In a manner of speaking, we might look at this from the perspective of “speeding up” rather than slowing down. A human might not be able to give much mental concentration to a car accident once the accident begins to unfold. The time allotted is very short, perhaps fractions of a second or just a few split seconds.

On the other hand, the AI might be running on some very fast computer processors and so it could potentially do a lot of computational processing in that rather short amount of time.

There are those that would argue too that the AI won’t go into shock and so it will keep its wits about it, while a human is likely not to keep their presence of mind. In my car accident, I still don’t know exactly what happened from the moment of impact until I was suddenly aware that I was seated in my car and something untoward had just occurred. Whether I was in shock or maybe blacked out momentarily, we presumably don’t need to worry about the AI suffering that same fate.

I would though wish to put a caveat on this idea that the AI won’t suffer from the shock aspects.

Those that make such a claim are leaving out the important element that the AI is running on computer processors that are on-board the self-driving car. When the self-driving car is getting rammed, there is a high chance that those processors are going to suffer too. The physical crushing actions and blows to the car are likely to mess with the electronics of the computer processors and computer memory on-board the self-driving car.

In a manner of speaking, you could assert that there is a chance that the AI will go into “shock” or maybe we call it “artificial shock,” involving damage being done to the AI systems and its on-board computers.

This could alter what the AI is able to do during the crash itself. What kind of fail-safe capabilities does the AI have?

For more about fail-safe AI, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/

For cognitive timing of the AI, see my article: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

For my article about self-awareness being crucial to AI, see: https://www.aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/

Maybe the AI is not able to do anything once the crash gets underway and has become completely inoperative.

Maybe the AI is messed up and does not realize that it has become messed up, and yet still tries to drive the car, doing so in a manner that actually makes the situation worse!

Overall, the special “crash mode” of the AI needs to be able to discern what it can and cannot do, what its own status is in terms of working properly, and have a number of contingencies ready to go.

Special AI Capability For Crash Mode

There is no doubting that the “crash mode” becomes a highly complex problem.

The self-driving car is likely becoming less drivable as the car crash clock ticks, starting at the point in time t=0. At some time, call it t+1, perhaps the brakes are no longer functioning. At some later time, t+2, it could be that the car is now in a slide as a result of the ramming and the wheels are unable to gain traction to redirect the self-driving car. And so on.

I had earlier mentioned that I wasn’t sure in my car accident as to the capability of my limbs, such as whether I still had any ability to keep my arms and hands on the steering wheel or have my foot on the brake pedal.

The AI is going to have similar “driving controls” issues to cope with. Though the AI doesn’t have arms or legs, it does have electronic systems and various means to undertake the driving controls of the self-driving car.

Are those driving controls still available to the AI, or might they have been damaged or cut off as the car crash evolves in those split seconds?

In essence, you have these aspects:

  • AI system as “mindset” for driving the car
  • Sensors of the self-driving car that the AI needs to sense what’s happening
  • Car controls for the AI to use to drive or control the self-driving car

The AI system itself might be degraded or faulty during the car crash and must have a provision to ascertain its own status and reliability. This then would be used to try to decide what actions the AI ought to be taking, and also avoid taking actions that the AI ought to not be taking.

The sensors, such as the cameras, LIDAR, radar, and ultrasonics, can become degraded or faulty during the car crash. This means that whatever the sensor fusion is reporting might be false or incomplete, and in turn the updating of the virtual world model might be false or faulty. The AI action planner needs to try to ascertain which aspects of the sensors and virtual world model seem to make sense and which aspects might now be suspect.

The car controls might no longer be accessible by the AI, due to the car crash aspects as they unfold.

Or maybe the AI can issue car control commands, but the controls themselves are non-responsive, or the controls attempt to carry out the order but the physics of the car and the evolving situation preclude the car from physically carrying out the instructions.
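
Pulling those three aspects together, a crash-mode routine might first assess its own health, then discount suspect sensors, and only then decide whether any control action is worth attempting. The following is a minimal sketch under those assumptions; the names, thresholds, and fallback actions are hypothetical, not drawn from any production system.

```python
from dataclasses import dataclass

@dataclass
class CrashModeStatus:
    processor_ok: bool         # did on-board self-tests pass?
    trusted_sensors: list      # sensors whose readings still appear consistent
    controls_responsive: dict  # e.g., {"steering": True, "brakes": False, "throttle": True}

def assess_status(self_test_passed: bool,
                  sensor_agreement: dict,
                  control_ack: dict) -> CrashModeStatus:
    # Trust only sensors whose readings agree with at least one other sensor
    # (a stand-in for a real cross-validation step during the crash).
    trusted = [name for name, agrees in sensor_agreement.items() if agrees]
    return CrashModeStatus(self_test_passed, trusted, control_ack)

def crash_mode_step(status: CrashModeStatus) -> str:
    if not status.processor_ok:
        # The AI may itself be degraded ("artificial shock"); fall back to the
        # most conservative action it can still command.
        return "attempt_full_stop_and_hazards"
    if not status.trusted_sensors:
        # No reliable picture of the world: do not steer blindly.
        return "hold_current_controls"
    usable = [c for c, ok in status.controls_responsive.items() if ok]
    return f"plan_with_controls:{','.join(usable)}" if usable else "no_control_available"

status = assess_status(
    self_test_passed=True,
    sensor_agreement={"camera": True, "lidar": False, "radar": True},
    control_ack={"steering": True, "brakes": False, "throttle": True},
)
print(crash_mode_step(status))  # plan_with_controls:steering,throttle
```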

Impacts To Human Passengers Inside The Autonomous Car

One aspect that I’ve not brought up herein involves the AI having to decide what to do about any human passengers that are in the AI self-driving car.

This is quite important and must be taken into consideration.

Here’s what I mean.

For the AI to consider what action to take during the car crash, there is the matter of how the humans within the AI self-driving car are going to be impacted too.

Which is better or worse for the passengers: having the AI attempt to accelerate out of the full impact, or instead letting the impact happen but steering the car so that the impact lands on the side of the car away from where the humans are sitting?

The crux is that the number of human passengers, where they are seated, and possibly their size and age (adult versus child) could all play into how to “best” respond to the car crash as it is underway. This takes us again into an ethics-laden situation. If the AI can find a means that is more likely to save, let’s say, an adult in the self-driving car versus the child, should it take such action, or should it attempt to save the child more so than the adult?

I know that you might be saying that the AI should seek to save all humans inside of the AI self-driving car. Sorry, that’s too easy an answer.

There is a myriad of options that the AI might be able to consider.

Each of those options will involve uncertainties.

We also need to consider the humans outside the AI self-driving car, such as pedestrians standing nearby who are at risk, and humans in other nearby cars that are being drawn into the cascading car crash.

For my article about uncertainties and AI, see: https://www.aitrends.com/selfdrivingcars/probabilistic-reasoning-ai-self-driving-cars/

For pedestrians as roadkill, see my article: https://www.aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/

For my Top 10 predictions about AI self-driving cars, see: https://www.aitrends.com/ai-insider/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For the aspects about rear-end collisions, see my article: https://www.aitrends.com/selfdrivingcars/rear-end-collisions-and-ai-self-driving-cars-plus-apple-lexus-incident/

Twists And Turns Involved

The AI has quite an arduous problem to solve during a car crash.

That’s partially why it is being avoided by some AI developers: it is a really tough nut to crack.

I don’t think that this means that we should just shrug it off and wave our hands in the air.

Without any kind of crash mode capability, the AI is going to be potentially useless and merely add fuel to the fire of the self-driving car becoming a kind of unguided missile.

When I say this, there are some that try to retort that the AI might have a crash mode and try to deal with a car crash as it evolves, and yet ultimately be unable to do anything of substance anyway. It might be that the car controls are unavailable or non-functioning. It might be that the choices of what to do are so rotten that doing nothing is a better choice. Etc.

Yes, it is true that the AI might end up being unable to lessen the repercussions of the car crash. Does this mean that the AI should not even try to do so? Are you willing to toss away the chance that the AI might be able to assist? I don’t believe that’s prudent, nor is it what we might hope a true AI self-driving car would do.

Each situation will have its own particulars that dictate what becomes feasible during the crash.

Was there any in-advance indication of what was about to occur?

Was any preparation possible prior to the actual crash?

Once the crash began, what possibilities existed of still being able to exert control over the self-driving car?

Throughout the car crash, what could be done and what was done?

There’s also the post-crash aspects too.

If the AI self-driving car still has some functional capability, the AI should be trying to ascertain what to do. Is the car still drivable by the AI, such that it can pull the self-driving car off to the side of the road and avoid possibly getting hit by other traffic that might soon come upon the accident scene?

The AI must be continually monitoring the status of the car controls to try to discern what is usable and what is not (a small monitoring sketch follows this list):

  • No steering, limited steering, steering is stuck
  • No accelerator, limited accelerator, accelerator stuck
  • No brakes, limited brakes, brakes stuck
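
Here is a small monitoring sketch along those lines. The enum values and the polling function are placeholders of my own; an actual system would query the drive-by-wire hardware and diagnostics bus rather than returning canned values.

```python
from enum import Enum

class ControlState(Enum):
    UNAVAILABLE = "none"
    LIMITED = "limited"
    STUCK = "stuck"
    NORMAL = "normal"

def poll_control_status() -> dict:
    # Placeholder: a real system would query the drive-by-wire hardware here.
    return {"steering": ControlState.LIMITED,
            "accelerator": ControlState.NORMAL,
            "brakes": ControlState.STUCK}

def usable_controls(status: dict) -> list:
    # Treat NORMAL and LIMITED controls as still worth commanding; ignore the rest.
    return [name for name, state in status.items()
            if state in (ControlState.NORMAL, ControlState.LIMITED)]

print(usable_controls(poll_control_status()))  # ['steering', 'accelerator']
```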

Use Of Machine Learning

Here’s a bit of a twist that might catch your interest.

Some say that we should be using Machine Learning (ML) or Deep Learning (DL) to cope with and aid the crafting of the special “crash mode” of the AI for a self-driving car.

The notion is that the use of deep or large-scale artificial neural networks might allow the AI to identify patterns in what to do during car crashes. By examining perhaps hundreds or thousands of car crashes, in the same way that the ML or DL studies pictures of street signs to identify what street signs consist of, maybe the AI could become versed in handling car crashes.

This seems sensible.

One question for you: where are we going to get all of the car crash data that will be needed to do the ML or DL training? Right now, the AI self-driving cars and the auto makers and tech firms are doing everything they can to avoid car crashes.

There isn’t a vast trove of car crash data available to do this kind of pattern matching and training with.

Sure, there are tons of car crashes daily that are occurring with conventional cars.

This though does not encompass the kind of car crash data that we need to have collected. For today’s car crashes, at best there is info about what happened before a crash and what the end result was.

Whatever happened in-between is not typically captured as data, nor analyzed.

Unfortunately, there is not a handy treasure trove of car crash data that includes what took place during the crash itself.

The closest that we can come would be to use simulations.

The simulations though need to be based on the reality of what happens during car crashes. This might seem obvious, but I point this out because it is “easy” to make a simulation based on aspects that have little to do with what really happens in the real world. Training ML or DL via simulations that aren’t realistic is not likely to be overly helpful, though it at least provides a potential step forward.
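
If realistic simulated crash data were available, the training step might look something like the toy sketch below, which fits a small classifier to pick a crash-mode response from a few simulated crash features. The features, the hand-coded labels, and the notion that a single classifier suffices are all simplifying assumptions for illustration; scikit-learn is used only to keep the example short.

```python
import random
from sklearn.tree import DecisionTreeClassifier

def simulate_crash(seed: int) -> tuple:
    # Toy stand-in for a physics-based crash simulator: each "crash" is reduced
    # to a few features and a hand-coded "best response" label.
    rng = random.Random(seed)
    impact_speed = rng.uniform(2, 25)      # m/s
    gap_ahead = rng.uniform(0, 30)         # meters of free space in front
    brakes_working = rng.random() > 0.3
    if gap_ahead > 10 and impact_speed > 8:
        label = "accelerate_away"
    elif brakes_working:
        label = "brake_and_hold"
    else:
        label = "steer_to_shoulder"
    return [impact_speed, gap_ahead, float(brakes_working)], label

data = [simulate_crash(i) for i in range(500)]
X = [features for features, _ in data]
y = [label for _, label in data]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Query the trained model for a new simulated crash situation.
print(model.predict([[12.0, 15.0, 1.0]]))  # likely "accelerate_away" given the toy labels
```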

For more about Deep Learning, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For ensemble Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For my article about street signs and DL/ML, see: https://www.aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/

For my article about the importance of simulations in AI self-driving car development, see: https://www.aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Conclusion

We need to have the AI of a self-driving car be able to deal with car crashes.

This includes not just the pre-crash aspects and the post-crash aspects, which are usually where the attention of AI developers is aimed.

There must be a “crash mode” that is able to cope with the unwinding or evolving elements that happen during a car crash.

The crash mode could be a kind of last-resort core portion that does what it can to try to keep aware of the moment-to-moment situation and exert any car control that it can, doing so in hopes of minimizing injury, death, or damages. Similar to humans, in a manner of speaking, the AI can suffer from a type of “artificial shock” that means it will become degraded in being able to figure out what is taking place and what can be done about the emerging situation.

The complexities during a crash are enormous.

What on the self-driving car is still working and usable?

What is the status of the humans on-board? What is the situation outside the self-driving car?

How can all of these variables be coalesced into a sensible plan of action and carried out by the AI?

The odds are that whatever plan the AI derives will need to be instantly re-planned, because the situation is rapidly changing.

Other than demolition derby drivers, I’d suggest that most drivers are unable to remain steady and have the presence of mind during a car accident to do much to mitigate the consequences. The AI has a chance to be that demolition derby driver, though let’s subtract the part about wanting to purposely hit other cars as is the goal of a derby.

The AI potentially has fast-enough processing speed to try to find ways to cope with the car crash while it is occurring and take rudimentary actions related to the self-driving car. For the sake of AI self-driving cars, and for the sake of human lives, let’s put some keen focus on having “crash mode” savvy AI.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/cars-careening-out-of-control-in-crash-mode-the-case-of-ai-autonomous-cars/
