
Noxious Stimuli And The Useful Role Of Artificial Pain in AI


Future cars should have a sensory capability, such as an outer layer to detect pressure, so the car “knows” when something potentially painful is about to happen. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

The word “pain” is from the Latin poena, meaning a type of penalty or punishment.

The downside of pain is that, well, it is painful.

Some take the viewpoint that pain is a necessity, such as William Penn’s famous statement made while he was being kept in the Tower of London: “No pain, no palm; no thorns, no throne; no gall, no glory; no cross, no crown” (see his book entitled No Cross No Crown, published in 1669).

Today’s version is the oft-repeated shortened assertion that if there’s no pain, there’s no gain, or simply stated (with dramatic emphasis) “no pain, no gain,” while some prefer to take a slightly different tack and say “no guts, no glory.”

You might quibble about the no guts variation, since it subtly suggests that pain is not strictly required. It might mean that if you don’t take a chance or risk at doing something, then you won’t be able to grab the winner’s medal. Under that interpretation, it is not saying that you will necessarily experience pain. Instead, it is suggesting that you might encounter pain or you might not, and it could be that you skip the pain part entirely and still get the gain or glory. All you need is guts. This is decidedly not the same as the claim that pain is apparently a requirement to attain gain.

Darwin was a staunch believer that pain is a vital element of survival, for both humans and animals.

The logic he used was that pain serves as a means to forewarn when something might undermine your survival. It is a defense mechanism that gets you to respond and alerts you that something untoward is happening to you. As much as we might dislike pain, Darwin was suggesting that we’d be dead before we knew it, if we didn’t have an ability to experience pain.

He had an additional corollary: the aspects most likely to lead to death would typically be tied to the greatest pain.

If you get a tiny splinter in your finger or hand, it probably is not going to be overly painful. If you get an entire wood stake that punctures through your hand or arm, the odds are it is going to produce a lot of pain. In theory, the pain from the wood stake puncture is trying to tell you that you are heading toward death’s door, while the only slight amount of pain from a splinter is a pain annoyance that you can overlook or tolerate (and you won’t likely die from the splinter).

You might find it of idle interest that there is a great deal of debate about the nature of pain in animals.

Up until modern times, many asserted that animals could not “feel” pain in the same manner that humans do. Sure, an animal might wince and react to pain, but supposedly they were not mentally equipped to experience the feelings of pain that we thinking humans do. I’m not going to weigh into that debate herein. All I’ll say is that I’ve had pet dogs and cats, and it certainly seemed like they could experience pain in a manner akin to how humans do. Or, was I anthropomorphizing my pets?

Anyway, let’s consider that pain can be physically manifested, and it could be said that pain can also be emotionally or perhaps mentally manifested.

The physical manifestation is the most obvious occurrence of pain.

Noxious Stimuli And Pain

Here’s an example. I was moving a box filled with some aged AI books (they were published in the 1990s, yikes, the AI stone age!), and I accidentally dropped the box onto my big toe.

Yes, it was painful.

In more rigorous language, we could say that I had an unpleasant sensory experience.

A noxious stimulus occurred to my big toe.

The heavy box heaved down upon my skin, bone, and other biological elements. Various specialized nerve-type detectors relayed this impact to my overall nervous system, which relayed the signal to my brain. My brain triggered a reaction that included my effort to retract my toe from under the offending box, and my brain activated my vocal cords to let out an exclamation.

I won’t tell you what word I said.

I apologize for the word that I uttered, though it seemed appropriate in the heat of the moment.

Notice that I carefully tried to trace the point of origin of the pain and walked it back to my brain. There is another kind of fascinating debate going on about pain, namely we might question what “pain” actually feels like and how much the brain is involved in that determination.

Does your toe really experience pain?

Or, is it merely reacting by sending signals that tell the brain that there is something afoot (pun!), and the brain takes those signals and makes us believe there is a thing we call pain?

I’m sure you’ve heard the oft-voiced off-hand remark that the pain you are experiencing is only in your head.

Do you think that is true or false?

Some say this is a false statement and that there is truly pain that is incurred at the physical origin point and also there is pain for wherever else the pain might spread. Others say that there is just a bio-mechanical electrical-like set of signals being transmitted and until those signals get interpreted by the brain, it’s just a bunch of signals. They would claim that those signals are not what we believe to be pain and are only like electricity or water flowing through pipes.

We can pretty much agree that there are physical detectors within our bodies that are able to detect unpleasantness. I dare say everyone would agree with that assertion.

Those detectors can usually also register the amount of unpleasantness.

Prior to my dropping a weighty box onto my toe, I had the day before dropped a bottle of water onto my toe (mercifully, the bottle was less than half-full). It caused a middling amount of pain, briefly, which I immediately shook off, and moments later I forgot that it had even happened. Apparently, my toe was marked for bigger things to happen (the heavy box), unfortunately.

The intensity of pain can range from being quite mild to being overwhelming.

In addition to the intensity level, there is also a duration aspect.

The bottled water smacking my toe was the kind of pain that was relatively mild and brief in duration. The pain from the box of books landing on my misbegotten toe was relatively severe and lasted for the remainder of the day. The box-crushing, bone-battering pain would have been even more pronounced, but I rapidly applied ice to my toe and took some over-the-counter pain relief medicine.

Pain Can Be Useful

I earlier mentioned that pain is intended to be a survival mechanism.

The pain that I had in my toe was more than just an occurrence or an event. When the box fell onto my toe, it remained there, and would have remained there were it not for the fact that I shoved it out of the way. Why did I shove the box out of the way?

I shoved away the box because the pain in my toe was telling my brain that something was causing pain, and my brain wanted to find a means to curtail it. That desire to curtail the pain would be due to my brain figuring that my survival was crucial and that the pain emanating from my toe was perhaps an indicator that my life might be in jeopardy.

In that manner, the pain was a symptom. It sparked me into reacting. The reaction was intended to reduce or remove the pain. The reduction or removal of the pain could be ultimately tied to my survival. Had I let the pain continue unabated, perhaps it might be an indication that even worse pain was going to arise. The early warning provided by the toe pain was handy and helpful to me.

It is hard to be gleeful about such pain, though yes, it is doing something somewhat heroic, ensuring survival. You have to acknowledge that the pain does seem to serve a purpose. As unwelcome as it might be, it is also welcomed as a symptom that forewarns and can spur you into taking corrective action, doing so before things might get worse.

When my children were young, they did the classic thing of putting their hands too close to a hot stove, and the radiating heat generated slight pain in their hands, so they reacted by retracting their hands. Had they somehow stood tough and proceeded to put their hands even closer to the hot stove, and maybe actually tried to touch it, they most likely would have suffered third-degree burns. The earlier onset of pain at the farther-away position was a helpful indicator that they dare not try to get any closer.

We normally attempt to protect ourselves from pain.

This can include avoiding pain entirely. Had I put shoes on my feet when I was moving the heavy box, the odds are that when the box fell onto my toe I would have only marginally felt the blow. The pain from the heat of the stove top was a case of a small amount of pain that the kids reacted to in order to avoid worse pain. Overall, we react to withdraw from pain when it comes upon us, and we also seek to avoid pain if we can.

There is also a “lesson learned” element about pain that boosts our survival too.

After the kids put their hands near a hot stove, they were quite cautious in the future whenever getting near to a hot stove.

You could see that they had learned a valuable lesson.

The stove can be hot, very hot, so be careful when near it. Keep your hands away. Or, if you do need to put a hand there, approach cautiously. Some say that such lessons can only be learned by doing and are difficult to learn by simply being told. I assure you, I had told them to be careful around the stove, but in the end their curiosity was piqued, and they wanted to see what this hot stove thing was all about.

A lesson learned that involves pain will possibly lead a thinking human into trying to avoid the pain in the future, or seek to minimize the pain if the pain will likely happen, or otherwise try to prepare to deal with the pain if there’s no other course of action other than experiencing the pain. This also takes us into the realm of the infamous “no pain, no gain” claim. You might need to encounter some pain now to avoid a greater pain in the future. Either way, there’s going to be pain involved.

I recall when I was a Boy Scout leader for my son’s troop and we were preparing to go on a long hike in the mountains.

My son and I donned our hiking boots and took a series of short hikes near our home, trying to toughen up our feet, get our bodies used to carrying heavy backpacks, and get in shape for the daunting hike.

It was painful.

Each time we did our short hikes, I came back aching and sore.

Why would anyone in their right mind purposely do something that would cause this pain?

I did so to get ready for the major hike and knew that if I did not do the shorter hikes, involving minor bouts of pain, I’d be in for a world-of-hurt pain during the arduous mountain hike. I traded the near-term and lesser pain to avoid a much larger and far-term future pain. That’s the kind of lesson learned about pain that gives rise to the “no pain, no gain” mantra.

When I was in college, I was an avid rock climber. We’d go up to Yosemite Valley from Los Angeles and do some impressive half-day, all-day, and multi-day rock climbs. One of my fellow rock climbers seemed to be nearly indifferent to pain. We’d climb for hours on end, and most of us were in pain, yet he seemed to shrug it off. At first, I thought it was a macho act, aiming to convince us that he was too manly to experience pain.

It turns out that he had a different tolerance for pain than the rest of us that were in my rock-climbing clique. I’m sure you know people that seem to be able to encounter pain that might have caused you to howl and cry, but they react in a muted way and don’t seem to suffer quite as much. There are those that are on the other end of the pain spectrum, namely at the slightest bit of pain they are prone to acting as though they’ve had a knife jammed into their ribs.

This brings up the point that there are individual differences regarding pain.

There are also ways to try to train yourself to cope with pain. Some believe that your DNA dictates a foundational reaction to pain. From that foundation, you can then adjust it as based on training that you might do. One might argue that pain detection and reaction is a combination of nature and nurture.

Pain can occur for a split-second and then disappear. It can last much longer and be persistent. Longer bouts of pain are often described as being chronic. Shorter bursts are considered acute. The chronic version does not necessarily mean that you are continuously experiencing pain.

The pain could be intermittent, and yet be occurring over a longer period of time.

Physical Pain And Mental Pain

Recall that I had earlier indicated that pain can be a physical manifestation, and that it can be an emotional or mental manifestation.

Does a physical manifestation of pain have to result in an emotional or mental manifestation of pain?

That’s a dicey question to answer. Some might say that whenever you experience physical pain, you will by necessity experience emotional or mental pain, though you can potentially control the emotional or mental aspects and thus mitigate the mental side of it.

Back to the story about the Boy Scouts and going hiking. When I went on those short hikes with my son, I wanted to project a bold image of being in stupendous shape (note, having an office job at the time kept me at my desk much of the day, and I would say that my once athletic body had become haggard). I could feel the pain from my feet and aching back, yet I suppressed any visible reaction. My mind told my mouth to remain shut, no whining, no complaining. My mind told my feet to keep walking and my legs to keep moving.

In that respect, it seems like we can potentially mentally deal with the physical manifestation of pain. This doesn’t though quite answer the question about whether physical pain can occur and completely avoid involving your emotional or mental state. Part of the problem of answering the question involves ascertaining what we mean by involving the mind.

Does a tree falling in the forest make a sound?

If you have a physical manifestation of pain and it routes signals to your brain to let it know, does that constitute mental “pain” or is there only mental pain when the mind overtly recognizes the incidence of the physical pain?

One might argue that the delivery of a message is not the same as reacting or acting upon the message.

Can you have emotional or mental manifestation of pain and yet not have a physical manifestation?

You are likely to say right away that of course you can have mental pain that has no origin in physical pain. I recall a chemistry class that I took as an undergraduate in college, in which I got a disturbingly low grade.

I was in mental anguish from it! Did the chemistry professor bop me on the head, or did the grade sheet cause me to suffer a dire paper cut?

No, I was in pain mentally without any physical origin.

The reason the answer is less obvious than it seems, namely whether you can have mental pain without any associated physical manifestation, has to do with the brain itself. When I was mentally beating myself up about the chemistry class, you could argue that my brain must have been physically doing something. Neurons were firing and my brain was activating. There was a physical action happening.

Does my brain “hurting” constitute a physical manifestation of pain?

We usually only consider our limbs and the preponderance of our body as being subject to physical pain. It seems that we usually discount that the physical elements of the brain can count toward a physical manifestation of pain.

Famous Case Of Phineas Gage

One of the most famous cases of “brain pain” would be the so-called American Crowbar Case involving Phineas Gage.

If you don’t know his name or his situation, you definitely should bone up (pun!) on him, since it’s an important example used in neuroscience and cognition as a vital study of the brain (and quite related to AI).

Phineas was a railroad construction foreman. Before I tell you what happened to him, if you are squeamish then I suggest you skip the next paragraph.

This is a trigger warning!

While working on the railroad in 1848, an iron rod shaped like a javelin, measuring 1 ¼ inches in diameter and over 3 feet long, rocketed into and through his head, passing through the left part of his brain and exiting out of his skull. A physician was sought right away, and within 30 minutes a medical doctor arrived.

At that time, Phineas was seated in a chair outside a hotel, and he greeted the doctor by what many consider one of the greatest understatements in medical history, in which Phineas reportedly said: “Doctor, here is business enough for you.” Miraculously, Phineas lived a somewhat normal life until his death in 1860, lasting some dozen years after the astounding incident, and became a medical curiosity still discussed to this day.

In any case, I’m not going to get mired herein about the matter of whether a mental pain must also be associated with a physical pain.

Let’s agree for now that you can have a physical manifestation of pain, you can have a mental or emotional manifestation of pain, and you can have a combined physical-mental manifestation of pain.

The point at which you begin to feel pain is often referred to as your pain perception threshold.

The point at which you begin to react or take action due to the pain is often referred to as your pain-tolerance threshold.

As suggested earlier, different people will have differing levels of a pain perception threshold and a pain-tolerance threshold. A person’s thresholds can change over time. They can vary by the nature of the pain origins. You can potentially train yourself to increase or decrease your thresholds. And so on.
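To make the distinction a bit more concrete, here is a tiny illustrative Python sketch of how those two thresholds might be represented for a given individual (or, later on, for an AI system). The class name, numbers, and response categories are my own hypothetical choices, not a clinical model.

from dataclasses import dataclass

@dataclass
class PainProfile:
    """Illustrative model of the two thresholds discussed above (hypothetical names)."""
    perception_threshold: float   # intensity at which pain is first felt
    tolerance_threshold: float    # intensity at which a reaction is triggered

    def response(self, stimulus_intensity: float) -> str:
        # Below the perception threshold, the stimulus is not registered as pain.
        if stimulus_intensity < self.perception_threshold:
            return "no pain perceived"
        # Between the two thresholds, pain is felt but tolerated.
        if stimulus_intensity < self.tolerance_threshold:
            return "pain perceived, tolerated"
        # Above the tolerance threshold, the agent reacts (e.g., withdraws).
        return "pain perceived, reaction triggered"

# Two "people" with different thresholds, as described in the text.
stoic = PainProfile(perception_threshold=3.0, tolerance_threshold=8.0)
sensitive = PainProfile(perception_threshold=1.0, tolerance_threshold=2.0)
print(stoic.response(5.0))      # pain perceived, tolerated
print(sensitive.response(5.0))  # pain perceived, reaction triggered

The same stimulus lands on different sides of the tolerance threshold for the two profiles, which is the individual-differences point being made here.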

AI And Leveraging The Concept Of Pain

I dragged you through this background about pain to introduce you to the notion of pain in the field of Artificial Intelligence (AI).

I’m not talking about pain as in pain-in-the-neck and maybe being annoyed or finding it “painful” to study or make progress in AI research.

I’m referring to the use of “pain” as a form of penalty or punishment, doing so as a technique in AI.

Recall that I began by stating that the word pain comes from the Latin poena, meaning penalty or punishment. I’ve also pointed out that pain serves a quite useful purpose, per Darwin and others, providing us with an indicator to promote our survival. We might not like pain, and we might try to avoid it, nonetheless it does have some positive qualities as to guiding our behavior and aiding our survival.

Let’s suppose you are in the process of training a robot to maneuver in a room.

There are objects strewn throughout the room. The robot has cameras to act as the “eyes” of the robot. Via the images and video streaming into the cameras, the robot is using various image processing algorithms to figure out what objects are in the room, where the objects are, and so on.

If you put a human child into the room, and asked the child to navigate the room, what would the child do?

A toddler might wander into objects and fall over them. I remember when my children were advancing from the crawling stage to the walking stage, they would often stand up, wobbly and unstable, take a step, and likely trip or fall over something, and plop to the ground. Ouch, they’d utter. They’d look at what they fell over, and you could see their minds calculating how to avoid that object in the future.

My children did not just opt to avoid all objects. They would ascertain that some objects could potentially be crawled over to get to the other side, after which they could continue walking. It was at times easier and possibly faster to crawl over an object than it was for them to wobbly walk entirely around it. When I urged them to go faster or made it a race to get from one side of the room to the other, they were willing to flop over an object, even if it meant getting a small ouchy, versus the no-pain approach of going around the object but taking a longer time to do so.

I bring up the toddler story to have you take a closer look at the robot in such a room of objects.

You might assume that the robot, making use of AI techniques, would be analyzing the layout of the room, scanning visually to see where objects were, and would identify a plan of motion to avoid all the objects. That’s the Roomba kind of vacuum cleaner “robot” that does not act like humans would.

As mentioned, a human would potentially crawl over an object if it made sense to try to do so.

Furthermore, the human would experience potential pain in the process of climbing up onto an object, foraging across it, and getting back down to the floor. From this pain, the human would “learn” that some objects are more amenable to this crawl-over method and others are not. Perhaps an object that sits higher up is more painful when you fall down on the other side of it. The object’s surface, such as whether it is smooth versus ragged, might also be a pain producer, and would lend itself to learning about whether crawling is a sensible idea on that kind of object.

In short, I am saying that at times we need to include “pain” as a factor in an AI system such as a robot that we might be trying to train to sensibly make its way in a room of objects.

The Sentience Question

Now, I’m going to get some AI sentience believers into a bit of a roar about this topic.

Am I claiming that this robot is or can be made to experience pain?

Maybe in the far future we’ll have robots of the type that you see in science fiction stories, ones that can “feel” pain because they have some kind of mysterious elaborated biological mechanisms that have been grafted from humans. In fact, there is research of that kind taking place today.

There are special robotic gloves for example that have sensors to detect “pain” in that they detect pressure, they detect heat, and so on, trying to detect noxious stimuli that would cause a human to have pain.

I don’t want to get stuck herein in the trap that this kind of “pain” is not the same as human experienced pain.

It is somewhat akin to my earlier remarks about varying beliefs of whether animals can feel pain. Recall that there are some theorists that say that animals do not experience pain since animals do not have the mental prowess and intelligence that humans do.

In AI, for the time being, let’s use the human meaning of “pain” to refer to a metaphorical type of “artificial pain” that we will simulate in an AI system.

Just as human pain causes a human to opt to steer clear of pain, or seek to minimize pain, or otherwise cope with pain, we can do somewhat of the same for an AI system, though doing so in a more mathematical way rather than a traditionally human biological way. We can also get the AI system to ascertain “lessons learned” by experiencing the artificial pain.

What do I mean by experiencing some kind of artificial pain?

Suppose we let the robot roam throughout the room and at first provide no indication whatsoever about how to deal with objects in the room. The robot rams into an unmovable object. The robot is stuck and cannot proceed forward. It wants to keep moving, but it cannot, as it is blocked from moving forward.

The robot’s Machine Learning (ML) or Deep Learning (DL) might score a “pain point” that it hit something that was unmovable.

We’ll make this a high number or score as a pain factor.

The robot backs away from the unmovable object.

Turning to the left, the robot proceeds several feet in an open area. It comes upon a lightweight box. The robot rams into the box, which slides out of the way. In this case, the Machine Learning or Deep Learning opts to register another “pain point” though it is a low number or score since the object was readily moved and the robot was able to continue its journey.

Essentially, over time, after making many trial runs throughout the room, the robot will begin to adjust toward avoiding objects that are unmovable, and have a willingness to bump into objects that are movable. This will be due to the “learning” that occurs as a result of assigning pain points. Those unmovable objects carry a sizable number of pain points, and we’ll set up the robot to want to reduce or minimize the number of pain points it might earn.
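To give a flavor of how this could work, here is a minimal Python sketch of penalty-based “pain point” learning for the room-roaming robot. The object types, pain scores, and the simple averaging update rule are all illustrative assumptions on my part, not a description of any particular robot product or ML library.

import random

# Pain assigned per collision: unmovable objects hurt a lot, light boxes only a little.
COLLISION_PAIN = {"wall": 10.0, "heavy_crate": 8.0, "light_box": 1.0, "open_floor": 0.0}

# The robot's learned estimate of how much pain each object type will cost it.
estimated_pain = {obj: 0.0 for obj in COLLISION_PAIN}
learning_rate = 0.3

def choose_path(options):
    """Prefer the option currently believed to cause the least pain."""
    return min(options, key=lambda obj: estimated_pain[obj])

for _ in range(200):
    options = random.sample(list(COLLISION_PAIN), k=2)  # two candidate directions
    chosen = choose_path(options)
    pain = COLLISION_PAIN[chosen]  # the "noxious stimulus" actually experienced
    # Update the estimate toward the experienced pain (simple exponential averaging).
    estimated_pain[chosen] += learning_rate * (pain - estimated_pain[chosen])

print(estimated_pain)  # after many trials, unmovable objects carry high pain estimates

After enough trial runs, the robot’s estimates make the wall and the heavy crate look costly, so the choose_path step steers it toward the light box or the open floor, which is the “lesson learned” behavior described above.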

I’ve mentioned that pain comes from the Latin root meaning penalty or punishment.

We will apply a type of penalty function to the robot’s AI learning aspects. The penalties are associated with “pains” that we might define for the robot. In the case of the room, pain will be considered the ramming of objects in the room. The intensity of the pain will depend on whether the object was movable or not. If the pain is of long duration, say the robot rams into a sizable object that causes a longer delay, we’ll raise the pain score for that object encounter.

Penalty And Reward Functions

For those of you that have dealt with the mathematics of solving constrained optimization problems, you are certainly well-familiar with the use of penalty functions.

As a mathematical algorithm tries to find an optimal path, you apply some form of penalty to the steps chosen. The worse a choice makes the optimization, the higher the penalty score assigned to it. The better a choice makes the optimization, the lower the penalty score assigned at the time.

Use chess as an example. I might opt to move my queen right away into the middle of the chessboard. This could be a great move, and I am taking control of the center of the chess game. On the other hand, it might be putting my queen at great risk, right away, and I’ve not yet begun to battle in the chess match. If my opponent is able to quickly capture and remove my queen from the game, I’m going to be in some pain. Mental pain, one would say.

For playing chess, we can assign pain points to various moves and various chess pieces in your chess play. Whenever an AI chess playing system wants to consider a move, it will incorporate the pain-related points. This could help the AI steer away from lousy moves. The pain doesn’t have to relate only to the most immediate move; it could account for future pain, such as the severe disadvantage the AI would face in the end-game of the chess match without its queen.

I probably should also mention the “reward” function if I am mentioning penalty types of functions.

You might say that a reward function is akin to accruing “happiness” points. When I move my pawn forward, I score some reward points because I’ve made a move that will progress my pawn and perhaps threaten one of my opponent’s chess pieces. I am normally seeking to maximize the reward or “happiness” function. Aiming to be as happy as a clam.

When I used to teach courses on AI as a university professor in computer science, I would sometimes get a puzzled look from the students, and they would ask me whether they should be using a reward function or a penalty function. They got themselves into the classic binary world of assuming those two functions are mutually exclusive of each other. That’s a false notion.

In our everyday world, we are continually trying to maximize our rewards and minimize our penalties. For chess playing, my moving the queen to the center of the board would get me some number of reward points. That’s reassuring. It would also get me some number of penalty points. That’s important since I might otherwise be blissfully lulled into assuming that my move of the queen was a rewards-only and penalty-free action.

You can further simplify this rewards versus penalty (or pain) framing as the “carrot and stick” approach to doing things. Using the carrot is the rewards side. Using the stick is the penalty side. You can use just a carrot. You can use just a stick. Most of the time you are likely to employ both the carrot and the stick.
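Here’s a small hedged sketch of the carrot-and-stick idea applied to scoring a candidate chess move, combining reward points and pain points into one number. The feature names and weights are invented purely for illustration; a genuine chess engine evaluates positions far more elaborately.

def score_move(center_control: float, piece_development: float,
               exposure_risk: float, future_endgame_risk: float) -> float:
    # Carrot: reward ("happiness") points for good things the move accomplishes.
    reward = 2.0 * center_control + 1.0 * piece_development
    # Stick: penalty ("pain") points for risks the move takes on, now or later.
    penalty = 3.0 * exposure_risk + 1.5 * future_endgame_risk
    return reward - penalty  # maximize reward while minimizing pain

# Early queen sortie: strong center control, but the queen is badly exposed.
early_queen = score_move(center_control=0.9, piece_development=0.2,
                         exposure_risk=0.8, future_endgame_risk=0.6)
# Quiet pawn push: modest reward, almost no pain.
quiet_pawn = score_move(center_control=0.3, piece_development=0.4,
                        exposure_risk=0.05, future_endgame_risk=0.0)
print(early_queen, quiet_pawn)  # the pawn move scores higher despite the smaller carrot

The point of the sketch is that the queen move is not rewards-only and penalty-free; once the stick is counted alongside the carrot, the quieter move can come out ahead.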

I know that some don’t like referring to the penalties as a “pain” or an “artificial pain” because it perhaps gives an anthropomorphic glow to the use of a penalty function. Isn’t human pain much more complex than this mathematical calculus of rewards and penalties? If so, the use of the word “pain” might overstate the power of the algorithm and the approach in an AI context.

I’d say that the counter-argument is that we are ultimately trying to get AI to become more and more intelligent in the same manner in which we consider humans to be intelligent. Do humans require the inclusion of “pain” as a mode or feature of their physical and mental manifestation in order to be intelligent beings?

If you could make the case that we could entirely strip out “pain,” in all its manners, from humans, and be left with the same intelligence that humans have today, it might imply that in AI we don’t need to concern ourselves with the notions and capabilities of having “pain” for our AI creations.

Another view is that maybe we ought to not be trying to model AI after the likes of humans, and that we can arrive at the equivalence of human intelligence without having to have an AI system that is like that of a human. In that case, we can potentially dispense with the inclusion of “pain” into the AI systems that we are devising.

For the moment, I’d vote that we consider trying to do what we can to model “pain” into AI systems and see how far we get by abiding by what humans seem to do regarding pain. This is merely one path. It does not preclude those that want to dispense with the “artificial pain” and opt to pursue a different course.

Furthermore, by using the word “pain” it helps to keep us grounded as to pushing further and further into how humans manifest pain and how it guides their intelligence and their behaviors. If you used the word “penalties” instead, it seems a bit muddled and less on-point that we’re aiming to figure out the nature and use of “pain” and want to somehow manifest it into AI.

As I mentioned earlier, there are efforts underway to construct physical “artificial pain receptors” for robots, such as the gloves that I mentioned, and to otherwise outfit a robot with ways to detect some form of “pain,” depending upon how you want to define pain.

That’s a physical manifestation of pain for AI systems.

We can also have a “mental” manifestation of pain for AI systems, akin to what I described earlier about the robot in the room that learns as it hits objects, or the chess playing AI that learns as it decides on chess moves.

That’s a “mental” manifestation of pain for AI systems (I put the word mental into quotes to distinguish that I am not referring to the human mental but instead to an artificial or automation mental).

The two can be combined, of course.

The robot in the room might have sensors that detect when it collides with an object, the physical manifestation, which gets relayed to the AI system running the robot (the “mental” manifestation). The robot AI system then commands the robot to turn back from the object and move another way. In a crude manner, this might be akin to my children detecting the heat from the stove top and opting to move their hands away from it.

For my article about super-intelligence AI, see: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my article about the Turing Test and AI, see: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For the aspects of AI sentience and the singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For why some believe we should start over on AI, see my article: https://www.aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

For the concerns about making AI Frankenstein’s, see my article: https://www.aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

Example Of Artificial Pain As Applied To AI Self-Driving Cars

This discussion about artificial pain can be further explored via its application to the field of AI self-driving driverless autonomous cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect involves the inclusion of “artificial pain” as a means to advance the AI systems that are used to drive a self-driving car.

Allow me to elaborate.

I’d like first to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5 and a Level 4, there must be a human driver present in the car.

The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 and Level 4 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of “artificial pain,” let’s take a look at how this approach can be used in advancing the various AI systems for AI self-driving cars.

I’ll start with perhaps the most obvious question that I am frequently asked on this topic: what in the world does “pain” have to do with driving a car?

To answer this question, let’s use our earlier notion that there can be a physical manifestation of pain and a mental or emotional manifestation of pain.

For the physical manifestation, you might at first glance point out that a car does not physically experience pain. It might get dented. It might get nicked or scratched. It might get scrambled by a blow from another car. Throughout any of those physical encounters and results, we would be hard-pressed to suggest that the car felt any pain per se.

The car has little to almost no built-in capability that we could convincingly argue is a pain system of some kind.

There is essentially no detection by the car that it has experienced anything akin to pain. The car does not tell you that it just got dented by a shopping cart that rolled into it. Nor does the car react to the shopping cart by emitting a bleating horn that might be saying ouch. The car also doesn’t choose to move away from the shopping cart, perhaps realizing that other shopping carts might soon descend upon the car and cause further injury or damage to the car.

Admittedly, there are some ways that you could stretch the definition of pain detection to claim that a conventional car has some means to realize that a painful moment is possibly going to occur.

Some cars have curb feelers, which I’m guessing many of you might not know about. These were popular on cars in the 1950s or so. They are thin, springy poles or wires that extend from the lower base of the car and are intended to bend and make a sound when they scrape against something. A driver would use the curb feelers when trying to park the car. Upon nearing a curb, a curb feeler would touch it and begin to bend as you got closer, causing the feeler to make a noise, and the driver would then realize they were getting darned close to the curb. The driver would then presumably avoid getting any closer (this was intended, for example, to avoid marring the whitewalls of a fancy tire).

In more modern times, motion sensors and sound sensors became popular items to add to your car.

People were fearful that someone might try to steal their car, and by having a motion sensor or sound sensor you could detect an untoward action. This led to parked cars in parking lots that incessantly flashed their headlights on-and-off and bleated the horn until your ears couldn’t take it anymore, merely because somebody had innocently gotten near to such an equipped car.

Some of these systems would emit a loud voice telling you to get away from the car.

This was jarring to people and often scared them needlessly. All you might be doing is parking your car next to one of these defensively equipped cars, and the next thing you know that car is yelling at you. It became rather obnoxious. There were some people that delighted in purposely goading these systems into going on the defense, doing so in hopes that it might run down the car’s battery and the car would end up silent and unable to start. Serves them right, some figured.

For more about pranking and how it will emerge for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/

For the danger of the shiggy challenge and moving AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/shiggy-challenge-and-dangers-of-an-in-motion-ai-self-driving-car/

For my article about irrational behaviors of people and cars, see: https://www.aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/

For the nature of curiosity and how it might apply to AI and self-driving cars, see: https://www.aitrends.com/selfdrivingcars/curiosity-core/

For resiliency needs of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/self-adapting-resiliency-for-ai-self-driving-cars/

More On Pain Detection And Reaction

I suppose you could try to suggest that the curb feeler was an artificial pain detection device, seeking to alert you when the car was getting overly close to a curb, and you might also say that the motion sensors and sound sensors likewise were a pain detection based on the physical presence of another. This does seem a mild stretch, if not more so.

Alright, let’s say that these were at best a simplistic and faraway reach of a pain detection and reaction system. Does that mean that since there hasn’t been a true effort to-date to seek out an artificial pain detection and reaction system for a car that we ought not to have one?

There are some that believe future cars should have a better sensory capability to detect when something untoward might happen to the car. Perhaps there should be an outer layer of the car that can detect pressure. If someone leans onto your car or jumps on the hood, the layer would sense this action and could relay the matter to the AI system that is driving the car. This would be the same notion as the pain detection of the human body, along with the signaling transmitted to the brain, and a reaction by the brain based on the source and nature of the “pain” detected.
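As a rough illustration of that relay, here is a minimal sketch assuming a hypothetical outer “pain receptor” layer that reports pressure readings per body panel to the on-board AI. The panel names, threshold, and reaction are invented for the example and are not any automaker’s actual interface.

PRESSURE_ALERT_THRESHOLD = 25.0  # kPa, hypothetical cutoff for a "noxious" contact

def relay_pressure_events(panel_readings: dict) -> None:
    """Relay noxious readings from the outer layer to the AI driving system."""
    for panel, pressure in panel_readings.items():
        if pressure >= PRESSURE_ALERT_THRESHOLD:
            react_to_pressure(panel, pressure)

def react_to_pressure(panel: str, pressure: float) -> None:
    # The "brain" side: log the event and flag the panel for a damage self-check;
    # a fuller system would feed this into the AI action planner.
    print(f"Pain signal: {panel} at {pressure:.1f} kPa, scheduling self-check")

relay_pressure_events({"hood": 40.0, "left_door": 5.0, "rear_bumper": 30.0})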

For an AI self-driving car, given that it will likely already be outfitted with ultrasonic sensors, radar, LIDAR, cameras, and the like, you might not gain much by adding this kind of “pain receptors” layer to the car. By-and-large, those other sensory devices might be able to detect the same kinds of actions, doing so each via their own form of data collection.

Yet, there is an interesting case to be made that a futuristic car might be better off if it could self-sense its own woes. Did the shopping cart just cause a dent or did it bash-in the front bumper? Is that front bumper still usable? Is the front bumper now in the way of the car and might it cause other problems when the car goes into motion?

If you could have sensors throughout the car, both on an outer layer and an inner layer, it might allow the AI to be more self-aware of the physical status of the car. It seems unlikely that the other sensory devices of the AI self-driving car would be suited to providing that kind of indication about the physical status of the self-driving car in the same comprehensive and all-encompassing manner.

We’ll next shift our attention to the mental or emotional manifestation of pain and how it might apply to an AI self-driving car.

When AI developers are crafting image processing capabilities for a self-driving car, they are often already making use of various penalty functions, which earlier I’ve likened to the use of “pain” as a means for guiding learning during a Machine Learning or Deep Learning system setup. A deep artificial neural network might be created via the use of a penalty function when analyzing images.

Suppose we are training a Deep Learning system to identify street signs such as stop signs and roadway caution signs. You can collect thousands of images of such signs and feed them into a large-scale neural network. The closer the neural network gets to correctly identifying a stop sign as a stop sign, the more a reward function adjusts the weights and other factors, so the DL gets numerically boosted for doing a good job.

You might also have a penalty function. The further off the neural network is, such as mistaking a yellow caution sign for a red stop sign, the more the weights and other factors get deductions or penalties. These could be characterized as the “pain” for having been off-target.
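To illustrate the penalty side in miniature, here is a toy Python sketch of an error-correcting learner on fabricated “sign” feature vectors. Real street-sign recognition uses deep convolutional networks and formal loss functions; this sketch only shows how a wrong answer incurs a numeric correction to the weights, which plays the role of the “pain” described above.

import random

# Fabricated features: (redness, octagon_score), label 1 = stop sign, 0 = caution sign.
examples = [((0.9, 0.8), 1), ((0.85, 0.9), 1), ((0.2, 0.1), 0), ((0.3, 0.15), 0)]
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for epoch in range(50):
    random.shuffle(examples)
    for (x1, x2), label in examples:
        prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = label - prediction  # nonzero only when the guess was wrong
        # The "penalty": a wrong answer forces a corrective adjustment of the weights.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

print(weights, bias)  # weights drift toward favoring red, octagonal features for "stop"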

For more about Deep Learning, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For the use of ensemble Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For my article about emotions and AI, see: https://www.aitrends.com/selfdrivingcars/ai-emotional-intelligence-and-emotion-recognition-the-case-of-ai-self-driving-cars/

For convolutional neural networks and AI, see my article: https://www.aitrends.com/selfdrivingcars/deep-compression-pruning-machine-learning-ai-self-driving-cars-using-convolutional-neural-networks-cnn/

There is another realm of “pain” that few AI self-driving cars are encompassing as yet, and which holds promise as a means to boost AI self-driving capabilities.

When a human driver is driving a car, they might feel pain when the car takes a curve too sharply or when they are driving really fast and do a rapid swerve. In self-driving cars, the use of an IMU (Inertial Measurement Unit) already provides some indications about these kinds of movements of the car. The AI ought to be doing more with the IMU than it does now.

This has been classified by most of the auto makers and tech firms as an “edge” problem and so it is further down on the list of matters to fully embellish.
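To make the IMU idea concrete, here is a hedged sketch of converting IMU-style readings into pain points for overly harsh maneuvers. The thresholds and scoring weights are illustrative assumptions of mine, not values from any production self-driving system.

HARSH_LATERAL_G = 0.5   # lateral acceleration above this is uncomfortable
HARSH_BRAKE_G = 0.6     # longitudinal deceleration above this is uncomfortable

def maneuver_pain(lateral_g: float, decel_g: float) -> float:
    """Score the discomfort ("pain") of a maneuver from IMU-style readings."""
    pain = 0.0
    if abs(lateral_g) > HARSH_LATERAL_G:
        pain += 10.0 * (abs(lateral_g) - HARSH_LATERAL_G)  # sharp swerve or hard cornering
    if decel_g > HARSH_BRAKE_G:
        pain += 8.0 * (decel_g - HARSH_BRAKE_G)            # hard braking
    return pain

# The AI action planner could add this pain to its penalty total for a candidate maneuver.
print(maneuver_pain(lateral_g=0.8, decel_g=0.2))  # rapid swerve: incurs pain
print(maneuver_pain(lateral_g=0.2, decel_g=0.1))  # gentle maneuver: no pain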

For more about IMU’s, see my article: https://www.aitrends.com/ai-insider/proprioceptive-inertial-measurement-units-imu-self-driving-cars/

For simulations and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

Human drivers also anticipate pain.

One might suggest that a driver realizes that if they crash into another car or sideswipe a wall, it is going to potentially cause them pain. They are most likely thinking about the injury or harm that might occur to them as a driver inside the car, and the bleeding and broken bones that would result. I’m going to lump that into the pain bucket. Those are all physical indicators for which pain is quite likely to occur.

Is a driver also worried about the cost to potentially repair a damaged car if getting into a car accident? Are they worried too about their car insurance rates going up due to an accident? Yes, and you might be willing to agree those are all “pain points” associated with getting into a car accident.

Remember we are now focusing on the mental manifestation of pain. In that case, these concerns and qualms about getting physically harmed and also getting financially harmed can be connected to a type of mental pain.

The mental pain therefore guides the driver toward avoiding those anticipated actions and results that might produce pain.

Let’s recast this into the AI action planning aspects of a self-driving car.

The AI is trying to avoid getting the self-driving car into an accident. Why? Because the AI developers have presumably developed the AI code to do so. There might be code in the AI system stating: do not run into the car ahead of you; stay back at least one car length for each 10 miles per hour of speed, allowing for a buffer to avoid hitting the car.

Another augmented approach involves using Machine Learning and Deep Learning to guide the AI in figuring out the AI action planning. If the self-driving car gets too close to the car ahead, apply a penalty function that takes away points. Or, if you like, administer some mental pain to numerically discourage the behavior of getting overly close to the car ahead.
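Here is a minimal sketch of that following-distance “pain,” using the rule of thumb of one car length per 10 miles per hour mentioned above. The car-length constant and the squared penalty shape are my own illustrative choices, not anyone’s actual planning code.

CAR_LENGTH_FT = 15.0  # rough car length in feet, an assumed constant

def following_distance_penalty(speed_mph: float, gap_ft: float) -> float:
    """Return a penalty ("mental pain") that grows as the gap falls below the rule of thumb."""
    required_gap = CAR_LENGTH_FT * (speed_mph / 10.0)  # one car length per 10 mph
    shortfall = max(0.0, required_gap - gap_ft)
    return shortfall ** 2  # pain ramps up sharply as the buffer disappears

# Candidate plans at 60 mph: a 90 ft gap meets the rule; a 45 ft gap incurs pain.
print(following_distance_penalty(60.0, 90.0))  # 0.0
print(following_distance_penalty(60.0, 45.0))  # 2025.0

An AI action planner minimizing its total penalty would be numerically discouraged from plans that shrink the gap, which is the “mental pain” steering behavior described above.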

This also will aid in the rather untouched, as-yet, area of AI self-awareness for self-driving cars. Most of the AI systems for self-driving cars are not anywhere close to being self-aware. I’m not trying to take us down the sentience route, only bringing us to the notion that there needs to be a part of the AI that overlooks the AI system. We humans do the same. We overlook our own behavior, doing so to gauge and adjust as we are performing a task, such as driving a car.

For my article about self-aware AI, see: https://www.aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/

For code obfuscation of AI systems, see my article: https://www.aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

For reverse engineering of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For my article about getting motion sickness when inside a self-driving car, see: https://www.aitrends.com/selfdrivingcars/kinetosis-anti-motion-sickness-ai-self-driving-cars/

For my article about safety aspects of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Conclusion

Pain. It is a key force of our existence. Songs are written about it. Many of our greatest literary works are about pain. The greatest paintings ever made tend to depict pain.

Shall we exclude pain from AI systems? If so, are we potentially losing out on what might be an essential ingredient in the formation and emergence of intelligence? Some might say that pain and intelligence are inextricably connected. Darwin would have us believe that pain is a survival mechanism and crucial to why we humans have lasted.

It pains me to say that we might indeed need to do more about pain for AI systems to further advance. The mysteries of pain in the human body are still being figured out. Likewise, we could consider how to apply whatever we do know about pain into the advancement of AI systems. For AI self-driving cars, we already use a pain-like aspect involving penalties and penalty functions.

More pain might be the remedy to get us toward more human-like driving.

No pain, no gain, as they say.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/noxious-stimuli-and-the-useful-role-of-artificial-pain-in-ai/
