
Stockholm Syndrome and AI Autonomous Cars


Like canaries in a coal mine, participants in self-driving car tests run the risk of being victimized by Stockholm Syndrome, thus masking potential safety issues. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

You might be vaguely aware of the Stockholm Syndrome.

From time to time, the news media will refer to a situation as somehow invoking the famous case of what happened in the 1970s in Stockholm, Sweden.

In that case, bank robbers in Stockholm took several hostages and holed up in the bank vault for six days, refusing to come out and refusing to give up the hostages. After the siege ended, the hostages surprisingly refused to testify against the kidnappers/robbers and were generally supportive of their captors.

This certainly seemed like a curious outcome.

We would have expected the kidnapped victims to be upset and likely quite angry toward their kidnappers, perhaps even wanting some kind of extensive revenge or at least demonstrative punishment for the crime committed. The local police brought in an expert psychiatrist/criminologist, who said it was an example of brainwashing.

The phenomenon came to be called the Stockholm Syndrome, and the name seems to have stuck ever since.

Background About The Stockholm Syndrome

The syndrome is usually characterized as a bond that develops between the hostages and the captors. The hostages might start out rightfully hostile toward the captors and then gradually shift toward having positive feelings about them. This typically emerges slowly over the period of captivity rather than instantaneously.

After getting out of captivity, the hostages might continue to retain that sense of a positive bond. At first, the bonding is often quite strong, and then it dissipates over time. Ultimately, the hostages might someday change their minds and begin to have more pronounced negative feelings toward the captors. This all depends on a number of factors, such as how the hostages were treated during captivity, their interactions with the captors afterward, and so on.

If you carefully consider the phenomenon, it might not seem particularly strange that during captivity the hostages might bond with their captors.

One could say that this is a coping mechanism.

It might increase your odds of survival. It might also be a means to mentally escape the reality of the situation. It could also be a kind of personal acquiescence to the situation, especially if you believe you might never escape. Various psychological explanations are possible.

What tends to really puzzle outsiders is that after captivity the hostages continue to retain that positive bond. It would seem that once you gained your freedom, and you no longer believed the bond was your only route to pure survival, you would fairly quickly bounce back with rage or some similar reaction. We’d all allow that for the first few minutes or hours after getting out of captivity you might still be mired in what had occurred, but after days, weeks, or months, we’d assume the hostages would recalibrate mentally and no longer have that false bonding muddled in their minds.

Some might say that the after-effect lasts because the hostage maybe wants to self-justify the earlier bonding.

In other words, if you bonded during captivity, maybe afterward you would be embarrassed to admit it was a mistake, so you keep it going to try to show that it made sense all along. Another explanation is that the person was so thoroughly brainwashed during captivity that the bond remains nearly permanently affixed in their psyche. There are lots of theories about this. No one explanation seems to be the all-purpose way to rationalize it.

Some object to the references about the Stockholm Syndrome and believe that it has become a kind of scapegoat to explain all sorts of unusual psychological situations.

Some say it has been watered down due to overuse. Some say it never had a crisp definition to start with and has become a popular term that lacks a bona fide professional psychological basis and use. Some try to create variants by renaming it after a local situation, say the Los Angeles Syndrome or the Piccadilly Circus Syndrome.

Admittedly, it’s a handy kind of reference paradigm that most people seem to know enough about that it can get their attention and interest.

Stockholm Syndrome And AI Autonomous Cars

Which brings us to the next point, namely, what does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As part of that effort, we’re also keenly interested in the trial tests of AI self-driving cars.

Alphabet’s Waymo has one of the most well-publicized trial tests of AI self-driving cars. It has, for example, been using a selected area of Phoenix, Arizona, in which everyday people make use of the Waymo self-driving cars. This is being done as a kind of experiment, or maybe you’d prefer to call it a Proof of Concept (POC), or a pilot, or a test, or a trial run, or whatever. Cleverly, Waymo coined it the “Early Rider Program” and the participants are Early Riders. The naming brings forth imagery of mavericks, those who dare to be first, and it provides an obviously upbeat way to portray the program (reminiscent of the movie Easy Rider and the freewheeling imagery of motorcyclists).

Let’s clarify that those initial trial runs did not involve randomly picking people up off the street.

Even though these are genuinely public kinds of trial runs, the participants needed to first apply to the program.

Only those applicants then chosen by Waymo are allowed to participate. You could say it is open to anyone in the sense that anyone can apply, but whatever selection criteria are used, the program becomes semi-selective, drawing from the pool of whoever actually applies.

This is in contrast to, say, having AI self-driving cars roaming around and picking up anyone who happens to flag one down (which is something gradually starting to occur, including in other parts of the country).

The stated purpose of the Early Rider Program was to provide an opportunity for residents in the geographical area to have access to these AI self-driving cars and provide feedback about them.

In that sense, you can imagine how exciting it might be to become a chosen participant.

You could help shape not only how Waymo is making AI self-driving cars, but maybe the entire future of AI self-driving cars.

And the bragging rights would be awesome, both at the time of your participation and afterward. Imagine that you want to impress a date, and you tell them you’ll swing over at 7:00 p.m. to take them to dinner. Lo and behold, you show up in an AI self-driving car. Whoa, impressive! Or, some years from now, when presumably AI self-driving cars are everywhere, you chat with a stranger and mention that, yes, you were one of the original pioneers who helped shape AI self-driving cars. You act modestly, as though it was no big deal, and when the person says you were like Neil Armstrong or “Buzz” Aldrin, Jr., you smile and say that you were a bit of a risk taker in your early days.

Speaking of risks, how much risk are these participants taking on?

According to reports, the trial runs have had a back-up human driver from Waymo in the cars; thus, presumably, there has been a licensed driver ready to take over if needed.

Presumably, this is not just any licensed driver, but one trained to keep their attention on the self-driving car and to be ready to step into the driving task when needed. This is definitely intended to reduce the risks of the AI self-driving car going awry. But this is also not necessarily a risk-free kind of ride, since there are numerous issues with having a so-called back-up driver try to co-share the driving task.

As recently indicated, the back-up driver will no longer be present in some rides and some locales.

See my article about the drawbacks of the co-sharing of driving with a back-up human driver: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For my framework about AI self-driving cars, please see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Scope Of Trial Runs

In quick recap, a trial run of this nature consists of vendor-selected people in a predetermined geographical area who are asked to participate in a kind of real-world experiment in which AI self-driving cars transport them from time to time, as determined by the vendor.

It’s quite a bit different from having people come to a closed track or proving ground to do trial runs, and so in that sense this is a bolder and more illuminating way to presumably get insightful feedback about AI self-driving cars.

For my article about the closed track areas for AI self-driving cars, see: https://aitrends.com/ai-insider/proving-grounds-ai-self-driving-cars/

One criticism is that these are indeed vendor-selected participants, meaning that the auto maker or tech firm has chosen the people who are participating.

Suppose there is some kind of purposeful selection criterion that is weeding out certain kinds of people, or maybe a subliminal selection bias; in that case, whatever is learned during these trial runs is lopsided. It presumably doesn’t cover the full gamut of people.

Will the result be an AI self-driving car that has certain kinds of biases and those biases will be reflected in what AI self-driving cars do and how they behave?
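To make the selection-bias concern concrete, here is a minimal Python sketch, using entirely made-up numbers and an invented feedback model, of how a self-selected applicant pool can skew the feedback a vendor receives: if mostly enthusiasts apply, the average reported satisfaction looks rosier than what the general population would report.

```python
import random

random.seed(42)

# Made-up illustration: enthusiasm for self-driving cars in the general
# population, on a 0-10 scale.
population = [random.uniform(0, 10) for _ in range(100_000)]

# Hypothetical self-selection: only enthusiasts (score >= 7) bother to apply.
applicants = [p for p in population if p >= 7]

def feedback(enthusiasm: float) -> float:
    """Invented model: reported satisfaction rises with prior enthusiasm."""
    return min(10.0, 0.5 * enthusiasm + random.uniform(3.0, 6.0))

avg_pop = sum(feedback(p) for p in population) / len(population)
avg_app = sum(feedback(a) for a in applicants) / len(applicants)

print(f"Average feedback if everyone rode:      {avg_pop:.2f}")
print(f"Average feedback, self-selected riders: {avg_app:.2f}")
# The applicant pool looks rosier purely because of who opted in.
```

The numbers are invented, but the mechanism is real: any learning or tuning driven by such feedback inherits whatever slant the applicant pool carries.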

For my article about hidden biases in AI self-driving cars, see: https://aitrends.com/ai-insider/debiasing-ai-self-driving-cars/

Another reported aspect is that the participants in such trial runs are required to sign NDAs (Non-Disclosure Agreements).

This presumably restricts the participants from freely commenting to the public at large about their experiences riding in these AI self-driving cars. You can certainly empathize with the auto maker or tech firm wanting to keep the participants somewhat under wraps about their newly emerging AI self-driving cars. Imagine if a participant made an off-hand remark that they hate the thing and no one should ever ride in one. This could be a completely unfair and baseless statement, which would appear to have credence simply because the person was a participant in the trial runs.

There could also be proprietary elements underlying the AI self-driving cars that could be blurted out by a participant and undermine the secrecy of the Intellectual Property (IP) of the vendor. Right now, the AI self-driving car companies are in a fierce battle to see who can achieve this moonshot first.

There is already a lot of sneaking around to find out what other firms are doing.

There’s a potential treasure trove that you might be able to get a participant to unwittingly divulge.

For my article about the stealing of secrets about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

For why this is considered a moonshot effort to create an AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

There are some who think the auto makers and tech firms should not restrict the participants in any manner whatsoever.

They argue that it is important for the public to know what these participants feel about AI self-driving cars. Good or bad. Right or wrong. Blemishes or not. It is for the good of the public overall to know what the participants have to say.

Furthermore, they would likely claim that it will help the other automakers and tech firms too. In other words, if you believe that AI self-driving cars provide great benefits to society, the sooner we get there, the better for all of society. Thus, the more that the auto makers and tech firms share with each other, the sooner the benefits will emerge.

For my article about idealists and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

For my article about the Frankenstein like potential dangers, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

To some degree, the participants in these kinds of trial runs have been periodically allowed to say something about their experiences.

You’ll see a quote in newspaper articles or magazines, or on some social media sites. Usually, it is a very carefully crafted indication, or at least one that has been vetted and approved for release by the automaker or tech firm. It is rarely a fully off-the-cuff, anything-you-want-to-say utterance.

This is again a result of the NDA, and of the auto maker or tech firm wanting to try to shape the public perception of the matter.

You can imagine that if rocket makers were building new rockets, and their trial runs had issues, it could become a public relations nightmare if the tiniest imperfections were made known and then potentially blown out of proportion. This actually does happen. Companies trying to create some new technology will at times get clobbered by the fact that it isn’t working right, even though they are well aware it is not yet ready for prime time, which is precisely why they want to run trials first. But if the trials become the focus of attention, and if complete perfection is the public’s criterion (even during the trial runs), then the trials serve no useful purpose, since you would need to hold back from doing any trials at all until the system was perfected anyway. It’s kind of a Catch-22.

Feedback Taken With A Grain Of Salt

Let’s now shift our attention to something else, related to this whole topic.

At some of my recent presentations at industry conferences, I’ve been asked about some of the comments that participants in these trial runs have been making so far.

The comments are usually quite glowing.

Even if there is a mention of something that went awry, the participants seem to then explain it away and the whole thing seems just peachy.

For example, one participant reported that an AI self-driving car got somewhat lost in a mall parking lot while trying to get to the rider’s desired destination; later on, the AI developers adjusted the system to instead go to a designated drop-off point. This is a lighthearted tale. No one was hurt, and there was no apparent concern, other than maybe some excess time spent waiting for the AI self-driving car to find the proper spot. Plus, it was later fixed anyway.

Others with a more critical eye question these kinds of stories.

Shouldn’t we be concerned that the AI system wasn’t able to better navigate the mall parking lot?

Maybe there are other locations that it would have problems with too?

Shouldn’t we be concerned that the AI system itself wasn’t able to make a correction, and that instead it required human intervention by the developers?

If AI self-driving cars aren’t going to be self-corrective, doesn’t that undermine what we are expecting of Machine Learning and of the AI’s abilities for self-driving cars? And so on.
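To see the distinction between a developer patch and genuine self-correction, consider this hypothetical Python sketch (the names, coordinates, and thresholds are invented for illustration and are not Waymo’s actual approach) of the kind of hard-coded fallback described in the parking lot anecdote above:

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float
    lon: float

# Hypothetical values; a real system would carry far richer state.
DESIGNATED_DROPOFF = Waypoint(lat=33.4484, lon=-112.0740)  # made-up point
MAX_SEARCH_SECONDS = 120.0

def choose_destination(requested: Waypoint, search_time_s: float) -> Waypoint:
    """Route to a fixed drop-off point if the planner flounders too long.

    Note this is a human-authored rule, not self-correction: the system
    learns nothing from the failure; engineers simply route around it.
    """
    if search_time_s > MAX_SEARCH_SECONDS:
        return DESIGNATED_DROPOFF
    return requested

# Example: the planner has wandered the mall lot for three minutes.
rider_choice = Waypoint(lat=33.4500, lon=-112.0700)
print(choose_destination(rider_choice, search_time_s=180.0))
```

A self-correcting system would instead update its own navigation behavior based on the failure; a hand-written rule like the one sketched here merely papers over it.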

In any case, here’s the question that I sometimes get asked – are these participants in these tryouts perhaps suffering from Stockholm Syndrome?

There are some that seem to be concerned that the apparently whitewashed commentary being provided by the trial run participants might be a form of Stockholm Syndrome.

Maybe the participants are being “brainwashed” into believing that the AI self-driving cars are fine and dandy. Perhaps this outlook comes not from their own free will, but from having it droned into their heads.

I’ll admit that I was a bit taken aback the first time I was asked this question.

I believe my answer was, “Say what?”

After some reflective thought, I pointed out that the “Stockholm Syndrome” is perhaps a misapplication in this case.

The commonly accepted notion of the Stockholm Syndrome is that you have some kind of hostages and some kind of captors.

I dare say, it doesn’t seem like these trial run participants are hostages.

They voluntarily agreed to participate.

They put themselves forth to become participants.

They weren’t grabbed up in the cover of darkness and thrown into AI self-driving cars.

So, I reject the notion that you can somehow compare these trial runs with a hostage-captor scenario.

The comparison might seem appetizing, especially if you are someone averse to the trial runs, or at least to how you believe the trial runs are being conducted. It also has a clever stickiness to it, meaning that it could stick to the trial runs because it sounds applicable on a surface basis.

Suppose I am going to create a new kind of ice cream. I ask for volunteers to taste it. Those that are volunteering are presumably already predisposed to liking ice cream. I select volunteers that are passionate about ice cream and really care for it. I then have them start tasting the ice cream. They like it, and it’s a flavor and type they’ve never before had a chance to try. They are excited to be one of the first. They also believe they are shaping the future of ice cream for us all.

If I did that, I think we’d likely expect the participants to generally have glowing comments about the ice cream trial. They might even suppress some of the not-so-good aspects, especially if we right away modified the flavors based on their feedback. After the trial runs are over, suppose the ice cream goes into mass production. I would anticipate that the original volunteers are likely to continue saying that the ice cream was great.

Does this mean that they are suffering from the Stockholm Syndrome? Just because they bonded in a positive way, and kept that positive bonding later on? I think that strips out the essence of the Stockholm Syndrome, the hostage part of things. The mistreatment part of things.

The analogy or metaphor falls apart due to a key linking element that is not there.

During these trial runs of these emerging AI self-driving cars, if some of the participants get injured or killed due to the AI self-driving car, I’d be pretty shocked if that got covered up. I think we’d all know about it. One way or another, it would leak out. There would likely be lawsuits filed. Someone would leak it. An inquisitive reporter would find out about it. An anonymous tip would get posted on a blog. Etc.

I mention this aspect because, for those concerned about the positive commentary to date about these trial runs, I’m suggesting that if there is something really amiss, I think it will become known.

In spite of the assertion that the participants are brainwashed, I doubt that the brainwashing could be so good that it would curtail the revelation of something systematically wrong and life-threatening.

I realize there are conspiracy theorists who might disagree with me; see my article about conspiracy theories about AI self-driving cars: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Recap And Conclusion

Overall, here are my key thoughts on this matter:

  • Trial runs are a generally good thing for progress on AI self-driving cars, though some argue we are dangerously being turned into guinea pigs
  • Auto makers and tech firms need to remain vigilant to undertake these trial runs safely
  • Participants might be somewhat muted about things that go awry
  • Participants will likely be reporting publicly only upbeat aspects, which we should consider but also at times take with a grain of salt
  • Calamities during the trial runs are likely to leak out, and so it is probably going to be difficult for vendors to keep a lid on issues
  • It is understandable why there are various controls related to the release of info about the trial runs
  • There does not seem to be any conspiratorial concern on this (I’ll add “as yet” for those who hold out for a conspiracy)
  • Trying to say this is a Stockholm Syndrome seems to be an overreach

We’ll need to keep our eye on the autonomous car tryouts, including the passengers and their reactions.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/stockholm-syndrome-and-ai-autonomous-cars/
