
Ethics In AI Awareness And AI Autonomous Cars


Ethically, developers of AI self-driving cars need to be aware of incidents happening in the real world, from small ones to accidents resulting in fatalities. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider

One of the first jobs that I took after having earned my degree in computer science involved doing work for a large manufacturer of airplanes.

I was excited about the new job and eager to showcase my programming prowess.

My friends who had graduated when I did were at various other companies, and we were all vying to see which of us would get the juiciest project. There were some serious bragging rights to be had. The more important and world-shaking the project, the more you could lord it over the rest of us.

My manager handed me some specs that he had put together for a program that would do data analysis.

Nowadays, you’d likely use any of a myriad of handy data analysis tools to do the work required, but in those days the data analysis tools were crude and expensive, and you might as well build your own. He didn’t quite tell me what the data was about and instead just indicated the types of analyses and statistics that my program would need to generate based on the data.

I slaved away at the code.

I got in early and left late.

I was going to show my manager that I would do whatever it took to get the thing going in as few days as I could. I had it working pretty well and presented the program to him. He seemed pleased and told me he’d be using the program and would get back to me. After about a week, he came back and said that some changes were needed based on feedback about the program.

He also then revealed to me the nature of the data and the purpose of the effort.

It had to do with the design of airplane windshields.

You’ve probably heard stories of planes that take off in some locales and encounter flocks of birds. The birds can potentially gum up the engines of the plane. Even more likely is that the birds might strike the windshield and fracture it or punch a hole in it. The danger to the integrity of the plane and the issues this could cause for the pilots are significant, and thus it is worthwhile to design windshields that can withstand such impacts.

The data that my program was analyzing consisted of two separate datasets.

First, there was data collected from real windshields that, in the course of flying on planes around the world, had been struck by birds. Second, the company had set up a wind tunnel that contained various windshield designs and was firing clay blobs at the windshields. My program analyzed the various impacts to the windshields and also compared the test windshields used in the wind tunnel against the real-world impacted ones.

I right away contacted my former college buddies and pointed out that my work was going to save lives. Via my program, there would be an opportunity to redesign windshields so that newer ones would better withstand such impacts. Who knew that my first program out of college would have a worldwide impact? It was amazing.

I also noted that whenever any of my friends were to go flying in a plane in the future, they should look at the windshield and be thinking “Lance made that happen.”

Bragging rights for sure!

What happened next though dashed my delight to some degree.

After the windshield design team reviewed the reports produced by my program, they came back to me with some new data and some changes needed to the code. I made the changes. They looked at the new results. About two weeks later, they came back with newer data and some changes to be made to the code. No one had explained what made this data any different, nor why the code changes were needed. I assumed it was just a series of tests using the clay blobs in the wind tunnel.

Turns out that the clay blobs were not impacting the windshields in the same manner as the real-world results of birds hitting the windshields. Believe it or not, they switched to using frozen chickens instead of the clay blobs. After I had loaded that data and they reviewed the results, they determined that a frozen chicken does not have the same impact as a live bird. They then got permission to use real live chickens. That was the next set of data I received, namely, data from live chickens that had been shot out of a cannon inside a wind tunnel and smacked against the test windshields.

When I mentioned this to my friends, some of them said that I should quit the project. It was their view that it was ethically wrong to use live chickens. I was contributing to the deaths of living animals. If I had any backbone, some of them said, I would march into my manager’s office and declare that I would not stand for such a thing. I subtly pointed out that the loss of the lives of some chickens was a seemingly small price to pay for better airplane windshields that could save human lives. Plus, I noted that most of them routinely ate chicken for lunch and dinner, and so obviously those chickens had given their lives for an even less “honorable” act.

What would you have done?

Ethical Choices At Work

While you ponder what you would have done, one salient aspect to point out is that at first I was not aware of what the project consisted of. In other words, at the beginning, I had little awareness of what my efforts were contributing toward. I was somewhat in the dark. I had assumed that it was some kind of “need to know” type of project.

You might find it of idle interest that I had worked on some top-secret projects prior to this effort, projects that were classified, and so I had been purposely kept in the dark about the true nature of the effort. For example, I wrote a program that calculated the path of “porpoises” trying to intersect with “whales.” My best guess was that the porpoises were actually submarines and the whales were surface ships like navy destroyers or carriers (maybe that’s what it was about, or maybe something completely different!).

In the case of the live chickens and the airplane windshields, once I became more informed and realized what I was contributing toward, the added awareness presumably gave me a chance to reflect upon the matter.

Would my awareness cause me to stop working on the effort?

Would my awareness taint my efforts such that I might do less work on it or be less motivated to do the work?

Might I even try to somehow subvert the project, doing so under the “justified” belief that what was taking place was wrong to begin with?

If you are interested in how workers begin to deviate from the norms as they get immersed in tech projects, take a look at my article: https://aitrends.com/selfdrivingcars/normalization-of-deviance-endangers-ai-self-driving-cars/

For the dangers of company groupthink, see my article: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For background about some key ethics of workplace and society issues, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Workplace Awareness Of Ethical Matters

As for how workplace-related awareness can make a potential difference in worker behavior, a recent study of that phenomenon gained national interest.

The study examined the opioid drug crisis occurring in the United States.

There are many thousands of deaths each year due to opioid overdoses, and an estimated nearly 2 million Americans are addicted to opioids. According to the study, part of the reason that opioid use has vastly increased over the last two decades is the prescribing of opioids for pain relief and similar purposes.

Apparently, medical doctors had gotten used to prescribing opioids and did so without necessarily overtly considering the downside of possible addiction. If a patient can be helped now by giving them opioids, it’s an easy immediate solution. The patient is presumably then happy. The doctor is also happy because they’ve made the patient happy. Everyone would seem to be happy. This is not as true if you consider the longer-term impacts of prescribing opioids.

The researchers wondered whether they could potentially change the behavior of the prescribing medical doctors.

By analyzing various data, the researchers were able to identify medical doctors who had patients that had suffered opioid overdoses. Dividing that set of physicians into a control group and an experimental group, the researchers arranged for those in the experimental group to receive a letter from the county medical examiner telling the doctor about the death and tying the matter to the overall dangers of prescribing opioids.

The result seemed to be that the medical doctors in the experimental group subsequently dispensed fewer opioids. It was asserted that the awareness letters targeted at the medical doctors were actually more effective in altering their behavior than the mere adoption of regulatory limits on prescribing opioids.

This added awareness apparently led to a change in the physicians’ prescribing behavior.

You can quibble about various aspects of the study, but let’s go with the prevailing conclusions for now, thanks.

Ethics Awareness By AI Developers And AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. We also remain very much aware of any incidents involving AI self-driving cars and discuss those incidents with our teams, regardless of whether those incidents relate directly to any of our work per se.

In essence, we believe that it is important for every member of the team, whether an AI developer, QA specialist, hardware engineer, project manager, and so on, to be aware of what’s happening throughout the industry regarding AI self-driving cars. From small incidents to big ones, from those involving no injuries to those involving deaths, whatever the incident might be, we consider it vital to examine it.

Should the auto makers and tech firms that are also developing AI self-driving cars do likewise?

There’s no written rule that says an automaker or tech firm has any obligation to keep their AI developers apprised of AI self-driving car incidents. Indeed, it’s often easy to ignore incidents that happen to competing AI self-driving car efforts. “Those dunces don’t know what they are doing” can sometimes be the attitude involved. Why look at what they did and figure out what went wrong, since they were not up-to-snuff anyway in terms of their AI self-driving car efforts? That’s a cocky kind of attitude often prevalent among AI developers (actually, prevalent among many in high-tech who think they have the right stuff!).

So, the question arises as to whether promoting awareness of AI self-driving car incidents to AI self-driving car developers would be of value to the automakers and tech firms developing AI self-driving cars and their teams. You might say that even if you did make them aware, what difference would it make in what they are doing? Won’t they just continue doing what they are already doing?

The counter-argument is that like the prescribing medical doctors, perhaps an increased awareness would change their behavior. And, you might claim that without the increased awareness there is little or no chance of changing their behavior. As the example of the chickens and the airplane windshield suggests, if you don’t know what you are working on and its ramifications, it makes it harder to know that you should be concerned and possibly change course.

In the case of the opioid prescribing medical doctors, it was already ascertained that something was “wrong” about what the physicians were doing. In the case of the automakers and tech firms that are making AI self-driving cars, you could say that there’s nothing wrong with what they are doing. Thus, there’s no point to increasing their awareness.

That might be true, except for the aspect that most of the AI self-driving car community would admit, if pressed, that their AI self-driving car is going to suffer an incident someday, somehow. Even if you’ve so far been blessed to have nothing go awry, it’s going to happen that something will go awry. There’s really no avoiding it. Inevitably, inexorably, it’s going to happen.

There are bound to be software bugs in your AI self-driving car system. There are bound to be hardware exigencies that will confuse or confound your AI system. There are bound to be circumstances arising in a driving situation that exceed what the AI is able to cope with, and the result will at some point be an adverse incident. The complexity of AI self-driving car systems is relatively immense, and the ability to test all possibilities prior to fielding is questionable.

For issues of irreproducibility and AI self-driving cars, see my article: https://aitrends.com/ai-insider/irreproducibility-and-ai-self-driving-cars/

For pre-mortems about AI self-driving cars, see my article: https://aitrends.com/ai-insider/pre-mortem-analysis-for-ai-self-driving-cars/

For my article on software neglect issues, see: https://aitrends.com/ai-insider/software-neglect-will-impede-ai-self-driving-cars/

For the likely freezing robot problem and AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

Furthermore, there is a perceived rush to get AI self-driving cars on our public roadways, at least by some.

The auto makers and tech firms tend to argue that the only viable means to test out AI self-driving cars is by running them on our public roadways.

Simulations, they claim, can only do so much.

Proving grounds, they say, are limited and there’s only so much you can discover.

The public roadways are the means to get us to true AI self-driving cars. The risks to the public are presumed to be worth the assumed faster pace to perfecting AI self-driving cars.

You’ve got to accept some public pain to gain a greater public good, some say.

For public trust issues about AI self-driving cars and the makers of them, see: https://aitrends.com/ai-insider/roller-coaster-public-perception-ai-self-driving-cars/

Are AI developers and other tech specialists involved in the making of AI self-driving cars keeping apprised of what is going on in terms of the public roadway trials and especially the incidents that occur from time to time?

On an anecdotal basis of asking those I meet at industry conferences, many are so focused on their day-to-day job and the pressures to produce that they find little time or energy to keep up with the outside world per se. Indeed, at the conferences, they often tell me that they have scooted over to the event for just a few hours and need to rush back to the office to continue their work efforts.

The intense pressure by their workplace and their own internal pressure to do the development work would seem to be preoccupying them.

I’ve mentioned before in my writings and speeches that there is a tendency for these developers to get burned out.

For my article about the burnout factor of AI developers, see: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about the recent spate of accidents with AI self-driving cars, see: https://aitrends.com/ai-insider/accidents-contagion-and-ai-self-driving-cars/

Proposed Research Project Focused On AI Developers

Here’s then a proposed research project that would be interesting and informative to undertake.

Suppose that, akin to the research on physicians and awareness of opioid prescribing, we were to do a study of AI self-driving car developers and their awareness of AI self-driving car incidents. The notion would be to identify to what degree they already have such awareness, and whether increased awareness would aid in their efforts.

A null hypothesis could be: Developers of AI self-driving cars have little or no awareness of AI self-driving car incidents.

The definition of awareness could be operationalized as having read or seen information about one or more AI self-driving car incidents in the last N months.

This hypothesis is structured in a rather stark manner by indicating “little or no awareness,” which would presumably be the easiest to reject. One would assume or hope that these developers have some amount of awareness, even if minimal, about relatively recent incidents.

The next hypothesis could examine the degree of awareness. For example, awareness could be banded into levels Q, R, S, and T based on the number of impressions about incidents in the last N months, using say Q=1, R=2-4, S=5-7, T=8+ to indicate the ranges. One potential flaw in simply counting impressions is that they might be repeats of the same incident; another loophole is that the developer read or saw something only in a cursory way (this could be further tested by gauging how much they remembered or knew about the incident, as an indicator of whether they actually gained awareness per se).
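To make this concrete, here is a minimal sketch, in Python, of how such awareness bands might be computed from raw survey counts. The band boundaries follow the Q/R/S/T ranges suggested above; the function name, the survey record fields, and the example values are purely illustrative assumptions, not drawn from any actual study.

```python
# Hypothetical sketch: map a developer's count of incident "impressions"
# over the last N months into the awareness bands Q/R/S/T discussed above.
# All names and data here are illustrative assumptions.

def awareness_band(impressions: int) -> str:
    """Return the awareness band for a given impression count."""
    if impressions <= 0:
        return "none"   # would lend support to the stark null hypothesis
    if impressions == 1:
        return "Q"
    if impressions <= 4:
        return "R"
    if impressions <= 7:
        return "S"
    return "T"

# Example survey records: counting distinct incidents helps flag repeat
# impressions, and a short recall quiz score helps flag merely cursory reads.
responses = [
    {"developer": "dev_01", "impressions": 0, "distinct_incidents": 0, "recall_score": 0},
    {"developer": "dev_02", "impressions": 3, "distinct_incidents": 2, "recall_score": 4},
    {"developer": "dev_03", "impressions": 9, "distinct_incidents": 6, "recall_score": 8},
]

for r in responses:
    print(r["developer"], awareness_band(r["impressions"]))
```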

The next aspect to consider is whether awareness makes a difference in behavior.

In the case of the physicians and the opioids prescribing, it was indicated that their presumed increased awareness led to less prescriptions of opioids being written. We don’t know for sure that the increased awareness “caused” that change in behavior, and it could be that some other factor produced the change, but in any case, the study suggests or asserts that the two aspects went hand-in-hand.

What might an AI developer do differently as a result of increased awareness about AI self-driving car incidents?

We can postulate that they might become more careful and circumspect about the AI systems they are developing. They might take longer to develop their code in the belief that they need to pay closer attention to systems-safety-related aspects. They might increase the amount of testing time. They might use tools for inspecting their code that they hadn’t used before or might redouble their use of such tools. They might devise new safety mechanisms for their systems that they had not otherwise devised previously.

They might within their firm become an advocate for greater attention and time towards AI systems safety. They might seek to collaborate more so with the QA teams or others that are tasked with trying to find bugs and errors and do other kinds of systems testing. They might seek to bolster AI safety related practices within the company. They might seek to learn more about how to improve their AI system safety skills and how to apply them to the job. They might push back within the firm at deadlines that don’t take into account prudent AI systems safety considerations. And so on.

For my framework on AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For purposes of a research study, it would be necessary to somehow quantify those potential outcomes in order to readily measure whether the awareness does have an impact. The quantification could be subjectively based; the developers could be asked to rate their changes against a list of the possible kinds of changes. This is perhaps the simplest and easiest approach. A more arduous but more satisfying means would be to arrive at true counts of other signifiers of those changes.

Similar to the physicians and opioids study, there would be a control group and an experimental or treatment group. The treatment group might be provided with information about recent AI self-driving car incidents, and a follow-up some X days or weeks later would try to discern whether their behavior has changed as a result of the treatment. It would not necessarily be axiomatic that any such changes in behavior could be entirely attributed to the increase in awareness, but it would seem like a reasonable inference. There is also the chance of a classic Hawthorne effect coming into play, which the research study would want to consider how best to handle.
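To illustrate one way the control-versus-treatment comparison might be analyzed, here is a minimal sketch using synthetic, made-up scores standing in for the self-rated behavior-change measure described above. It assumes the SciPy library is available and uses a conventional independent-samples t-test; an actual study would likely use a more careful design and analysis.

```python
# Hypothetical sketch: compare self-rated behavior-change scores between a
# control group and a treatment group that received incident-awareness
# materials. The scores below are synthetic and purely for illustration.

from scipy import stats

# Self-rated behavior-change scores (0-10) collected some X weeks after the
# treatment group received the awareness materials.
control_scores = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
treatment_scores = [4, 5, 3, 6, 4, 5, 3, 4, 6, 5]

# Welch's t-test (does not assume equal variances between the two groups).
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Groups differ; consistent with, but not proof of, an awareness effect.")
else:
    print("No significant difference detected in this sample.")
```

A statistically significant difference would be suggestive rather than conclusive, for the reasons noted above, including the possibility of a Hawthorne effect.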

Conclusion

AI developers for self-driving cars are dealing with systems that involve life-and-death consequences.

In the pell-mell rush to try and get AI self-driving cars onto our roadways, we all collectively need to be mindful of the dangers that a multi-ton car can have if the AI encounters difficulties and runs into other cars, or runs into pedestrians, or otherwise might lead to human injuries or deaths.

Though AI developers certainly grasp this overall perspective, in the day-to-day throes of slinging code and building Machine Learning systems for self-driving cars it can become a somewhat lost or lessened consideration, and the push to get things going can overtake that awareness. We believe fervently that AI developers need to keep this awareness at the forefront of their efforts, and by purposely allowing time for it and structuring it as part of the job, it is our hope that this will make a difference in the reliability and ultimate safety of these AI systems.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Source: https://www.aitrends.com/ai-insider/ethics-in-ai-awareness-and-ai-autonomous-cars/
