“Being good is easy, what is difficult is being just.” ― Victor Hugo

“We need to defend the interests of those whom we’ve never met and never will.” ― Jeffrey D. Sachs

Note: This article is intended for a general audience to try and elucidate the complicated nature of unfairness in machine learning algorithms. As such, I have tried to explain concepts in an accessible way with minimal use of mathematics, in the hope that everyone can get something out of reading this.

Supervised machine learning algorithms are inherently discriminatory. They are discriminatory in the sense that they use information embedded in the features of data to separate instances into distinct categories — indeed, this is their designated purpose in life. This is reflected in the name for these algorithms which are often referred to as discriminative algorithms (splitting data into categories), in contrast to generative algorithms (generating data from a given category). When we use supervised machine learning, this “discrimination” is used as an aid to help us categorize our data into distinct categories within the data distribution, as illustrated below.

Illustration of discriminative vs. generative algorithms. Notice that generative algorithms draw data from a probability distribution constrained to a specific category (for example, the blue distribution), whereas discriminative algorithms aim to discern the optimal boundary between these distributions. Source: Stack Overflow

Whilst this occurs when we apply discriminative algorithms — such as support vector machines, forms of parametric regression (e.g. vanilla linear regression), and non-parametric regression (e.g. random forest, neural networks, boosting) — to any dataset, the outcomes may not necessarily have any moral implications. For example, using last week’s weather data to try and predict the weather tomorrow has no moral valence attached to it. However, when our dataset describes people, whether directly or indirectly, applying these algorithms can inadvertently result in discrimination on the basis of group affiliation.

Clearly then, supervised learning is a dual-use technology. It can be used to our benefit, such as for information (e.g. predicting the weather) and protection (e.g. analyzing computer networks to detect attacks and malware). On the other hand, it has the potential to be weaponized to discriminate at essentially any level. This is not to say that the algorithms are evil for doing this; they are merely learning the representations present in the data, which may themselves have embedded within them the manifestations of historical injustices, as well as individual biases and proclivities. A common adage in data science is “garbage in = garbage out”, referring to models being highly dependent on the quality of the data supplied to them. This can be stated analogously in the context of algorithmic fairness as “bias in = bias out”.

Data Fundamentalism

Some proponents believe in data fundamentalism, that is to say, that the data reflects the objective truth of the world through empirical observations.

“with enough data, the numbers speak for themselves.” — Former Wired editor-in-chief Chris Anderson (a data fundamentalist)

Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations. Hidden biases in both the collection and analysis stages present considerable risks, and are as important to the big-data equation as the numbers themselves. — Kate Crawford, principal researcher at Microsoft Research Social Media Collective

Superficially, this seems like a reasonable hypothesis, but Kate Crawford provides a good counterargument in a Harvard Business Review article:

Boston has a problem with potholes, patching approximately 20,000 every year. To help allocate its resources efficiently, the City of Boston released the excellent StreetBump smartphone app, which draws on accelerometer and GPS data to help passively detect potholes, instantly reporting them to the city. While certainly a clever approach, StreetBump has a signal problem. People in lower income groups in the US are less likely to have smartphones, and this is particularly true of older residents, where smartphone penetration can be as low as 16%. For cities like Boston, this means that smartphone data sets are missing inputs from significant parts of the population — often those who have the fewest resources. — Kate Crawford, principal researcher at Microsoft Research

Essentially, the StreetBump app picked up a preponderance of data from wealthy neighborhoods and relatively little from poorer neighborhoods. Naturally, the first conclusion you might draw from this is that the wealthier neighborhoods had more potholes, but in reality, there was just a lack of data from poorer neighborhoods because their residents were less likely to own smartphones and thus to have downloaded the StreetBump app. Often, it is the data missing from our dataset that has the biggest impact on our results. This example illustrates a subtle form of discrimination on the basis of income. As a result, we should be cautious when drawing conclusions from data that may suffer from a ‘signal problem’. This signal problem is often characterized as sampling bias.

Another notable example is the “Correctional Offender Management Profiling for Alternative Sanctions” algorithm or COMPAS for short. This algorithm is used by a number of states across the United States to predict recidivism — the likelihood that a former criminal will re-offend. Analysis of this algorithm by ProPublica, an investigative journalism organization, sparked controversy when it seemed to suggest that the algorithm was discriminating on the basis of race — a protected class in the United States. To give us a better idea of what is going on, the algorithm used to predict recidivism looks something like this:

Recidivism Risk Score = (age × −w) + (age at first arrest × −w) + (history of violence × w) + (vocation education × w) + (history of noncompliance × w)

It should be clear that race is not one of the variables used as a predictor. However, the data distributions between two given races may differ significantly for some of these variables, such as the ‘history of violence’ and ‘vocation education’ factors, owing to historical injustices in the United States as well as demographic, social, and law enforcement statistics (themselves a frequent target for criticism, since law enforcement often uses algorithms to determine which neighborhoods to patrol). The mismatch between these data distributions can be leveraged by an algorithm, leading to disparities between races and thus, to some extent, to results biased towards or against certain races. These entrenched biases are then operationalized by the algorithm and continue to persist, leading to further injustices. This loop is essentially a self-fulfilling prophecy.
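The linear form of this score can be sketched in a few lines of code. The weights and feature values below are purely illustrative placeholders, not Northpointe's actual parameters:

```python
# Sketch of a linear risk score of the form shown above.
# All weights and feature values are hypothetical, for illustration only.
WEIGHTS = {
    "age": -1.0,                  # negative weight: older scores lower
    "age_at_first_arrest": -0.5,  # negative weight: later first arrest scores lower
    "history_of_violence": 2.0,
    "vocation_education": 1.0,
    "history_of_noncompliance": 1.5,
}

def risk_score(features: dict) -> float:
    """Weighted sum of feature values. Note that 'race' is not an input,
    yet several of these features can still correlate with race."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

defendant = {
    "age": 0.2, "age_at_first_arrest": 0.3, "history_of_violence": 1.0,
    "vocation_education": 0.5, "history_of_noncompliance": 0.0,
}
print(round(risk_score(defendant), 2))  # 2.15 with these made-up weights
```

The point of the sketch is that nothing in the model mentions race explicitly; any racial disparity in the output arises entirely through the distributions of the input features.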

Historical Injustices → Training Data → Algorithmic Bias in Production

This leads to some difficult questions — do we remove these problematic variables? How do we determine whether a feature will lead to discriminatory results? Do we need to engineer a metric that provides a threshold for ‘discrimination’? One could take this to the extreme and remove almost all variables, but then the algorithm would be of no use. This paints a bleak picture, but fortunately, there are ways to tackle these issues that will be discussed later in this article.

These examples are not isolated incidents. Even breast cancer prediction algorithms show a level of unfair discrimination. Deep learning algorithms that predict breast cancer from mammograms are much less accurate for black women than for white women. This is partly because the dataset used to train these algorithms is predominantly based on mammograms of white women, but also because the data distributions for breast cancer between black women and white women likely differ substantially. According to the Centers for Disease Control and Prevention (CDC), “Black women and white women get breast cancer at about the same rate, but black women die from breast cancer at a higher rate than white women.”


These issues raise questions about the motives of algorithmic developers — did the individuals who designed these models do so knowingly? Do they have an agenda they are trying to push and to hide inside gray-box machine learning models?

Although these questions are impossible to answer with certainty, it is useful to consider Hanlon’s razor when asking such questions:

Never attribute to malice that which is adequately explained by stupidity — Robert J. Hanlon

In other words, there are not that many evil people in the world (thankfully), and there are certainly fewer evil people in the world than there are incompetent people. On average, we should assume that when things go wrong it is more likely attributable to incompetence, naivety, or oversight than to outright malice. Whilst there are likely some malicious actors who would like to push discriminatory agendas, they are likely a minority.

Based on this assumption, what could have gone wrong? One could argue that statisticians, machine learning practitioners, data scientists, and computer scientists are not adequately taught how to develop supervised learning algorithms that control and correct for prejudicial proclivities.

Why is this the case?

In truth, techniques that achieve this do not exist. Machine learning fairness is a young subfield of machine learning that has been growing in popularity over the last few years in response to the rapid integration of machine learning into social realms. Computer scientists, unlike doctors, are not necessarily trained to consider the ethical implications of their actions. It is only relatively recently (one could argue since the advent of social media) that the designs or inventions of computer scientists were able to take on an ethical dimension.

This is demonstrated by the fact that most computer science journals do not require ethical statements or considerations for submitted manuscripts. Taking an image database full of millions of images of real people can, without a doubt, have ethical implications. By virtue of physical distance and the size of the dataset, computer scientists are so far removed from the data subjects that the implications for any one individual may be perceived as negligible and thus disregarded. In contrast, if a sociologist or psychologist performs a test on a small group of individuals, an entire ethical review board is set up to review and approve the experiment to ensure it does not transgress any ethical boundaries.

On the bright side, this is slowly beginning to change. More data science and computer science programs are starting to require students to take classes on data ethics and critical thinking, and journals are beginning to recognize that ethical reviews through IRBs and ethical statements in manuscripts may be a necessary addition to the peer-review process. The rising interest in the topic of machine learning fairness is only strengthening this position.

Fairness in Machine Learning

Machine learning fairness has become a hot topic in the past few years. Image Source: CS 294: Fairness in Machine Learning course taught at UC Berkeley.

As mentioned previously, the widespread adoption of supervised machine learning algorithms has raised concerns about algorithmic fairness. The more these algorithms are adopted, and the more control they exert over our lives, the more these concerns will be exacerbated. The machine learning community is well aware of these challenges, and algorithmic fairness is now a rapidly developing subfield of machine learning with many excellent researchers such as Moritz Hardt, Cynthia Dwork, Solon Barocas, and Michael Feldman.

That being said, there are still major hurdles to overcome before we can achieve truly fair algorithms. It is fairly easy to prevent disparate treatment in algorithms — the explicit differential treatment of one group over another — such as by removing variables that correspond to protected attributes from the dataset (e.g. race, gender). However, it is much harder to prevent disparate impact — implicit differential treatment of one group over another, usually caused by something called redundant encodings in the data.

Illustration of disparate impact — in this diagram the data distribution of two groups is very different, which leads to differences in the output of the algorithm without any explicit association of the groups. Source: KdNuggets

A redundant encoding reveals information about a protected attribute, such as race or gender, through features present in our dataset that correlate with that attribute. For example, buying certain products online (such as makeup) may be highly correlated with gender, and certain zip codes may have different racial demographics that an algorithm might pick up on.

Although an algorithm is not trying to discriminate along these lines, it is inevitable that data-driven algorithms that supersede human performance on pattern recognition tasks will pick up on these associations embedded within data, however small they may be. Moreover, if these associations were non-informative (i.e. they did not increase the accuracy of the algorithm), the algorithm would simply ignore them; the fact that it exploits them shows that genuine predictive information is embedded in these protected attributes. This raises many challenges for researchers, such as:

  • Is there a fundamental tradeoff between fairness and accuracy? Are we able to extract relevant information from protected features without them being used in a discriminatory way?
  • What is the best statistical measure to embed the notion of ‘fairness’ within algorithms?
  • How can we ensure that governments and companies produce algorithms that protect individual fairness?
  • What biases are embedded in our training data and how can we mitigate their influence?

We will touch upon some of these questions in the remainder of the article.

The Problem with Data

In the last section, it was mentioned that redundant encodings can lead to features correlating with protected attributes. As our data set scales in size, the likelihood of the presence of these correlations scales accordingly. In the age of big data, this presents a big problem: the more data we have access to, the more information we have at our disposal to discriminate. This discrimination does not have to be purely race- or gender-based, it could manifest as discrimination against individuals with pink hair, against web developers, against Starbucks coffee drinkers, or a combination of all of these groups. In this section, several biases present in training data and algorithms are presented that complicate the creation of fair algorithms.

The Majority Bias

Algorithms have no affinity for any particular group; however, they do have a proclivity toward the majority group due to their statistical basis. As outlined by Professor Moritz Hardt in a Medium article, classifiers generally improve with the number of data points used to train them, since the error scales with the inverse square root of the number of samples, as shown below.

The error of a classifier often decreases as the inverse square root of the sample size. Four times as many samples means halving the error rate.

This leads to an unsettling reality: since there will, by definition, always be less data available about minorities, our models will tend to perform worse on those groups than on the majority. Note that this holds only if the majority and minority groups are drawn from separate distributions; if they are drawn from a single distribution, then increasing the sample size benefits both groups equally.
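The inverse-square-root scaling can be checked with a quick simulation: when estimating a proportion from coin flips, quadrupling the sample size roughly halves the average estimation error. This is a minimal sketch, not tied to any particular classifier:

```python
import random
import statistics

random.seed(0)

def mean_abs_error(n, trials=2000, p=0.5):
    """Average absolute error when estimating p from n Bernoulli(p) samples."""
    errs = []
    for _ in range(trials):
        est = sum(random.random() < p for _ in range(n)) / n
        errs.append(abs(est - p))
    return statistics.mean(errs)

e_small = mean_abs_error(100)   # error with n samples
e_large = mean_abs_error(400)   # error with 4x as many samples
print(e_small / e_large)        # roughly 2: quadrupling n halves the error
```

A minority group with a quarter of the majority's data can therefore expect roughly double the estimation error, all else being equal.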

An example of this is the breast cancer detection algorithm we discussed previously. For this deep learning model, developed by researchers at MIT, only 5% of the 60,000 mammogram images in the training dataset were mammograms of black women, who are 43% more likely to die from breast cancer. As a result, the algorithm performed more poorly when tested on black women, and on minority groups in general. This can partly be accounted for by the fact that breast cancer often manifests at an earlier age among women of color, a probability distribution that was underrepresented in the training data, indicating a disparate impact.

This also presents another important question: is accuracy a suitable proxy for fairness? In the above example, we assumed that a lower classification accuracy on a minority group corresponds to unfairness. However, due to the widely differing definitions and the somewhat ambiguous nature of fairness, it can sometimes be difficult to ensure that the variable we are measuring is a good proxy for fairness. For example, our algorithm may have 50% accuracy for both black and white women, but if there are 30% false positives for white women and 30% false negatives for black women, this too would be indicative of disparate impact.
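This situation can be made concrete with toy confusion-matrix counts. The numbers below are invented to show how identical accuracies can mask opposite error profiles:

```python
# Hypothetical confusion-matrix counts: both groups have 50% accuracy,
# but group A's errors are all false positives while group B's are all
# false negatives. Equal accuracy can hide disparate impact.
groups = {
    "A": {"tp": 3, "tn": 2, "fp": 5, "fn": 0},
    "B": {"tp": 2, "tn": 3, "fp": 0, "fn": 5},
}

for name, c in groups.items():
    total = sum(c.values())
    accuracy = (c["tp"] + c["tn"]) / total
    fpr = c["fp"] / (c["fp"] + c["tn"])  # false positive rate
    fnr = c["fn"] / (c["fn"] + c["tp"])  # false negative rate
    print(name, accuracy, round(fpr, 2), round(fnr, 2))
```

Both groups print an accuracy of 0.5, yet one group bears all the false positives and the other all the false negatives, which is exactly the kind of disparity a single accuracy number cannot reveal.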

From this example, it seems almost intuitive that this is a form of discrimination since there is differential treatment on the basis of group affiliation. However, there are times when this group affiliation is informative to our prediction. For example, for an e-commerce website trying to decide what content to show its users, having an idea of the individual’s gender, age, or socioeconomic status is incredibly helpful. This implies that if we merely remove protected fields from our data, we will decrease the accuracy (or some other performance metric) of our model. Similarly, if we had sufficient data on both black and white women for the breast cancer model, we could develop an algorithm that used race as one of the inputs. Due to the differences in data distributions between the races, it is likely that the accuracy would have increased for both groups.

Thus, the ideal case would be to have an algorithm that contains these protected features and uses them to make algorithmic generalizations but is constrained by fairness metrics to prevent the algorithm from discriminating.

This is an idea proposed by Moritz Hardt and Eric Price in ‘Equality of Opportunity in Supervised Learning’. This has several advantages over other metrics, such as statistical parity and equalized odds, but we will discuss all three of these methods in the next section.

Definitions of Fairness

In this section we analyze some of the notions of fairness that have been proposed by machine learning fairness researchers. Namely, statistical parity, and then nuances of statistical parity such as equality of opportunity and equalized odds.

Statistical Parity

Statistical parity is the oldest and simplest method of enforcing fairness. It is expanded upon greatly in the arXiv article “Algorithmic decision making and the cost of fairness”. The formula for statistical parity is shown below.

P(y = 1 | p = 0) = P(y = 1 | p = 1)

The formula for statistical parity. In words, this describes that the outcome y is independent of parameter p — it has no impact on the outcome probability.

Under statistical parity, the outcome is independent of group affiliation. What does this mean intuitively? It means that the same proportion of each group will be classified as positive or negative. For this reason, statistical parity is also described as demographic parity. For all demographic groups subsumed within p, statistical parity will be enforced.

For a dataset that has not had statistical parity applied, we can measure how far our predictions deviate from statistical parity by calculating the statistical parity distance shown below.

Statistical parity distance = | P(y = 1 | p = 0) − P(y = 1 | p = 1) |

The statistical parity distance can be used to quantify the extent to which a prediction deviates from statistical parity.

This distance can provide us with a metric for how fair or unfair a given dataset is based on the group affiliation p.
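As a minimal sketch, the positive-prediction rate per group and the statistical parity distance can be computed directly from a set of predictions. The toy predictions below are invented:

```python
# Toy binary predictions for two groups (p=0 and p=1); 1 = positive outcome.
preds_p0 = [1, 0, 1, 1, 0]                     # 3/5  = 60% positive
preds_p1 = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]      # 7/10 = 70% positive

rate_p0 = sum(preds_p0) / len(preds_p0)
rate_p1 = sum(preds_p1) / len(preds_p1)

# Statistical parity distance: |P(y=1 | p=0) - P(y=1 | p=1)|
spd = abs(rate_p0 - rate_p1)
print(round(spd, 2))  # 0.1, i.e. a 10-percentage-point gap in positive rates
```

A distance of 0 would mean perfect statistical parity; larger values quantify how far the predictions deviate from it.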

What are the tradeoffs of using statistical parity?

Statistical parity doesn’t ensure fairness.

As you may have noticed, though, statistical parity says nothing about the accuracy of these predictions. One group may be much more likely to be predicted as positive than another, and hence we might obtain large disparities between the false positive and true positive rates for each group. This can itself cause a disparate impact, as qualified individuals from one group (p=0) may be passed over in favor of unqualified individuals from another group (p=1). In this sense, statistical parity is more akin to equality of outcome.

The figures below illustrate this nicely. If we have two groups — one with 10 people (group A=1), and one with 5 people (group A=0) — and we determine that 8 people (80%) in group A=1 achieved a score of Y=1, then 4 people (80%) in group A=0 would also have to be given a score of Y=1, regardless of other factors.

Illustration of statistical parity. Source: Duke University Privacy & Fairness in Data Science Lecture Notes

Statistical parity reduces algorithmic accuracy

The second problem with statistical parity is that a protected class may provide some information that would be useful for a prediction, but we are unable to leverage that information because of the strict rule imposed by statistical parity. Gender might be very informative for making predictions about items that people might buy, but if we are prevented from using it, our model becomes weaker and accuracy is impacted. A better method would allow us to account for the differences between these groups without generating disparate impact. Clearly, statistical parity is misaligned with the fundamental goal of accuracy in machine learning — the perfect classifier may not ensure demographic parity.

For these reasons, statistical parity is no longer considered a credible option by several machine learning fairness researchers. However, statistical parity is a simple and useful starting point that other definitions of fairness have built upon.

There are slightly more nuanced versions of statistical parity, such as true positive parity, false positive parity, and positive rate parity.

True Positive Parity (Equality of Opportunity)

This is only applicable to binary predictions and enforces statistical parity over the true positives (instances where the predicted output was 1 and the true output was also 1).

P(C = 1 | A = 0, Y = 1) = P(C = 1 | A = 1, Y = 1)

Equality of opportunity is the same as equalized odds, but focused only on the Y=1 label.

It ensures that in both groups, of all those who qualified (Y=1), an equal proportion of individuals will be classified as qualified (C=1). This is useful when we are only interested in parity over the positive outcome.
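A sketch of checking true positive parity on toy data follows; the labels (Y) and classifier outputs (C) below are invented for illustration:

```python
def true_positive_rate(y_true, y_pred):
    """P(C=1 | Y=1): of those who truly qualify, the fraction classified as qualified."""
    preds_on_qualified = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_on_qualified) / len(preds_on_qualified)

# Toy data for two groups: labels Y and predictions C.
y_true_g0, y_pred_g0 = [1, 1, 1, 0, 0], [1, 1, 1, 1, 0]
y_true_g1, y_pred_g1 = [1, 1, 0, 0],    [1, 1, 0, 1]

tpr0 = true_positive_rate(y_true_g0, y_pred_g0)  # 3/3 = 1.0
tpr1 = true_positive_rate(y_true_g1, y_pred_g1)  # 2/2 = 1.0
print(tpr0 == tpr1)  # True: equality of opportunity holds here,
                     # even though the groups' false positive rates differ
```

Note that both groups contain false positives that this criterion deliberately ignores: only behavior on the Y=1 individuals is constrained.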

Illustration of true positive parity. Notice that in the first group, all those with Y=1 (blue boxes) were classified as positives (C=1). Similarly, in the second group, all those classified as Y=1 were also classified as positive, but there was an additional false positive. This false positive was not considered in the definition of statistical parity. Source: Duke University Privacy & Fairness in Data Science Lecture Notes

False Positive Parity

This is also only applicable to binary predictions and focuses on false positives (the prediction output was 1 but the true output was 0). It is analogous to true positive parity but enforces parity across false positive results instead.

Positive Rate Parity (Equalized Odds)

This is a combination of statistical parity for true positives and false positives simultaneously and is also known as equalized odds.

Illustration of positive rate parity (equalized odds). Notice that in the first group, all those with Y=1 (blue boxes) were classified as positives (C=1). Similarly, in the second group, all those classified as Y=1 were also classified as positive. Of the population in A=1 that obtained Y=0, one of these was classified as C=1, giving a 50% false positive rate. Similarly, in the second group, two of these individuals are given C=1, corresponding to a 50% false positive rate. Source: Duke University Privacy & Fairness in Data Science Lecture Notes

Notice that for equal opportunity, we relax the condition of equalized odds that odds must be equal in the case that Y=0. Equalized odds and equality of opportunity are also more flexible and able to incorporate some of the information from the protected variable without resulting in disparate impact.
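Checking equalized odds means comparing both the true positive rate and the false positive rate across groups. The toy data below mirror the figure's setup, with both groups at a 50% false positive rate:

```python
def rates(y_true, y_pred):
    """Return (TPR, FPR) for binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

# Invented data: both groups achieve TPR = 1.0 and FPR = 0.5,
# so equalized odds holds.
g1_true, g1_pred = [1, 1, 0, 0],             [1, 1, 1, 0]
g2_true, g2_pred = [1, 1, 1, 0, 0, 0, 0],    [1, 1, 1, 1, 1, 0, 0]

tpr1, fpr1 = rates(g1_true, g1_pred)
tpr2, fpr2 = rates(g2_true, g2_pred)
print((tpr1, fpr1) == (tpr2, fpr2))  # True
```

Dropping the FPR comparison from this check would give equality of opportunity instead, which is exactly the relaxation described above.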

Notice that whilst all of these provide some form of a solution that can be argued to be fair, none of these are particularly satisfying. One reason for this is that there are many conflicting definitions of what fairness entails, and it is difficult to capture these in algorithmic form. These are good starting points but there is still much room for improvement.

Other Methods to Increase Fairness

Statistical parity, equalized odds, and equality of opportunity are all great starting points, but there are other things we can do to ensure that algorithms are not used to unduly discriminate against individuals. Two such solutions that have been proposed are human-in-the-loop and algorithmic transparency.


Human-in-the-Loop

This sounds like some kind of rollercoaster ride, but it merely refers to a paradigm whereby a human oversees the algorithmic process. Human-in-the-loop is often implemented in situations where the risks of an algorithmic mistake are high. For example, missile detection systems that inform the military when a missile is detected allow individuals to review the situation and decide how to respond — the algorithm does not respond without human interaction. Just imagine the catastrophic consequences of running nuclear weapon systems with AI that had permission to fire when it detected a threat — one false positive and the entire world would be doomed.

Another example of this is the COMPAS system for recidivism — the system does not categorize you as a recidivist and make a legal judgment. Instead, the judge reviews the COMPAS score and uses it as one factor in their evaluation of the circumstances. This raises new questions, such as how humans interact with the algorithmic system. Studies using Amazon Mechanical Turk have shown that some individuals follow the algorithm’s judgment wholeheartedly, perceiving it to have greater knowledge than a human could; others take its output with a pinch of salt; and some ignore it completely. Research into human-in-the-loop is relatively novel, but we are likely to see more of it as machine learning becomes more pervasive in our society.

Another important and similar concept is human-on-the-loop. This is similar to human-in-the-loop, but instead of the human being actively involved in the process, they are passively involved in the algorithm’s oversight. For example, a data analyst might be in charge of monitoring sections of an oil and gas pipeline to ensure that all of the sensors and processes are running appropriately and there are no concerning signals or errors. This analyst is in an oversight position but is not actively involved in the process. Human-on-the-loop is inherently more scalable than human-in-the-loop since it requires less manpower, but it may be untenable in certain circumstances — such as looking after those nuclear missiles!

Algorithmic Transparency

The dominant position in the legal literature is to pursue fairness through algorithmic interpretability and explainability via transparency. The argument is that if an algorithm can be viewed publicly and analyzed with scrutiny, then we can be confident, to a high level, that no disparate impact is built into the model. Whilst this is clearly desirable on many levels, there are some downsides to algorithmic transparency.

Proprietary algorithms by definition cannot be transparent.

From a commercial standpoint, this idea is untenable in most circumstances — trade secrets or proprietary information may be leaked if algorithms and business processes are provided for all to see. Imagine Facebook or Twitter being asked to release their algorithms to the world so they can be scrutinized to ensure there are no biasing issues. Most likely I could download their code and go and start my own version of Twitter or Facebook pretty easily. Full transparency is only really an option for algorithms used in public services, such as by the government (to some extent), healthcare, the legal system, etc. Since legal scholars are predominantly concerned with the legal system, it makes sense that this remains the consensus at the current time.

In the future, perhaps regulations on algorithmic fairness may be a more tenable solution than algorithmic transparency for private companies that have a vested interest in keeping their algorithms from the public eye. Andrew Tutt discusses this idea in his paper “An FDA For Algorithms”, which proposes the development of a regulatory body, similar to the FDA, to regulate algorithms. Algorithms could be submitted to the regulatory body, or perhaps to third-party auditing services, and analyzed to ensure they are suitable for use without resulting in disparate impact.

Clearly, such an idea would require large amounts of discussion, money, and expertise to implement, but this seems like a potentially workable solution from my perspective. There is still a long way to go to ensure our algorithms are free of both disparate treatment and disparate impact. With a combination of regulations, transparency, human-in-the-loop, human-on-the-loop, and new and improved variations of statistical parity, we are part of the way there, but this field is still young and there is much work to be done — watch this space.

Final Comments

In this article, we have discussed at length multiple biases present within training data due to the way in which it is collected and analyzed. We have also discussed several ways in which to mitigate the impact of these biases and to help ensure that algorithms remain non-discriminatory towards minority groups and protected classes.

Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Biases in training data, due either to prejudice in labels or to under-/over-sampling, yield models with unwanted bias.

Some might say that these decisions were previously made by humans with less information, humans subject to many implicit and cognitive biases influencing their decisions. Automating these decisions provides more accurate results and, to a large degree, limits the extent of these biases. The algorithms do not need to be perfect, just better than what previously existed. The arc of history curves towards justice.

Others might say that algorithms are being given free rein to systematically instantiate inequalities, or that data itself is inherently biased, and that variables related to protected attributes should therefore be removed from the data, with any correlated variables removed or restricted as well.

Both groups would be partially correct. However, we should not remain satisfied with unfair algorithms; there is always room for improvement. Similarly, we should not waste all of the data we have by removing every variable, as this would make systems perform much worse and render them much less useful. At the end of the day, it is up to the creators of these algorithms and oversight bodies, as well as those in charge of collecting data, to ensure that these biases are handled appropriately.

Data collection and sampling procedures are often glossed over in statistics classes and not well understood by the general public. Until such a time as a regulatory body appears, it is up to machine learning engineers, statisticians, and data scientists to ensure that equality of opportunity is embedded in our machine learning practices. We must be mindful of where our data comes from and what we do with it. Who knows whom our decisions might impact in the future?

“The world isn’t fair, Calvin.”
“I know Dad, but why isn’t it ever unfair in my favor?”
― Bill Watterson, The Essential Calvin and Hobbes: A Calvin and Hobbes Treasury

Further Reading

[1] Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. The White House. 2016.

[2] Bias in computer systems. Batya Friedman, Helen Nissenbaum. 1996

[3] The Hidden Biases in Big Data. Kate Crawford. 2013.

[4] Big Data’s Disparate Impact. Solon Barocas, Andrew Selbst. 2014.

[5] Blog post: How big data is unfair. Moritz Hardt. 2014

[6] Semantics derived automatically from language corpora contain human-like biases. Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan

[7] Sex Bias in Graduate Admissions: Data from Berkeley. P. J. Bickel, E. A. Hammel, J. W. O’Connell. 1975.

[8] Simpson’s paradox. Pearl (Chapter 6). Tech report

[9] Certifying and removing disparate impact. Michael Feldman, Sorelle Friedler, John Moeller, Carlos Scheidegger, Suresh Venkatasubramanian

[10] Equality of Opportunity in Supervised Learning. Moritz Hardt, Eric Price, Nathan Srebro. 2016.

[11] Blog post: Approaching fairness in machine learning. Moritz Hardt. 2016.

[12] Machine Bias. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. ProPublica. 2016.

[13] COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity. Northpointe Inc.

[14] Fairness in Criminal Justice Risk Assessments: The State of the Art. Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, Aaron Roth. 2017.

[15] Courts and Predictive Algorithms. Angèle Christin, Alex Rosenblat, and danah boyd. 2015. Discussion paper

[16] Limitations of mitigating judicial bias with machine learning. Kristian Lum. 2017.

[17] Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. John C. Platt. 1999.

[18] Inherent Trade-Offs in the Fair Determination of Risk Scores. Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan. 2016.

[19] Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. 2016.

[20] Attacking discrimination with smarter machine learning. An interactive visualization by Martin Wattenberg, Fernanda Viégas, and Moritz Hardt. 2016.

[21] Algorithmic decision making and the cost of fairness. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. 2017.

[22] The problem of Infra-marginality in Outcome Tests for Discrimination. Camelia Simoiu, Sam Corbett-Davies, Sharad Goel. 2017.

[23] Equality of Opportunity in Supervised Learning. Moritz Hardt, Eric Price, Nathan Srebro. 2016.

[24] Elements of Causal Inference. Peters, Janzing, Schölkopf

[25] On causal interpretation of race in regressions adjusting for confounding and mediating variables. Tyler J. VanderWeele and Whitney R. Robinson. 2014.

[26] Counterfactual Fairness. Matt J. Kusner, Joshua R. Loftus, Chris Russell, Ricardo Silva. 2017.

[27] Avoiding Discrimination through Causal Reasoning. Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf. 2017.

[28] Fair Inference on Outcomes. Razieh Nabi, Ilya Shpitser

[29] Fairness Through Awareness. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Rich Zemel. 2012.

[30] On the (im)possibility of fairness. Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian. 2016.

[31] Why propensity scores should not be used. Gary King, Richard Nielson. 2016.

[32] Raw Data is an Oxymoron. Edited by Lisa Gitelman. 2013.

[33] Blog post: What’s the most important thing in Statistics that’s not in the textbooks. Andrew Gelman. 2015.

[34] Deconstructing Statistical Questions. David J. Hand. 1994.

[35] Statistics and the Theory of Measurement. David J. Hand. 1996.

[36] Measurement Theory and Practice: The World Through Quantification. David J. Hand. 2010

[37] Survey Methodology, 2nd Edition. Robert M. Groves, Floyd J. Fowler, Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, Roger Tourangeau. 2009

[38] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. 2016.

[39] Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. 2017.

[40] Big Data’s Disparate Impact. Solon Barocas, Andrew Selbst. 2014.

[41] It’s Not Privacy, and It’s Not Fair. Cynthia Dwork, Deirdre K. Mulligan. 2013.

[42] The Trouble with Algorithmic Decisions. Tal Zarsky. 2016.

[43] How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem. Amanda Levendowski. 2017.

[44] An FDA for Algorithms. Andrew Tutt. 2016

This article was originally published on Towards Data Science and re-published to TOPBOTS with permission from the author.




Executive Interview: Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency 




As CTO of the Cybersecurity & Infrastructure Security Agency of the DHS, Brian Gattoni is charged with understanding and advising on cyber and physical risks to the nation’s critical infrastructure. 

Understanding and Advising on Cyber and Physical Risks to the Nation’s Critical Infrastructure 

Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency

Brian R. Gattoni is the Chief Technology Officer for the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. CISA is the nation’s risk advisor, working with partners to defend against today’s threats and collaborating to build a secure and resilient infrastructure for the future. Gattoni sets the technical vision and strategic alignment of CISA data and mission services. Previously, he was the Chief of Mission Engineering & Technology, developing analytic techniques and new approaches to increase the value of DHS cyber mission capabilities. Prior to joining DHS in 2010, Gattoni served in various positions at the Defense Information Systems Agency and the United States Army Test & Evaluation Command. He holds a Master of Science Degree in Cyber Systems & Operations from the Naval Postgraduate School in Monterey, California, and is a Certified Information Systems Security Professional (CISSP).  

AI Trends: What is the technical vision for CISA to manage risk to federal networks and critical infrastructure? 

Brian Gattoni: Our technology vision is built in support of our overall strategy. We are the nation’s risk advisor. It’s our job to stay abreast of incoming threats and opportunities for general risk to the nation. Our efforts are to understand and advise on cyber and physical risks to the nation’s critical infrastructure.  

It’s all about bringing in the data, understanding what decisions need to be made and can be made from the data, and what insights are useful to our stakeholders. The potential of AI and machine learning is to expand on operational insights with additional data sets to make better use of the information we have.  

What are the most prominent threats? 

The Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security is the Nation’s risk advisor.

The sources of threats we frequently discuss are the adversarial actions of nation-state actors, and those aligned with nation-state actors and their interests, in disrupting national critical functions here in the U.S. Just in the past month, we’ve seen increased activity from elements supporting what we refer to in the government as Hidden Cobra [malicious cyber activity by the North Korean government]. We’ve issued joint alerts with our partners overseas and the FBI and the DoD, highlighting activity associated with Chinese actors. Online, people can find CISA Insights, which are documents that provide background information on particular cyber threats and the vulnerabilities they exploit, as well as a ready-made set of mitigation activities that non-federal partners can implement.

What role does AI play in the plan? 

Artificial intelligence has a great role to play in the support of the decisions we make as an agency. Fundamentally, AI is going to allow us to apply our decision processes to a scale of data that humans just cannot keep up with. And that’s especially prevalent in the cyber mission. We remain cognizant of how we make decisions in the first place and target artificial intelligence and machine learning algorithms that augment and support that decision-making process. We’ll be able to use AI to provide operational insights at a greater scale or across a greater breadth of our mission space.  

How far along are you in the implementation of AI at the CISA? 

Implementing AI is not as simple as putting in a new business intelligence tool or putting in a new email capability. Really augmenting your current operations with artificial intelligence is a mix of the culture change, for humans to understand how the AI is supposed to augment their operations. It is a technology change, to make sure you have the scalable compute and the right tools in place to do the math you’re talking about implementing. And it’s a process change. We want to deliver artificial intelligence algorithms that augment our operators’ decisions as a support mechanism.  

Where we are in the implementation is closer to understanding those three things. We’re working with partners in federally funded research and development centers, national labs, and the department’s own Science and Technology Data Analytics Tech Center to develop capability in this area. We’ve developed an analytics meta-process which helps us systematize the way we take in data and puts us in a position to apply artificial intelligence to expand our use of that data.

Do you have any interesting examples of how AI is being applied in CISA and the federal government today? Or what you are working toward, if that’s more appropriate. 

I have a recent use case. We’ve been working with some partners over the past couple of months to apply AI to a humanitarian assistance and disaster relief type of mission. So, within CISA, we also have responsibilities for critical infrastructure. During hurricane season, we always have a role to play in helping advise what the potential impacts are to critical infrastructure sites in the affected path of a hurricane.  

We prepared to conduct an experiment leveraging AI algorithms and overhead imagery to figure out if we could analyze the data from a National Oceanic and Atmospheric Administration flight over the affected area. We compared that imagery with the base imagery from Google Earth or ArcGIS and used AI to identify any affected critical infrastructure. We could see the extent to which certain assets, such as oil refineries, were physically flooded. We could make an assessment as to whether they hit a threshold of damage that would warrant additional scrutiny, or we didn’t have to apply resources because their resilience was intact, and their functions could continue.   

That is a nice use case, a simple example of letting a computer do the comparisons and make a recommendation to our human operators. We found that it was very good at telling us which critical infrastructure sites did not need any additional intervention. To use a needle in a haystack analogy, one of the useful things AI can help us do is blow hay off the stack in pursuit of the needle. And that’s a win also. The experiment was very promising in that sense.  

How does CISA work with private industry, and do you have any examples of that?  

We have an entire division dedicated to stakeholder engagement. Private industry owns over 80% of the critical infrastructure in the nation. So CISA sits at the intersection of the private sector and the government to share information, to ensure we have resilience in place for both the government entities and the private entities, in the pursuit of resilience for those national critical functions. Over the past year we’ve defined a set of 55 functions that are critical for the nation.  

When we work with private industry in those areas we try to share the best insights and make decisions to ensure those function areas will continue unabated in the face of a physical or cyber threat. 

Cloud computing is growing rapidly. We see different strategies, including using multiple vendors of the public cloud, and a mix of private and public cloud in a hybrid strategy. What do you see is the best approach for the federal government? 

In my experience, the best approach is to provide guidance to the CIOs and CISOs across the federal government and allow them the flexibility to make risk-based determinations about their own computing infrastructure, as opposed to a one-size-fits-all approach.

We issue a series of use cases that describe, at a very high level, a reference architecture for a type of cloud implementation, where security controls should be implemented, and where telemetry and instrumentation should be applied. You have departments and agencies that have a very forward-facing public citizen services portfolio, which means access to information is one of their primary responsibilities. Public clouds and ease of access are most appropriate for those. And then there are agencies with more sensitive missions. Those have critical high-value data assets that need to be protected in a specific way. Giving each the guidance they need to handle all of their use cases is what we’re focused on here.

I wanted to talk a little bit about job roles. How are you defining the job roles around AI in CISA, as in data scientists, data engineers, and other important job titles and new job titles?  

I could spend the remainder of our time on this concept of job roles for artificial intelligence; it’s a favorite topic for me. I am a big proponent of the discipline of data science being a team sport. We currently have our engineers and our analysts and our operators. And the roles and disciplines around data science and data engineering have been morphing out of an additional duty on analysts and engineers into their own subsector, their own discipline. We’re looking at a cadre of data professionals who serve almost as a logistics function to our operators who are doing the mission-level analysis. If you treat data as an asset that has to be moved and prepared and cleaned and readied (all terms in the data science and data engineering world now), you start to realize that it requires logistics functions similar to any other asset that has to be moved.

If you get professionals dedicated to that end, you will be able to scale to the data problems you have without overburdening your current engineers who are building the compute platforms, or your current mission analysts who are trying to interpret the data and apply the insights to your stakeholders. You will have more team members moving data to the right places, making data-driven decisions. 

Are you able to hire the help you need to do the job? Are you able to find qualified people? Where are the gaps? 

As the domain continues to mature and we understand more about the different roles, we begin to see gaps: education programs and training programs that need to be developed. I think maybe three to five years ago, you would see certificates from higher education in data science. Now we’re starting to see full-fledged degrees as concentrations out of computer science or mathematics. Those graduates are the pipeline to help us fill the gaps we currently have. So as far as our current problems, there are never enough people. It’s always hard to get the good ones and then keep them, because the competition is so high.

Here at CISA, we continue to invest not only in our own folks that are re-training, but in the development of a cyber education and training group, which is looking at the partnerships with academia to help shore up that pipeline. It continually improves. 

Do you have a message for high school or college students interested in pursuing a career in AI, either in the government or in business, as to what they should study? 

Yes, and it’s similar to the message I give to the high schoolers who live in my house. That is, don’t give up on math so easily. Math and science, the STEM subjects, have foundational skills that may be applicable to your future career. That is not to discount the diversity and variety of thought processes that come from other disciplines. I tell my kids they need the mathematical foundation to be able to apply the thought processes you learn from studying music or studying art or studying literature, and the different ways that those disciplines help you make connections. But have the mathematical foundation to represent those connections to a computer.

One of the fallacies around machine learning is that it will just learn [by itself]. That’s not true. You have to be able to teach it, and you can only talk to computers with math, at the base level.  

So if you have the mathematical skills to relay your complicated human thought processes to the computer, and now it can replicate those patterns and identify what you’re asking it to do, you will have success in this field. But if you give up on the math part too early (it’s a progressive discipline), if you give up on algebra two and then come back years later and jump straight into calculus, success is going to be difficult, but not impossible.

You sound like a math teacher.  

A simpler way to say it is: if you say no to math now, it’s harder to say yes later. But if you say yes now, you can always say no later, if data science ends up not being your thing.  

Are there any incentives for young people, let’s say a student just out of college, to go to work for the government? Is there any kind of loan forgiveness for instance?  

We have a variety of programs. The one that I really like, and have had a lot of success with as a hiring manager in the federal government, especially here at DHS over the past 10 years, is a program called Scholarship for Service. It’s a CyberCorps program where interested students who pass the acceptance process can get a degree in exchange for some service time. It used to be two years; it might be more now, but they owe some time and service to the federal government after the completion of their degree.

I have seen many successful candidates come out of that program and go on to fantastic careers, contributing in cyberspace all over. I have interns that I hired nine years ago that are now senior leaders in this organization or have departed for private industry and are making their difference out there. It’s a fantastic program for young folks to know about.  

What advice do you have for other government agencies just getting started in pursuing AI to help them meet their goals? 

My advice for my peers and partners and anybody who’s willing to listen to it is, when you’re pursuing AI, be very specific about what it can do for you.   

I go back to the decisions you make, what people are counting on you to do. You bear some responsibility to know how you make those decisions if you’re really going to leverage AI and machine learning to make decisions faster or better or with some other quality of goodness. The speed at which you make decisions will cut both ways. You have to identify the benefit of that decision being made if it’s positive, and define your regret if that decision is made and it’s negative. And then do yourself a simple HIGH-LOW matrix; the quadrant of high-benefit, low-regret decisions is the target. Those are the ones that I would like to automate as much as possible. And if artificial intelligence and machine learning can help, that would be great. If not, that’s a decision you have to make.

I have two examples I use in our cyber mission to illustrate the extremes here. One is for incident triage. If a cyber incident is detected, we have a triage process to make sure that it’s real. That presents information to an analyst. If that’s done correctly, it has a high benefit because it can take a lot of work off our analysts. It has low-to-medium regret if it’s done incorrectly, because the decision is to present information to an analyst who can then provide that additional filter. So that’s high benefit, low regret. That’s a no-brainer for automating as much as possible.

On the other side of the spectrum is protecting next-generation 911 call centers from a potential telephony denial-of-service attack. One of the potential automated responses could be to cut off the incoming traffic to the 911 call center to stunt the attack. Benefit: you may have prevented the attack. Regret: potentially you’re cutting off legitimate traffic to a 911 call center, and that has life and safety implications. And that is unacceptable. That’s an area where automation is probably not the right approach. Those are two extreme examples, which are easy for people to understand, and they help illustrate how the benefit-regret matrix can work. How you make decisions is really the key to understanding whether to implement AI and machine learning to help automate those decisions using the full breadth of data.
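The benefit-regret triage Gattoni describes can be sketched as a simple lookup. The scores, thresholds, and decision labels below are illustrative assumptions, not CISA's actual criteria; the key design choice, per his 911 example, is that high regret vetoes automation regardless of benefit:

```python
# Hypothetical sketch of the HIGH-LOW benefit/regret matrix described above.
# Scores on a 0-10 scale and the threshold of 5 are made-up assumptions.
def automation_quadrant(benefit, regret, threshold=5):
    if regret >= threshold:
        return "do not automate"  # high regret dominates (e.g. 911 traffic cutoff)
    if benefit >= threshold:
        return "automate"         # high benefit, low regret: the target quadrant
    return "low priority"         # low benefit, low regret: not worth the effort

# Incident triage: high benefit, low-to-medium regret.
print(automation_quadrant(benefit=9, regret=3))   # automate
# Cutting off 911 call-center traffic: regret is unacceptable.
print(automation_quadrant(benefit=7, regret=10))  # do not automate
```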

Learn more about the Cybersecurity & Infrastructure Security Agency.  




Making Use Of AI Ethics Tuning Knobs In AI Autonomous Cars 




Ethical tuning knobs would be a handy addition to self-driving car controls, the author suggests, if for example the operator was late for work and needed to exceed the speed limit. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

There is increasing awareness about the importance of AI Ethics, consisting of being mindful of the ethical ramifications of AI systems.   

AI developers are being asked to carefully design and build their AI mechanizations by ensuring that ethical considerations are at the forefront of the AI systems development process. When fielding AI, those responsible for the operational use of the AI also need to be considering crucial ethical facets of the in-production AI systems. Meanwhile, the public and those using or reliant upon AI systems are starting to clamor for heightened attention to the ethical and unethical practices and capacities of AI.   

Consider a simple example. Suppose an AI application is developed to assess car loan applicants. Using Machine Learning (ML) and Deep Learning (DL), the AI system is trained on a trove of data and arrives at some means of choosing among those that it deems are loan worthy and those that are not. 

The underlying Artificial Neural Network (ANN) is so computationally complex that there are no apparent means to interpret how it arrives at the decisions being rendered. Also, there is no built-in explainability capability and thus the AI is unable to articulate why it is making the choices that it is undertaking (note: there is a movement toward including XAI, explainable AI components to try and overcome this inscrutability hurdle).   

Upon the AI-based loan assessment application being fielded, protests soon arose from some who asserted that they were turned down for their car loan due to an improper inclusion of race or gender as a key factor in rendering the negative decision.

At first, the maker of the AI application insists that they did not utilize such factors and professes complete innocence in the matter. Turns out though that a third-party audit of the AI application reveals that the ML/DL is indeed using race and gender as core characteristics in the car loan assessment process. Deep within the mathematically arcane elements of the neural network, data related to race and gender were intricately woven into the calculations, having been dug out of the initial training dataset provided when the ANN was crafted. 

That is an example of how biases can be hidden within an AI system. And it also showcases that such biases can go otherwise undetected, including that the developers of the AI did not realize that the biases existed and were seemingly confident that they had not done anything to warrant such biases being included. 

People affected by the AI application might not realize they are being subjected to such biases. In this example, those being adversely impacted perchance noticed and voiced their concerns, but we are apt to witness a lot of AI where no one will realize they are being subjected to biases and therefore will not be able to ring the bell of dismay.

Various AI Ethics principles are being proffered by a wide range of groups and associations, hoping that those crafting AI will take seriously the need to consider embracing AI ethical considerations throughout the life cycle of designing, building, testing, and fielding AI.   

AI Ethics typically consists of these key principles: 

1) Inclusive growth, sustainable development, and well-being

2) Human-centered values and fairness

3) Transparency and explainability

4) Robustness, security, and safety

5) Accountability

We certainly expect humans to exhibit ethical behavior, and thus it seems fitting that we would expect ethical behavior from AI too.   

Since the aspirational goal of AI is to provide machines that are the equivalent of human intelligence, being able to presumably embody the same range of cognitive capabilities that humans do, this perhaps suggests that we will only be able to achieve the vaunted goal of AI by including some form of ethics-related component or capacity. 

What this means is that if humans encapsulate ethics, which they seem to do, and if AI is trying to achieve what humans are and do, the AI ought to have an infused ethics capability else it would be something less than the desired goal of achieving human intelligence.   

You could claim that anyone crafting AI that does not include an ethics facility is undercutting what should be a crucial and integral aspect of any AI system worth its salt. 

Of course, trying to achieve the goals of AI is one matter, meanwhile, since we are going to be mired in a world with AI, for our safety and well-being as humans we would rightfully be arguing that AI had better darned abide by ethical behavior, however that might be so achieved.   

Now that we’ve covered that aspect, let’s take a moment to ponder the nature of ethics and ethical behavior.  

Considering Whether Humans Always Behave Ethically   

Do humans always behave ethically? I think we can all readily agree that humans do not necessarily always behave in a strictly ethical manner.   

Is ethical behavior by humans able to be characterized solely by whether someone is in an ethically binary state of being, namely either purely ethical or wholly unethical? I would dare say that we cannot always pin human behavior down into two binary-based and mutually exclusive buckets of being ethical or being unethical. The real world is often much grayer than that, and we at times are more likely to assess that someone is doing something ethically questionable, but not purely unethical, nor fully ethical.

In a sense, you could assert that human behavior ranges on a spectrum of ethics, at times being fully ethical and ranging toward the bottom of the scale as being wholly and inarguably unethical. In-between there is a lot of room for how someone ethically behaves. 

If you agree that the world is not a binary ethical choice of behaviors that fit only into truly ethical versus solely unethical, you would therefore also presumably be amenable to the notion that there is a potential scale upon which we might be able to rate ethical behavior. 

This scale might be from the scores of 1 to 10, or maybe 1 to 100, or whatever numbering we might wish to try and assign, maybe even including negative numbers too. 

Let’s assume for the moment that we will use the positive numbers of a 1 to 10 scale for increasingly ethical behavior (the topmost is 10), the scores of -1 to -10 for unethical behavior (-10 being the least ethical, or in other words the most unethical, potential rating), and zero as the midpoint of the scale.

Please do not get hung up on the scale numbering, which can be anything else that you might like. We could even use letters of the alphabet or any kind of sliding scale. The point being made is that there is a scale, and we could devise some means to establish a suitable scale for use in these matters.   

The twist is about to come, so hold onto your hat.   

We could observe a human and rate their ethical behavior on particular aspects of what they do. Maybe at work, a person gets an 8 for being ethically observant, while perhaps at home they are a more devious person, and they get a -5 score. 

Okay, so we can rate human behavior. Could we drive or guide human behavior by the use of the scale? 

Suppose we tell someone that at work they are being observed and their target goal is to hit an ethics score of 9 for their first year with the company. Presumably, they will undertake their work activities in such a way that it helps them to achieve that score.   

In that sense, yes, we can potentially guide or prod human behavior by providing targets related to ethical expectations. I told you a twist was going to arise, and now here it is. For AI, we could use an ethical rating or score to try and assess how ethically proficient the AI is.   

In that manner, we might be more comfortable using that particular AI if we knew that it had a reputable ethical score. And we could also presumably seek to guide or drive the AI toward an ethical score too, similar to how this can be done with humans, and perhaps indicate that the AI should be striving towards some upper bound on the ethics scale. 

Some pundits immediately recoil at this notion. They argue that AI should always be a +10 (using the scale that I’ve laid out herein). Anything less than a top ten is an abomination and the AI ought not to exist. Well, this takes us back into the earlier discussion about whether ethical behavior is in a binary state.

Are we going to hold AI to a “higher bar” than humans by insisting that AI always be “perfectly” ethical and nothing less so?   

This is somewhat of a quandary due to the point that AI overall is presumably aiming to be the equivalent of human intelligence, and yet we do not hold humans to that same standard. 

For some, they fervently believe that AI must be held to a higher standard than humans. We must not accept or allow any AI that cannot do so. 

Others indicate that this seems to fly in the face of what is known about human behavior and begs the question of whether AI can be attained if it must do something that humans cannot attain.   

Furthermore, they might argue that forcing AI to do something that humans do not undertake is now veering away from the assumed goal of arriving at the equivalent of human intelligence, which might bump us away from being able to do so as a result of this insistence about ethics.   

Round and round these debates continue to go. 

Those on the must-be-topnotch-ethical-AI side are often quick to point out that by allowing AI to be anything less than a top ten, you are opening Pandora’s box. For example, the AI could dip down into the negative numbers and sit at a -4, or, worse, degrade to a miserably and fully unethical -10.

Anyway, this is a debate that is going to continue and not be readily resolved, so let’s move on. 

If you are still of the notion that ethics exists on a scale and that AI might also be measured by such a scale, and if you also are willing to accept that behavior can be driven or guided by offering where to reside on the scale, the time is ripe to bring up tuning knobs. Ethics tuning knobs. 

Here’s how that works. You come in contact with an AI system and are interacting with it. The AI presents you with an ethics tuning knob, showcasing a scale akin to the ethics scale proposed earlier. Suppose the knob is currently at a 6, but you want the AI to act more in line with an 8, so you turn the knob up to 8. At that juncture, the AI adjusts its behavior so that ethically it exhibits an 8-level of ethical compliance rather than the earlier setting of 6.
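To make the mechanics concrete, here is a minimal, purely illustrative sketch of such a knob. The class name, the clamping behavior, and the -10 to +10 scale are assumptions drawn from the scale described earlier in this piece; no real AI system exposes an interface like this.

```python
# Illustrative sketch of an "ethics tuning knob". The scale and the
# clamping rule are assumptions for demonstration purposes only.

class EthicsKnob:
    """A dial on the -10 (fully unethical) to +10 (fully ethical) scale."""

    MIN_SETTING, MAX_SETTING = -10, 10

    def __init__(self, setting=6):
        self.setting = self._clamp(setting)

    def _clamp(self, value):
        # Requests outside the scale are pinned to its bounds.
        return max(self.MIN_SETTING, min(self.MAX_SETTING, value))

    def turn_to(self, value):
        self.setting = self._clamp(value)
        return self.setting

knob = EthicsKnob(setting=6)   # the AI currently behaves at a 6
knob.turn_to(8)                # rider asks for stricter ethical compliance
print(knob.setting)            # 8
```

Note the design choice of clamping rather than rejecting out-of-range requests: a rider who demands an 11 simply gets the top of the scale.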

What do you think of that? 

Some would bellow out balderdash, hogwash, and just unadulterated nonsense. A preposterous idea, or is it genius? You’ll find experts on both sides of that coin. Perhaps it might be helpful to place the ethics tuning knob within a contextual exemplar to highlight how it might come into play.

Here’s a handy contextual indication for you: Will AI-based true self-driving cars potentially contain an ethics tuning knob for use by their riders and passengers?

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: 

Why this is a moonshot effort, see my explanation here: 

For more about the levels as a type of Richter scale, see my discussion here: 

For the argument about bifurcating the levels, see my explanation here:   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: 

To be wary of fake news about self-driving cars, see my tips here: 

The ethical implications of AI driving systems are significant, see my indication here:   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms:   

Self-Driving Cars And Ethics Tuning Knobs 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.   

This seems rather straightforward. You might be wondering where any semblance of ethics behavior enters the picture. Here’s how. Some believe that a self-driving car should always strictly obey the speed limit. 

Imagine that you have just gotten into a self-driving car in the morning and it turns out that you are possibly going to be late getting to work. Your boss is a stickler and has told you that coming in late is a surefire way to get fired.   

You tell the AI via its Natural Language Processing (NLP) that the destination is your work address. 

And, you ask the AI to hit the gas, push the pedal to the metal, screech those tires, and get you to work on-time.

But it is clear-cut that if the AI obeys the speed limit, there is absolutely no chance of arriving at work on time, and since the AI is only and always going to go at or below the speed limit, your goose is cooked.

Better luck at your next job.   

Whoa, suppose the AI driving system had an ethics tuning knob. 

Abiding strictly by the speed limit occurs when the knob is cranked up to the top numbers, say 9 or 10.

You turn the knob down to a 5 and tell the AI that you need to rush to work, even if it means going over the speed limit. At a setting of 5, the AI driving system will mildly exceed the speed limit, though not in places like school zones, and only when the traffic situation seems to allow for safely going a smidgen faster.

The AI self-driving car gets you to work on-time!   

Later that night, when heading home, you are not in as much of a rush, so you put the knob back to the 9 or 10 at which it was earlier set.

Also, there is a child lock on the knob, so that when your kids use the self-driving car, which they can do on their own since no human driver is needed, the knob is always set at the topmost of the scale and the children cannot alter it.
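A hypothetical sketch of how knob settings might map to driving policy, including the child lock mentioned above, could look as follows. The thresholds, the 5% speed margin, and the function names are invented purely for illustration; no production driving system works this way.

```python
# Invented mapping from knob setting to top driving speed. The 5% margin
# and the setting-of-9 cutoff are assumptions for demonstration only.

def max_allowed_speed(knob, speed_limit, school_zone=False, traffic_clear=True):
    """Top speed the AI will drive, in the same units as speed_limit."""
    if knob >= 9 or school_zone or not traffic_clear:
        return speed_limit          # strict compliance, no exceptions
    return speed_limit * 1.05       # a "smidgen" over, only when safe

def effective_setting(requested, child_lock_engaged):
    """With the child lock on, the knob is pinned to the top of the scale."""
    return 10 if child_lock_engaged else requested

# Morning rush: knob at 5, clear traffic, no school zone.
print(max_allowed_speed(effective_setting(5, False), 60))  # roughly 63.0
# Same request with the child lock on: strict compliance.
print(max_allowed_speed(effective_setting(5, True), 60))   # 60
```

Even in this toy version, note that the school-zone and traffic checks override the knob entirely, reflecting the caveat above that the AI only bends the limit when it seems safe to do so.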

How does that seem to you? 

Some self-driving car pundits find the concept of such a tuning knob to be repugnant. 

They point out that everyone will “cheat” and set the knob to the lower scores, allowing the AI to do the same kind of shoddy and dangerous driving that humans do today. Whatever we might otherwise have gained by having self-driving cars, such as the hoped-for reduction in car crashes and the associated injuries and fatalities, will be lost due to the tuning-knob capability.

Others, though, point out that it is ridiculous to think people will put up with self-driving cars that are such restrained drivers that they never bend or break the law.

You’ll end up with people rarely opting to use self-driving cars and instead driving their human-driven cars, because they know they can drive more fluidly and won’t be stuck inside a self-driving car that drives like a scaredy-cat.

As you might imagine, the ethical ramifications of an ethics tuning knob are immense. 

In this use case, there is a kind of obviousness about the impacts of what an ethics tuning knob foretells.   

Other kinds of AI systems will have their own semblance of what an ethics tuning knob might portend, and though it might not be as readily apparent as in the case of self-driving cars, there is potentially as much at stake in some of those other AI systems too (which, like a self-driving car, might entail life-or-death repercussions).

If you really want to get someone going about the ethics tuning knob topic, bring up the allied matter of the Trolley Problem.   

The Trolley Problem is a famous thought experiment involving having to make choices about saving lives and which path you might choose. This has been repeatedly brought up in the context of self-driving cars and garnered acrimonious attention along with rather diametrically opposing views on whether it is relevant or not. 

In any case, the big overarching questions are: will we expect AI to have an ethics tuning knob, and if so, what will it do and how will it be used?

Those who insist there is no cause for any such device are apt to equally insist that we must have AI that only and always practices the utmost of ethical behavior.

Is that a Utopian perspective or can it be achieved in the real world as we know it?   

Only my crystal ball can say for sure.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]



Application of AI to IT Service Ops by IBM and ServiceNow Exemplifies a Trend 




AI combined with IT service operations is seen as having the potential to automate many tasks while improving response times and decreasing costs (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

The application of AI to IT service operations has the potential to automate many tasks and drive down the cost of operations. 

The trend is exemplified by the recent agreement between IBM and ServiceNow to leverage IBM’s AI-powered cloud infrastructure with ServiceNow’s intelligent workflow systems, as reported in Forbes. 

The goal is to reduce resolution times and lower the cost of outages, which according to a recent report from Aberdeen, can cost a company $260,000 per hour.  

David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow

“Digital transformation is no longer optional for anyone, and AI and digital workflows are the way forward,” stated David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow. “The four keys to success with AI are the ability 1) to automate IT, 2) gain deeper insights, 3) reduce risks, and 4) lower costs across your business,” Parsons said.   

The two companies plan to combine their tools in customer engagement to address each of these factors. “The first phase will bring together IBM’s AIOps software and professional services with ServiceNow’s intelligent workflow capabilities to help companies meet the digital demands of this moment,” Parsons stated. 

Arvind Krishna, Chief Executive Officer of IBM stated in a press release on the announcement, “AI is one of the biggest forces driving change in the IT industry to the extent that every company is swiftly becoming an AI company.” ServiceNow’s cloud computing platform helps companies manage digital workflows for enterprise IT operations.  

By partnering with ServiceNow and its market-leading Now Platform, clients will be able to use AI to quickly mitigate unforeseen IT incident costs. “Watson AIOps with ServiceNow’s Now Platform is a powerful new way for clients to use automation to transform their IT operations and mitigate unforeseen IT incident costs,” Krishna stated.

The IT service offering squarely positions IBM as aiming for AI in business. “When we talk about AI, we mean AI for business, which is much different than consumer AI,” stated Michael Gilfix of IBM in the Forbes account. He is the Vice President of Cloud Integration and Chief Product Officer of Cloud Paks at IBM. “AI for business is all about enabling organizations to predict outcomes, optimize resources, and automate processes so humans can focus their time on things that really matter,” he stated.

IBM Watson has handled more than 30,000 client engagements since inception in 2011, the company reports. Among the benefits of this experience is a vast natural language processing vocabulary, which can parse and understand huge amounts of unstructured data. 

Ericsson Scientists Develop AI System to Automatically Resolve Trouble Tickets 

Another experience involving AI in operations comes from two AI scientists with Ericsson, who have developed a machine learning algorithm to help application service providers manage and automatically resolve trouble tickets. 

Wenting Sun, senior data science manager, Ericsson

Wenting Sun, senior data science manager at Ericsson in San Francisco, and Alka Isac, data scientist in Ericsson’s Global AI Accelerator outside Boston, devised the system to help quickly resolve issues with the complex infrastructure of an application service provider, according to an account on the Ericsson Blog. These could be network connection response problems, infrastructure resource limitations, or software malfunctioning issues.

The two sought to use advanced NLP algorithms to analyze text information, interpret human language and derive predictions. They also took advantage of features/weights discovered from a group of trained models. Their system uses a hybrid of an unsupervised clustering approach and supervised deep learning embedding. “Multiple optimized models are then ensembled to build the recommendation engine,” the authors state.  
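Ericsson has not published the implementation, but the ensembling step the authors describe can be illustrated in miniature: several trained models each recommend a resolution for a ticket, and their votes are combined into the engine’s final recommendation. The stand-in “models” below are trivial keyword classifiers playing the role of the clustering- and embedding-based components, and every name and category here is fabricated for illustration.

```python
# Illustrative sketch of ensembling model recommendations by majority
# vote. The models and ticket categories are made up; Ericsson's actual
# system uses trained NLP, clustering, and deep learning models.

from collections import Counter

def ensemble_recommend(ticket_text, models):
    """Each model maps ticket text to a recommended resolution category;
    the most common recommendation wins."""
    votes = [model(ticket_text) for model in models]
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

# Trivial stand-ins for trained models.
def network_model(text):
    return "network" if "connection" in text else "software"

def resource_model(text):
    return "infrastructure" if "resource" in text else "network"

def nlp_model(text):
    return "network" if "response" in text else "software"

print(ensemble_recommend(
    "network connection response problem on node 7",
    [network_model, resource_model, nlp_model],
))  # "network" — all three models agree
```

In the real system, the voters would be multiple optimized models (unsupervised clustering plus supervised deep learning embeddings), but the combination principle is the same.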

The two describe current trouble ticket handling approaches as time-consuming, tedious, labor-intensive, repetitive, slow, and prone to error. Incorrect triaging often results, which can lead to a reopening of a ticket and more time to resolve, making for unhappy customers. When personnel turns over, the human knowledge gained from years of experience can be lost.  

Alka Isac, data scientist in Ericsson’s Global AI Accelerator

“We can replace the tedious and time-consuming triaging process with intelligent recommendations and an AI-assisted approach,” the authors stated, with time to resolution expected to be reduced by up to 75% and multiple ticket reopenings avoided.

Sun leads a team of data scientists and data engineers developing AI/ML applications in the telecommunication domain. She holds a bachelor’s degree in electrical and electronics engineering and a PhD in intelligent control. She also drives Ericsson’s contributions to the AI open-source platform Acumos (under the Linux Foundation’s Deep Learning Foundation).

As a Data Scientist in Ericsson’s Global AI Accelerator, Isac is part of a team of Data Scientists focusing on reducing the resolution time of tickets for Ericsson’s Customer Support Team. She holds a master’s degree in Information Systems Management majoring in Data Science. 

Survey Finds AI Is Helpful to IT 

In a survey of 154 IT and business professionals at companies with at least one AI-related project in general production, AI was found to deliver impressive results to IT departments, enhancing the performance of systems and making help desks more helpful, according to a recent account in ZDNet.  

The survey was conducted by ITPro Today working with InformationWeek and Interop. 

Beyond benefits of AI for the overall business, many respondents could foresee the greatest benefits going right to the IT organization itself: 63% responded that they hope to achieve greater efficiencies within IT operations. Another 45% aimed for improved product support and customer experience, and another 29% sought improved cybersecurity systems.

The top IT use case was security analytics and predictive intelligence, cited by 71% of AI leaders. Another 56% stated AI is helping with the help desk, while 54% have seen a positive impact on the productivity of their departments. “While critics say that the hype around AI-driven cybersecurity is overblown, clearly, IT departments are desperate to solve their cybersecurity problems, and, judging by this question in our survey, many of them are hoping AI will fill that need,” stated Sue Troy, author of the survey report.   

AI expertise is in short supply. More than two in three successful AI implementers, 67%, report shortages of candidates with needed machine learning and data modeling skills, while 51% seek greater data engineering expertise. Another 42% reported compute infrastructure skills to be in short supply.

Read the source articles and information in Forbes, the IBM press release on the alliance with ServiceNow, on the Ericsson Blog, in ZDNet, and from ITPro Today.

