If you can’t explain it simply, you don’t understand it well enough. — Albert Einstein

Disclaimer: This article draws on and expands upon material from (1) Christoph Molnar’s excellent book on Interpretable Machine Learning, which I definitely recommend to the curious reader, (2) a deep learning visualization workshop from Harvard ComputeFest 2020, as well as (3) material from CS282R at Harvard University, taught by Ike Lage and Hima Lakkaraju, who are both prominent researchers in the field of interpretability and explainability. This article is meant to condense and summarize the field of interpretable machine learning for the average data scientist and to stimulate interest in the subject.

Machine learning systems are increasingly being employed in complex, high-stakes settings such as medicine (e.g. radiology, drug development), financial technology (e.g. stock price prediction, digital financial advisors), and even law (e.g. case summarization, litigation prediction). Despite this increased adoption, there is still a shortage of techniques for explaining and interpreting the decisions of these deep learning algorithms. This can be very problematic in areas where the decisions of algorithms must be explainable or attributable to certain features due to laws or regulations (such as the right to explanation), or where accountability is required.

The need for algorithmic accountability has been highlighted many times, the most notable cases being Google’s image-labeling algorithm that labeled some Black people as gorillas, and Uber’s self-driving car that ran a stop sign. Because Google was unable to fix the algorithm and remove the algorithmic bias that caused this issue, it instead "solved" the problem by removing words relating to monkeys from Google Photos’ search engine. This illustrates the alleged black box nature of many machine learning algorithms.

The black box problem is predominantly associated with the supervised machine learning paradigm due to its predictive nature.

Accuracy alone is no longer enough.

Academics in deep learning are acutely aware of this interpretability and explainability problem, and whilst some argue that these models are essentially black boxes, there have been several techniques developed in recent years for visualizing aspects of deep neural networks, such as the features and representations they have learned. The term info-besity has been thrown around to refer to the difficulty of providing transparency when decisions are made on the basis of many individual features, due to an overload of information. The field of interpretability and explainability in machine learning has exploded since 2015 and there are now dozens of papers on the subject, some of which can be found in the references.

As we will see in this article, these visualization techniques are not sufficient for completely explaining the complex representations learned by deep learning algorithms, but hopefully, you will be convinced that the black box interpretation of deep learning is not true — we just need better techniques to be able to understand and interpret these models.

The Black Box

All algorithms in machine learning are to some extent black boxes. One of the key ideas of machine learning is that the models are data-driven — the model is configured from the data. This fundamentally leads us to problems such as (1) how we should interpret the models, (2) how to ensure they are transparent in their decision making, and (3) how to make sure the results of these algorithms are fair and statistically valid.

For something like linear regression, the models are very well understood and highly interpretable. When we move to something like a support vector machine (SVM) or a random forest model, things get a bit more difficult. In this sense, there is no purely white box or black box algorithm in machine learning; interpretability exists on a spectrum, a ‘gray box’ of varying grayness.

It just so happens that at the far end of our ‘gray’ area is the neural network. Even further in this gray area is the deep neural network. When you have a deep neural network with 1.5 billion parameters — as the GPT-2 algorithm for language modeling has — it becomes extremely difficult to interpret the representations that the model has learned.

In February 2020, Microsoft released the largest deep neural network in existence (probably not for long), Turing-NLG. This network contains 17 billion parameters, which is around 1/5th of the 85 billion neurons present in the human brain (although in a neural network, parameters represent connections, of which there are ~100 trillion in the human brain). Clearly, interpreting a 17 billion parameter neural network will be incredibly difficult, but its performance may be far superior to other models because it can be trained on huge amounts of data without becoming saturated — this is the idea that more complex representations can be stored by a model with a greater number of parameters.

Comparison of Turing-NLG to other deep neural networks such as BERT and GPT-2. Source

Obviously, the representations are there, we just do not understand them fully, and thus we must come up with better techniques to be able to interpret the models. Sadly, it is more difficult than reading coefficients as one is able to do in linear regression!

Neural networks are powerful models, but harder to interpret than simpler and more traditional models.

Often, we do not care how an algorithm came to a specific decision, particularly when models are operationalized in low-risk environments. In these scenarios, our choice of algorithm is not limited by interpretability requirements. However, if interpretability is important, as it often is in high-risk environments, then we must accept a tradeoff between accuracy and interpretability.

So what techniques are available to help us better interpret and understand our models? It turns out there are many of these, and it is helpful to make a distinction between what these different types of techniques help us to examine.

Local vs. Global

Techniques can be local, to help us study a small portion of the network, as is the case when looking at individual filters in a neural network.

Techniques can be global, allowing us to build up a better picture of the model as a whole; this could include visualizations of the weight distributions in a deep neural network, or visualizations of how activations propagate through the network's layers.

Model-Specific vs. Model-Agnostic

A technique that is highly model-specific is only suitable for a single type of model. For example, layer visualization is only applicable to neural networks, whereas partial dependency plots can be utilized for many different types of models and would be described as model-agnostic.

Model-specific techniques generally involve examining the structure of algorithms or intermediate representations, whereas model-agnostic techniques generally involve examining the input or output data distribution.

The distinction between different model visualization techniques and interpretability metrics. Source

I will discuss all of the above techniques throughout this article, but will also discuss where and how they can be put to use to help provide us with insight into our models.

Being Right for the Right Reasons

One of the issues that arises from our lack of model explainability is that we do not know what the model has actually learned to base its decisions on. This is best illustrated with an apocryphal example (there is some debate as to the truth of the story, but the lessons we can draw from it are nonetheless valuable).

Hide and Seek

According to AI folklore, in the 1960s, the U.S. Army was interested in developing a neural network algorithm that was able to detect tanks in images. Researchers developed an algorithm that was able to do this with remarkable accuracy, and everyone was pretty happy with the result.

However, when the algorithm was tested on additional images, it performed very poorly. This confused the researchers as the results had been so positive during development. After a while of everyone scratching their heads, one of the researchers noticed that when looking at the two sets of images, the sky was darker in one set of images than the other.

It became clear that the algorithm had not actually learned to detect tanks that were camouflaged, but instead was looking at the brightness of the sky!

Whilst this story exemplifies one of the common criticisms of deep learning, there is truth to the fact that in a neural network, and especially a deep neural network, you do not really know what the model is learning.

This powerful criticism, together with the increasing importance of deep learning in academia and industry, is what has led to an increased focus on interpretability and explainability. If an industry professional cannot convince their client that they understand what the model they built is doing, should it really be used when there are large risks, such as financial losses or people’s lives?

Interpretability

At this point, you might be asking yourself how visualization can help us to interpret a model, given that there may be an infinite number of viable interpretations. Defining and measuring what interpretability means is not a trivial task, and there is little consensus on how to evaluate it.

There is no mathematical definition of interpretability. Two proposed definitions in the literature are:

“Interpretability is the degree to which a human can understand the cause of a decision.” — Tim Miller

“Interpretability is the degree to which a human can consistently predict the model’s result.” — Been Kim

The higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made. A model is more interpretable than another model if its decisions are easier for a human to comprehend than decisions from the other model. One way we can start to evaluate model interpretability is via a quantifiable proxy.

A proxy is something that is highly correlated with what we are interested in studying but is fundamentally different from the object of interest. Proxies tend to be simpler to measure than the object of interest, or, as in this case, measurable at all, whereas our object of interest (like interpretability) may not be.

The idea of proxies is prevalent in many fields, one of which is psychology where they are used to measure abstract concepts. The most famous proxy is probably the intelligence quotient (IQ) which is a proxy for intelligence. Whilst the correlation between IQ and intelligence is not 100%, it is high enough that we can gain some useful information about intelligence from measuring IQ. There is no known way for directly measuring intelligence.

An algorithm that uses dimensional reduction to allow us to visualize high-dimensional data in a lower-dimensional space provides us with a proxy to visualize the data distribution. Similarly, a set of training images provides us with a proxy of the full data distribution of interest, but will inevitably be somewhat different to the true distribution (if you did a good job constructing the training set, it should not differ too much from a given test set).

What about post-hoc explanations?

Post-hoc explanations (or explaining after the fact) can be useful but sometimes misleading. These merely provide a plausible rationalization for the algorithmic behavior of a black box, not necessarily concrete evidence and so should be used cautiously. Post-hoc rationalization can be done with quantifiable proxies, and some of the techniques we will discuss do this.

Choosing a Visualization

Designing a visualization requires us to think about the following factors:

  • The audience to whom we are presenting (the who) — is this being done for debugging purposes? To convince a client? To convince a peer-reviewer for a research article?
  • The objective of the visualization (the what) — are we trying to understand the inputs (such as if EXIF metadata from an image is being read correctly so that an image does not enter a CNN sideways), outputs, or parameter distributions of our model? Are we interested in how inputs evolve through the network or a static feature of the network like a feature map or filter?
  • The model being developed (the how) — clearly, if you are not using a neural network, you cannot visualize feature maps of a network layer. Similarly, feature importance can be used for some models, such as XGBoost or Random Forest algorithms, but not others. Thus the model selection inherently biases what techniques can be used, and some techniques are more general and versatile than others. Developing multiple models can provide more versatility in what can be examined.

Deep models present unique challenges for visualization: we can answer the same questions about the model, but our method of interrogation must change! Because of the importance of this, we will mainly focus on deep learning visualization for the rest of the article.

Subfields of Deep Learning Visualization

There are largely three subfields of deep learning visualization literature:

  1. Interpretability & Explainability: helping to understand how deep learning models make decisions and their learned representations.
  2. Debugging & Improving: helping model curators and developers construct and troubleshoot their models, with the hope of expediting the iterative experimentation process to ultimately improve performance.
  3. Teaching Deep Learning: helping to educate amateur users about artificial intelligence — more specifically, machine learning.

Why is interpreting a neural network so difficult?

To understand why interpreting a neural network is difficult and non-intuitive, we have to understand what the network is doing to our data.

Essentially, the data we pass to the input layer — this could be an image or a set of relevant features for predicting a variable — can be plotted to form some complex distribution like that shown in the image below (this is only a 2D representation, imagine it in 1000 dimensions).

If we ran this data through a linear classifier, the model would try its best to separate the data, but since we are limited to a hypothesis class that only contains linear functions, our model will perform poorly since a large portion of the data is not linearly separable.

This is where neural networks come in. The neural network is a very special function. It has been proven that a neural network with a single hidden layer can approximate essentially any continuous function to arbitrary accuracy, as long as we have enough nodes in the network. This is known as the universal approximation theorem.

It turns out that the more nodes we have, the larger the class of functions we can represent. If we have a network with only ten layers and are trying to use it to classify a million images, the network will quickly saturate and reach maximum capacity. If we have 10 million parameters, it will be able to learn a much better representation of the data, as the number of non-linear transformations increases. We say this model has a larger model capacity.

People use deep neural networks instead of a single layer because the number of neurons needed in a single-layer network increases exponentially with model capacity. The abstraction of hidden layers significantly reduces the need for more neurons, but this comes at a cost to interpretability. The deeper we go, the less interpretable the network becomes.

The non-linear transformations of the neural network allow us to remap our data into a linearly separable space. At the output layer of a neural network, it then becomes trivial for us to separate our initially non-linear data into two classes using a linear classifier, as illustrated below.

The transformation of a non-linear dataset to one that is linearly separable using a neural network. Source
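To make this concrete, here is a minimal illustration of my own (not from the original article) comparing a linear classifier with a small neural network on scikit-learn's two-moons dataset, which is not linearly separable:

```python
# Compare a linear classifier with a small neural network on data that is
# not linearly separable (the classic "two moons" toy dataset).
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("linear accuracy:", linear.score(X_test, y_test))
print("network accuracy:", mlp.score(X_test, y_test))
```

The linear model plateaus well below the network's accuracy because no straight line separates the two moons; the hidden layers learn a transformation of the data under which a simple linear decision boundary is enough.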

The question is, how do we know what is going on within this multi-layer non-linear transformation, which may contain millions of parameters?

Imagine a GAN model (two networks fighting each other in order to mimic the distribution of the input data) working on a 512×512 image dataset. When images are introduced into a neural network, each pixel becomes a feature of the neural network. For an image of this size, the number of features is 262,144. This means we are performing potentially 8 or 9 convolutional and non-linear transformations on over 200,000 features. How can one interpret this?

Go even more extreme to the case of 1024×1024 images, which have been developed by NVIDIA’s implementation of StyleGAN. Since the number of pixels increases by a factor of four with a doubling of image size, we would have over a million features as our input to the GAN. So we now have a one million feature neural network, performing convolutional operations and non-linear activations, and doing this over a dataset of hundreds of thousands of images.

Hopefully, I have convinced you that interpreting deep neural networks is profoundly difficult. Although the operations of a neural network may seem simple, they can produce wildly complex outcomes via some form of emergence.

Visualizations

For the remainder of this article, I will discuss visualization techniques that can be used for deep neural networks, since they present the greatest challenge in the interpretability and explainability of machine learning.

Weight Histograms

Weight histograms are generally applicable to any data type, so I have chosen to cover these first. Weight histograms can be very useful in determining the overall distribution of weights across a deep neural network. In general, a histogram displays the number of occurrences of each value relative to the other values. Whether the distribution of weights is uniform, normal, or takes on some ordered structure can tell us useful information.

For example, if we want to check that all our network layers are learning from a given batch, we can see how the weight distributions change after training on the batch. Whilst this may not seem the most useful visualization at first, we can still gain valuable insight from weight histograms.

Below are weight and bias histograms for a four-layer network in Tensorboard — Tensorflow’s main visualization tool.

Weight histograms in Tensorboard.
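As a rough sketch of how such histograms are produced, the snippet below (the model architecture and data are placeholders of my own) attaches Tensorboard's histogram logging to a small Keras model:

```python
# Log weight and bias histograms to TensorBoard while training a toy model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# histogram_freq=1 writes a histogram of every layer's weights and biases at
# the end of each epoch; view them with `tensorboard --logdir logs/`.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs/", histogram_freq=1)

x = tf.random.normal((1024, 20))
y = tf.cast(tf.random.uniform((1024, 1)) > 0.5, tf.float32)
model.fit(x, y, epochs=5, callbacks=[tb])
```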

For those of you who are not familiar, another tool for plotting weight distributions is Weights and Biases (W&B), from a relatively new company specializing in experiment tracking for deep learning. When training a large network such as a GAN with millions of parameters, the experiment tracking provided by W&B is very helpful for logging purposes and offers more functionality than Tensorboard (and is free for those of you in academia).

Weight histograms in Weights and Biases.

Saliency Maps

Going back to the tank problem we discussed previously, how could we troubleshoot this network to ensure the classifier is examining the correct portions of an image to make its predictions? One way to do this is with saliency maps.

Saliency maps were proposed in the paper “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps” in 2013, along with class maximization (discussed later). The idea behind them is fairly simple. First, we compute the gradient of the output category with respect to the input image. This gives us an indicator as to how our classification changes with respect to small changes in each of the input image pixels. If the small change creates a positive gradient, then we know that changes to that pixel increase the output value. By visualizing the gradients, we can examine which pixels are the most important for activation and ensure that portions of the image being examined correspond to the object of interest.

Saliency maps provide a visual representation of the input sensitivity of an output class.

Saliency maps provide us with a method for computing the spatial support of a given class in a given image (image-specific class saliency map). This means that we can look at a classification output from a convolution network, perform backpropagation, and look at which parts of the image were involved in classifying the image as a given class.

Examples of class-specific images and their prospective saliency maps for that class. Source

Another simple adjustment to the saliency method known as rectified saliency can be used. This involves clipping negative gradients during the backpropagation step so as to only propagate positive gradient information. Thus, only information related to an increase in output is communicated. You can find more in the paper “Visualizing and Understanding Convolutional Networks.”

Given an image with pixel locations i and j, and with c color channels (red, blue, and green in RGB images), we backpropagate the output to find the derivative that corresponds to each pixel. We then take the maximum absolute value of this derivative across the color channels and use it as the ij-th value of the saliency map M.

M_{ij} = max_c | w_{ijc} |

The saliency map M is a 2D image with pixel locations i and j. The value of the map at each point is the maximum absolute value of the derivative found from backpropagation out of all the image color channels, c.

Visualizing saliency maps can easily be done in Keras using the keras-vis library’s functions ‘visualize_saliency’ and ‘visualize_saliency_with_losses’.
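If you prefer not to rely on an external package, a vanilla saliency map takes only a few lines with TensorFlow's GradientTape. The sketch below assumes a trained Keras classifier `model` and a single preprocessed image `img` of shape (H, W, 3); both names are placeholders:

```python
# Vanilla saliency: gradient of the class score with respect to the pixels.
import numpy as np
import tensorflow as tf

def saliency_map(model, img, class_idx):
    x = tf.convert_to_tensor(img[np.newaxis], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)                      # treat the input image as a variable
        score = model(x)[:, class_idx]     # output score of the class of interest
    grads = tape.gradient(score, x)        # d(score) / d(pixel)
    # Max of the absolute gradient over the color channels gives an (H, W) map.
    return tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()

# For "rectified" saliency, clip negative gradients before taking the max,
# e.g. grads = tf.nn.relu(grads).
```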

Occlusion Maps

A similar technique to saliency mapping for discerning the importance of pixels in an image’s prediction is occlusion mapping. In occlusion mapping, we are still developing a map related to an image’s output. However, this time we are interested in how blocking out part of the image affects the prediction output of the image.

Occlusion-based methods systematically occlude (block out) portions of the input image with a grey square while monitoring the classifier output. The image below — which shows an image classifier aiming to predict melanoma — clearly shows the model is localizing the objects within the scene, as the probability of the correct class drops significantly when the object is occluded (the heat map gets darker in the regions where the melanoma is, because occluding this region reduces the classifier’s output score).

An occlusion map for a classifier predicting melanoma. Source

Occlusion mapping is fairly simple to implement as it just involves distorting the image at a given pixel location and saving the prediction output to plot in a heat map. A good implementation of this on GitHub by Akshay Chawla can be found here.
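A rough sketch of the idea is below, again assuming a trained Keras classifier `model` and a preprocessed image `img` (placeholder names); the patch size, stride, and grey value are arbitrary choices:

```python
# Slide a grey square over the image and record how the class probability
# changes; low values indicate regions the classifier depends on.
import numpy as np

def occlusion_map(model, img, class_idx, patch=32, stride=16, grey=0.5):
    h, w, _ = img.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, top in enumerate(range(0, h - patch + 1, stride)):
        for j, left in enumerate(range(0, w - patch + 1, stride)):
            occluded = img.copy()
            occluded[top:top + patch, left:left + patch, :] = grey
            prob = model.predict(occluded[np.newaxis], verbose=0)[0, class_idx]
            heatmap[i, j] = prob
    return heatmap
```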

Class Maximization

One very powerful technique in studying neural networks is class maximization. This allows us to view the exemplar of a class, i.e. the input that would cause the class value of the classifier to be maximized in the output. For image data, we would call this the image exemplar of a class. Mathematically, this corresponds to:

x* = arg max_x f_c(x)

Where x* corresponds to the image exemplar of class c. This notation says we want the image that gives us the maximum possible output for class c, which can be interpreted as asking: what does the perfect c look like?

The outputs of this from a large scale classification network are fascinating. Below are some images generated by Nguyen, Yosinski, and Clune in their 2016 paper on deep convolutional network visualization. They performed class maximization on a deep convolutional neural network which was trained on the ILSVRC-2013 dataset.

Images generated from class maximization on a deep convolutional network. Source
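In practice the exemplar is found by gradient ascent on the input itself. The sketch below is a bare-bones version of the idea, assuming a trained Keras classifier `model`; published visualizations add regularizers (L2 decay, jitter, blurring) to obtain more natural-looking images:

```python
# Class maximization by gradient ascent on the input, starting from noise.
import tensorflow as tf

def class_exemplar(model, class_idx, shape=(224, 224, 3), steps=200, lr=1.0):
    x = tf.Variable(tf.random.uniform((1, *shape)))        # random starting image
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(x)[:, class_idx]                  # class score to maximize
        grads = tape.gradient(score, x)
        x.assign_add(lr * grads / (tf.norm(grads) + 1e-8))  # normalized ascent step
    return x[0].numpy()
```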

Activation Maximization

Similar to class maximization, activation maximization helps us to visualize the exemplar of convolutional filters. Class maximization is a subset of activation maximization whereby the output softmax layer of a classification algorithm is maximized. Mathematically, activation maximization can be described as:

x* = arg max_x a_{l,f}(x)

Where x* corresponds to the exemplar of hidden layer l or filter f in a deep neural network. This notation says we want the input (an image in the case of a convolutional network) that maximizes the filter or layer. This is illustrated below for the 8 layers of a deep convolutional neural network.

Images generated from activation maximization on a deep convolutional network. Source
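The same gradient-ascent loop can target an intermediate filter instead of a class score. The sketch below assumes a trained Keras convolutional network `model`; the layer name and filter index are placeholders:

```python
# Activation maximization: find an input that maximizes one filter's activation.
import tensorflow as tf

def filter_exemplar(model, layer_name, filter_idx,
                    shape=(224, 224, 3), steps=200, lr=1.0):
    # Sub-model that outputs the feature maps of the chosen layer.
    extractor = tf.keras.Model(inputs=model.inputs,
                               outputs=model.get_layer(layer_name).output)
    x = tf.Variable(tf.random.uniform((1, *shape)))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = tf.reduce_mean(extractor(x)[..., filter_idx])  # mean activation
        grads = tape.gradient(score, x)
        x.assign_add(lr * grads / (tf.norm(grads) + 1e-8))
    return x[0].numpy()
```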

LIME (Local Interpretable Model-Agnostic Explanations)

LIME stands for local interpretable model-agnostic explanations and even has its own Python package. Because the method was designed to be model-agnostic, it can be applied to many different machine learning models. It was first shown in papers by Marco Tulio Ribeiro and colleagues, including “Model-Agnostic Interpretability of Machine Learning” and ‘“Why Should I Trust You?”: Explaining the Predictions of Any Classifier’, both published in 2016.

Local surrogate models are interpretable models that are used to explain individual predictions of black box machine learning models. LIME is an implementation of local surrogate models.

Surrogate models are trained to approximate the predictions of the underlying black box model.

Instead of training a global surrogate model, LIME focuses on training local surrogate models to explain individual predictions.

Explaining individual predictions to a human decision-maker. Source

In LIME, we perturb the input and analyze how our predictions change. Despite how it may sound, this is very different from occlusion mapping and saliency mapping. Our aim is to approximate the underlying model, f, using an interpretable model, g (such as a linear model with a few coefficients), from a set of possible models, G, at a given location governed by a proximity measure, πₓ. We also add a regularizer, Ω, to make sure the interpretable model is as simple as possible. This is illustrated in the equation below.

explanation(x) = arg min_{g ∈ G} L(f, g, πₓ) + Ω(g)

The explanation model for instance x is the model g (e.g. linear regression model) that minimizes loss L (e.g. mean squared error), which measures how close the explanation is to the prediction of the original model f (e.g. an xgboost model), while the model complexity Ω(g) is kept low (e.g. prefer fewer features). G is the family of possible explanations, for example, all possible linear regression models. The proximity measure πₓ defines how large the neighborhood around instance x is that we consider for the explanation.

LIME for images works differently than LIME for tabular data and text. Intuitively, it would not make much sense to perturb individual pixels, since many more than one pixel contribute to one class. Randomly changing individual pixels would probably not change the predictions by much. Therefore, variations of the images are created by segmenting the image into “superpixels” and turning superpixels off or on.

An image of a cat that has been segmented into superpixels. Source

Superpixels are interconnected pixels with similar colors and can be turned off by replacing each pixel with a user-defined color such as gray. The user can also specify a probability for turning off a superpixel in each permutation.

Explaining an image classification prediction made by Google’s Inception neural network. The top 3 classes predicted are “Electric Guitar” (p = 0.32), “Acoustic guitar” (p = 0.24) and “Labrador” (p = 0.21). Source
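In code, producing an explanation like the one above with the lime package looks roughly like the sketch below. Here `model` and `img` are placeholders, and the argument names reflect my recollection of the package's documentation, so verify them against the current API:

```python
# Explain one image prediction with LIME by perturbing superpixels.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(batch):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(np.array(batch), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"), predict_fn,
    top_labels=3, hide_color=0, num_samples=1000)  # 1000 superpixel perturbations

# Superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(temp, mask)  # rescale temp if your inputs are not in [0, 1]
```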

The fidelity measure (how well the interpretable model approximates the black box predictions, given by our loss value L) gives us a good idea of how reliable the interpretable model is in explaining the black box predictions in the neighborhood of the data instance of interest.

LIME is also one of the few methods that works for tabular data, text and images.

Note that we can also generate global surrogate models, which follow the same idea but are used as an approximate model for the entire black box algorithm, not just a localized subset of the algorithm.

Partial Dependency Plots

The partial dependence plot shows the marginal effect one or two features have on the predicted outcome of a machine learning model. If we are analyzing the market price of a metal like gold using a dataset with a hundred features, including the value of gold in previous days, we will find that the price of gold has a much higher dependence on some features than others. For example, the gold price might be closely linked to the oil price, whilst not strongly linked to the price of avocados. This information becomes visible in a partial dependency plot.

An example of partial dependency plots for bike rentals with respect to temperature, humidity, and wind speed. We see that of the three variables, the number of bike rentals depends most strongly on temperature. Source

Note that this is not the same as a linear regression model. If this was performed on a linear regression model, each of the partial dependency plots would be linear. The partial dependency plot allows us to see the relationship in its full complexity, which may be linear, exponential, or some other complex relationship.

One of the main pitfalls of the partial dependency plot is that it can only realistically show a 2D interpretation involving one or two features. Thus, modeling higher-order interaction terms between multiple variables is difficult.

There is also an inherent assumption of independence of the variables, which is often not the case (such as a correlation between height and weight, which are two common parameters in medical datasets). These correlations between variables may render one of them redundant or present issues to the algorithm due to multicollinearity. Where this becomes a problem, using Accumulated Local Effects (ALE) is much preferred, as it does not suffer from the same pitfalls as partial dependency plots when it comes to collinearity.

To avoid overinterpreting the results in data-sparse feature regions it is helpful to add a rug plot to the bottom of the partial dependency plot to see where data-rich and data-sparse regions are present.
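Partial dependence curves (and the ICE curves discussed in the next section) can be produced directly with scikit-learn's inspection module, available in scikit-learn 1.0 and later. The sketch below uses a synthetic dataset as a stand-in for the bike-rental example:

```python
# Plot partial dependence (and overlaid ICE curves) for a boosted-tree model.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="average" gives the classic partial dependence curve;
# kind="both" also overlays the per-instance ICE curves.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2], kind="both")
plt.show()
```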

Individual Conditional Expectation (ICE)

ICE is similar to partial dependency plots, except a different line is plotted for each instance in the dataset. Thus, the partial dependency plot gives us an averaged view of the dependency of a feature variable on the output variable, whereas ICE allows us to see the instance-specific dependency of a feature variable. This is useful when interaction variables are present that could be masked when looking at the average result, but become very apparent when using ICE.

An example of individual conditional expectation plots for bike rentals with respect to temperature, humidity, and wind speed. We see that none of the plots exhibits much heterogeneity between instances, so it is unlikely that any significant interaction terms are present. Source

Different types of ICE plots, such as centered and derivative ICE plots, also exist, but they essentially provide the same information in different forms.

Shapley Values

The Shapley value is a concept drawn from an aspect of cooperative game theory developed in 1953 by Lloyd Shapley. In cooperative game theory, the Shapley value optimizes the payout for each player based on their average contribution over all permutations. When applied to machine learning, we assume that each feature is a player in the game, all working together to maximize the prediction, which can be considered the payout. The Shapley value assigns a portion of the payout to each feature based on its contribution to the output value.

For example, if you are looking at house prices and you remove a single feature from the analysis, how does this affect the model prediction? If the predicted value goes down by an amount, we can infer that this feature contributed this much to the prediction. Of course, it is not exactly that simple; we must perform this computation for every possible combination of features, which means we need to run 2ˣ models, where x is the number of features.

Thus, the Shapley value is the average marginal contribution of a feature value across all possible coalitions.

ϕᵢ(v) = Σ_{S ⊆ N\{i}} [ |S|! (n − |S| − 1)! / n! ] ( v(S ∪ {i}) − v(S) ), where N is the set of all n features and v(S) is the prediction made using only the features in coalition S.

Equation for the Shapley value, ϕ, from cooperative game theory.

This equation may look daunting, so let’s examine it piece by piece from right to left. To find the marginal contribution of feature xᵢ, we calculate the prediction value of our model using a feature subset, S, that does not contain feature xᵢ, and we subtract this from the prediction value of the same subset with that feature added. We then scale this by a weight that accounts for the number of orderings in which that subset can occur out of all permutations of the features, and sum all of these contributions. Thus, we now have a value which is essentially the average contribution of a feature for a trained model using every possible subset of features.

This discussion may seem quite abstract, so an example would be helpful. The example used in Christoph’s book, involving house prices, is an excellent one to consider. Suppose we have three features for predicting a house price: (1) the size of the apartment (numeric), (2) the proximity to a nearby park (binary), and (3) the floor of the building the apartment is on. To calculate the Shapley values for each feature, we take every possible subset of features and predict the output in each case (including the case with no features). We then sum the marginal contributions of each feature.

All possible feature coalitions that need to be considered to calculate the Shapley value for the simple house price prediction model. Source

A player can be an individual feature value, e.g. for tabular data, but a player can also be a group of feature values. For example, to explain an image, pixels can be grouped to superpixels and the prediction distributed among them.

As far as I know, there is no official package for Shapley values on Python, but there are some repositories available that have implemented it for machine learning. One such package can be found here.

The main disadvantage of the Shapley value is that it is very computationally expensive and time-consuming for large numbers of features due to the exponential increase in the number of possible permutations for a linear increase in the number of features. Thus, for applications where the number of features is very large, the Shapley value is typically approximated using a subset of feature permutations.
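To make the formula and its exponential cost concrete, here is a brute-force sketch of exact Shapley values for a generic `predict` function. Replacing the out-of-coalition features with their mean is a simplification of properly marginalizing them out, so treat this as illustrative rather than a production implementation:

```python
# Exact Shapley values by enumerating every coalition of features.
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(predict, x, background):
    # x: 1-D array of feature values for one instance; background: training data.
    n = len(x)
    baseline = background.mean(axis=0)

    def value(coalition):
        z = baseline.copy()
        z[list(coalition)] = x[list(coalition)]  # keep only the coalition's features
        return predict(z.reshape(1, -1))[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi  # roughly 2^n model evaluations per feature: exact but expensive
```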

Anchors

Anchors were first introduced in a 2018 paper by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, the same researchers that created LIME. The method also has its own Python package, developed by Marco, and is available in the ALIBI package for Python as well.

Anchors address a key shortcoming of local explanation methods like LIME which proxy the local behavior of the model in a linear way. It is, however, unclear to what extent the explanation holds up in the region around the instance to be explained since both the model and data can exhibit non-linear behavior in the neighborhood of the instance. This approach can easily lead to overconfidence in the explanation and misleading conclusions on unseen but similar instances. The anchor algorithm tackles this issue by incorporating coverage, the region where the explanation applies, into the optimization problem.

Similar to LIME, anchors can be used on text, tabular, and image data. For images, we first segment them into superpixels whilst still maintaining local image structure. The interpretable representation then consists of the presence or absence of each superpixel in the anchor. Several image segmentation techniques can be used to split an image into superpixels, such as slic or quickshift.

The algorithm supports a number of standard image segmentation algorithms (felzenszwalb, slic and quickshift) and allows the user to provide a custom segmentation function.

Anchor of a beagle being superimposed on other image backgrounds without predictive accuracy being reduced when classified using the Inception network. Source
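In code, using the ALIBI anchor explainer on an image looks roughly like the sketch below. The `model` and `image` names are placeholders, and the argument names are based on my recollection of the ALIBI documentation, so treat them as assumptions and check the docs before relying on them:

```python
# Anchor explanation for a single image, using superpixel segmentation.
import matplotlib.pyplot as plt
from alibi.explainers import AnchorImage

predict_fn = lambda x: model.predict(x)  # batch of images -> class probabilities

explainer = AnchorImage(
    predict_fn,
    image_shape=image.shape,             # e.g. (299, 299, 3)
    segmentation_fn="slic",              # superpixel segmentation method
    segmentation_kwargs={"n_segments": 15, "compactness": 20, "sigma": 0.5})

explanation = explainer.explain(image, threshold=0.95, p_sample=0.5)

plt.imshow(explanation.anchor)           # the superpixels that anchor the prediction
plt.show()
```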

Counterfactuals

Counterfactuals are the opposite of anchors. Anchors are features that when present are sufficient to anchor a prediction (i.e. prevent it from being changed by altering other features). In the anchor section, we looked at an example where these anchors were superpixels of an image. Every superpixel in the image that was not part of an anchor was, in fact, a counterfactual — we can alter the prediction by altering the counterfactuals, and not by altering the anchors.

Counterfactuals were first proposed in the Wachter et al 2017 paper titled “Counterfactual explanations without opening the black box: Automated decisions and the GDPR”. The basic idea of counterfactuals is that we want to find the smallest change we can make to the smallest number of features in order to get the desired output we want.

A counterfactual explanation of a prediction describes the smallest change to the feature values that changes the prediction to a predefined output.

What is a counterfactual? It is the smallest change to our feature space that allows us to cross a decision boundary. Source

This may sound like an underdefined task, as there are many ways in which we could alter our instance in order for it to meet our desired output. This phenomenon is known as the ‘Rashomon effect’ and as a result, we must cast our problem in the form of an optimization problem. Firstly, we want to ensure that we change as few features as possible, and change these features by the smallest amount possible, whilst also maintaining instances that are likely given the joint distribution of the data. The loss function for our optimization problem can be cast as

L(x, x’, y’, λ) = λ · ( f’(x’) − y’ )² + d(x, x’)

The loss function to be minimized as part of the counterfactual optimization problem.

The first term of the loss function represents the quadratic distance between the model prediction f’(x’) and the expected output y’. The second term represents a distance metric between the original instance and the counterfactual instance. The quadratic term has a scaling parameter, λ, that balances the importance of matching the desired prediction against the distance between the normal instance x and the counterfactual instance x’.

The distance metric we use is the Manhattan distance because the counterfactual should not only be close to the original instance but should also change as few features as possible. The distance function is described as

d(x, x’) = Σ_j ( |x_j − x’_j| / MAD_j )

This is the Manhattan distance, with each feature j scaled by its median absolute deviation (MAD).

If we have a small scaling parameter, the distance metric becomes more important and we prefer to see counterfactuals that are close to our normal instance. If we have a large scaling parameter, the prediction becomes more important and we are laxer about how close the counterfactual is to the normal instance.

When we run our algorithm, we do not need to select a value for our scaling parameter. Instead, the authors suggest that a tolerance, ϵ, is given by the user which represents how far we will tolerate the prediction being from our output. This is represented as

| f’(x’) − y’ | ≤ ϵ

An additional constraint of our optimization problem.

Our optimization problem can then succinctly be described as

arg min_{x’} max_λ L(x, x’, y’, λ)

Our goal is to find the counterfactual x’ that minimizes our overall loss function whilst varying the scaling parameter λ.

The optimization mechanism for counterfactuals can be described as a ‘growing spheres’ approach, whereby the input instance, x, output value, y’, and tolerance parameter, ϵ, are given by the user. Initially, a small value for the scaling parameter, λ, is set. A random instance within the current ‘sphere’ of allowed counterfactuals is sampled and then used as a starting point for optimization, until the instance satisfies the above constraint (i.e. until the difference between the prediction and the desired output is within our tolerance). We then add this instance to our list of counterfactuals and increase the value of λ, which effectively grows the size of our ‘sphere’. We do this recursively, generating a list of counterfactuals. At the end of the procedure, we select the counterfactual which minimizes the loss function.
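The sketch below is an illustrative, deliberately naive implementation of this procedure for a model with a single numeric output; it is not ALIBI's implementation. The `predict`, `x`, `y_target`, and `X_train` names are placeholders, and the counterfactual satisfying the tolerance with the smallest distance is returned as a simple stand-in for minimizing the full loss:

```python
# Naive counterfactual search following the Wachter et al. loss:
# lambda * (f(x') - y')^2 + MAD-weighted Manhattan distance(x, x').
import numpy as np
from scipy.optimize import minimize

def counterfactual(predict, x, y_target, X_train,
                   eps=0.05, lam=0.1, lam_growth=2.0, max_rounds=10):
    mad = np.median(np.abs(X_train - np.median(X_train, axis=0)), axis=0) + 1e-8

    def distance(x_cf):                      # MAD-scaled Manhattan distance
        return np.sum(np.abs(x_cf - x) / mad)

    best = None
    for _ in range(max_rounds):
        def loss(x_cf):
            pred = predict(x_cf.reshape(1, -1))[0]
            return lam * (pred - y_target) ** 2 + distance(x_cf)

        # Start from a random point near x and optimize the loss.
        start = x + np.random.normal(scale=0.1, size=x.shape)
        x_cf = minimize(loss, start, method="Nelder-Mead").x

        if abs(predict(x_cf.reshape(1, -1))[0] - y_target) <= eps:
            if best is None or distance(x_cf) < distance(best):
                best = x_cf                  # closest valid counterfactual so far
        lam *= lam_growth                    # make the prediction term more important
    return best
```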

Counterfactuals are implemented in the Python package ALIBI, which you can read about here (they also have an alternate description that may be helpful and clearer than my own).

Other Techniques

There are other techniques that I have not touched upon here which I refer the interested reader to. These include, but are not limited to:

  • Accumulated Local Effects
  • Feature Importance
  • Dimensional Reduction Techniques (PCA, t-SNE)
  • SHapley Additive exPlanations (SHAP)
  • Model Distillation

A good repository of topics on machine learning interpretability can also be found on this GitHub page which covers papers, lectures, and other blogs with material on the subject.

Final Comments

Deep learning visualization is a complex topic that has only just begun to be researched in the last few years. However, it will become more important as deep learning techniques become more integrated into our data-driven society. Most of us may value performance over understanding, but I think that being able to interpret and explain models will provide a competitive edge for individuals and companies in the future; there will certainly be a market for it.

Visualization is not the only method, nor necessarily the best method, of interpreting or explaining the results of deep neural networks, but it is certainly one method, and it can provide us with useful insight into the decision-making process of complex networks.

“The problem is that a single metric, such as classification accuracy, is an incomplete description of most real-world tasks.” — Doshi-Velez and Kim 2017

References

Here are papers that I referenced in this article as well as papers I think the reader may find informative on the topic of algorithmic interpretability and explainability.

[1] Towards A Rigorous Science of Interpretable Machine Learning — Doshi-Velez and Kim, 2017

[2] The Mythos of Model Interpretability — Lipton, 2017

[3] Transparency: Motivations and Challenges — Weller, 2019

[4] An Evaluation of the Human-Interpretability of Explanation — Lage et. al., 2019

[5] Manipulating and Measuring Model Interpretability — Poursabzi-Sangdeh, 2018

[6] Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Predictions Model — Letham and Rudin, 2015

[7] Interpretable Decision Sets: A Joint Framework for Description and Prediction — Lakkaraju et. al., 2016

[8] Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions — Li et. al., 2017

[9] The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification — Kim et. al., 2014

[10] Learning Optimized Risk Scores — Ustun and Rudin, 2017

[11] Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission — Caruana et. al., 2015

[12] “Why Should I Trust You?” Explaining the Predictions of Any Classifier — Ribeiro et. al., 2016

[13] Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead — Rudin, 2019

[14] Interpretation of Neural Networks is Fragile — Ghorbani et. al., 2019

[15] Visualizing Deep Neural Network Decisions: Prediction Difference Analysis — Zintgraf et. al., 2017

[16] Sanity Checks for Saliency Maps — Adebayo et. al., 2018

[17] A Unified Approach to Interpreting Model Predictions — Lundberg and Lee, 2017

[18] Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) — Kim et. al., 2018

[19] Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR — Wachter et. al., 2018

[20] Actionable Recourse in Linear Classification — Ustun et. al., 2018

[21] Causal Interpretations of Black-Box Models — Zhao and Hastie, 2018

[22] Learning Cost-Effective and Interpretable Treatment Regimes — Lakkaraju and Rudin, 2017

[23] Human-in-the-Loop Interpretability Prior — Lage et. al., 2018

[24] Faithful and Customizable Explanations of Black Box Models — Lakkaraju et. al., 2019

[25] Understanding Black-box Predictions via Influence Functions — Koh and Liang, 2017

[26] Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability — Kleinberg and Mullainathan, 2019

[27] Understanding Neural Networks Through Deep Visualization — Yosinski et al., 2015

[28] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps — Simonyan, Vedaldi, and Zisserman, 2014

[29] Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks — Nguyen, Yosinski, and Clune, 2016

[30] Explanation in artificial intelligence: Insights from the social sciences — Tim Miller, 2017

[31] Examples are not enough, learn to criticize! Criticism for interpretability — Kim, Been, Rajiv Khanna, and Oluwasanmi O. Koyejo, 2016

[32] What’s Inside the Black Box? AI Challenges for Lawyers and Researchers — Ronald Yu and Gabriele Spina Ali, 2019

This article was originally published on Towards Data Science and re-published to TOPBOTS with permission from the author.


Executive Interview: Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency 


As CTO of the Cybersecurity & Infrastructure Security Agency of the DHS, Brian Gattoni is charged with understanding and advising on cyber and physical risks to the nation’s critical infrastructure. 

Understanding and Advising on Cyber and Physical Risks to the Nation’s Critical Infrastructure 

Brian Gattoni, CTO, Cybersecurity & Infrastructure Security Agency

Brian R. Gattoni is the Chief Technology Officer for the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security. CISA is the nation’s risk advisor, working with partners to defend against today’s threats and collaborating to build a secure and resilient infrastructure for the future. Gattoni sets the technical vision and strategic alignment of CISA data and mission services. Previously, he was the Chief of Mission Engineering & Technology, developing analytic techniques and new approaches to increase the value of DHS cyber mission capabilities. Prior to joining DHS in 2010, Gattoni served in various positions at the Defense Information Systems Agency and the United States Army Test & Evaluation Command. He holds a Master of Science Degree in Cyber Systems & Operations from the Naval Postgraduate School in Monterey, California, and is a Certified Information Systems Security Professional (CISSP).  

AI Trends: What is the technical vision for CISA to manage risk to federal networks and critical infrastructure? 

Brian Gattoni: Our technology vision is built in support of our overall strategy. We are the nation’s risk advisor. It’s our job to stay abreast of incoming threats and opportunities for general risk to the nation. Our efforts are to understand and advise on cyber and physical risks to the nation’s critical infrastructure.  

It’s all about bringing in the data, understanding what decisions need to be made and can be made from the data, and what insights are useful to our stakeholders. The potential of AI and machine learning is to expand on operational insights with additional data sets to make better use of the information we have.  

What are the most prominent threats? 

The Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security is the Nation’s risk advisor.

The sources of threats we frequently discuss are the adversarial actions of nation-state actors and those aligned with nation-state actors and their interests, in disrupting national critical functions here in the U.S. Just in the past month, we’ve seen increased activity from elements supporting what we refer to in the government as Hidden Cobra [malicious cyber activity by the North Korean government]. We’ve issued joint alerts with our partners overseas and the FBI and the DoD, highlighting activity associated with Chinese actors. On CISA.gov people can find CISA Insights, which are documents that provide background information on particular cyber threats and the vulnerabilities they exploit, as well as a ready-made set of mitigation activities that non-federal partners can implement.   

What role does AI play in the plan? 

Artificial intelligence has a great role to play in the support of the decisions we make as an agency. Fundamentally, AI is going to allow us to apply our decision processes to a scale of data that humans just cannot keep up with. And that’s especially prevalent in the cyber mission. We remain cognizant of how we make decisions in the first place and target artificial intelligence and machine learning algorithms that augment and support that decision-making process. We’ll be able to use AI to provide operational insights at a greater scale or across a greater breadth of our mission space.  

How far along are you in the implementation of AI at the CISA? 

Implementing AI is not as simple as putting in a new business intelligence tool or putting in a new email capability. Really augmenting your current operations with artificial intelligence is a mix of the culture change, for humans to understand how the AI is supposed to augment their operations. It is a technology change, to make sure you have the scalable compute and the right tools in place to do the math you’re talking about implementing. And it’s a process change. We want to deliver artificial intelligence algorithms that augment our operators’ decisions as a support mechanism.  

Where we are in the implementation is closer to understanding those three things. We’re working with partners in federally funded research and development centers, national labs and the department’s own Science and Technology Data Analytics Tech Center to develop capability in this area. We’ve developed an analytics meta-process which helps us systemize the way we take in data and puts us in a position to apply artificial intelligence to expand our use of that data.

Do you have any interesting examples of how AI is being applied in CISA and the federal government today? Or what you are working toward, if that’s more appropriate. 

I have a recent use case. We’ve been working with some partners over the past couple of months to apply AI to a humanitarian assistance and disaster relief type of mission. So, within CISA, we also have responsibilities for critical infrastructure. During hurricane season, we always have a role to play in helping advise what the potential impacts are to critical infrastructure sites in the affected path of a hurricane.  

We prepared to conduct an experiment leveraging AI algorithms and overhead imagery to figure out if we could analyze the data from a National Oceanic and Atmospheric Administration flight over the affected area. We compared that imagery with the base imagery from Google Earth or ArcGIS and used AI to identify any affected critical infrastructure. We could see the extent to which certain assets, such as oil refineries, were physically flooded. We could make an assessment as to whether they hit a threshold of damage that would warrant additional scrutiny, or we didn’t have to apply resources because their resilience was intact, and their functions could continue.   

That is a nice use case, a simple example of letting a computer do the comparisons and make a recommendation to our human operators. We found that it was very good at telling us which critical infrastructure sites did not need any additional intervention. To use a needle in a haystack analogy, one of the useful things AI can help us do is blow hay off the stack in pursuit of the needle. And that’s a win also. The experiment was very promising in that sense.  

How does CISA work with private industry, and do you have any examples of that?  

We have an entire division dedicated to stakeholder engagement. Private industry owns over 80% of the critical infrastructure in the nation. So CISA sits at the intersection of the private sector and the government to share information, to ensure we have resilience in place for both the government entities and the private entities, in the pursuit of resilience for those national critical functions. Over the past year we’ve defined a set of 55 functions that are critical for the nation.  

When we work with private industry in those areas we try to share the best insights and make decisions to ensure those function areas will continue unabated in the face of a physical or cyber threat. 

Cloud computing is growing rapidly. We see different strategies, including using multiple vendors of the public cloud, and a mix of private and public cloud in a hybrid strategy. What do you see is the best approach for the federal government? 

In my experience the best approach is to provide guidance to the CIO’s and CISO’s across the federal government and allow them the flexibility to make risk-based determinations on their own computing infrastructure as opposed to a one-size-fits-all approach.   

We issue a series of use cases that describe, at a very high level, a reference architecture for a type of cloud implementation and where security controls should be implemented, and where telemetry and instrumentation should be applied. You have departments and agencies that have a very forward-facing public citizen services portfolio, which means access to information is one of their primary responsibilities. Public clouds and ease of access are most appropriate for those. And then there are agencies with more sensitive missions. Those have critical high value data assets that need to be protected in a specific way. Giving each the guidance they need to handle all of their use cases is what we’re focused on here.

I wanted to talk a little bit about job roles. How are you defining the job roles around AI in CISA, as in data scientists, data engineers, and other important job titles and new job titles?  

I could spend the remainder of our time on this concept of job roles for artificial intelligence; it’s a favorite topic for me. I am a big proponent of the discipline of data science being a team sport. We currently have our engineers and our analysts and our operators. And the roles and disciplines around data science and data engineers have been morphing out of an additional duty on analysts and engineers into its own sub sector, its own discipline. We’re looking at a cadre of data professionals that serve almost as a logistics function to our operators who are doing the mission-level analysis. If you treat data as an asset that has to be moved and prepared and cleaned and readied, all terms in the data science and data engineering world now, you start to realize that it requires logistics functions similar to any other asset that has to be moved. 

If you get professionals dedicated to that end, you will be able to scale to the data problems you have without overburdening your current engineers who are building the compute platforms, or your current mission analysts who are trying to interpret the data and apply the insights to your stakeholders. You will have more team members moving data to the right places, making data-driven decisions. 

Are you able to hire the help you need to do the job? Are you able to find qualified people? Where are the gaps? 

As the domain continues to mature, as we understand more about the different roles, we begin to see gaps: education programs and training programs that need to be developed. I think maybe three, five years ago, you would see certificates from higher education in data science. Now we’re starting to see full-fledged degrees as concentrations out of computer science or mathematics. Those graduates are the pipeline to help us fill the gaps we currently have. So as far as our current problems, there’s never enough people. It’s always hard to get the good ones and then keep them because the competition is so high.

Here at CISA, we continue to invest not only in re-training our own folks, but in the development of a cyber education and training group, which is looking at partnerships with academia to help shore up that pipeline. It continually improves. 

Do you have a message for high school or college students interested in pursuing a career in AI, either in the government or in business, as to what they should study? 

Yes, and it’s similar to the message I give to the high schoolers who live in my house: don’t give up on math so easily. Math and science, the STEM subjects, provide foundational skills that may be applicable to your future career. That is not to discount the diversity and variety of thought processes that come from other disciplines. I tell my kids they need the mathematical foundation to be able to apply the thought processes they learn from studying music or art or literature, and the different ways those disciplines help you make connections. But have the mathematical foundation to represent those connections to a computer. 

One of the fallacies around machine learning is that it will just learn [by itself]. That’s not true. You have to be able to teach it, and you can only talk to computers with math, at the base level.  

So if you have the mathematical skills to relay your complicated human thought processes to the computer, and now it can replicate those patterns and identify what you’re asking it to do, you will have success in this field. But if you give up on the math part too early (it’s a progressive discipline), if you give up on algebra two and then come back years later and jump straight into calculus, success is going to be difficult, but not impossible. 

You sound like a math teacher.  

A simpler way to say it is: if you say no to math now, it’s harder to say yes later. But if you say yes now, you can always say no later, if data science ends up not being your thing.  

Are there any incentives for young people, let’s say a student just out of college, to go to work for the government? Is there any kind of loan forgiveness for instance?  

We have a variety of programs. The one that I really like, and that I have had a lot of success with as a hiring manager in the federal government, especially here at DHS over the past 10 years, is a program called Scholarship for Service. It’s a CyberCorps program where interested students who pass the acceptance process can get a degree in exchange for some service time. It used to be two years; it might be more now, but they owe some time in service to the federal government after the completion of their degree. 

I have seen many successful candidates come out of that program and go on to fantastic careers, contributing in cyberspace all over. I have interns that I hired nine years ago that are now senior leaders in this organization or have departed for private industry and are making their difference out there. It’s a fantastic program for young folks to know about.  

What advice do you have for other government agencies just getting started in pursuing AI to help them meet their goals? 

My advice for my peers and partners and anybody who’s willing to listen to it is, when you’re pursuing AI, be very specific about what it can do for you.   

I go back to the decisions you make, what people are counting on you to do. You bear some responsibility to know how you make those decisions if you’re really going to leverage AI and machine learning to make decisions faster or better or with some other quality of goodness. The speed at which you make decisions cuts both ways. You have to identify the benefit of that decision being made if it’s positive and define your regret if that decision is made and it’s negative. And then do yourself a simple high-low matrix; the quadrant of high-benefit, low-regret decisions is the target. Those are the ones that I would like to automate as much as possible. And if artificial intelligence and machine learning can help, that would be great. If not, that’s a decision you have to make. 

I have two examples I use in our cyber mission to illustrate the extremes here. One is incident triage. If a cyber incident is detected, we have a triage process to make sure that it’s real, which presents information to an analyst. If that’s done correctly, it has a high benefit because it can take a lot of work off our analysts. It has low-to-medium regret if it’s done incorrectly, because the decision is to present information to an analyst who can then provide that additional filter. So that’s high benefit, low regret. That’s a no-brainer for automating as much as possible. 

On the other side of the spectrum is protecting next-generation 911 call centers from a potential telephony denial-of-service attack. One of the potential automated responses could be to cut off the incoming traffic to the 911 call center to blunt the attack. Benefit: you may have prevented the attack. Regret: potentially you’re cutting off legitimate traffic to a 911 call center, and that has life and safety implications. And that is unacceptable. That’s an area where automation is probably not the right approach. Those are two extreme examples, which are easy for people to understand, and they help illustrate how the benefit-regret matrix can work. How you make decisions is really the key to understanding whether to implement AI and machine learning to help automate those decisions using the full breadth of data. 
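As a rough illustration of the benefit-regret screen described above, here is a short sketch in code; the decision names and quadrant rules are illustrative examples rather than CISA's actual inventory or policy.

```python
# Illustrative sketch of the high/low benefit-regret screen described above.
# The decision names and quadrant rules are examples, not CISA's actual inventory.

from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    benefit: str  # "high" or "low": value if the automated call is right
    regret: str   # "high" or "low": cost if the automated call is wrong

def automation_recommendation(d: Decision) -> str:
    """Place a decision in the 2x2 matrix and return a rough recommendation."""
    if d.benefit == "high" and d.regret == "low":
        return "automate as much as possible"
    if d.regret == "high":
        return "keep a human in the loop"
    return "evaluate case by case"

decisions = [
    Decision("incident triage: surface alerts to an analyst", benefit="high", regret="low"),
    Decision("cut inbound traffic to a 911 call center", benefit="high", regret="high"),
]

for d in decisions:
    print(f"{d.name} -> {automation_recommendation(d)}")
```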

Learn more about the Cybersecurity & Infrastructure Security Agency.  

Source: https://www.aitrends.com/executive-interview/executive-interview-brian-gattoni-cto-cybersecurity-infrastructure-security-agency/


AI

Making Use Of AI Ethics Tuning Knobs In AI Autonomous Cars 


Ethical tuning knobs would be a handy addition to self-driving car controls, the author suggests, if for example the operator was late for work and needed to exceed the speed limit. (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider  

There is increasing awareness of the importance of AI Ethics, which means being mindful of the ethical ramifications of AI systems. 

AI developers are being asked to carefully design and build their AI systems by ensuring that ethical considerations are at the forefront of the development process. When fielding AI, those responsible for its operational use also need to consider crucial ethical facets of the in-production systems. Meanwhile, the public and those using or reliant upon AI systems are starting to clamor for heightened attention to the ethical and unethical practices and capacities of AI. 

Consider a simple example. Suppose an AI application is developed to assess car loan applicants. Using Machine Learning (ML) and Deep Learning (DL), the AI system is trained on a trove of data and arrives at some means of choosing among those that it deems are loan worthy and those that are not. 

The underlying Artificial Neural Network (ANN) is so computationally complex that there is no apparent means to interpret how it arrives at the decisions being rendered. Also, there is no built-in explainability capability, and thus the AI is unable to articulate why it is making the choices it is making (note: there is a movement toward including XAI, explainable AI, components to try to overcome this inscrutability hurdle). 

Upon the AI-based loan assessment application being fielded, protests soon arose from some who asserted they were turned down for their car loan due to an improper inclusion of race or gender as a key factor in rendering the negative decision. 

At first, the maker of the AI application insists that they did not utilize such factors and professes complete innocence in the matter. Turns out though that a third-party audit of the AI application reveals that the ML/DL is indeed using race and gender as core characteristics in the car loan assessment process. Deep within the mathematically arcane elements of the neural network, data related to race and gender were intricately woven into the calculations, having been dug out of the initial training dataset provided when the ANN was crafted. 

That is an example of how biases can be hidden within an AI system. It also shows that such biases can go otherwise undetected: the developers of the AI did not realize the biases existed and were seemingly confident that they had done nothing to warrant such biases being included. 
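As a rough sketch of what such a third-party audit can involve, the hypothetical example below assumes a fitted scikit-learn-style classifier and an applicant dataset with named sensitive columns, and checks how much predictive weight those columns carry:

```python
# Hypothetical audit sketch: measure how much the model relies on sensitive features.
# Assumes a fitted scikit-learn-compatible classifier `model`, a held-out pandas
# DataFrame `X_test`, and labels `y_test`; column names are illustrative.

from sklearn.inspection import permutation_importance

SENSITIVE_COLUMNS = ["race", "gender"]  # columns an auditor would flag

def audit_sensitive_features(model, X_test, y_test, threshold=0.01):
    """Flag sensitive columns whose shuffling noticeably hurts model accuracy."""
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    findings = {}
    for column, score in zip(X_test.columns, result.importances_mean):
        if column in SENSITIVE_COLUMNS and score > threshold:
            findings[column] = score  # accuracy drop when this column is shuffled
    return findings

# Example usage (model and data not shown here):
# flagged = audit_sensitive_features(model, X_test, y_test)
# A non-empty result suggests the model is drawing on race or gender,
# even if the developers never intended it to.
```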

People affected by the AI application might not realize they are being subjected to such biases. In this example, those being adversely impacted happened to notice and voice their concerns, but we are apt to witness a lot of AI in which no one realizes they are being subjected to biases and is therefore unable to ring the bell of dismay. 

Various AI Ethics principles are being proffered by a wide range of groups and associations, hoping that those crafting AI will take seriously the need to consider embracing AI ethical considerations throughout the life cycle of designing, building, testing, and fielding AI.   

AI Ethics typically consists of these key principles: 

1) Inclusive growth, sustainable development, and well-being 

2) Human-centered values and fairness 

3) Transparency and explainability 

4) Robustness, security, and safety 

5) Accountability 

We certainly expect humans to exhibit ethical behavior, and thus it seems fitting that we would expect ethical behavior from AI too.   

Since the aspirational goal of AI is to provide machines that are the equivalent of human intelligence, being able to presumably embody the same range of cognitive capabilities that humans do, this perhaps suggests that we will only be able to achieve the vaunted goal of AI by including some form of ethics-related component or capacity. 

What this means is that if humans encapsulate ethics, which they seem to do, and if AI is trying to achieve what humans are and do, then the AI ought to have an infused ethics capability, or else it would be something less than the desired goal of achieving human intelligence. 

You could claim that anyone crafting AI that does not include an ethics facility is undercutting what should be a crucial and integral aspect of any AI system worth its salt. 

Of course, trying to achieve the goals of AI is one matter, meanwhile, since we are going to be mired in a world with AI, for our safety and well-being as humans we would rightfully be arguing that AI had better darned abide by ethical behavior, however that might be so achieved.   

Now that we’ve covered that aspect, let’s take a moment to ponder the nature of ethics and ethical behavior.  

Considering Whether Humans Always Behave Ethically   

Do humans always behave ethically? I think we can all readily agree that humans do not necessarily always behave in a strictly ethical manner.   

Is ethical behavior by humans able to be characterized solely by whether someone is in an ethically binary state of being, namely either purely ethical versus being wholly unethical? I would dare say that we cannot always pin down human behavior into two binary-based and mutually exclusive buckets of being ethical or being unethical. The real-world is often much grayer than that, and we at times are more likely to assess that someone is doing something ethically questionable, but it is not purely unethical, nor fully ethical. 

In a sense, you could assert that human behavior ranges on a spectrum of ethics, at times being fully ethical and ranging toward the bottom of the scale as being wholly and inarguably unethical. In-between there is a lot of room for how someone ethically behaves. 

If you agree that the world is not a binary ethical choice of behaviors that fit only into truly ethical versus solely unethical, you would therefore also presumably be amenable to the notion that there is a potential scale upon which we might be able to rate ethical behavior. 

This scale might be from the scores of 1 to 10, or maybe 1 to 100, or whatever numbering we might wish to try and assign, maybe even including negative numbers too. 

Let’s assume for the moment that we will use the positive numbers of a 1 to 10 scale for increasingly being ethical (the topmost is 10), and the scores of -1 to -10 for being unethical (the -10 is the least ethical or in other words most unethical potential rating), and zero will be the midpoint of the scale. 

Please do not get hung up on the scale numbering, which can be anything else that you might like. We could even use letters of the alphabet or any kind of sliding scale. The point being made is that there is a scale, and we could devise some means to establish a suitable scale for use in these matters.   

The twist is about to come, so hold onto your hat.   

We could observe a human and rate their ethical behavior on particular aspects of what they do. Maybe at work, a person gets an 8 for being ethically observant, while perhaps at home they are a more devious person, and they get a -5 score. 

Okay, so we can rate human behavior. Could we drive or guide human behavior by the use of the scale? 

Suppose we tell someone that at work they are being observed and their target goal is to hit an ethics score of 9 for their first year with the company. Presumably, they will undertake their work activities in such a way that it helps them to achieve that score.   

In that sense, yes, we can potentially guide or prod human behavior by providing targets related to ethical expectations. I told you a twist was going to arise, and now here it is. For AI, we could use an ethical rating or score to try and assess how ethically proficient the AI is.   

In that manner, we might be more comfortable using that particular AI if we knew that it had a reputable ethical score. And we could also presumably seek to guide or drive the AI toward an ethical score too, similar to how this can be done with humans, and perhaps indicate that the AI should be striving towards some upper bound on the ethics scale. 

Some pundits immediately recoil at this notion. They argue that AI should always be a +10 (using the scale that I’ve laid out herein). Anything less than a top ten is an abomination and the AI ought to not exist. Well, this takes us back into the earlier discussion about whether ethical behavior is in a binary state.   

Are we going to hold AI to a “higher bar” than humans by insisting that AI always be “perfectly” ethical and nothing less so?   

This is somewhat of a quandary due to the point that AI overall is presumably aiming to be the equivalent of human intelligence, and yet we do not hold humans to that same standard. 

For some, they fervently believe that AI must be held to a higher standard than humans. We must not accept or allow any AI that cannot do so. 

Others indicate that this seems to fly in the face of what is known about human behavior and begs the question of whether AI can be attained if it must do something that humans cannot attain.   

Furthermore, they might argue that forcing AI to do something that humans do not undertake is now veering away from the assumed goal of arriving at the equivalent of human intelligence, which might bump us away from being able to do so as a result of this insistence about ethics.   

Round and round these debates continue to go. 

Those on the must-be-topnotch-ethical-AI side are often quick to point out that by allowing AI to be anything less than a top ten, you are opening Pandora’s box. For example, the AI could dip down into the negative numbers and sit at a -4, or, worse, degrade to become miserably and fully unethical at a dismal -10. 

Anyway, this is a debate that is going to continue and not be readily resolved, so let’s move on. 

If you are still of the notion that ethics exists on a scale and that AI might also be measured by such a scale, and if you also are willing to accept that behavior can be driven or guided by offering where to reside on the scale, the time is ripe to bring up tuning knobs. Ethics tuning knobs. 

Here’s how that works. You come in contact with an AI system and are interacting with it. The AI presents you with an ethics tuning knob, showcasing a scale akin to our ethics scale earlier proposed. Suppose the knob is currently at a 6, but you want the AI to be acting more aligned with an 8, so you turn the knob upward to the 8. At that juncture, the AI adjusts its behavior so that ethically it is exhibiting an 8-score level of ethical compliance rather than the earlier setting of a 6. 
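For concreteness, a minimal sketch of what such a knob might look like as a software setting appears below; the class name, scale bounds, and clamping behavior are illustrative assumptions, not any existing system's design.

```python
# Illustrative sketch only: an ethics tuning knob as a bounded setting
# that an AI system reads before choosing how to behave.

class EthicsKnob:
    MIN_SETTING, MAX_SETTING = 1, 10  # 10 = strictest ethical compliance

    def __init__(self, setting: int = 10):
        self._setting = self._clamp(setting)

    def _clamp(self, value: int) -> int:
        return max(self.MIN_SETTING, min(self.MAX_SETTING, value))

    @property
    def setting(self) -> int:
        return self._setting

    def set(self, value: int) -> None:
        self._setting = self._clamp(value)

knob = EthicsKnob(setting=6)   # the system starts at a 6
knob.set(8)                    # the user asks for stricter behavior
print(knob.setting)            # 8: the AI now targets this level of compliance
```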

What do you think of that? 

Some would bellow out balderdash, hogwash, and just unadulterated nonsense. A preposterous idea or is it genius? You’ll find that there are experts on both sides of that coin. Perhaps it might be helpful to provide the ethics tuning knob within a contextual exemplar to highlight how it might come to play. 

Here’s a handy contextual indication for you: Will AI-based true self-driving cars potentially contain an ethics tuning knob for use by riders or passengers that use self-driving vehicles?   

Let’s unpack the matter and see.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/ 

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/ 

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there. 

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).   

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car. 

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Ethics Tuning Knobs 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.   

This seems rather straightforward. You might be wondering where any semblance of ethics behavior enters the picture. Here’s how. Some believe that a self-driving car should always strictly obey the speed limit. 

Imagine that you have just gotten into a self-driving car in the morning and it turns out that you are possibly going to be late getting to work. Your boss is a stickler and has told you that coming in late is a surefire way to get fired.   

You tell the AI via its Natural Language Processing (NLP) that the destination is your work address. 

And, you ask the AI to hit the gas, push the pedal to the metal, screech those tires, and get you to work on-time.

But it is clear cut that if the AI obeys the speed limit, there is absolutely no chance of arriving at work on-time, and since the AI is only and always going to go at or less than the speed limit, your goose is fried.   

Better luck at your next job.   

Whoa, suppose the AI driving system had an ethics tuning knob. 

Abiding strictly by the speed limit occurs when the knob is cranked up to the top numbers like say 9 and 10. 

You turn the knob down to a 5 and tell the AI that you need to rush to work, even if it means going over the speed limit. At a setting of 5, the AI driving system will mildly exceed the speed limit, though not in places like school zones, and only when the traffic situation seems to allow for safely going a smidgen faster than the speed limit. 

The AI self-driving car gets you to work on-time!   

Later that night, when heading home, you are not in as much of a rush, so you put the knob back to the 9 or 10 that it earlier was set at. 

Also, you have a child-lock on the knob, such that when your kids use the self-driving car, which they can do on their own since there isn’t a human driver needed, the knob is always set at the topmost of the scale and the children cannot alter it.   
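To sketch how a knob setting might translate into concrete driving behavior, the hypothetical policy function below maps the setting to an allowed speed, including the school-zone exception and child-lock just described; the thresholds and overshoot values are made up for illustration.

```python
# Illustrative sketch only: mapping a knob setting to a speed policy.
# Thresholds, overshoot values, and the child-lock behavior are made-up examples.

def allowed_speed(posted_limit_mph: float, knob_setting: int,
                  in_school_zone: bool, traffic_is_clear: bool,
                  child_lock: bool = False) -> float:
    """Return the maximum speed the driving system will target."""
    if child_lock:
        knob_setting = 10  # kids riding alone: pin the knob at the strictest setting
    if knob_setting >= 9 or in_school_zone or not traffic_is_clear:
        return posted_limit_mph  # strict compliance with the posted limit
    # Lower settings permit a modest overshoot that shrinks as the knob rises.
    overshoot_mph = {8: 1.0, 7: 2.0, 6: 3.5, 5: 5.0}.get(knob_setting, 5.0)
    return posted_limit_mph + overshoot_mph

print(allowed_speed(45, knob_setting=5, in_school_zone=False, traffic_is_clear=True))  # 50.0
print(allowed_speed(45, knob_setting=5, in_school_zone=True, traffic_is_clear=True))   # 45
print(allowed_speed(45, knob_setting=5, in_school_zone=False, traffic_is_clear=True,
                    child_lock=True))                                                  # 45
```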

How does that seem to you? 

Some self-driving car pundits find the concept of such a tuning knob to be repugnant. 

They point out that everyone will “cheat” and put the knob on the lower scores that will allow the AI to do the same kind of shoddy and dangerous driving that humans do today. Whatever we might have otherwise gained by having self-driving cars, such as the hoped-for reduction in car crashes, along with the reduction in associated injuries and fatalities, will be lost due to the tuning knob capability.   

Others though point out that it is ridiculous to think that people will put up with self-driving cars that are restricted drivers that never bend or break the law. 

You’ll end up with people opting to rarely use self-driving cars, driving their human-driven cars instead, because they know they can drive more fluidly and won’t be stuck inside a self-driving car that drives like some scaredy-cat. 

As you might imagine, the ethical ramifications of an ethics tuning knob are immense. 

In this use case, there is a kind of obviousness about the impacts of what an ethics tuning knob foretells.   

Other kinds of AI systems will have their semblance of what an ethics tuning knob might portend, and though it might not be as readily apparent as the case of self-driving cars, there is potentially as much at stake in some of those other AI systems too (which, like a self-driving car, might entail life-or-death repercussions).   


Conclusion   

If you really want to get someone going about the ethics tuning knob topic, bring up the allied matter of the Trolley Problem.   

The Trolley Problem is a famous thought experiment involving having to make choices about saving lives and which path you might choose. This has been repeatedly brought up in the context of self-driving cars and garnered acrimonious attention along with rather diametrically opposing views on whether it is relevant or not. 

In any case, the big overarching questions are will we expect AI to have an ethics tuning knob, and if so, what will it do and how will it be used. 

Those that insist there is no cause to have any such device are apt to equally insist that we must have AI that is only and always practicing the utmost of ethical behavior. 

Is that a Utopian perspective or can it be achieved in the real world as we know it?   

Only my crystal ball can say for sure.  

Copyright 2020 Dr. Lance Eliot  

This content is originally posted on AI Trends.  

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] 

http://ai-selfdriving-cars.libsyn.com/website 

Source: https://www.aitrends.com/ai-insider/making-use-of-ai-ethics-tuning-knobs-in-ai-autonomous-cars/


AI

Application of AI to IT Service Ops by IBM and ServiceNow Exemplifies a Trend 


AI combined with IT service operations is seen as having the potential to automate many tasks while improving response times and decreasing costs (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor 

The application of AI to IT service operations has the potential to automate many tasks and drive down the cost of operations. 

The trend is exemplified by the recent agreement between IBM and ServiceNow to leverage IBM’s AI-powered cloud infrastructure with ServiceNow’s intelligent workflow systems, as reported in Forbes. 

The goal is to reduce resolution times and lower the cost of outages, which, according to a recent report from Aberdeen, can cost a company $260,000 per hour. 

David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow

“Digital transformation is no longer optional for anyone, and AI and digital workflows are the way forward,” stated David Parsons, Senior Vice President of Global Alliances and Partner Ecosystem at ServiceNow. “The four keys to success with AI are the ability 1) to automate IT, 2) gain deeper insights, 3) reduce risks, and 4) lower costs across your business,” Parsons said.   

The two companies plan to combine their tools in customer engagement to address each of these factors. “The first phase will bring together IBM’s AIOps software and professional services with ServiceNow’s intelligent workflow capabilities to help companies meet the digital demands of this moment,” Parsons stated. 

Arvind Krishna, Chief Executive Officer of IBM stated in a press release on the announcement, “AI is one of the biggest forces driving change in the IT industry to the extent that every company is swiftly becoming an AI company.” ServiceNow’s cloud computing platform helps companies manage digital workflows for enterprise IT operations.  

By partnering with ServiceNow and their market-leading Now Platform, clients will be able to use AI to quickly mitigate unforeseen IT incident costs. “Watson AIOps with ServiceNow’s Now Platform is a powerful new way for clients to use automation to transform their IT operations and mitigate unforeseen IT incident costs,” Krishna stated. 

The IT service offering squarely positions IBM as aiming at AI for business. “When we talk about AI, we mean AI for business, which is much different than consumer AI,” stated Michael Gilfix of IBM in the Forbes account. He is the Vice President of Cloud Integration and Chief Product Officer of Cloud Paks at IBM. “AI for business is all about enabling organizations to predict outcomes, optimize resources, and automate processes so humans can focus their time on things that really matter,” he stated. 

IBM Watson has handled more than 30,000 client engagements since inception in 2011, the company reports. Among the benefits of this experience is a vast natural language processing vocabulary, which can parse and understand huge amounts of unstructured data. 

Ericsson Scientists Develop AI System to Automatically Resolve Trouble Tickets 

Another experience involving AI in operations comes from two AI scientists with Ericsson, who have developed a machine learning algorithm to help application service providers manage and automatically resolve trouble tickets. 

Wenting Sun, senior data science manager, Ericsson

Wenting Sun, senior data science manager at Ericsson in San Francisco, and Alka Isac, data scientist in Ericsson’s Global AI Accelerator outside Boston, devised the system to help quickly resolve issues with the complex infrastructure of an application service provider, according to an account on the Ericsson Blog. These could be network connection response problems, infrastructure resource limitations, or software malfunctioning issues. 

The two sought to use advanced NLP algorithms to analyze text information, interpret human language and derive predictions. They also took advantage of features/weights discovered from a group of trained models. Their system uses a hybrid of an unsupervised clustering approach and supervised deep learning embedding. “Multiple optimized models are then ensembled to build the recommendation engine,” the authors state.  
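As a rough, hypothetical sketch of that kind of hybrid pipeline (the account does not disclose the actual components, so the tickets, labels, and model choices below are illustrative only), consider the following:

```python
# Rough, hypothetical sketch of a hybrid triage pipeline in the spirit described
# above: clustering over text features plus a supervised classifier for routing.
# Tickets, labels, and model choices are illustrative, not Ericsson's system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

tickets = [
    "VPN connection drops every hour",
    "disk quota exceeded on compute node",
    "application crashes on login",
    "latency spikes on the API gateway",
]
teams = ["network", "infrastructure", "software", "network"]  # historical resolver teams

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tickets)

# Unsupervised view: group similar tickets so past resolutions can be reused.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Supervised view: predict the resolver team from historical assignments.
classifier = LogisticRegression(max_iter=1000).fit(X, teams)

def recommend(ticket_text: str) -> dict:
    """Combine both views into a simple routing recommendation."""
    x = vectorizer.transform([ticket_text])
    return {"team": classifier.predict(x)[0], "similar_cluster": int(kmeans.predict(x)[0])}

print(recommend("cannot reach internal DNS server"))
```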

The two describe current trouble ticket handling approaches as time-consuming, tedious, labor-intensive, repetitive, slow, and prone to error. Incorrect triaging often results, which can lead to a reopening of a ticket and more time to resolve, making for unhappy customers. When personnel turns over, the human knowledge gained from years of experience can be lost.  

Alka Isac, data scientist in Ericsson’s Global AI Accelerator

“We can replace the tedious and time-consuming triaging process with intelligent recommendations and an AI-assisted approach,” the authors stated, with time to resolution expected to be reduced by up to 75% and multiple ticket reopenings avoided. 

Sun leads a team of data scientists and data engineers developing AI/ML applications in the telecommunications domain. She holds a bachelor’s degree in electrical and electronics engineering and a PhD in intelligent control. She also drives Ericsson’s contributions to the AI open-source platform Acumos (under the Linux Foundation’s Deep Learning Foundation). 

As a Data Scientist in Ericsson’s Global AI Accelerator, Isac is part of a team of Data Scientists focusing on reducing the resolution time of tickets for Ericsson’s Customer Support Team. She holds a master’s degree in Information Systems Management majoring in Data Science. 

Survey Finds AI Is Helpful to IT 

In a survey of 154 IT and business professionals at companies with at least one AI-related project in general production, AI was found to deliver impressive results to IT departments, enhancing the performance of systems and making help desks more helpful, according to a recent account in ZDNet.  

The survey was conducted by ITPro Today working with InformationWeek and Interop. 

Beyond the benefits of AI for the overall business, many respondents could foresee the greatest benefits going right to the IT organization itself: 63% responded that they hope to achieve greater efficiencies within IT operations. Another 45% aimed for improved product support and customer experience, and another 29% sought improved cybersecurity systems. 

The top IT use case was security analytics and predictive intelligence, cited by 71% of AI leaders. Another 56% stated AI is helping with the help desk, while 54% have seen a positive impact on the productivity of their departments. “While critics say that the hype around AI-driven cybersecurity is overblown, clearly, IT departments are desperate to solve their cybersecurity problems, and, judging by this question in our survey, many of them are hoping AI will fill that need,” stated Sue Troy, author of the survey report.   

AI expertise is in short supply. More than two in three successful AI implementers, 67%, report shortages of candidates with needed machine learning and data modeling skills, while 51% seek greater data engineering expertise. Another 42% reported compute infrastructure skills to be in short supply. 

Read the source articles and information in Forbes, the IBM press release on the alliance with ServiceNow, on the Ericsson Blog, in ZDNet, and from ITPro Today. 

Source: https://www.aitrends.com/aiops/application-of-ai-to-it-service-ops-by-ibm-and-servicenow-exemplifies-a-trend/
