
# The Dirichlet Process, the Chinese Restaurant Process and other representations


This article is the third part of the series on Clustering with Dirichlet Process Mixture Models. Last time we defined the Finite Mixture Model based on the Dirichlet Distribution and we posed the question of how to make this particular model infinite. We briefly discussed the idea of taking the limit of the model as the number of clusters k tends to infinity, but as we stressed, the existence of such an object is not trivial (in other words, how do we actually “take the limit of a model”?). As a reminder, the reason why we want to make k infinite is that in this way we obtain a non-parametric model which does not require us to predefine the total number of clusters in the data.

Update: The Datumbox Machine Learning Framework is now open-source and free to download. Check out the package com.datumbox.framework.machinelearning.clustering to see the implementation of Dirichlet Process Mixture Models in Java.

Even though our target is to build a model capable of performing clustering on datasets, before that we must discuss Dirichlet Processes. We will provide both the strict mathematical definitions and more intuitive explanations of the DP, and we will discuss ways to construct the process. These constructions/representations can be seen as a way to find occurrences of the Dirichlet Process in “real life”.

Despite the fact that I tried to adapt my research report so that these blog posts are easier to follow, it is still important to define the necessary mathematical tools and distributions before we jump into using the models. Dirichlet Process models are a topic of active research, but they require a good understanding of Statistics and Stochastic Processes before using them. Another problem is that, as we will see in this article, Dirichlet Processes can be represented/constructed in numerous ways. As a result several academic papers use completely different notation/conventions and examine the problem from different points of view. In this post I try to explain them as simply as possible and use a consistent notation. Hopefully things will become clearer with the two upcoming articles which focus on the definition of Dirichlet Process Mixture Models and on how to actually use them to perform cluster analysis.

## 1. Definition of Dirichlet Process

A Dirichlet Process over a space Θ is a stochastic process. It is a probability distribution over “probability distributions over the space Θ”, and a draw from it is a discrete distribution. More formally, a Dirichlet Process is a distribution over probability measures, where a probability measure is a function from subsets of the space Θ to [0,1]. G is a DP-distributed random probability measure, denoted as G ~ DP(α, G0), if for any finite partition (A1,…,An) of the space Θ we have that (G(A1),…,G(An)) ~ Dirichlet(α·G0(A1),…,α·G0(An)).

Figure 1: Marginals on ﬁnite partitions are Dirichlet distributed.

The DP has two parameters: The first one is the base distribution G0 which serves like a mean: E[G(A)] = G0(A). The second one is the strength parameter α which is strictly positive and serves like an inverse variance: V[G(A)] = G0(A)·(1 − G0(A))/(α + 1). It determines the extent of the repetition of the values of the output distribution. The higher the value of α, the smaller the repetition; the smaller the value, the higher the repetition of the values of the output distribution. Finally the space Θ is the parameter space on which we define the DP. Moreover the space Θ is also the space on which G0 is defined, which is the same as that of G.

A simpler and more intuitive way to explain a Dirichlet Process is the following. Suppose that we have a space Θ that can be partitioned in any finite way (A1,…,An) and a probability distribution G which assigns probabilities to them. The G is a specific probability distribution over Θ but there are many others. The Dirichlet Process on Θ models exactly this; it is a distribution over all possible probability distributions on space Θ. The Dirichlet process is parameterized with the G0 base function and the α concentration parameter. We can say that G is distributed according to DP with parameters α and G0 if the joint distribution of the probabilities that G assigns to the partitions of Θ follows the Dirichlet Distribution. Alternatively we can say that the probabilities that G assigns to any finite partition of Θ follows Dirichlet Distribution.

Figure 2: Graphical Model of Dirichlet Process

Above we can see the graphical model of a DP. Note that α is a scalar hyperparameter, G0 is the base distribution of the DP, G is a random distribution over the parameter space Θ sampled from the DP which assigns probabilities to the parameters, and θi is a parameter vector drawn from the G distribution, an element of the space Θ.

## 2. Posterior Dirichlet Processes

The posterior Dirichlet Process was discussed by Ferguson. We start by drawing a random probability measure G from a Dirichlet Process, G ~ DP(α, G0). Since G is a probability distribution over Θ we can also sample from this distribution and draw independent identically distributed samples θ1,…,θn ~ G. Since draws from a Dirichlet Process are discrete distributions, we can represent G = Σk=1…∞ πk·δθk(θ), where δθk(θ) is short notation for δ(θ = θk), a delta function that equals 1 if θ = θk and 0 elsewhere. An interesting effect of this is that, since G is defined this way, there is a positive probability that different samples have the same value (θi = θj for some i ≠ j). As we will see later on, this creates a clustering effect that can be used to perform Cluster Analysis on datasets.

By using the above definitions and observations we want to estimate the posterior of the Dirichlet Process given the samples θ. Since we know that θi | G ~ G and G ~ DP(α, G0), by using Bayes’ rule and the conjugacy between the Dirichlet and the Multinomial we have that:

G | θ1,…,θn ~ DP(α + n, (α·G0 + Σi=1…n δθi) / (α + n))

Equation 1: Posterior Dirichlet Process

This property is very important and it is used by the various DP representations.
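As a minimal numeric sketch of this posterior update (the function name and the toy observations are illustrative, not part of any library), the posterior concentration becomes α + n and the posterior base measure mixes G0 with one atom per observed value:

```python
# Sketch of the posterior DP parameters after observing samples.
from collections import Counter

def dp_posterior(alpha, observations):
    """Return the posterior concentration alpha + n and the weights of the
    posterior base measure: the mass kept on G0 plus one atom per
    observed value (count / (alpha + n))."""
    n = len(observations)
    post_alpha = alpha + n
    weights = {"G0": alpha / post_alpha}
    for value, count in Counter(observations).items():
        weights[value] = count / post_alpha
    return post_alpha, weights

post_alpha, weights = dp_posterior(2.0, ["a", "b", "a"])
# posterior is DP(5, (2*G0 + 2*delta_a + 1*delta_b) / 5)
```

Note how the weight on G0 shrinks as more samples arrive, while repeated values accumulate mass on their atoms.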

## 3. Dirichlet Process representations

In the previous segments we defined the Dirichlet Process and presented its theoretical model. One important question that we must answer is how we know that such an object exists, and how we can construct and represent a Dirichlet Process.

The first indication of existence was provided by Ferguson, who used the Kolmogorov Consistency Theorem, gave the definition of a Dirichlet Process and described the posterior Dirichlet Process. Continuing this research, Blackwell and MacQueen used de Finetti’s Theorem to prove the existence of such a random probability measure and introduced the Blackwell-MacQueen urn scheme which satisfies the properties of the Dirichlet Process. In 1994 Sethuraman provided an additional simple and direct way of constructing a DP by introducing the Stick-breaking construction. Finally another representation was provided by Aldous, who introduced the Chinese Restaurant Process as an effective way to construct a Dirichlet Process.

The various Representations of the Dirichlet Process are mathematically equivalent but their formulation differs because they examine the problem from different points of view. Below we present the most common representations found in the literature and we focus on the Chinese Restaurant Process which provides a simple and computationally efficient way to construct inference algorithms for Dirichlet Process.

### 3.1 The Blackwell-MacQueen urn scheme

The Blackwell-MacQueen urn scheme can be used to represent a Dirichlet Process and it was introduced by Blackwell and MacQueen. It is based on the Pólya urn scheme, which can be seen as the opposite of sampling without replacement. In the Pólya urn scheme we assume that we have an opaque urn that contains colored balls and we draw balls randomly. When we draw a ball, we observe its color, we put it back in the urn and we add an additional ball of the same color. A similar scheme is used by Blackwell and MacQueen to construct a Dirichlet Process.

This scheme produces a sequence θ1, θ2,… with conditional probabilities θn | θ1,…,θn−1 ~ (α·G0 + Σi=1…n−1 δθi) / (α + n − 1). In this scheme we assume that G0 is a distribution over colors and each θn represents the color of the ball that is placed in the urn. The algorithm is as follows:

· With probability proportional to α we draw θn ~ G0 and we add a ball of this color to the urn.

· With probability proportional to n−1 we draw a random ball from the urn, we observe its color, we place it back in the urn and we add an additional ball of the same color to the urn.
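The two steps above can be sketched as a small simulation; the function name is illustrative, and any sampler can play the role of G0:

```python
# Simulation of the Blackwell-MacQueen urn scheme (illustrative names).
import random

def blackwell_macqueen(n, alpha, base_draw, rng=random):
    """Draw theta_1..theta_n: a new color from G0 with probability
    alpha/(alpha+i), otherwise a uniformly chosen previous draw."""
    draws = []
    for i in range(n):  # i balls are already in the urn
        if rng.random() < alpha / (alpha + i):
            draws.append(base_draw())        # new ball with a color from G0
        else:
            draws.append(rng.choice(draws))  # reuse the color of a drawn ball
    return draws

random.seed(0)
seq = blackwell_macqueen(100, alpha=1.0, base_draw=random.random)
# repeated values appear among the draws: the clustering effect
```

Even though G0 here is continuous (uniform on [0,1)), the sequence contains many ties, exactly the discreteness the urn scheme induces.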

Previously we started with a Dirichlet Process and derived the Blackwell-MacQueen scheme. Now let’s start in reverse from the Blackwell-MacQueen scheme and derive the DP. Since the θi were drawn in an iid manner from G, their joint distribution is invariant to any finite permutation and thus they are exchangeable. Consequently, by using de Finetti’s theorem, there must exist a distribution over measures that makes them iid, and this distribution is the Dirichlet Process. As a result we have shown that the Blackwell-MacQueen urn scheme is a representation of the DP and it gives us a concrete way to construct it. As we will see later, this scheme is mathematically equivalent to the Chinese Restaurant Process.

### 3.2 The Stick-breaking construction

The Stick-breaking construction is an alternative way to represent a Dirichlet Process which was introduced by Sethuraman. It is a constructive way of forming the distribution and uses the following analogy: We assume that we have a stick of length 1; we break it at position β1 and we assign π1 equal to the length of the part that we broke off. We repeat the same process on the remaining part of the stick to obtain π2, π3,… and so on; due to the way this scheme is defined, we can continue doing it infinitely many times.

Based on the above, the πk can be modeled as πk = βk·Πl=1…k−1(1 − βl), where βk ~ Beta(1, α), while as in the previous schemes the θk are sampled directly from the base distribution: θk ~ G0. Consequently the G distribution can be written as a sum of delta functions weighted by the πk probabilities: G = Σk=1…∞ πk·δθk. Thus the Stick-breaking construction gives us a simple and intuitive way to construct a Dirichlet Process.
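A draw from the DP can be approximated by truncating the stick-breaking procedure once the remaining stick is negligible; this is a sketch under that truncation assumption, with illustrative names:

```python
# Truncated stick-breaking sketch: the loop stops once the remaining
# stick length falls below a tolerance.
import random

def stick_breaking(alpha, base_draw, tol=1e-8, rng=random):
    """Return atoms theta_k ~ G0 and weights pi_k of an approximate draw
    G = sum_k pi_k * delta_{theta_k} from DP(alpha, G0)."""
    atoms, weights = [], []
    remaining = 1.0                            # unbroken part of the stick
    while remaining > tol:
        beta_k = rng.betavariate(1.0, alpha)   # break point ~ Beta(1, alpha)
        weights.append(remaining * beta_k)     # pi_k = beta_k * prod(1 - beta_l)
        atoms.append(base_draw())              # theta_k ~ G0
        remaining *= 1.0 - beta_k
    return atoms, weights

random.seed(1)
atoms, weights = stick_breaking(2.0, base_draw=lambda: random.gauss(0.0, 1.0))
```

The weights sum to 1 up to the truncation tolerance, so (atoms, weights) is a usable finite approximation of the discrete measure G.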

### 3.3 The Chinese Restaurant Process

The Chinese Restaurant Process, which was introduced by Aldous, is another effective way to represent a Dirichlet Process and it can be directly linked to the Blackwell-MacQueen urn scheme. This scheme uses the following analogy: We assume that there is a Chinese restaurant with infinitely many tables. As the customers enter the restaurant, each sits either at one of the occupied tables or at the first available empty table.

The CRP defines a distribution on the space of partitions of the positive integers. We start by drawing θ1,…,θn from the Blackwell-MacQueen urn scheme. As we discussed in the previous segments, we expect to see a clustering effect and thus the total number of unique θ values k will be significantly less than n. This defines a partition of the set {1,2,…,n} into k clusters. Consequently, drawing from the Blackwell-MacQueen urn scheme induces a random partition of the set {1,2,…,n}. The Chinese Restaurant Process is this induced distribution over partitions. The algorithm is as follows:

· The 1st customer always sits at the 1st table

· The (n+1)th customer has 2 options:

o Sit at the first unoccupied table with probability α/(α + n)

o Sit at the kth occupied table with probability nk/(α + n), where nk is the number of people sitting at that table

Here α is the dispersion value of the DP and n is the total number of customers in the restaurant at a given time. The latent variable zi stores the table number of the ith customer and takes values from 1 to kn, where kn is the total number of occupied tables when n customers are in the restaurant. Note that kn will always be less than or equal to n and on average it is about α·log(n). Finally, the probability of a table arrangement is invariant to permutations. Thus the zi are exchangeable, which implies that table arrangements with the same table sizes have the same probability.
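The seating rule above can be sketched as a simulation (the function name is illustrative; tables are 0-indexed for convenience):

```python
# Simulation of the Chinese Restaurant Process seating rule.
import random

def chinese_restaurant_process(n, alpha, rng=random):
    """Return the table assignment z_i of each customer (0-indexed tables)
    and the table occupancy counts."""
    tables = []        # tables[k] = number of customers at table k
    assignments = []
    for i in range(n):                  # i customers are already seated
        r = rng.random() * (i + alpha)  # total unnormalized mass
        cumulative = 0.0
        choice = len(tables)            # default: open a new table (mass alpha)
        for k, n_k in enumerate(tables):
            cumulative += n_k           # occupied table k has mass n_k
            if r < cumulative:
                choice = k
                break
        if choice == len(tables):
            tables.append(1)            # sit at the first empty table
        else:
            tables[choice] += 1         # join an occupied table
        assignments.append(choice)
    return assignments, tables

random.seed(2)
z, tables = chinese_restaurant_process(200, alpha=1.0)
# sum(tables) == 200, and the number of occupied tables stays far below 200
```

The rich-get-richer dynamic is visible in the occupancy counts: a few tables collect most customers, which is exactly the clustering effect described above.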

The Chinese Restaurant Process is strongly connected to Pólya urn scheme and Dirichlet Process. The CRP is a way to specify a distribution over partitions (table assignments) of n points and can be used as a prior on the space of latent variable zi which determines the cluster assignments. The CRP is equivalent to Pólya’s urn scheme with only difference that it does not assign parameters to each table/cluster. To go from CRP to Pólya’s urn scheme we draw for all tables k=1,2… and then for every xi which is grouped to table zi assign a . In other words assign to the new xi the parameter θ of the table. Finally since we can’t assign the θ to infinite tables from the beginning, we can just assign a new θ every time someone sits on a new table. Due to all the above, the CRP can help us build computationally efficient algorithms to perform Cluster Analysis on datasets.
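The CRP-to-Pólya step can be sketched as follows, assuming the table assignments zi are already available (here a hypothetical hand-written list) and using an illustrative helper name:

```python
# Sketch of attaching cluster parameters to CRP table assignments.
import random

def assign_parameters(assignments, base_draw):
    """Lazily draw one theta_k ~ G0 per occupied table and give every
    customer the parameter theta of their table."""
    table_params = {}
    params = []
    for z_i in assignments:
        if z_i not in table_params:
            table_params[z_i] = base_draw()  # first customer at this table
        params.append(table_params[z_i])
    return params

random.seed(3)
params = assign_parameters([0, 0, 1, 0, 2], base_draw=lambda: random.gauss(0.0, 1.0))
# customers at the same table share the same theta
```

Drawing θ lazily, only when a new table opens, mirrors the observation above that we cannot assign parameters to infinitely many tables up front.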

In this post we discussed the Dirichlet Process and several ways to construct it. We will use the above ideas in the next article, where we will introduce the Dirichlet Process Mixture Model and use the Chinese Restaurant representation in order to construct the Dirichlet Process and perform Cluster Analysis. If you missed a few points, don’t worry, as things will start becoming clearer with the next two articles.

I hope you found this post interesting. If you did, take a moment to share it on Facebook and Twitter. 🙂

My name is Vasilis Vryniotis. I’m a Data Scientist, a Software Engineer, author of the Datumbox Machine Learning Framework and a proud geek.

# Beyond Limits and The Carnrite Group Create Alliance to Drive AI Innovation in Oil & Gas, Utilities, Power and Industrial Sectors.


Beyond Limits, an industrial and enterprise-grade AI technology company built for the most demanding sectors, and The Carnrite Group, a leading management consulting firm focused on the energy and industrials sectors, today announced a strategic alliance.

Under the new multimillion-dollar revenue driving agreement, The Carnrite Group and Beyond Limits will provide strategic consulting services on the state of AI technologies and innovative use cases for Carnrite’s client base across the globe in the oil and gas, utilities, power and industrials sectors. Additionally, The Carnrite Group will receive IP licensing rights to Beyond Limits’ cutting-edge Cognitive AI technology, providing customers with direct access to Beyond Limits’ solutions.

“This is a very exciting time for Beyond Limits to gain such a valuable partner as The Carnrite Group,” said AJ Abdallat, CEO and founder of Beyond Limits. “Through Carnrite’s vast network, we hope to provide valuable guidance and increase awareness of the benefits of AI in critical sectors, including boosting operational insights, improving operating conditions, and ultimately, increase adoption of this next generation technology.”

Many sectors are experiencing a significant surge in demand for AI. This is particularly true in the energy and industrial sectors, where continued commodity price volatility has forced companies to find innovative ways to further reduce costs. The AI market is expected to rise to \$7.78 billion by 2024, an increase of 22.49% from 2019.

“The Carnrite Group prides itself on helping clients address complex challenges and make difficult business decisions,” said Al Carnrite, CEO of The Carnrite Group. “Our agreement with Beyond Limits allows us to add their powerful Cognitive AI to our portfolio of consulting services while reinforcing our commitment to offer technologies that create value for our clients.”

Source: AJ Abdallat, CEO and founder, Beyond Limits.

# LALAL.AI – AI-Powered High-Quality Audio Splitting | Review


Are you a music lover, musician, sound producer or someone in the music industry who keeps experimenting with new vocals to get the best output for your upcoming track? Are you seriously looking for a platform that cleanly separates vocals and instruments from existing tracks?

“Then LALAL.AI is the preferred next-generation music separation platform for quick and precise stem extraction, separating instrumental and vocal tracks easily.”

LALAL.AI was developed by a team of specialists working in emerging technologies such as Artificial Intelligence, Machine Learning, mathematical optimization and digital signal processing. It is built on a unique neural network trained on 20TB of data, which uses machine-learning algorithms to identify and extract voice tracks and instrumentals from music tracks.

This tool is especially useful for people in the music industry, such as DJs, sound producers, singers, musicians and even karaoke lovers.

With Lalal.ai, users can do lots of tasks: separate backing tracks and voices from songs and podcasts, create karaoke song packs, extract movie lines for translation, and many more.

A new processing filter has been added to improve the experience and signal separation quality; it has three processing levels: Mild, Normal and Aggressive.

### Key Features:

> Audio splitting with superior performance

> AI-Powered user-friendly vocal remover

> No third party software involved

> Easy to Use

> API Integration

### Process Steps Involved:

> Just open lalal.ai in your browser

> Drag and Drop the audio file of your choice

> Let the lalal.ai do the separation process

### How It Works (steps involved)

The output file format is the same as what you uploaded. If you upload an mp3 file then you get the output result in mp3 and so on.

### Packages Involved:

LALAL.AI currently offers three packages:

> Lite

> Professional

> On-Demand

The team is still experimenting with API integration for audio splitting and other purposes, sharing new ideas and solutions to make life easier for millions of people. You can check this comparison test with Spleeter here and check their latest press releases here and here.

# 10 Ways Machine Learning Practitioners Can Build Fairer Systems


### Skyler Wharton (@skylerwharton)

Software Engineer (ML & Backend) @ Airbnb. My opinions are my own. [they/them]

An introduction to the harm that ML systems cause and to the power imbalance that exists between ML system developers and ML system participants …and 10 concrete ways for machine learning practitioners to help build fairer ML systems.

Image caption: Photo by Koshu Kunii on Unsplash. Image description: Photo of Black Lives Matter protesters in Washington, D.C. — 2 signs say “Black Lives Matter” and “White Silence is Violence.”

Machine learning systems are increasingly used as tools of oppression. All too often, they’re used in high-stakes processes without participants’ consent and with no reasonable opportunity for participants to contest the system’s decisions — like when risk assessment systems are used by child welfare services to identify at-risk children; when a machine learning (or “ML”) model decides who sees which online ads for employment, housing, or credit opportunities; or when facial recognition systems are used to surveil neighborhoods where Black and Brown people live.

## ML systems are deployed widely because they are viewed as “neutral” and “objective.”

In reality though, machine learning systems reflect the beliefs and biases of those who design and develop them.

As a result, ML systems mirror and amplify the beliefs and biases of their designers, and are at least as susceptible to making mistakes as human arbiters.

When ML systems are deployed at scale, they cause harm — especially when their decisions are wrong. This harm is disproportionately felt by members of marginalized communities [1]. This is especially evident in this moment, when people protesting as part of the global movement for Black Lives are being tracked by police departments using facial recognition systems [2] and when an ML system was recently used to determine students’ A-level grades in the U.K. after the tests were cancelled due to the pandemic, jeopardizing the futures of poorer students, many of whom are people of color and immigrants [3].

In this post, I’ll describe some examples of harm caused by machine learning systems. Then I’ll offer some concrete recommendations and resources that machine learning practitioners can use to develop fairer machine learning systems. I hope this post encourages other machine learning practitioners to start using and educating their peers about practices for developing fairer ML systems within their teams and companies.

## How machine learning systems cause harm

In June 2020, Robert Williams, a Black man, was arrested by the Detroit Police Department because a facial recognition system identified him as the person who committed a recent shoplifting; however, visual comparison of his face to the face in the photo clearly revealed that they weren’t the same person [4].

Nevertheless, Mr. Williams was arrested, interrogated, kept in custody for more than 24 hours, released on bail paid with his own money, and had to appear in court before his case was dismissed.

This “accident” significantly harmed Mr. Williams and his family:

• He felt humiliated and embarrassed. When interviewed by the New York Times about this incident, he said, “My mother doesn’t know about it. It’s not something I’m proud of … It’s humiliating.”
• It caused lasting trauma to him and his family. Had Mr. Williams resisted arrest — which would have been reasonable given that it was unjust — he could have been killed. As it was, the experience was harrowing. He and his wife now wonder whether they need to put their two young daughters into therapy.
• It put his job — and thus his ability to support himself and his family — at risk. He could have lost his job, even though his case was ultimately dismissed; companies have fired employees with impunity for far less. Fortunately, his boss was understanding of the situation, but his boss still advised him not to tell others at work.
• It nearly resulted in him having a permanent criminal record. When Mr. Williams went to court, his case was initially dismissed “without prejudice,” which meant that he could still be charged later. Only after the false positive received widespread media attention did the prosecutor apologize and offer to expunge his record and fingerprints.

The harms caused here by a facial recognition system used by a local police department are unacceptable.

In a study of Facebook’s ad delivery system, Dr. Sapieżyński and collaborators discovered that women are more likely to receive ads for supermarket, janitor, and preschool jobs, whereas men are more likely to receive ads for taxi, artificial intelligence, and lumber jobs. (The researchers acknowledge that the study was limited to binary genders due to restrictions in Facebook’s advertising tools.) They similarly discovered that Black people are more likely to receive ads for taxi, janitor, and restaurant jobs, whereas white people are more likely to receive ads for secretary, artificial intelligence, and lumber jobs.

Facebook’s ad delivery system is an example of a consumer-facing ML system that causes harm to those who participate in it:

• It perpetuates and amplifies gender- and race-based employment stereotypes for people who use Facebook. For example, women are shown ads for jobs that have historically been associated with “womanhood” (e.g., caregiving or cleaning jobs); seeing such ads reinforces their own — and also other genders’ — perceptions of jobs that women can or “should” do. This is also the case for the ads shown to Black people.
• It restricts Black users’ and woman users’ access to economic opportunity. The advertisements that Facebook shows to Black people and women are for noticeably lower-paying jobs. If Black people and women do not even know about available higher-paying jobs, then they are unable to apply for and be hired for them.

In the case of both aforementioned algorithmic systems, the harm they cause goes deeper: they amplify existing systems of oppression, often in the name of “neutrality” and “objectivity.” In other words, the examples above are not isolated incidents; they contribute to long-standing patterns of harm.

For example, Black people, especially Black men and Black masculine people, have been systematically overpoliced, targeted, and murdered for the last four hundred years. This is undoubtedly still true, as evidenced by the recent murders by the police of George Floyd, Breonna Taylor, Tony McDade, and Ahmaud Arbery and recent shooting by the police of Jacob Blake.

Commercial facial recognition systems allow police departments to more easily and subtly target Black men and masculine people, including to target them at scale. A facial recognition system can identify more “criminals” in an hour than a hundred police officers could in a month, and it can do so less expensively. Thus, commercial facial recognition systems allow police departments to “mass produce” their practice of overpolicing, targeting, and murdering Black people.

Moreover, in 2018, computer science researchers Joy Buolamwini and Dr. Timnit Gebru showed that commercial facial recognition systems are significantly less accurate for darker-skinned people than they are for lighter-skinned people [7]. Indeed, when used for surveillance, facial recognition systems identify the wrong person up to 98% of the time [8]. As a result, when allowed to be used by police departments, commercial facial recognition systems cause harm not only by “scaling” police forces’ discriminatory practices but also by identifying the wrong person the majority of the time.

Facebook’s ad delivery system also amplifies a well-documented system of oppression: wealth inequality by race. In the United States, the median adjusted household income of white and Asian households is 1.6x greater than that of Black and Hispanic households (~\$71K vs. \$43K), and the median net worth of white households is 13x greater than that of Black households (~\$144K vs. \$11K) [9]. Thus, by consistently showing ads for only lower-paying jobs to the millions of Black people who use Facebook, Facebook is entrenching and widening the wealth gap between Black people and more affluent demographic groups (especially white people) in the United States. Facebook’s ad delivery system is likely similarly amplifying wealth inequities in other countries around the world.

## How collecting labels for machine learning systems causes harm

Harm is not only caused by machine learning systems that have been deployed; harm is also caused while machine learning systems are being developed. That is, harm is often caused while labels are being collected for the purpose of training machine learning models.

For example, in February 2019, The Verge’s Casey Newton released a piece about the working conditions inside Cognizant, a vendor that Facebook hires to label and moderate Facebook content [10]. His findings were shocking: Facebook was essentially running a digital sweatshop.

Among the findings:

• Employees were underpaid: In Phoenix, AZ, a moderator made \$28,800/year (versus the \$240,000/year total compensation of a full-time Facebook employee).
• Working conditions at Cognizant were abysmal: Employees were often fired after making just a few mistakes a week. Since a “mistake” occurred when two employees disagreed about how a piece of content should be moderated, resentment grew between employees. Fired employees often threatened to return to work and harm their old colleagues. Additionally, employees were micromanaged: they got two 15-minute breaks and one 30-minute lunch per day. Much of their break time was spent waiting in line for the bathroom, as often >500 people had to share six bathroom stalls.
• Employees’ mental health was damaged: Moderators spent most of their time reviewing graphically violent or hateful content, including animal abuse, child abuse, and murders. As a result of watching six hours per day of violent or hateful content, employees developed severe anxiety, often while still in training. After leaving the company, employees developed symptoms of PTSD. While employed, employees had access to only nine minutes of mental health support per day; after they left the company, they had no mental health support from Facebook or Cognizant.

Similar harms are caused by crowdsourcing platforms like Amazon Mechanical Turk, through which individuals, academic labs, or companies submit tasks for “crowdworkers” to complete:

• Employees are underpaid. Mechanical Turk and other similar platforms are premised on a large amount of unpaid labor: workers are not paid to find tasks, for tasks they start but can’t complete due to vague instructions, for tasks rejected by task authors for often arbitrary reasons, or for breaks. As a result, the median wage for a crowdworker on Mechanical Turk is approximately \$2/hour [11]. Workers who do not live in the United States, are women, and/or are disabled are likely to earn much less per hour [12].
• Working conditions are abysmal. Workers’ income fluctuates over time, so they can’t plan for themselves or their families for the long-term; workers don’t get healthcare or any other benefits; and workers have no legal protections.
• Employees’ mental health is damaged. Crowdworkers often struggle to find enough well-paying tasks, which causes stress and anxiety. For example, workers report waking up at 2 or 3am in order to get tasks that pay better [11].

Contrary to popular belief, many people who complete tasks on crowdsourcing platforms do so in order to earn the bulk of their income. Thus, people who work for private labeling companies like Cognizant and people who work for crowdsourcing platforms like Mechanical Turk have a similar goal: to complete labeling tasks in a safe and healthy work environment in exchange for fair wages.

## Why these harms are happening

At this point, you might be asking yourself, “Why are these harms happening?” The answer is multifaceted: there are many reasons why deployed machine learning systems cause harm to their participants.

### When ML systems are used

A big reason that machine learning systems cause harm is due to the contexts in which they’re used. That is, because machine learning systems are considered “neutral” and “objective,” they’re often used in high-stakes decision processes as a way to save money. High-stakes decision processes are inherently more likely to cause harm, since a mistake made during the decision process could have a significant negative impact on someone’s life.

At best, introducing a machine learning system into a high-stakes decision process does not affect the probability that the system causes harm; at worst, it increases the probability of harm, due to machine learning models’ tendency to amplify biases against marginalized groups, human complacency around auditing the model’s decisions (since they’re “neutral” and “objective”), and that machine learning models’ decisions are often uninterpretable.

### How ML systems are designed

Machine learning systems also cause harm because of how they’re designed. For example, when designing a system, engineers often do not account for the possibility that the system could make an incorrect decision; thus, machine learning systems often do not include a mechanism for participants to feasibly contest the decision or seek recourse.

### Whose perspectives are centered when ML systems are designed

Another reason that ML systems cause harm is that the perspectives of people who are most likely to be harmed by them are not centered when the system is being designed.

Systems designed by people will reflect the beliefs and biases — both conscious and unconscious — of those people. Machine learning systems are overwhelmingly built by a very homogenous group of people: white, Asian-American, or Asian heterosexual cisgender men who are between 20 and 50 years old, who are able-bodied and neurotypical, who are American and/or who live in the United States, and who have a traditional educational background, including a degree in computer science from one of ~50 elite universities. As a result, machine learning systems are biased towards the experiences of this narrow group of people.

Additionally, machine learning systems are often used in contexts that disproportionately involve historically marginalized groups (like predicting recidivism or surveilling "high crime" neighborhoods) or to determine access to resources that have long been unfairly denied to marginalized groups (like housing, employment opportunities, credit and loans, and healthcare). For example, since Black people have historically been denied fair access to healthcare, machine learning systems used in healthcare contexts display similar patterns of discrimination, because they hinge on historical assumptions and data [13]. As a result, unless deliberate action is taken to center the experiences of the groups whose outcomes ML systems arbitrate, machine learning systems will lead to history repeating itself.

At the intersection of the aforementioned two points is a chilling realization: the people who design machine learning systems are rarely the people who are affected by machine learning systems. This rings eerily similar to the fact that most police do not live in the cities where they work [14].

Lack of transparency around when ML systems are used

Machine learning systems also cause harm because it's often unclear when an algorithm has been used to make a decision. Companies are not required to disclose when and how machine learning systems are used (much less get participants' consent), even when the outcomes of those decisions affect human lives. If someone is unaware that they've been affected by an ML system, then they can't attribute harm they may have experienced to it.

Additionally, even if a person knows or suspects that they’ve been harmed by a machine learning system, proving that they’ve been discriminated against is difficult or impossible, since the complete set of decisions made by the ML system is private and thus cannot be audited for discrimination. As a result, harm that machine learning systems cause often cannot be “proven.”

Lack of legal protection for ML system participants

Finally, machine learning systems cause harm because there is currently very little regulatory or legal oversight around when and how machine learning systems are used, so companies, governments, and other organizations can use them to discriminate against participants with impunity.

With respect to facial recognition, this is slowly changing: in 2019, San Francisco became the first major city to ban the use of facial recognition by local government agencies [15]. Since then, several other cities have done the same, including Oakland, CA; Somerville, MA; and Boston, MA [16, 17].

Nevertheless, there are still hundreds of known instances of local government agencies using facial recognition, including at points of entry into the United States like borders and airports and by local police for unspecified purposes [18]. Use of facial recognition systems in these contexts — especially given that the majority of their decisions are likely wrong [8] — has real-world impact, including harassment, unjustified imprisonment, and deportation.

With respect to other types of machine learning systems, there have been few legal advances.

## Call to action

Given the contexts in which ML systems are used, the current lack of legal and regulatory oversight for such contexts, and the lack of societal power that people harmed by ML systems tend to have (due to their, e.g., race, gender, disability, citizenship, and/or wealth), ML system developers have massively more power than participants.

Image caption: There are huge power imbalances in machine learning system development: ML system developers have more power than ML system participants, and labeling task requesters have more power than labeling agents. [Image source: http://www.clker.com/clipart-scales-uneven.html] Image description: an unbalanced scale — ML system developer and labeling task requester outweigh ML system participant and labeling agent.

There’s a similar power dynamic between people who design labeling tasks and people who complete labeling tasks: labeling task requesters have more power than labeling agents.

Here, ML system developer is defined as anyone who is involved in the design, development, and deployment of machine learning systems, including machine learning engineers and data scientists as well as software engineers of other technical disciplines, product managers, engineering managers, UX researchers, UX writers, lawyers, mid-level managers, and C-suite executives. All of these roles are included in order to emphasize that even if you don't work directly on a machine learning system, if you work at a company or organization that uses machine learning systems, then you have the power to effect change on when and how machine learning is used at your company.

Let me be clear: individual action is not enough — we desperately need well-designed legislation to guide when and how ML systems can be used. Importantly, there should be some contexts in which ML systems cannot be used, no matter how "accurate" they are, because the probability of misuse and mistakes is too great — like police departments using facial recognition systems [19].

Unfortunately, we do not yet have the necessary legislation and regulation in place. In the meantime, as ML system developers, we should intentionally scrutinize the ML systems that we, our teams, or our companies own and use.

## How to build fairer machine learning systems

If you are a machine learning system developer, and especially if you are a machine learning practitioner like an ML engineer or data scientist, here are 10 ways you can help build machine learning systems that are more fair:

#1

When designing a new ML system or evaluating an existing ML system, ask yourself and your team the following questions about the context in which the system is being deployed/is deployed [20]:

• What could go wrong when this ML system is deployed?
• When something goes wrong, who is harmed?
• How likely is it that something will go wrong?
• Does the harm disproportionately fall on marginalized groups?

Use your answers to these questions to evaluate how to proceed. For example, if possible, proactively engineer solutions that prevent harms from occurring (e.g., add safeguards to prevent harm, like including human intervention and mechanisms for participants to contest system decisions, and inform participants that a machine learning algorithm is being used). Alternatively, if the likelihood and scale of harm are too high, do not deploy the system. Instead, consider pursuing a solution that does not depend on machine learning or that uses machine learning in a less risky way. Deploying a biased machine learning system can cause real-world harm to system participants as well as reputational damage to your company [21, 22, 23].

#2

Utilize best practices for developing fairer ML systems. Machine learning fairness researchers have been designing and testing best practices for several years now. For example, one best practice is, when releasing a dataset for public or internal use, to simultaneously release a datasheet: a short document that shares the information consumers of the dataset need in order to make informed decisions about using it (e.g., mechanisms or procedures used to collect the data, whether an ethical review process was conducted, whether or not the dataset relates to people) [24].

Similarly, when releasing a trained model for public or internal use, simultaneously release a model card: a short document that shares information about the model (e.g., evaluation results, ideally disaggregated across different demographic groups and communities; intended usage(s); usages to avoid; and insight into the model's training process) [25].
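To make the idea concrete, here is a minimal sketch of what a model card might capture as plain data. The model name, field names, and numbers are all hypothetical; see Mitchell et al. [25] for the full set of recommended sections.

```python
# A minimal, illustrative model card as plain data. All values below are
# hypothetical and exist only to show the shape of a model card.
model_card = {
    "model": "toxicity-classifier-v2",
    "intended_use": "Flagging comments for human review",
    "out_of_scope_uses": ["Automated removal without human review"],
    "training_data": "Public forum comments, 2015-2019 (English only)",
    "evaluation": {
        # Disaggregated results surface gaps that a single aggregate hides.
        "overall_accuracy": 0.91,
        "accuracy_by_subgroup": {
            "comments_mentioning_targeted_groups": 0.78,
            "other_comments": 0.93,
        },
    },
    "known_limitations": "Higher false-positive rate on comments that "
                         "mention frequently targeted identity groups",
}

def to_markdown(card):
    """Render the card as a short markdown document for release notes."""
    lines = [f"# Model card: {card['model']}"]
    for key in ("intended_use", "training_data", "known_limitations"):
        lines.append(f"- **{key}**: {card[key]}")
    for name, acc in card["evaluation"]["accuracy_by_subgroup"].items():
        lines.append(f"- accuracy ({name}): {acc}")
    return "\n".join(lines)

print(to_markdown(model_card))
```

The point of the disaggregated `accuracy_by_subgroup` entry is that a 0.91 overall accuracy can coexist with much worse performance on exactly the groups most likely to be harmed, and a model card makes that visible before anyone deploys the model.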

Finally, consider implementing a company-wide process for internal algorithmic auditing, like the one Deb Raji, Andrew Smart, and their collaborators proposed in their 2020 paper Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing.

#3

Work with your company or organization to develop partnerships with advocacy organizations that represent groups of people that machine learning systems tend to marginalize, in order to responsibly engage marginalized communities as stakeholders. Examples of such organizations include Color Of Change and the NAACP. Then, while developing new machine learning systems or evaluating existing machine learning systems, seek and incorporate their feedback.

#4

Hire machine learning engineers and data scientists from underrepresented backgrounds, especially Black people, Indigenous people, Latinx people, disabled people, transgender and nonbinary people, formerly incarcerated people, and people from countries that are underrepresented in technology (e.g., countries in Africa, countries in Southeast Asia, and countries in South America). Note that this will require rethinking how talent is discovered and trained [26] — consider recruiting from historically Black colleges and universities (HBCUs) in the U.S. and coding and data science bootcamps, or starting an internal program like Slack's Next Chapter.

On a related note, work with your company to support organizations that foster talent from underrepresented backgrounds, like AI4ALL, Black Girls Code, Code2040, NCWIT, TECHNOLOchicas, TransTech, and Out for Undergrad. Organizations like these are critical for increasing the number of people from underrepresented backgrounds in technology jobs, including in ML/AI jobs, and all of them have a proven track record of success. Additionally, consider supporting organizations like these with your own money and time.

#5

Work with your company or organization to sign the Safe Face Pledge, an opportunity for organizations to make public commitments towards mitigating the abuse of facial analysis technology. This pledge was jointly drafted by the Algorithmic Justice League and the Center on Technology & Privacy at Georgetown Law, and has already been signed by many leading ethics and privacy experts.

#6

Learn more about the ways in which machine learning systems cause harm. For example, here are seven recommended resources to continue learning:

1. [Book] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil (2016)
2. [Book] Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Noble (2018)
3. [Book] Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard (2018)
4. [Book] Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks (2019)
5. [Book] Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019)
6. [Book] Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L. Gray and Siddharth Suri (2019)
7. [Film] Coded Bias (2020)

Additionally, you can learn more about harms caused by ML systems by reading the work of journalists and researchers who are uncovering biases in machine learning systems. In addition to the researchers and journalists I've already named in this essay (e.g., Dr. Piotr Sapieżyński, Casey Newton, Joy Buolamwini, Dr. Timnit Gebru, Deb Raji, Andrew Smart), some examples include Julia Angwin (and anything written by The Markup), Khari Johnson, Moira Weigel, Lauren Kirchner, and anything written by Upturn. The work of these journalists and researchers serves as a set of important case studies in how not to design machine learning systems, which is valuable for ML practitioners who aim to develop fair and equitable ML systems.

#7

Learn about ways in which existing machine learning systems have been improved in order to cause less harm. For example, IBM has worked to improve the performance of their commercial facial recognition system with respect to racial and gender bias (direct link), Google has worked to reduce gender bias in Google Translate (direct link), and Jigsaw (within Google) has worked to change the Perspective API (a public API for hate speech detection) so that it less often classifies phrases mentioning frequently targeted groups (e.g., Muslims, women, queer people) as hate speech (direct link).

#8

Conduct an audit of a machine learning system for disparate impact. Disparate impact occurs when, even though a policy or system is ostensibly neutral, one group of people is adversely affected more than another. Facebook's ad delivery system is an example of a system causing disparate impact.
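As a rough illustration of what such an audit measures, the core calculation compares each group's rate of favorable outcomes. The sketch below is minimal and assumes you already have decisions labeled with group membership; the function names, group labels, and numbers are illustrative, and the "four-fifths rule" threshold is a common rule of thumb from U.S. employment guidelines, not a universal legal standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the rate of favorable outcomes per group.

    `decisions` is a list of (group, favorable) pairs, where `favorable`
    is True when the system's decision benefited the participant
    (e.g., loan approved, ad shown, resume surfaced).
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        if fav:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Under the four-fifths rule of thumb, a ratio below 0.8
    signals possible disparate impact worth investigating."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Toy example: 100 decisions per group with different approval rates.
decisions = ([("group_a", i < 60) for i in range(100)] +
             [("group_b", i < 30) for i in range(100)])
ratio = disparate_impact_ratio(decisions, "group_b", "group_a")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
```

A ratio of 0.50 is well below the 0.8 threshold, so this toy system would warrant a deeper investigation into why group_b's selection rate is so much lower.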

For example, use Project Lighthouse, a methodology that Airbnb released earlier this year that uses anonymized demographic data to measure user experience discrepancies that may be due to discrimination or bias, or ArthurAI, an ML monitoring framework that also allows you to monitor model bias. (Full disclosure: I work at Airbnb.)

Alternatively, hire an algorithmic consulting firm to conduct an audit of a machine learning system that your team or company owns, like O’Neil Risk Consulting & Algorithmic Auditing or the Algorithmic Justice League.

#9

When hiring third-party vendors or using crowdsourcing platforms for machine learning labeling tasks, be critical of who you choose to support. Inquire about the working conditions of the people who will be labeling for you. Additionally, if possible, make an onsite visit to the vendor to gauge working conditions for yourself. What is their hourly pay? Do they have healthcare and other benefits? Are they full-time employees or contractors? Do they expose their workforce to graphically violent or hateful content? Are there opportunities for career growth and advancement within the company?

#10

Give a presentation to your team or company about the harms that machine learning systems cause and how to mitigate them. The more people who understand the harms that machine learning systems cause and the power imbalance that currently exists between ML system developers and ML system participants, the more likely it is that we can effect change on our teams and in our companies.

#11

Finally, the bonus #11 in this list is, if you are eligible to do so in the United States, VOTE. There is so much at stake in this upcoming election, including the rights of BIPOC people, immigrants, women, LGBTQ people, and disabled people as well as — quite literally — the future of our democracy. If you are not registered to vote, please do so now: Register to vote. If you are registered to vote but have not requested your absentee or mail-in ballot, please do so now: Request your absentee ballot. Even though Joe Biden is far from the perfect candidate, we need to elect him and Kamala Harris; this country, the people in it, and so many people around the world cannot survive another four years of a Trump presidency.

## Conclusion

Machine learning systems are incredibly powerful tools; unfortunately though, they can be either agents of empowerment or agents of harm. As machine learning practitioners, we have a responsibility to recognize the harm that systems we build cause and then act accordingly. Together, we can work toward a world in which machine learning systems are used responsibly, do not reinforce existing systemic biases, and uplift and empower people from marginalized communities.

This piece was inspired in part by Participatory Approaches to Machine Learning, a workshop at the 2020 International Conference on Machine Learning (ICML) that I had the opportunity to attend in July. I would like to deeply thank the organizers of this event for calling attention to the power imbalance between ML system developers and ML system participants and for creating a space to discuss it: Angela Zhou, David Madras, Inioluwa Deborah Raji, Bogdan Kulynych, Smitha Milli, and Richard Zemel. Also published here.

## References

[1] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. Published 2016.

[2] NYPD used facial recognition to track down Black Lives Matter activist. The Verge. August 18, 2020.

[3] An Algorithm Determined UK Students' Grades. Chaos Ensued. Wired. August 15, 2020.

[4] Wrongfully Accused by an Algorithm. The New York Times. June 24, 2020.

[5] Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes. Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. CSCW 2019.

[6] Turning the tables on Facebook: How we audit Facebook using their own marketing tools. Piotr Sapiezynski, Muhammad Ali, Aleksandra Korolova, Alan Mislove, Aaron Rieke, Miranda Bogen, and Avijit Ghosh. Talk given at PAML Workshop at ICML 2020.

[7] Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Joy Buolamwini and Timnit Gebru. ACM FAT* 2018.

[8] Facial-recognition software inaccurate in 98% of cases, report finds. CNET. May 13, 2018.

[9] On Views of Race and Inequality, Blacks and Whites Are Worlds Apart: Demographic trends and economic well-being. Pew Research Center. June 27, 2016.

[10] The Trauma Floor: The secret lives of Facebook moderators in America. The Verge. February 25, 2019.

[11] The Internet Is Enabling a New Kind of Poorly Paid Hell. The Atlantic. January 23, 2018.

[12] Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis. Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Benjamin V. Hanrahan, Jeffrey P. Bigham, and Chris Callison-Burch. CHI Late Breaking Work 2019.

[13] Millions of black people affected by racial bias in health-care algorithms. Nature. October 24, 2019.

[14] Most Police Don't Live In The Cities They Serve. FiveThirtyEight. August 20, 2014.

[15] San Francisco's facial recognition technology ban, explained. Vox. May 14, 2019.

[16] Beyond San Francisco, more cities are saying no to facial recognition. CNN. July 17, 2019.

[17] Boston is second-largest US city to ban facial recognition. Smart Cities Dive. July 6, 2020.

[18] Ban Facial Recognition: Map. Accessed August 30, 2020.

[19] Defending Black Lives Means Banning Facial Recognition. Wired. July 10, 2020.

[20] Credit for the framing goes to Dr. Cathy O’Neil, of O’Neil Risk Consulting & Algorithmic Auditing.

[21] Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. October 10, 2018.

[22] Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech. The Verge. January 12, 2018.

[23] Facebook's ad-serving algorithm discriminates by gender and race. MIT Technology Review. April 5, 2019.

[24] Datasheets for Datasets. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. ArXiv preprint 2018.

[25] Model Cards for Model Reporting. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. ACM FAT* 2019.