Twitter aims to fight bias by examining its own machine learning algorithms

The company’s new Responsible ML initiative will make Twitter’s algorithms more transparent, invite user feedback and give users more choice in how ML affects their experience.

Image: NurPhoto/Getty Images

Recent pressure on social media companies to curb posts that present misinformation and foment unrest has resulted in Twitter taking the lead by launching a new initiative designed to root out problematic outcomes generated by its machine learning algorithms.

Calling the project “Responsible ML,” Twitter’s Jutta Williams and Rumman Chowdhury said in a blog post that Twitter’s algorithms haven’t necessarily acted in the ways the company intended. “These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product,” Williams and Chowdhury said. 

SEE: Digital transformation: A CXO’s guide (free PDF) (TechRepublic)

Twitter’s Responsible ML initiative will act on four pillars that the company believes represent a responsible approach to machine learning technology: 

  1. Taking responsibility for its own algorithmic decisions. 
  2. Ensuring equity and fairness in outcomes. 
  3. Being transparent about how algorithms work and why they decide what they do.
  4. Enabling user agency and algorithmic choice.

The group leading the Responsible ML initiative is Twitter’s ML Ethics, Transparency and Accountability group, also called META. “Our Responsible ML working group is interdisciplinary and is made up of people from across the company, including technical, research, trust and safety and product teams,” Williams and Chowdhury said. 

The four pillars mentioned above are the initiative’s ultimate goal, but getting there is a different story. To start, the team is “conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use,” all of which will be shared publicly in the coming months. Williams and Chowdhury said the public can expect to see reports on, among other things, gender and racial bias in Twitter’s image cropping algorithm, a fairness assessment of Twitter home timelines across racial groups and an analysis of content recommendations based on political ideology. 
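To make the idea of a fairness assessment concrete, here is a minimal, hypothetical sketch of the kind of first-pass check such an audit might include. It is not Twitter’s methodology; the data, group labels and review threshold are invented for illustration, and it simply compares outcome rates across demographic groups, a demographic-parity-style comparison.

```python
# Illustrative sketch only -- not Twitter's methodology.
# Hypothetical records of (demographic_group, subject_kept_in_crop)
# for an image cropping algorithm.
from collections import defaultdict

crop_outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def crop_rate_by_group(outcomes):
    """Fraction of images per group where the subject stayed in the crop."""
    kept, total = defaultdict(int), defaultdict(int)
    for group, kept_in_crop in outcomes:
        total[group] += 1
        kept[group] += int(kept_in_crop)
    return {g: kept[g] / total[g] for g in total}

rates = crop_rate_by_group(crop_outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")

# Arbitrary threshold, purely for illustration: a large gap between groups
# would flag the algorithm for closer human review.
if gap > 0.1:
    print("Outcome rates differ across groups; flag for review.")
```

Real audits of this kind go well beyond simple rate comparisons, accounting for sample sizes, confounders and statistical significance, which is presumably part of why Twitter plans to publish its analyses for outside scrutiny.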

The analyses Twitter is conducting will allow it to apply what it learns to the platform in various ways. As an example, Twitter cited the October 2020 elimination of the image cropping algorithm mentioned above as one of its analysis points.

Twitter said that its work may not always result in visible product changes, but “it will lead to heightened awareness and important discussions around the way we build and apply ML.”

As mentioned above, Twitter wants to be public about what it learns and what it does with that data. To that end, Twitter is inviting feedback on changes and will be held accountable “in the form of peer-reviewed research, data-insights, high-level descriptions of our findings or approaches and even some of our unsuccessful attempts to address these emerging challenges,” Williams and Chowdhury said.

Twitter users wishing to participate in the initiative are invited to ask questions using the hashtag #AskTwitterMETA, and those who want to get even more involved can apply for one of several META team jobs currently open around the world.

Source: https://www.techrepublic.com/article/twitter-aims-to-fight-bias-by-examining-its-own-machine-learning-algorithms/#ftag=RSS56d97e7
