How AI can empower communities and strengthen democracy

Each Fourth of July for the past five years I’ve written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good.

This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes. AI literacy is also, as Microsoft CTO Kevin Scott asserted, a critical part of being an informed citizen in the 21st century.

This year, I posed the question on Twitter to gather a broader range of insights. Thank you to everyone who contributed.

This selection is not meant to be comprehensive, and some ideas included here may be in the early stages, but they represent ways AI might enable the development of more free and just societies.

Machine learning for open source intelligence 

Open source intelligence, or OSINT, is the collection and analysis of freely available public material. This can power solutions for cryptology and security, but it can also be used to hold governments accountable.

Crowdsourced efforts by groups like Bellingcat were once looked upon as interesting side projects. But findings based on open source evidence from combat zones, like the downing of Malaysia Airlines flight MH17 over Ukraine and a 2013 sarin gas attack in Syria, have proved valuable to investigative authorities.

Groups like the International Consortium of Investigative Journalists (ICIJ) are using machine learning in their collaborative work. Last year, the ICIJ’s Marina Walker Guevara detailed lessons drawn from the Machine Learning for Investigations reporting process, conducted in partnership with Stanford AI Lab.

In May, researchers from Universidade Nove de Julho in São Paulo, Brazil, published a systematic review of AI for open source intelligence that identified nearly 250 works applying AI to OSINT, published between 1990 and 2019. Topics range from AI for crawling web text and documents to applications for social media, business, and, increasingly, cybersecurity.

Along similar lines, an open source initiative out of Swansea University is currently using machine learning to investigate alleged war crimes in Yemen.
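
To make the crawling-and-analysis side of this concrete, here is a minimal sketch of one common OSINT building block: pulling named entities (people, organizations, places, dates) out of public text so analysts can cross-reference them across documents. It assumes spaCy and its small English model are installed; the sample documents are invented for illustration.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

# Invented samples standing in for scraped public documents.
documents = [
    "Flight MH17 was shot down over eastern Ukraine on 17 July 2014.",
    "Bellingcat researchers traced the launcher's route through Donetsk.",
]

entity_counts = Counter()
for text in documents:
    for ent in nlp(text).ents:
        # Tally entities by label so recurring actors and places surface first.
        entity_counts[(ent.text, ent.label_)] += 1

for (entity, label), count in entity_counts.most_common():
    print(f"{label:10} {entity} ({count})")
```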

AI for emancipation 

Last month, shortly after some of the largest protests in U.S. history engulfed American cities and spread around the world, I wrote about an analysis of AI bias in language models. Although I did not raise the point in that piece, the study stood out as the first time I’ve come across the word “emancipation” in AI research. The term came up in relation to the researchers’ best practice recommendation that NLP bias analysis be grounded in the field of sociolinguistics.

I asked lead author Su Lin Blodgett to speak more about this idea, which would treat marginalized people as coequal researchers or producers of knowledge. Blodgett said she’s not aware of any AI system today that can be defined as emancipatory in its design, but she is excited by the work of groups like the Indigenous Protocol and Artificial Intelligence Working Group.

Blodgett said AI that touches on emancipation includes NLP projects to help revitalize or reclaim languages, as well as projects building natural language processing tools for low-resource languages. She also cited AI aimed at helping people resist censorship and hold government officials accountable.

Chelsea Barabas explored similar themes in an ACM FAccT conference presentation earlier this year. Barabas drew on the work of anthropologist Laura Nader, who finds that anthropologists tend to study disadvantaged groups in ways that perpetuate stereotypes. Instead, Nader called for anthropologists to expand their fields of inquiry to include “study of the colonizers rather than the colonized, the culture of power rather than the culture of the powerless, the culture of affluence rather than the culture of poverty.”

In her presentation, Barabas likewise urged data scientists to redirect their critical gaze in the interests of fairness. As an example, both Barabas and Blodgett endorsed research that scrutinizes “white collar” crimes with the level of attention typically reserved for other offenses.

In Race After Technology, Princeton University professor Ruha Benjamin also champions the notion of abolitionist tools in tech. Catherine D’Ignazio and Lauren F. Klein’s Data Feminism and Sasha Costanza-Chock’s Design Justice offer further examples of data sets that can be used to challenge power.

Racial bias detection for police officers

Taking advantage of NLP’s ability to process data at scale, Stanford University researchers examined recordings of conversations between police officers and people stopped for traffic violations. Using computational linguistics, the researchers were able to demonstrate that officers spoke less respectfully to Black residents during traffic stops. Part of the focus of the work, published in the Proceedings of the National Academy of Sciences in 2017, was to highlight ways police body camera footage can be used to build trust between communities and law enforcement agencies. The analysis drew on recordings gathered over several years, reaching conclusions from the data in aggregate instead of parsing instances one by one.
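
The published study built statistical models of respect from annotated officer utterances. As a much simplified illustration of the idea (my own toy marker lists, not the paper’s feature set or model), transcribed utterances can be scored for the presence of politeness markers of the kind the authors associate with respect:

```python
import re

# Hypothetical marker categories, loosely inspired by politeness research;
# the PNAS study used annotated data and statistical models instead.
RESPECT_MARKERS = {
    "apology": re.compile(r"\b(sorry|apologize)\b", re.I),
    "formal_title": re.compile(r"\b(sir|ma'am)\b", re.I),
    "reassurance": re.compile(r"\b(no problem|don't worry|drive safe)\b", re.I),
}

def respect_score(utterance: str) -> int:
    """Count how many respect-marker categories appear in one utterance."""
    return sum(bool(p.search(utterance)) for p in RESPECT_MARKERS.values())

# Invented example utterances; the real study scored them in aggregate.
for u in ["Sorry to stop you, sir, your tag is expired.", "Hands on the wheel."]:
    print(respect_score(u), "-", u)
```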

An algorithmic bill of rights

The idea of an algorithmic bill of rights recently came up in a conversation with Black roboticists about building better AI. The notion was introduced in the 2019 book A Human’s Guide to Machine Intelligence and further fleshed out by Vox staff writer Sigal Samuel.

A core tenet of the idea is transparency, meaning each person has the right to know when an algorithm is making a decision and any factors considered in that process. An algorithmic bill of rights would also include freedom from bias, data portability, freedom to grant or refuse consent, and a right to dispute algorithmic results with human review.

As Samuel points out in her reporting, some of these notions, such as freedom from bias, have appeared in laws proposed in Congress, such as the 2019 Algorithmic Accountability Act.
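
One way to read the transparency tenet as an engineering requirement: every automated decision should emit a human-readable record of the factors it weighed, so the decision can later be disputed and reviewed by a person. A minimal sketch of such a record follows; the schema and field names are my own invention, not drawn from the book or from any proposed law.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlgorithmicDecisionRecord:
    """Auditable trace of one automated decision (illustrative schema)."""
    model_version: str
    decision: str
    factors: dict  # input feature -> value the model actually used
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False  # supports the right to dispute

record = AlgorithmicDecisionRecord(
    model_version="loan-screen-1.3",
    decision="denied",
    factors={"income": 31000, "debt_ratio": 0.62},
)
record.human_review_requested = True  # the applicant exercises the dispute right
```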

Fact-checking and fighting misinformation

Beyond bots that provide citizen services or promote public accountability, AI can be used to fight deepfakes and misinformation. Examples include Full Fact’s work with Africa Check, Chequeado, and the Open Data Institute to automate fact-checking as part of the Google AI Impact Challenge.
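
A core step in automated fact-checking is matching a newly published claim against claims that have already been checked. As a rough sketch of that matching step (a generic approach, not Full Fact’s actual system), sentences can be embedded and ranked by cosine similarity; this assumes the sentence-transformers package and its pretrained all-MiniLM-L6-v2 model.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented mini-database of already-fact-checked claims.
checked_claims = [
    "Crime has fallen every year for the last decade.",
    "The new vaccine alters human DNA.",
]
new_claim = "A politician said crime has dropped ten years in a row."

claim_embs = model.encode(checked_claims, convert_to_tensor=True)
new_emb = model.encode(new_claim, convert_to_tensor=True)

# Cosine similarity between the new claim and every checked claim.
scores = util.cos_sim(new_emb, claim_embs)[0]
best = scores.argmax().item()
print(f"Closest checked claim ({scores[best].item():.2f}): {checked_claims[best]}")
```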

Deepfakes are a major concern heading into the U.S. election season this fall. In a fall 2019 report on the upcoming elections, the New York University Stern Center for Business and Human Rights warned of domestic forms of disinformation, as well as potential external interference from China, Iran, or Russia. The Deepfake Detection Challenge aims to help counter such deceptive videos, and Facebook has introduced a data set of videos for training and benchmarking deepfake detection systems.
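
Many detection systems trained on data sets like Facebook’s treat the problem, at least in part, as frame-level binary classification. A bare-bones sketch of that framing (a generic baseline, not any challenge entry) fine-tunes a pretrained ResNet from torchvision:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary real-vs-fake classifier over individual video frames.
backbone = models.resnet18(pretrained=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # classes: real, fake

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, 224, 224) normalized RGB; labels: 0=real, 1=fake."""
    backbone.train()
    optimizer.zero_grad()
    loss = loss_fn(backbone(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# A whole video is then scored by averaging fake probabilities over sampled frames.
```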

Pol.is

Recommendation algorithms from companies like Facebook and YouTube — with documented histories of stoking division to boost user engagement — have been identified as another threat to democratic society.

Pol.is uses machine learning to achieve the opposite aim, gamifying consensus and grouping citizens on a vector map. To reach consensus, participants revise their answers until broad agreement emerges. Pol.is has been used to help draft legislation in Taiwan and Spain.
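
The vector map comes from treating each participant as a vector of agree/disagree/pass votes on statements, reducing those vectors to two dimensions, and clustering them into opinion groups. A rough sketch of that pipeline with scikit-learn (my simplification, not Pol.is’s production code):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)             # the 2-D "vector map"
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)  # opinion groups

for person, (xy, g) in enumerate(zip(coords, groups)):
    print(f"participant {person}: group {g}, position {xy.round(2)}")
```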

Algorithmic bias and housing

In Los Angeles County, individuals who are homeless and White exit homelessness at a rate 1.4 times greater than people of color, a fact that could be related to housing policy or discrimination. Citing structural racism, a homeless population count for Los Angeles released last month found that Black people make up only 8% of the county population but nearly 34% of its homeless population.

To redress this injustice, the University of Southern California Center for AI in Society will explore how artificial intelligence can help ensure housing is fairly distributed. Last month, USC announced $1.5 million in funding to advance the effort together with the Los Angeles Homeless Services Authority.

The University of Southern California’s school of social work and the Center for AI in Society have been investigating ways to reduce bias in the allocation of housing resources since 2017. Homelessness is a major problem in California and could worsen in the months ahead as more people face evictions due to pandemic-related job losses.
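
One simple check a system like this can run continuously is the disparity in outcome rates across demographic groups, like the 1.4x exit-rate gap cited above. A minimal sketch with invented counts (not LAHSA data):

```python
def exit_rate(exited: int, enrolled: int) -> float:
    """Share of enrolled individuals who exited homelessness in a period."""
    return exited / enrolled

# Hypothetical counts for illustration only.
rates = {
    "white": exit_rate(420, 1000),
    "people_of_color": exit_rate(300, 1000),
}
disparity = rates["white"] / rates["people_of_color"]
print(f"Exit-rate disparity: {disparity:.2f}x")  # flags gaps like the 1.4x above
```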

Putting AI ethics principles into practice

Putting ethical AI principles into practice is not just an urgent matter for tech companies, which have virtually all released vague statements about their ethical intentions in recent years. As a study from the UC Berkeley Center for Long-Term Cybersecurity found earlier this year, it’s also increasingly important that governments establish ethical guidelines for their own use of the technology.

Through the Organization for Economic Co-operation and Development (OECD) and G20, many of the world’s democratic governments have committed to AI ethics principles. But deciding what constitutes ethical use of AI is meaningless without implementation. Accordingly, in February the OECD established its AI Policy Observatory to help nations put these principles into practice.

At the same time, governments around the world are formulating their own ethical parameters. Trump administration officials introduced ethical guidelines for federal agencies in January that, among other things, encourage public participation in establishing AI regulation. However, the guidelines also reject regulation the White House considers overly burdensome, such as bans on facial recognition technology.

A joint Stanford-NYU study released in February examined the idea of “algorithmic governance,” or AI playing an increasing role in government, and found a pressing need for more AI expertise within it. The study’s analysis of AI used by the U.S. federal government today found that more than 40% of agencies have experimented with AI but only 15% of those solutions can be considered highly sophisticated. The researchers implore the federal government to hire more in-house AI talent for vetting AI systems and warn that algorithmic governance could widen the public-private technology gap and, if poorly implemented, erode public trust or give major corporations an advantage over small businesses.

Another crucial part of the equation is how governments choose to award contracts to AI startups and tech giants. In what was believed to be a first, last fall the World Economic Forum, U.K. government, and businesses like Salesforce worked together to produce a set of rules and guidelines for government employees in charge of procuring services or awarding contracts.

Such government contracts are an important space to watch as businesses with ties to far-right or white supremacist groups — like Clearview AI and Banjo — sell surveillance software to governments and law enforcement agencies. Peter Thiel’s Palantir has also collected a number of lucrative government contracts in recent months. Earlier this week, Palmer Luckey’s Anduril, also backed by Thiel, raised $200 million and was awarded a contract to build a digital border wall using surveillance hardware and AI.

Ethics documents like those mentioned above invariably espouse the importance of “trustworthy AI.” If you roll your eyes at the phrase, I certainly don’t blame you. It’s a favorite of governments and businesses peddling principles to push through their agendas. The White House uses it, the European Commission uses it, and tech giants and groups advising the U.S. military on ethics use it, but efforts to put ethics principles into action could someday give the term some meaning and weight.

Protection against ransomware attacks

Before local governments began scrambling to respond to the coronavirus and structural racism, ransomware attacks had established themselves as another growing threat to stability and city finances.

In 2019, ransomware attacks on public-facing institutions like hospitals, schools, and governments were rising at unprecedented rates, siphoning off public funds to pay ransoms, recover files, or replace hardware.

Security companies working with cities told VentureBeat earlier this year that machine learning is being used to combat these attacks through approaches like anomaly detection and quickly isolating infected devices.
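
As one concrete version of the anomaly-detection approach (a generic sketch, not any vendor’s product), an isolation forest can be trained on normal per-device telemetry so that a machine suddenly encrypting thousands of files stands out and can be quarantined:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features: [file writes/min, CPU %, outbound connections].
normal_telemetry = np.random.default_rng(0).normal(
    loc=[40, 20, 5], scale=[10, 5, 2], size=(500, 3)
)
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_telemetry)

# A device mass-encrypting files looks nothing like the training data.
suspect = np.array([[4000, 95, 80]])
if detector.predict(suspect)[0] == -1:  # -1 means anomaly
    print("Anomalous device: isolate from the network for inspection")
```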

Robot fish in city pipes

Beyond averting ransomware attacks, AI can help municipal governments avoid catastrophic financial burdens by monitoring infrastructure, catching leaks or vulnerable city pipes before they burst.

Engineers at the University of Southern California built a robot for pipe inspections to address these costly issues. Named Pipefish, it can swim into city pipe systems through fire hydrants and collect imagery and other data.

Facial recognition protection with AI

When it comes to shielding people from facial recognition systems, efforts range from shirts to face paint to full-on face projections.

EqualAIs was developed at MIT’s Media Lab in 2018 to make it tough for facial recognition to identify subjects in photographs, project manager Daniel Pedraza told VentureBeat. The tool uses adversarial machine learning to modify images in order to evade facial recognition detection and preserve privacy. EqualAIs was developed as a prototype to show the technical feasibility of attacking facial recognition algorithms, creating a layer of protection around images uploaded in public forums like Facebook or Twitter. Open source code and other resources from the project are available online.
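
EqualAIs’ exact method isn’t detailed here, but the general adversarial trick it builds on can be illustrated with the classic fast gradient sign method (FGSM): nudge every pixel slightly in the direction that increases the recognizer’s loss, so the photo looks unchanged to humans but misleads the model. A generic PyTorch sketch, assuming some differentiable face-recognition model:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an imperceptibly perturbed copy of `image` (a batch of normalized
    tensors) that raises the model's loss on `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by epsilon in the direction that hurts the model most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage with some recognizer and photo tensor:
#   protected = fgsm_perturb(face_recognizer, photo_batch, identity_labels)
```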

Other apps and AI systems can recognize and remove people from photos or blur faces to protect individuals’ identities. University of North Carolina at Charlotte assistant professor Liyue Fan published work that applies differential privacy to images, adding protection when pixelization is used to hide a face. Should tech like EqualAIs be widely adopted, it may give a glimmer of hope to privacy advocates who call Clearview AI the end of privacy.
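
Fan’s technique combines ordinary pixelization with calibrated Laplace noise so the blurred output carries a formal differential privacy guarantee. A simplified sketch of the idea (parameters chosen for illustration, not the paper’s exact algorithm):

```python
import numpy as np

def dp_pixelize(img: np.ndarray, block: int = 8, epsilon: float = 0.5,
                m: int = 16) -> np.ndarray:
    """Pixelize a grayscale image, then add Laplace noise to each block mean.
    `m` bounds how many pixels one person is assumed to influence."""
    out = img.astype(float).copy()
    sensitivity = 255.0 * m / (block * block)  # max change to one block's mean
    rng = np.random.default_rng()
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            mean = out[i:i + block, j:j + block].mean()
            out[i:i + block, j:j + block] = mean + rng.laplace(scale=sensitivity / epsilon)
    return out.clip(0, 255).astype(np.uint8)
```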

Legislators in Congress are currently considering a bill that would prohibit facial recognition use by federal officials and withhold some funding to state or local governments that choose to use the technology.

Whether you favor the idea of a permanent ban, a temporary moratorium, or minimal regulation, facial recognition legislation is a pressing issue for democratic societies. Racial bias and false identification of crime suspects are major reasons people across the political landscape are beginning to agree that facial recognition is unfit for public use today.

ACM, one of the largest groups for computer scientists in the world, this week urged governments and businesses to stop using the technology. Members of Congress have also voiced concern about use of the tech at protests or political rallies. Experts testifying before Congress have warned that if facial recognition becomes commonplace in these settings, it has the potential to dampen people’s constitutional right to free speech.

Protestors and others might have used face masks to evade detection in the past, but in the COVID-19 era, facial recognition systems are getting better at recognizing people wearing masks.

Final thoughts

This story is written with the clear understanding that techno-solutionism is no panacea and AI can be used for both positive and negative purposes. And the series is published on an annual basis because we all deserve to keep dreaming about ways AI can empower people and help build stronger communities and a more just society.

We hope you enjoyed this year’s selection. If you have additional ideas, please feel free to comment on the tweet or email khari@venturebeat.com to share suggestions for stories on this or related topics.
