
Big Tech bankrolling AI ethics research and events seems very familiar. Ah, yes, Big Tobacco all over again


Analysis Big Tech’s approach to avoiding AI regulation looks a lot like Big Tobacco’s campaign to shape smoking rules, according to academics who say machine-learning ethics standards need to be developed outside the influence of corporate sponsors.

In a paper included in the Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21) next month, Mohamed Abdalla, a doctoral student in computer science at the University of Toronto, and Moustafa Abdalla, a doctoral student on deferral from Harvard Medical School, explore how Big Tech has adopted strategies similar to those used by Big Tobacco.

The analogy “is not perfect,” the two brothers acknowledge, but is intended to provide a historical touchstone and “to leverage the negative gut reaction to Big Tobacco’s funding of academia to enable a more critical examination of Big Tech.” The comparison is also not an assertion that Big Tech is deliberately buying off researchers; rather, the researchers argue that “industry funding warps academia regardless of intentionality due to perverse incentives.”

The authors point out that Google has said as much about unwelcome research, insisting that criticism from Oracle-funded advocacy group Campaign for Accountability should be discounted because it is financed by a hostile competitor. Coincidentally, the Campaign for Accountability in 2017 published a post that begins, “Google has paid scholars millions to produce hundreds of papers supporting its policy interests, following in the footsteps of the oil and tobacco industries.”

Big Tech in this instance is defined as: Google, Amazon, Facebook, Microsoft, Apple, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI. But the boffins’ argument applies to a far larger set of companies that have a commercial interest in AI-powered systems.

Sigh, oh for the Noughties

The brothers Abdalla cite the mid-2010s as the point at which public attitudes toward Big Tech began to sour. And they see similarities between Facebook CEO Mark Zuckerberg’s 2018 acknowledgement that “it’s clear now that we didn’t do enough” to prevent interference in the 2016 US election and “A Frank Statement to Cigarette Smokers,” Big Tobacco’s 1954 acknowledgement that smoking has health implications.

“Just like Big Tobacco, in response to a worsening public image, Big Tech had started to fund various institutions and causes to ‘ensure the ethical development of AI,’ and to focus on ‘responsible development,'” they state in their paper.


“Facebook promised its ‘commitment to the ethical development and deployment of AI.’ Google published its best practices for the ‘ethical’ development of AI. Microsoft has claimed to be developing an ethical checklist, a claim that has recently been called into question. Amazon co-sponsored, alongside the National Science Foundation, a $20m program on ‘fairness in AI.'”

The researchers see parallels between the way Big Tech funds academic research and conferences and the way Big Tobacco funded the Tobacco Industry Research Committee, later renamed the Council for Tobacco Research.

Big Tech gains influence over AI ethicists through the selective funding of research projects, they contend. And they show that 58 per cent of AI ethics faculty have received funding from Big Tech, which they say can influence their work.

“This is because, to bring in research funding, faculty will be pressured to modify their work to be more amenable to the views of Big Tech,” they state in their paper. “This influence can occur even without the explicit intention of manipulation, if those applying for awards and those deciding who deserve funding do not share the same underlying views of what ethics is or how it ‘should be solved.'”

They point to the Partnership on AI, founded in 2016 by Amazon, Facebook, Google, and Microsoft, among others, to formulate AI best practices as a group. They say it has shown little interest in engaging with civil society, citing the departure of human rights group Access Now from the organization as a sign of its narrow focus on corporate concerns.

The researchers also point to the problematic nature of conference funding, noting that NeurIPS, a leading machine-learning conference, has had at least two Big Tech sponsors at the highest funding tier since 2015 and has had even more lately.

“When considering workshops relating to ethics or fairness, all but one have at least one organizer who is affiliated or was recently affiliated with Big Tech,” the paper says. “For example, there was a workshop about ‘Responsible and Reproducible AI’ sponsored solely by Facebook.”

The brothers Abdalla acknowledge there have been many remedies proposed to deal with Big Tech’s influence on society, and they leave those to policymakers. But they do ask academics to consider adopting a stricter code of ethics for AI research and operating AI ethics research separately from traditional computer science departments.

“Such a separation would permit academia-industry relationships for technical problems where such funding is likely more acceptable, while ensuring that our development of ethics remains free of influence from Big Tech money,” they argue.

Exploiting uncertainty

In a phone interview with The Register, Frank Pasquale, professor of law at Brooklyn Law School and author of The Black Box Society: The Secret Algorithms That Control Money and Information, suggested the comparison between Big Tobacco and Big Tech, while provocative, has some merit.

“I think it really is important that we find concrete metaphors that represent to people the type of harms that are at stake online,” he said, noting that it’s difficult to illustrate to people the impact of irresponsible or malicious decisions by tech firms.

Pasquale said he’s seen a draft of the paper and observed that the parallel that really struck him was the way that tobacco companies and tech companies have weaponized uncertainty.

Tobacco firms, he said, would raise doubts by saying things like, “Who knows whether smoking really causes cancer?”


“I think the merchants-of-doubt approach successfully deflected a lot of lawmaking,” he said, noting that a lot of academics today say the same thing about potential harms from YouTube and other online platforms, as a justification for further funding and study.

Pasquale argues that the key is to have more support for public and private sector researchers so they don’t have to depend on funding from the firms they’re investigating. He also stressed the importance of making data from these firms available to unaffiliated, independent researchers.

Nobody at Amazon, Facebook, and Microsoft wanted to comment.

“Academic collaborations have always been part of Google’s DNA,” a spokesperson for the internet giant told The Register. “In the past 15 years, we’ve provided more than 6,500 grants to academic and external research communities, and we’re committed to continuing these important collaborations.

“Partnering with the external research ecosystem brings fresh perspective to shared problems, and supporting their research helps advance critical areas of computer science. We support these collaborations through a variety of open-application programs including the Google Faculty Research Award Program, the PhD Fellowship Program, the Visiting Researcher Program, the Research Scholar Program and Award for Inclusion Research Program which give unrestricted funding to faculty and graduate students.” ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/04/29/tech_ai_funding/
