AI Ethics Guidelines from Diverse Groups: The Consensus?


We have yet to see a holistic framework for the ethical development of artificial intelligence that can be applied to every industry in every country around the world. A lot of work is being done by corporate entities and academia, not to mention special-interest groups that warn of the dangers of uncontrolled AI proliferation. Nevertheless, we are still a long way from a consensus on what such a framework should involve.

This snapshot of the views of various entities with regard to AI principles could offer a clue to what is really missing in our quest for AI governance of the future.

Google

Despite disbanding its Advanced Technology External Advisory Council (ATEAC) after only one week amid internal controversy, Google has already published its own recipe for responsible AI development. As listed in its blog, here are the key points:

“We will assess AI applications in view of the following objectives. We believe that AI should:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.”

While there are plenty of good points in there, questions crawl out of the woodwork as soon as you talk about weaponizing AI. For example, where is Google's stand against using AI in warfare or other harmful acts such as cyber attacks? On that last point above, the blog does say the company will evaluate the "primary purpose and use" of an application and whether it is "related to or adaptable to a harmful use," but little more than that.

Microsoft

Microsoft has a slightly different set of beliefs, some of which align with Google's, but only in the broadest sense. Again, there is nothing that directly addresses how to tackle the weaponization of AI.

“Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.

  • Fairness: AI systems should treat all people fairly
  • Inclusiveness: AI systems should empower everyone and engage people
  • Reliability & Safety: AI systems should perform reliably and safely
  • Transparency: AI systems should be understandable
  • Privacy & Security: AI systems should be secure and respect privacy
  • Accountability: AI systems should have algorithmic accountability”

To be fair, neither company has the ability to control what happens at the international level, so it’s understandable that their AI tenets are limited to positive applications. Not condonable, but understandable. So let’s see where the European Union stands on AI principles.

European Union

The EU's stance, set out in a press release issued earlier this month, is a lot more inclusive, and it accounts for various aspects of how AI can and cannot be applied.

“AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”

This looks a lot closer to what we all want to see, and the very first item covers the misuse of AI, albeit in a very generic way. It also brings up "societal and environmental well-being", which is clearly an allusion to not using AI to disrupt social and environmental balances. It looks like the EU has mulled over this for longer than Google or Microsoft.

But it’s the Future of Life Institute that clearly outlines and addresses the dangers of uncontrolled AI development.

Future of Life Institute’s Asilomar Principles

These guidelines have been in place for the past two years and so far offer the only viable basis for a framework of any sort.

“Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Ethics and Values

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • AI Arms Race: An arms race in lethal autonomous weapons should be avoided.”

As you can see, this is a lot more comprehensive, and it looks like we're getting there. The only thing still missing is the involvement of government, which is crucial for any of this to work. That gap is partly addressed by the guidelines drawn up by attendees of the New Work Summit, hosted by The New York Times earlier this year.

New Work Summit

“Attendees at the New Work Summit, hosted by the New York Times, worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence:

  • Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  • Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  • Privacy: Users should be able to easily opt out of data collection.
  • Diversity: A.I. technology should be developed by inherently diverse teams.
  • Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  • Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  • Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  • Collective governance: Companies should work together to self-regulate the industry.
  • Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  • “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.”

After looking at this final list of guidelines, we still see a gap in how these issues will be addressed at various levels. The New Work Summit does cover collective governance and regulation, but it fails to mention that regulatory bodies themselves need a proper framework by which to guide the development of AI. Nobody is telling the government what it needs to do, and that is the weakest link in the chain right now.

The American AI Initiative executive order signed by Trump earlier this year is as lacking in government accountability as the EU's guidelines. Everybody seems to love telling everybody else what they should do while offering only vague support for these initiatives. Trump's order says nothing about where government agencies will get additional funding; instead it encourages them to reallocate existing spending. Not an easy pill for a bureaucracy to swallow.

Governments in countries like the United States should be the ones taking the first step. They’re the ones who should be taking this bull by the horns and wrestling it to the ground. If AI is to remain subservient to humans, this is where it starts.

Unfortunately, that would require a tectonic shift in government policy itself, so don’t hold your breath. We’ll continue to muddle through for the next few years until a serious transgression by an AI entity brings everything to the forefront and makes it an urgent matter of international interest.

The question is, are we going to repeat history by waiting for something bad to happen before we react? To analogize, do we need a major global incident like WWII in order to set up a NATO? Can't we be more proactive and set up a failsafe now, while AI is still in its nascency?

These are the hard questions governments must answer because such a massive initiative requires financial and other resources that only governments can provide and control. There won’t be any lack of participants, but the participants cannot host the show.


Source: https://1reddrop.com/2019/04/13/ai-ethics-guidelines-from-diverse-groups-the-consensus/
