
Getting to trustworthy AI


Artificial intelligence will be key to helping humanity travel to new frontiers and solve problems that today seem insurmountable. It enhances human expertise, makes predictions more accurate, automates decisions and processes, frees humans to focus on higher-value work, and improves our overall efficiency.

But public trust in the technology is at a low point, and there is good reason for that. Over the past several years, we’ve seen multiple examples of AI that makes unfair decisions, offers no explanation for them, or can be hacked.

To get to trustworthy AI, organizations have to resolve these problems with investments on three fronts: First, they need to nurture a culture that adopts and scales AI safely. Second, they need to create investigative tools to see inside black box algorithms. And third, they need to make sure their corporate strategy includes strong data governance principles.

1. Nurturing the culture

Trustworthy AI depends on more than just the responsible design, development, and use of the technology. It also depends on having the right organizational operating structures and culture. For example, many companies that worry about bias in their training data also express concern that their work environments are not conducive to nurturing women and minorities in their ranks. There is, indeed, a direct correlation. To begin making this culture shift, organizations need to define what responsible AI looks like within their function, why it’s unique, and what the specific challenges are.

To ensure fair and transparent AI, organizations must pull together task forces of stakeholders from different backgrounds and disciplines to design their approach. This method reduces the likelihood that underlying prejudice in the data used to create AI algorithms will result in discrimination or other social harms.

Task force members should include experts and leaders from various domains who can understand, anticipate, and mitigate relevant issues as necessary. They must have the resources to develop, test, and quickly scale AI technology.

For example, machine learning models for credit decisioning can exhibit gender bias, unfairly discriminating against female borrowers if left unchecked. A responsible-AI task force can roll out design thinking workshops to help designers and developers think through the unintended consequences of such an application and find solutions. Design thinking is foundational to a socially responsible AI approach.
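
To make the credit example concrete, here is a minimal sketch of the kind of bias check such a workshop might motivate. It applies the EEOC's "four-fifths rule" heuristic to a model's approval decisions; the data, the group encoding, and the 0.8 threshold are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: unprivileged group vs. privileged group.

    decisions: 1 = approved, 0 = denied
    group:     1 = privileged (e.g., male), 0 = unprivileged (e.g., female)
    """
    rate_unpriv = decisions[group == 0].mean()
    rate_priv = decisions[group == 1].mean()
    return rate_unpriv / rate_priv

# Hypothetical model outputs for eight applicants (illustrative only).
decisions = np.array([1, 0, 1, 1, 1, 0, 1, 0])
gender    = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # 1 = male, 0 = female

ratio = disparate_impact(decisions, gender)
# A common heuristic (the EEOC "four-fifths rule") flags ratios below 0.8.
if ratio < 0.8:
    print(f"Potential adverse impact: disparate impact ratio = {ratio:.2f}")
```

In practice, a team would run checks like this on real validation data and across every protected attribute relevant to its market and jurisdiction.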

To ensure this new thinking becomes ingrained in the company culture, all stakeholders from across an organization, from data scientists and CTOs to chief diversity and inclusion officers, must play a role. Fighting bias and ensuring fairness is a socio-technological challenge; it gets solved when employees who may not be used to collaborating with one another start doing so, specifically around data and the impact models can have on historically disadvantaged people.

2. Trustworthy tools

Organizations should seek out tools to monitor the transparency, fairness, explainability, privacy, and robustness of their AI models. These tools can point teams to problem areas so that they can take corrective action (such as introducing fairness criteria into model training and then verifying the model output).

Some of these investigative tools are freely available as open source, while others are commercial offerings. When choosing among them, first consider what you actually need the tool to do, and whether it needs to run on production systems or on systems still in development. Then determine what kind of support you need and at what price, breadth, and depth. Another important consideration is whether the tools are trusted and referenced by global standards boards.
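
As a sketch of what working with such a tool can look like, the example below uses the open-source Fairlearn library (one option among many; the article does not name specific tools) to break a model's accuracy out by group and compute a demographic parity gap. The model and data are synthetic stand-ins.

```python
# Assumes `pip install fairlearn scikit-learn`; all names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # synthetic features
sensitive = rng.integers(0, 2, size=500)  # synthetic protected attribute
# Labels deliberately correlated with the protected attribute to surface a gap.
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Accuracy broken out per group, plus an overall demographic-parity gap.
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=sensitive)
print(frame.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y, pred, sensitive_features=sensitive))
```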

3. Developing data and AI governance

Any organization deploying AI must have clear data governance in place. This includes building a governance structure (committees and charters, roles and responsibilities) as well as creating policies and procedures for data and model management. Whether governance tasks fall to humans or are automated, organizations should adopt frameworks for healthy dialogue that help craft data policy.
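
One lightweight way to turn "policies and procedures on data and model management" into practice is to require a structured governance record for every deployed model. The sketch below is a hypothetical example; the field names are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical governance record; fields are illustrative, not a standard.
@dataclass
class ModelGovernanceRecord:
    model_name: str
    owner: str                   # accountable team or individual
    training_data_sources: list  # lineage of datasets used
    intended_use: str            # documented purpose and scope
    fairness_checks: dict        # metric name -> latest measured value
    approved_by: str = ""        # sign-off from the governance committee
    review_due: str = ""         # next scheduled review date (ISO 8601)

record = ModelGovernanceRecord(
    model_name="credit-risk-v2",
    owner="risk-analytics-team",
    training_data_sources=["loans_2015_2020.parquet"],
    intended_use="Pre-screening of consumer credit applications",
    fairness_checks={"disparate_impact_ratio": 0.91},
    approved_by="model-risk-committee",
    review_due="2021-09-01",
)
```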

This is also an opportunity to promote data and AI literacy across the organization. For highly regulated industries, organizations can find specialized tech partners who can also ensure that the model risk management framework meets supervisory standards.

There are dozens of AI governance boards around the world working with industry to help set standards for AI. IEEE is one example. IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. Its work encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. Such international standards bodies can help guide your organization toward standards that are right for you and your market.

Conclusion

Curious how your org ranks when it comes to AI-ready culture, tooling, and governance? Assessment tools can help you determine how well prepared your organization is to scale AI ethically on these three fronts.

There is no magic pill for making your organization a truly responsible steward of artificial intelligence. AI is meant to augment and enhance your current operations, and a deep learning model can only be as open-minded, diverse, and inclusive as the team developing it.

Phaedra Boinodiris, FRSA, is an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and Ethics. She has focused on inclusion in technology since 1999 and is a member of the Cognitive World Think Tank on enterprise AI.


