IBM’s Arin Bhowmick explains why AI trust is hard to achieve in the enterprise

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has been nowhere near as quick as initial forecasts led many organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise.

This interview has been edited for clarity and brevity.

VentureBeat: It seems a lot of organizations still don’t trust AI. Do you think that’s improving?

Arin Bhowmick: I do think it’s improved or is getting better. But we still have a long way to go. We haven’t historically been able to bake trust, fairness, and explainable AI into the products and experiences. From an IBM standpoint, we are trying to create reliable technology that can augment [but] not really replace human decision-making. We feel that trust is essential to adoption. It allows organizations to understand and explain recommendations and outcomes.

What we are essentially trying to do is akin to a nutritional label. We’re looking to have a similar kind of transparency in AI systems. There is still some hesitation in the adoption of AI because of a lack of trust. Roughly 80% to 85% of the professionals from different organizations who took part in an IBM survey said their organization has been negatively impacted by problems such as bias, especially in the data. I would say 80% or more agree that consumers are more likely to choose services from a company that offers transparency and an ethical framework for how its AI models are built.

VentureBeat: As an AI model runs, it can generate different results as the algorithms learn more about the data. How much does that lack of consistency impact trust?

Bhowmick: The AI model used to do the prediction is only as good as the data. It’s not just models. It’s about what it does and the insight it provides at that point in time that develops trust. Does it tell the user why the recommendation is made or is significant, how it came up with the recommendations, and how confident it is? AI tends to be a black box. The trick to developing trust is to unravel the black box.
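
To make that concrete: a minimal sketch, assuming a generic scikit-learn classifier (the dataset, model, and reported fields are illustrative assumptions, not IBM’s tooling), of surfacing a prediction’s confidence along with a rough view of which features drove it:

```python
# A minimal sketch of reporting "how confident" and a rough "why"
# alongside a prediction. Dataset and model are illustrative; this
# is not IBM's tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Confidence: the predicted class probability for one example.
proba = model.predict_proba(X_test[:1])[0]
print(f"Prediction: {data.target_names[proba.argmax()]} "
      f"(confidence: {proba.max():.0%})")

# A rough "why": which features most degrade accuracy when shuffled.
# Real explainability tooling goes further and phrases this in the
# user's voice, not the data scientist's.
imp = permutation_importance(model, X_test, y_test, random_state=0)
for i in imp.importances_mean.argsort()[::-1][:3]:
    print(f"  {data.feature_names[i]}: {imp.importances_mean[i]:.3f}")
```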

VentureBeat: How do we achieve that level of AI explainability?

Bhowmick: It’s hard. Sometimes it’s hard to even judge the root cause of a prediction and insight. It depends on how the model was constructed. Explainability is also hard because when it is provided to the end user, it’s full of technical mumbo jumbo. It’s not in the voice and tone that the user actually understands.

Sometimes explainability is also a little bit about the “why,” rather than the “what.” Giving an example of explainability in the context of the tasks that the user is doing is really, really hard. Unless the developers who are creating these AI-based [and] infused systems actually follow the business process, the context is not going to be there.

VentureBeat: How do we even measure this?

Bhowmick: There is a fairness score and a bias score. There is a concept of model accuracy. Most tools that are available do not provide a realistic bias score. Obviously, the higher the bias, the worse your model is. It’s pretty clear to us that a lot of the source of the bias happens to be in the data and the assumptions that are used to create the model.
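
To illustrate what one such score measures, here is a minimal sketch of disparate impact, a common fairness metric: the favorable-outcome rate for the unprivileged group divided by the rate for the privileged group. The toy data and the 0.8 threshold (the “four-fifths rule”) are illustrative assumptions, not IBM’s scoring:

```python
# A minimal sketch of disparate impact, one common bias score.
# Toy data; the 0.8 "four-fifths rule" threshold is a convention,
# not a universal standard.
import numpy as np

outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # 1 = favorable
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])     # 1 = privileged

priv_rate = outcomes[group == 1].mean()
unpriv_rate = outcomes[group == 0].mean()
disparate_impact = unpriv_rate / priv_rate

print(f"Disparate impact: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Potential bias: the unprivileged group receives favorable "
          "outcomes at under 80% of the privileged group's rate.")
```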

What we tried to do is bake a little bit of bias detection and explainability into the tooling itself. It will look at the profile of the data and match it against other items and other AI models. We’ll be able to tell you that what you’re trying to produce already has built-in bias, and here’s what you can do to fix it.
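
IBM’s open-source AI Fairness 360 (AIF360) toolkit is a public example of this kind of check baked into tooling. A minimal sketch of detecting dataset bias and applying a reweighing fix, with illustrative column names and group encodings:

```python
# A minimal "detect, then fix" sketch using IBM's open-source
# AI Fairness 360 toolkit. Column names and encodings are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # 1 = privileged group
    "label": [1, 0, 1, 1, 0, 1, 0, 0, 0, 0],  # 1 = favorable outcome
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"])
groups = dict(unprivileged_groups=[{"sex": 0}],
              privileged_groups=[{"sex": 1}])

# Detect: a statistical parity difference far from 0 flags bias.
before = BinaryLabelDatasetMetric(dataset, **groups)
print("Parity difference before:", before.statistical_parity_difference())

# Fix: reweigh instances so group outcome rates balance before training.
reweighed = Reweighing(**groups).fit_transform(dataset)
after = BinaryLabelDatasetMetric(reweighed, **groups)
print("Parity difference after: ", after.statistical_parity_difference())
```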

VentureBeat: That then becomes part of the user experience?

Bhowmick: Yes, and that’s very, very important. Whatever bias feeds into the system has huge ramifications. We are creating ethical design practices across the company. We have developed specific design thinking exercises and workshops. We run workshops to make sure that we are considering ethics at the very beginning of our business process planning and design cycle. We’re also using AI to improve AI. If we can build in sort of bias and explainable AI checkpoints along the way, inherently we will scale better. That’s sort of the game plan here.

VentureBeat: Will every application going forward have an AI model embedded within it?

Bhowmick: It’s not about the application, it’s about whether there are things within that application that AI can help with. If the answer is yes, most applications will have AI infused in them. It will be rare for an application not to have AI.

VentureBeat: Will most organizations embed AI engines in their applications or simply invoke external AI capabilities via an application programming interface (API)?

Bhowmick: Both will be true. I think the API would be good for people who are getting started. But as the level of AI maturity increases, there will be more information that is specific to a problem statement and to an audience. For that, they will likely have to build custom AI models. They might leverage APIs and other tooling, but to have an application that really understands the user and really gets at the crux of the problem, I think it’s important that it’s built in-house.
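
As a rough illustration of the two paths (the endpoint URL, payload, and response schema below are hypothetical placeholders, not any specific vendor’s API):

```python
# A rough sketch of the two integration paths. The endpoint, headers,
# and response schema are hypothetical placeholders.
import requests
from sklearn.linear_model import LogisticRegression

def score_via_hosted_api(features: list[float]) -> float:
    """Path 1: call an external AI service. Quick to adopt, but the
    model and its training data stay opaque to your team."""
    resp = requests.post(
        "https://api.example.com/v1/score",           # hypothetical
        json={"features": features},
        headers={"Authorization": "Bearer <token>"},  # placeholder
        timeout=10)
    resp.raise_for_status()
    return resp.json()["score"]                       # hypothetical schema

def score_via_inhouse_model(model: LogisticRegression,
                            features: list[float]) -> float:
    """Path 2: a custom model trained on your own domain data. More
    work up front, but the behavior is yours to inspect and explain."""
    return model.predict_proba([features])[0, 1]
```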

VentureBeat: Overall, what’s your best AI advice to organizations?

Bhowmick: I still find that our level of awareness of what AI is, what it can do, and how it can help us is not high. When we talk to customers, all of them want to go into AI. But when you ask them what the use cases are, they sometimes are not able to articulate that.

I think adoption is somewhat lagging because of gaps in people’s understanding and acceptance of AI. But there’s enough information on AI principles to read up on. As you develop an understanding, then look into tooling. It really comes down to awareness.

I think we’re in the hype cycle. Some industries are ahead, but if I could give one piece of advice to everyone, it would be don’t force-fit AI. Make sure you design AI in your system in a way that makes sense for the problem you’re trying to solve.

Source: https://venturebeat.com/2021/02/16/ibms-arin-bhowmick-explains-why-ai-trust-is-hard-to-achieve-in-the-enterprise/
