Exploring Responsible AI: Insights, Frameworks & Innovations with Ravit Dotan

In our latest episode of Leading with Data, we had the privilege of speaking with Ravit Dotan, a renowned expert in AI ethics. Ravit Dotan’s diverse background, including a PhD in philosophy from UC Berkeley and her leadership in AI ethics at Bria.ai, uniquely positions her to offer profound insights into responsible AI practices. Throughout our conversation, Ravit emphasized the importance of integrating responsible AI considerations from the inception of product development. She shared practical strategies for startups, discussed the significance of continuous ethics reviews, and highlighted the critical role of public engagement in refining AI approaches. Her insights provide a roadmap for businesses aiming to navigate the complex landscape of AI responsibility.

You can listen to this episode of Leading with Data on popular platforms like Spotify, Google Podcasts, and Apple Podcasts. Pick your favorite to enjoy the insightful content!

Key Insights from our Conversation with Ravit Dotan

  • Responsible AI should be considered from the start of product development, not postponed until later stages.
  • Engaging in group exercises to discuss AI risks can raise awareness and lead to more responsible AI practices.
  • Ethics reviews should be conducted at every stage of feature development to assess risks and benefits.
  • Testing for bias is crucial, even if a feature like gender is not explicitly included in the AI model.
  • The choice of AI platform can significantly impact the level of discrimination in the system, so it’s important to test and consider responsibility aspects when selecting a foundation for your technology.
  • Adapting to changes in business models or use cases may require changing the metrics used to measure bias, and companies should be prepared to embrace these changes.
  • Public engagement and expert consultation can help companies refine their approach to responsible AI and address broader issues.

Join our upcoming Leading with Data sessions for insightful discussions with AI and Data Science leaders!

Let’s look into the details of our conversation with Ravit Dotan!

What is the most dystopian scenario you can imagine with AI?

As the CEO of TechBetter, I’ve thought deeply about the potential dystopian outcomes of AI. The most troubling scenario for me is the proliferation of disinformation. Imagine a world where we can no longer rely on anything we find online, where even scientific papers are riddled with misinformation generated by AI. This could erode our trust in science and reliable information sources, leaving us in a state of perpetual uncertainty and skepticism.

How did you transition into the field of responsible AI?

My journey into responsible AI began during my PhD in philosophy at UC Berkeley, where I specialized in epistemology and philosophy of science. I was intrigued by the inherent values shaping science and noticed parallels in machine learning, which was often touted as value-free and objective. With my background in tech and a desire for positive social impact, I decided to apply the lessons from philosophy to the burgeoning field of AI, aiming to detect and productively use the embedded social and political values.

What does responsible AI mean to you?

Responsible AI, to me, is not about the AI itself but the people behind it – those who create, use, buy, invest in, and insure it. It’s about developing and deploying AI with a keen awareness of its social implications, minimizing risks, and maximizing benefits. In a tech company, responsible AI is the outcome of responsible development processes that consider the broader social context.

When should startups begin to consider responsible AI?

Startups should think about responsible AI from the very beginning. Delaying this consideration only complicates matters later on. Addressing responsible AI early on allows you to integrate these considerations into your business model, which can be crucial for gaining internal buy-in and ensuring engineers have the resources to tackle responsibility-related tasks.

How can startups approach responsible AI?

Startups can begin by identifying common risks using frameworks like the AI RMF from NIST. They should consider how their target audience and company could be harmed by these risks and prioritize accordingly. Engaging in group exercises to discuss these risks can raise awareness and lead to a more responsible approach. It’s also vital to tie in business impact to ensure ongoing commitment to responsible AI practices.
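
To make the prioritization step concrete, here is a minimal sketch of how a team might record the risks surfaced in such a group exercise and rank them by a simple likelihood-times-impact score. The risk names and scores below are hypothetical illustrations, not categories taken from the NIST AI RMF itself:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str        # a harm surfaced during the group exercise
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def priority(self) -> int:
        # Simple likelihood-times-impact score for triage
        return self.likelihood * self.impact

# Hypothetical risks a startup team might surface
risks = [
    Risk("Biased outputs against a user group", likelihood=4, impact=5),
    Risk("Leakage of personal data in responses", likelihood=2, impact=5),
    Risk("Hallucinated facts presented as authoritative", likelihood=5, impact=3),
]

# Highest-priority risks first
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:>2}  {r.name}")
```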

What are the trade-offs between focusing on product development and responsible AI?

I don’t see it as a trade-off. Addressing responsible AI can actually propel a company forward by allaying consumer and investor concerns. Having a plan for responsible AI can aid in market fit and demonstrate to stakeholders that the company is proactive in mitigating risks.

How do different companies approach the release of potentially risky AI features?

Companies vary in their approach. Some, like OpenAI, release products and iterate quickly upon identifying shortcomings. Others, like Google, may hold back releases until they are more certain about the model’s behavior. The best practice is to conduct an ethics review at every stage of feature development to weigh the risks and benefits and decide whether to proceed.

Can you share an example where considering responsible AI changed a product or feature?

A notable example is Amazon’s scrapped AI recruitment tool. After discovering the system was biased against women, despite not having gender as a feature, Amazon chose to abandon the project. This decision likely saved them from potential lawsuits and reputational damage. It underscores the importance of testing for bias and considering the broader implications of AI systems.
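
Amazon’s tool isn’t public, but the kind of audit that catches this sort of bias is straightforward: keep the protected attribute out of the training features, yet hold it aside so you can compare outcomes across groups. Here is a minimal sketch with synthetic data and a stand-in decision rule; the column names and the 0.8 threshold (the common “four-fifths rule” of thumb) are illustrative assumptions, not details of Amazon’s system:

```python
import numpy as np
import pandas as pd

# Synthetic screening data: 'gender' is NOT a model feature,
# but we keep it aside so we can audit outcomes against it.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "years_experience": rng.integers(0, 15, size=1000),
    "referred": rng.integers(0, 2, size=1000),
    "gender": rng.choice(["F", "M"], size=1000),
})

# Stand-in for a trained model's decisions (replace with model.predict(X))
df["selected"] = (df["years_experience"] + 3 * df["referred"] > 8).astype(int)

# Selection rate per group, even though gender never entered the model
rates = df.groupby("gender")["selected"].mean()
print(rates)
print("Disparate impact ratio:", rates.min() / rates.max())  # below ~0.8 is a red flag
```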

How should companies handle the evolving nature of AI and the metrics used to measure bias?

Companies must be adaptable. If a primary metric for measuring bias becomes outdated due to changes in the business model or use case, they need to switch to a more relevant metric. It’s an ongoing journey of improvement, where companies should start with one representative metric, measure, and improve upon it, and then iterate to address broader issues.
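
As an illustration of what switching metrics can look like in code (my sketch, not something from the conversation), the measurement function can be swapped while the pipeline around it stays the same, here moving from demographic parity to equalized odds:

```python
import numpy as np

def demographic_parity_diff(y_pred, groups):
    # Gap in positive-prediction rates between groups
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_diff(y_true, y_pred, groups):
    # Largest gap in true-positive or false-positive rates between groups
    gaps = []
    for positive in (1, 0):  # TPR when 1, FPR when 0
        mask = y_true == positive
        rates = [y_pred[(groups == g) & mask].mean() for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy predictions: the metric changes, the surrounding pipeline does not
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("Demographic parity diff:", demographic_parity_diff(y_pred, groups))
print("Equalized odds diff:   ", equalized_odds_diff(y_true, y_pred, groups))
```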

Does the choice between open-source and proprietary tools matter for responsible AI?

While I don’t categorize tools strictly as open source or proprietary in terms of responsible AI, it’s crucial for companies to consider the AI platform they choose. Different platforms may carry different levels of inherent discrimination, so it’s essential to test for these and factor responsibility into the choice of foundation for your technology.

What advice do you have for companies facing the need to change their bias measurement metrics?

Embrace the change. Just as in other fields, sometimes a shift in metrics is unavoidable. It’s important to start somewhere, even if it’s not perfect, and to view it as an incremental improvement process. Engaging with the public and experts through hackathons or red teaming events can provide valuable insights and help refine the approach to responsible AI.

Summing Up

Our enlightening discussion with Ravit Dotan underscored the vital need for responsible AI practices in today’s rapidly evolving technological landscape. By incorporating ethical considerations from the start, engaging in group exercises to understand AI risks, and adapting to changing metrics, companies can better manage the social implications of their technologies.

Ravit’s perspectives, drawn from her extensive experience and philosophical expertise, stress the importance of continuous ethics reviews and public engagement. As AI continues to shape our future, the insights from leaders like Ravit Dotan are invaluable in guiding companies to develop technologies that are not only innovative but also socially responsible and ethically sound.

For more engaging sessions on AI, data science, and GenAI, stay tuned with us on Leading with Data.

Check our upcoming sessions here.
