
White House and Congress Propose AI Legislation as FDA Continues to Act as AI Regulatory Gatekeeper

Image Credit: Nicole Bean for BioSpace

As artificial intelligence (AI) continues to be adopted by the pharmaceutical industry, regulatory bodies in the United States and other countries are evolving to tackle challenges within the industry. These include data privacy, bias, accuracy and access, as well as appropriate uses of AI and the incorporation of checks and balances into its usage. On October 30, 2023, the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO) instructed agencies to design plans for the appropriate and responsible use of AI across industries, including charging the Department of Health and Human Services (HHS) with developing a safety program. Congress responded within weeks with an in-depth AI proposal, while the FDA has been working steadily to address AI.

Because the EO is just a blueprint, industries do not yet know what the rules will be. The broadness of the EO makes sense, according to Rafael Rosengarten, co-founder and CEO of Genialis and co-founder and board director of the Alliance for Artificial Intelligence in Healthcare (AAIH). “The government’s main concern is around security and safety, such as public safety. In fact, most of the teams and the cabinet-level offices are called out by name, like the National Institute of Standards and Technology, Department of Defense, Homeland Security, etc.,” he noted. As AI is adopted across different industries, there has been growing concern about how to regulate a horizontal technology on both a sectoral and agency basis because of potential gaps between different agencies’ purviews, including in the healthcare and pharmaceutical industries.

The EO mandates that HHS create a task force with the aim of crafting a safety program by 2024. Although the FDA is evolving to regulate AI, Danny Tobey, M.D., J.D., partner and chair of AI & Data Analytics at the law firm DLA Piper, believes the EO’s call for an HHS task force will add to good governance of AI across the healthcare value chain. He noted that all agencies have the best intentions but are limited by their jurisdiction. “The FDA is focused on one thing, CMS is focused on another and the HHS has its mandates at the department level, so I like an HHS-level perspective on this,” he said. Tobey further clarified that because the FDA’s job is safety and efficacy, the FDA will remain one of the most important gatekeepers.

HHS Safety Program

For HHS to create a safety program within a year is an ambitious goal, especially considering the complexities of AI, not to mention its applications in healthcare. Richard Marcil, co-founder of ConversationHEALTH, explained that the task force must “define the scope of harms and unsafe practices of AI in healthcare, establish a robust reporting system, determine investigation and remediation mechanisms, and provide clear legal and ethical considerations of AI in healthcare.” He noted that the task force must collaborate with healthcare providers, researchers and patients to strike the needed balance between innovation and patient safety.

However, this is no easy task for an industry that is already acutely aware of the need for accurate and diverse clinical trial and research data to use AI effectively. There are many uncertainties about how HHS will accomplish this goal and what the task force’s priorities will be. Many value judgments are baked into deploying AI in the healthcare space. AI could become either the greatest cure for healthcare inequity or its greatest cause, because it has the potential either to further stratify health outcomes by access to technology or to help democratize access to medical information and care. Tobey pointed out that “the HHS task force is going to have hard conversations surrounding philosophical questions and tradeoffs between accuracy, transparency, accessibility and fairness. People with the best intentions will have to make hard choices.” Communications released by the National Institute of Standards and Technology (NIST) regarding AI elaborate on the point that every AI principle an industry holds dear will come with tradeoffs against other AI principles it values.

Future AI Legislation

The EO is just one part of the AI puzzle. In mid-November 2023, a bipartisan group of senators released the proposed Artificial Intelligence Research, Innovation, and Accountability Act of 2023, which is similar to the European approach to AI. Tobey, who was in the process of reviewing the legislation at the time of this interview, commented that while the first half is “very American” in nature, in that it focuses on promoting innovation, “the second half is about defining some of the familiar safeguards we’ve seen globally, like transparency reports and certification, but arguably for a narrower swath of AI systems, although the final convergence remains to be seen.” Tobey said the industry should expect many more proposals to come forward before any are passed by Congress. He anticipates that future legislative proposals will continue to align with the European Union’s approach and with draft legislation in Korea and other countries. Despite countries’ differing data privacy concerns and regulations, he predicts a global AI convergence among Asia, Europe and the U.S. because there is a shared understanding of the risks, which are less subjective and culturally dependent than privacy.

It Comes Down to Data

As in other industries, the biggest hurdle for the pharmaceutical industry is getting enough high-quality data in one place to make AI accurate and as unbiased as possible. The healthcare data the industry has right now were collected for insurance and billing purposes; they were not intended for healthcare research or scientific purposes, yet they are being used in drug discovery and development. The flurry of proposed legislation gives industry leaders the opportunity to guide governmental agencies in creating standards and procedures that modernize data collection by focusing on the data actually needed for research and development.

Access to quality data is key for the industry to continue to grow. Stacie Calad-Thomson, chief strategy officer at BioSymetrics and vice chairperson of the AAIH, highlighted that legislation needs to focus on data to promote innovation and competition. She commented, “There’s not access to a lot of data. People are hoarding data and not sharing data, which creates monopolies. It also means we are not really sharing and advancing as fast as we all could.” Data gaps, however, reflect larger foundational problems, namely disparities in access to healthcare and human biases in how these issues are treated. Rosengarten stated, “Acknowledging this shines a bright light on the problem at the level of access and human bias in patient care. It [the EO] can only help now in terms of mandating data collection.”

Rosengarten is unsure how HHS will set up a system that makes healthcare more equitable. However, he clarified that placing mandates around AI-based devices is a good start, as AI needs to meet some standard of representing the patient population it is intended to be used in and for. These mandates will need to extend back to clinical trials, which typically collect data from the healthiest sick people and thus are not representative of the true patient population for a drug or diagnostic. This calls for systemic change in healthcare because, as Rosengarten stated, “I don’t think the problem is really the data collection, to be honest—it’s what’s happening underneath the data collection.”

The preference for data quality over quantity is a convergence Tobey has been observing: “Both technologists and medical scientists are coming to the conclusion around the same time that domain-specific smaller parameter models might be, in some ways, more effective. That’s good. I like the increased emphasis on good data rather than big data.”

Marcil suggested that large pharma companies may have an advantage due to their extensive datasets, but that legislation allows for new competition. “New private and public datasets are emerging—and will continue to do so—extending beyond what’s available in academia.” At the same time, “A new breed of fundamentally AI-powered companies—spanning biogenomics to personalized medicines—poses a broader competitive challenge,” he said. “These innovative companies operate with fundamentally different business models and collaborate within a fresh ecosystem of data providers and partners.”

Future Indicators

AI offers countless possibilities for the future, and responsible use of AI will allow the industry to thrive. Marcil foresees a harmonious blend of human and machine, with human expertise and AI capabilities converging to create a boon for society. Proper use of AI will allow the industry as a whole to massively scale preventive and acute care, from initial access to ongoing support, Marcil said: “The synergy between humans and AI holds immense promise for the future of healthcare.”

Lori Ellis is the head of insights at BioSpace. She analyzes and comments on industry trends for BioSpace and clients. Her current focus is on the ever-evolving impact of technology on the pharmaceutical industry. You can reach her at lori.ellis@biospace.com. Follow her on LinkedIn.
