
As AI transforms drug development, the FDA struggles to find guardrails

Life sciences industry consultants expect the U.S. Food and Drug Administration to release new guidance on the use of AI in clinical trials and drug development by year's end.

The technology, which has enormous potential to speed development and improve drug effectiveness, but also to create legal problems, has evolved so quickly that even the FDA has struggled to get a handle on it.

Last year, the FDA issued separate draft guidance for medical devices that would essentially allow manufacturers to specify a device's planned future capabilities in their initial premarket submission, without having to resubmit the product for approval later.

AI and machine learning can extract data from electronic health records and other sources and use it to draw useful conclusions, from predicting how a drug might affect specific patients to optimizing dosage.

It can predict side effects in specific populations, improve patient recruitment for clinical trials, test compounds, and improve post-marketing safety monitoring – among many other potentially transformative applications.

AI has become so widely used in drug development that, since 2016, about 300 drug applications submitted to the FDA have involved AI in some form, Khair ElZarrad, director of the Office of Medical Policy at the FDA's Center for Drug Evaluation and Research, said recently on an FDA podcast.

The expected guidance is likely to address issues such as patient safety and the quality and reliability of data flowing into and out of AI algorithms, said Sarah Thompson Schick, an attorney at Reed Smith who advises medical device companies.

Another consideration: “Is AI suitable for what you’re doing?” added Schick, who also discussed the issues in this recent video.

“How do we ensure these issues are addressed in the continuous improvement and training of AI models used in key research and development activities? And how do we mitigate the potential risks associated with these issues?”

Both the FDA and industry continue to consider how and to what extent AI should be used in research and development, especially as the technology advances, Schick said.

Last month, the FDA published a “special notice” in the Journal of the American Medical Association outlining the agency’s growing concerns about the use of AI in clinical research, medical product development, and clinical care.

Among them: FDA officials see a need for specialized tools that allow for more thorough evaluation of large language models “in the contexts and environments in which they will be used.”

The JAMA article also pointed to the potential for AI models to change over time, which makes continuous monitoring of their performance necessary.

“The agency expresses its concern that recurring, local assessment of AI throughout its lifecycle is necessary for both the safety and effectiveness of the product over time, and that the effort required to do so may go beyond current regulatory systems and capabilities and beyond the development and clinical communities,” Hogan Lovells partner Robert Church and his colleagues wrote in a note to clients last month.

The FDA also expressed concern about maintaining a level playing field, as large technology companies have capital and computing resources that startups and academic institutions cannot match. The agency noted that the latter may need support to ensure that their AI models are safe and effective.

The agency emphasized the importance of ensuring that human clinicians remain involved in understanding how results are generated and advocating for high-quality evidence of benefit.

Troy Tazbaz, director of the FDA's Digital Health Center of Excellence, said in a recent blog post that standards and best practices “for the AI development lifecycle as well as risk management frameworks” can help mitigate risks.

These include “approaches to ensure that the suitability, collection and quality of data are consistent with the intent and risk profile of the AI model being trained.”

ElZarrad listed a number of challenges, some of which may be reflected in the expected guidance.

One of them is the variability in the quality, size and “representativeness” of data sets used to train AI models. “Responsible use of AI actually requires that the data used to develop these models be fit for purpose and fit for use. This is a concept that we want to highlight and clarify.”

He noted that it is often difficult to understand how AI models are developed and reach their conclusions. “This may necessitate or require us to think about new approaches to transparency.”

There are numerous potential privacy issues associated with AI, many of which involve patient data. AI developers must ensure they comply with the Health Insurance Portability and Accountability Act, more commonly known as HIPAA, as well as a variety of other federal and state laws. In general, the patient data used is aggregated and anonymized, Schick said.

While life sciences leaders welcome additional guidance, they will not remain idle until they receive it. “I don’t think companies are necessarily waiting for the FDA,” Schick added.