
And just like that: What the viral introduction of a clinical AI app means for pharmaceutical research and development

David Shaywitz

In 2011 we saw the triumph of technologies such as the cloud and smartphones. Apps had become a thing: social networking apps like Instagram (the iPhone's “App of the Year” in 2011) and Twitter, utility apps like Evernote and Dropbox, navigation apps like Google Maps and Waze, and gaming apps like Angry Birds.

But in medicine, as I wrote at the time in Forbes, the "killer app" was… a comparatively old-fashioned e-textbook called Up-To-Date. The company behind it was founded in 1992.

Written and reviewed by medical experts, Up-To-Date was the go-to resource of the 2010s for medical professionals of all stripes – from earnest medical students to overworked residents to experienced clinicians – seeking current, reputable information about the conditions of the patients in their care, so they could offer the most effective treatment possible.

Ten years later, in 2021, Up-To-Date was still the app of choice; the same was true in 2022 and 2023.

But today that could change. When a colleague recently mentioned that young doctors now appear to be using an AI-based resource called Open Evidence, I was surprised and a little skeptical.

But when I asked clinical colleagues who work with young doctors every day, I learned that the rumors seemed to be true.

Robert Wachter, chair of the Department of Medicine at UCSF, wrote on X: "I think [Open Evidence] is becoming a go-to resource for residents. It handles complex case-based prompts, addresses clinical cases holistically, and provides really good references."

Medical colleagues at Harvard reported similar experiences; one told me the app had gone "viral," adding: "I've NEVER seen anything like it."

I trust that my academic colleagues will closely examine both the use and the impact of Open Evidence, with a particular focus on its effect on patient care.

Technology adoption lessons

For TR's biopharmaceuticals-focused readers, the Open Evidence example serves (or should serve) as a vivid reminder that things don't change – until they suddenly do. A year ago, everyone was using Up-To-Date; today, many young doctors rely on Open Evidence.

When it comes to new technologies, change is typically driven by “lead users” (to use MIT professor Eric von Hippel’s term) – frontline employees who are focused on solving a pressing problem and are comfortable using the approach that seems most effective to them.

If you're a resident, your pressing problem is the overwhelming volume of work coming at you from all directions at once. You are committed to providing the best possible care to your patients, and you are motivated to use whatever resource you find most useful.

That Open Evidence appears to have reached this threshold (at least for a number of early-career physicians) is strong evidence of its perceived value. Presumably, stressed residents aren't using Open Evidence merely because they're curious about AI or because there's a departmental initiative to use AI; they use it because they see Open Evidence as the best solution to their problem. It is a tool adopted for the tangible value it delivers.

For these busy young physicians, the AI delivered by Open Evidence is not the proverbial "solution in search of a problem." It is a tool tailored to their immediate, urgent needs.

There is an analogy from the field of genetics. For years, I remember hearing endless criticism of doctors for failing to use genetics in clinical practice; the urgent need to better train clinicians in genetics was a familiar and oft-repeated refrain.

However, when a genetic diagnostic test (non-invasive prenatal testing, or NIPT) became available that could reliably assess specific fetal chromosomal abnormalities from a peripheral blood sample – in many cases obviating the need for amniocentesis – adoption was rapid and widespread. Patients, physicians, and payers all seemed to embrace it, because the benefits were tangible.

Implications for AI in the pharmaceutical industry

Which brings us, predictably, back to AI in the pharmaceutical industry.

In my last three articles I have argued the following:

As it turns out, readers were even more skeptical about the use of AI in research and development than about the use of human genetics – and passionate geneticists were often the most critical.

As one reader (not from the Boston area, by the way) and genetics enthusiast wrote:

I also think you're way too optimistic about AI – I really don't like statements like "New technologies like AI will help improve scientific understanding and enable better decisions." We have no idea yet exactly how transformative AI will be (or not), and it surely feeds the hype that drives so many scam companies to slap a branded faceplate on GPT4, or to raise money from VCs with no real vision beyond "AI+$$$$$=awesomeness".

The AI divide in pharmaceutical research and development

I appreciated the candor, and the perspective – which was certainly familiar – highlights the wide divide in pharmaceutical research and development between AI optimists and AI skeptics.

On the pro-AI side, there appear to be two broadly distinct cohorts: a small group of science-minded enthusiasts genuinely interested in exploring the potential of AI in research and development, and a larger group of "digital transformers."


The scientists interested in AI tend to have little status or organizational influence in most big pharma companies, in my experience, although there are exceptions (Aviv Regev of Genentech/Roche comes to mind). More often, they seem to be viewed, at best, as adorable (a word I've actually heard digital transformers use).

The mission of digital transformers is to implement comprehensive business initiatives that are launched from the C-suite, driven by management consultants, and focused on operational efficiency, typically assessed using short-term process metrics. These organizational ambitions, which invariably emphasize the adoption of AI across the company, are being touted by CEOs in Davos and by big pharma executives at industry conferences like HLTH.

However, turning a means into an end can be problematic. Goodhart's Law (see here) states: "When a measure becomes a target, it ceases to be a good measure." When the mere use of AI becomes the goal, rather than a tool, the result can be a flood of performative AI and a lack of thoughtful application to the most critical issue facing a pharmaceutical company: the discovery and development of the next original, effective medicine.

It is understandable, then, why the vast majority of pharmaceutical R&D veterans remain generally skeptical of AI in R&D: it seems to carry all the stigmas of the "Next Great Corporate Initiative" that must be endured by those trying to actually do great science and deliver powerful new medicines.

The wild hype surrounding AI doesn't inspire confidence either. While most startups aim high and tend to launch with bold promises, the extravagant expectations set by AI startups may be in a class of their own.

As industry chemist and respected "In the Pipeline" blogger Derek Lowe recently reminded readers, Recursion Pharma declared back in 2014 "that they would develop 100 drugs in 10 years" – an outlandish claim that made it difficult for many veteran drug developers to take the company seriously.


I fear that understandable skepticism can easily turn into reflexive cynicism (I discussed the “cynicism trap” here), which could result in R&D teams overlooking early but authentically promising opportunities that could be truly transformative.

It is particularly disappointing to me to sense some of this cynicism coming from geneticists, because when many of them were championing the tools and technologies of large-scale genetics, they themselves were on the receiving end of critics who doubted the promise of the approach.

A representative article by Stephen S. Hall in Scientific American in 2010 was titled "Revolution Postponed: Why the Human Genome Project Has Been Disappointing."

The subheading of Hall's article reads: "The Human Genome Project has failed so far to produce the medical miracles that scientists promised. Biologists are now divided over what, if anything, went wrong – and what needs to happen next."

Over time, however, and with enormous effort (and funding), the value of the Human Genome Project and related ventures (such as the UK Biobank) arguably began to be demonstrated. (See, for example, this 2020 article by Richard Gibbs.)

While genetics may have failed to live up to some of the most hopeful early expectations (see Princeton geneticist and computer scientist Olga Troyanskaya's thoughtful comments here), by any reasonable estimate these efforts have proven extraordinarily valuable to science, medicine, and biopharmaceutical research and development.

Conclusion

I expect that AI will ultimately prove similarly transformative and, if developed wisely and applied judiciously, will come to be seen as an essential tool for managing the increasing complexity of biopharmaceutical research and development. What is less certain is when this will happen. Will noticeably useful AI tools for advancing R&D science arrive this year? This decade?

Like the young doctors now relying on Open Evidence, pharma R&D scientists may discover – perhaps sooner than we think – that the use of AI has become second nature, a seamless part of our work, and we may find ourselves wondering how we ever managed for so long without it.