

Artificial Intelligence Assurance Lab at FDA - Could Jeff Shuren's Idea Work?

At the recent meeting of the National Organization for Rare Disorders (NORD), Jeff Shuren proposed an idea that has gotten little press, aside from a small article on the InsideHealthPolicy media site. But I think this could be a really big deal, so I wanted to make sure you had all heard about it.

Jeff Shuren’s proposal is to create a “safe space” for the development and evaluation of artificial intelligence (AI) technologies. The AI Assurance Lab would house large datasets on which AI technology could be designed and validated. This approach would help balance pre-market regulatory requirements against post-market surveillance, presumably by reassuring FDA that a particular AI device had been developed with known datasets. According to the article, Jeff Shuren also indicated that AI technology should be regulated using “community-based solutions.” This implies that manufacturers of AI medical devices may need to get comfortable using shared resources, overseen by FDA, to develop their technology.

On the one hand, I LOVE this idea of a shared resource consisting of high-quality, rigorous data that meets FDA’s expectations for generalizability, bias, equity, and applicability for the development of AI-driven medical devices. This could level the playing field for smaller companies, letting them develop AI algorithms without the expense of gathering big datasets on which to train their devices, while ensuring that each device is developed with a good-quality dataset already vetted by the Agency. Validation protocols could be consistent and uncertainty reduced. Predetermined Change Control Plans (PCCPs) could be developed with FDA in the AI Assurance Lab that account for the known datasets and the indications for use of the device. The AI Assurance Lab could create more certainty and transparency around pre-market regulatory expectations for AI device manufacturers, and streamline review. It almost sounds too good to be true - and it probably is.

The reality is that this idea would create complex, though not insurmountable, challenges for both FDA and industry. Let’s start with the data. New AI-driven medical devices are being developed all the time for novel therapeutic indications, which means the data needed to train them must contain the information necessary to teach the algorithm about the therapeutic indication in question. It is unclear how many datasets would be needed to support multiple therapeutic areas in Shuren’s Assurance Lab. It is possible that the Lab could start with a single therapeutic area and grow from there. But how would the Agency choose which data to start with? The problem becomes even harder for rare disease and orphan products, where it seems unlikely that FDA could assemble datasets suitable for use by a range of manufacturers.

Then there is the question of where the data would come from. As those of us who work with real-world data (RWD) and real-world evidence (RWE) know, high-quality, rigorous data is expensive and not always readily available. Though FDA has seen some success gathering RWD from various sources through the National Evaluation System for Health Technology (NEST), difficulties aggregating data from disparate sources persist. So it remains unclear where the data for the AI Assurance Lab would come from or which therapeutic areas it would cover.

Finally, I wonder about the practicality of “community-based solutions” for industry. I agree that this is a good idea in theory. But in practice, I wonder how willing medical device companies will be to play in the same data sandbox when developing their AI devices. One could argue that the data used to train an algorithm is a competitive advantage for a given manufacturer, especially if the manufacturer invested in collecting that data itself. Would AI device manufacturers really consider sharing and collaborating with each other on the data used to train their device algorithms? Perhaps I am being a bit cynical, but I’m not sure how well that will work.

I do believe we need rational, understandable, consistent, and transparent regulation of AI-driven medical devices. I also believe that access to reliable, rigorous, high-quality data for training these devices could make development more equitable for smaller device manufacturers. Given the limitations I’ve identified here, I am not entirely sure Shuren’s idea can work. But if it did, I would love to be proven wrong.