
Is it Possible to Truly Control for Bias in AI Training Models? We Can’t Afford Not To.

I recently listened to a recorded webcast entitled AI's Moment in Health from the Milken Institute's Future of Health Summit 2023. The panel included Brian Anderson, Srini Iyer, Pat Thistlewaite, Luciana Borio, and Herman Sanchez, and it was one of the best discussions I have heard about the role of AI in healthcare and the challenges we face as AI becomes embedded in our health institutions, medical devices, and clinical decision-making. I was particularly struck by the discussion of bias in the models used to train AI and the potential harm this could do to patients. I see bias as a particularly urgent issue because AI device technology is moving faster than global regulatory processes can control it or mitigate its risks to public health.

One of the greatest risks to patients from the use of AI in medical devices, and in healthcare generally, is bias in the models used to train them. Historically, clinical care in the United States has been based on the needs of white men. [I'm not going to cite the millions of pages of literature on this topic; please stop reading if this offended you.] Data from women, minorities, disadvantaged populations, and marginalized communities have rarely informed clinical care across therapeutic areas and conditions. Now that we are looking to train AI to treat people, make clinical decisions, and put more power into the hands of patients to make treatment decisions for themselves, we need to make sure we are NOT relying on these old, biased data and practices when training these new technologies.

One of the panelists said something that struck me deeply: we use our histories to train AI models. Read that again…we use OUR HISTORIES to train our AI models. This should scare us profoundly. Our history in the United States is stained by racism, disenfranchisement, inequity, and discrimination. Training our AI models on this history is akin to training an entire generation of children to hate, ignore, and suppress entire segments of the population [oh wait….]. Innovators MUST look up from these historical, easy-to-access datasets and invest in collecting data from the communities typically excluded from them: Asian, Pacific Islander, Native American, First Nations, and Indigenous peoples; women; children; Black and Brown people; individuals with physical and learning disabilities; incarcerated people; people of low socioeconomic status; rural and urban communities; and so many others I am certain to have missed. When we look at the basic epidemiology of disease, we can easily see the diversity of factors that shape disease risk and treatment outcomes. How can we afford not to include these considerations in the training of our AI models?

It is urgent that we use the tools we have available to identify bias and address it. The panel was clear that we now have the technology and analytic tools to detect bias and mitigate it, if not eradicate it, in these AI training models. In my opinion, we have no excuse for not investing in broader data collection to train these models. Further, the need to control bias argues for continuous learning models to which additional data can be added over time. While this may mean greater effort from innovators to define how they will train and maintain validation of these models under evolving regulatory frameworks, it is an investment we cannot afford to skip.
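To make the panel's point about analytic tools concrete, here is a minimal sketch of one common kind of bias audit: comparing a model's error rates across demographic subgroups. The column names, data, and function are hypothetical illustrations of the technique, not any panelist's or regulator's method.

```python
import pandas as pd

def subgroup_performance_gaps(df: pd.DataFrame,
                              group_col: str = "demographic_group",
                              y_true_col: str = "outcome",
                              y_pred_col: str = "prediction") -> pd.DataFrame:
    """Compute per-group sensitivity (true positive rate) and false
    positive rate. Large gaps between groups are one simple, widely
    used signal that a trained model may be under-serving a subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub[y_pred_col] == 1) & (sub[y_true_col] == 1)).sum()
        fn = ((sub[y_pred_col] == 0) & (sub[y_true_col] == 1)).sum()
        fp = ((sub[y_pred_col] == 1) & (sub[y_true_col] == 0)).sum()
        tn = ((sub[y_pred_col] == 0) & (sub[y_true_col] == 0)).sum()
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical audit data: model predictions vs. true outcomes.
audit = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "B", "B", "B", "B"],
    "outcome":           [1,   0,   1,   1,   1,   0,   1],
    "prediction":        [1,   0,   1,   0,   0,   0,   1],
})
print(subgroup_performance_gaps(audit))
```

In this toy data, the model catches every true case in group A but misses most in group B; that is exactly the kind of disparity a continuous learning program should surface, and re-check, as new data arrive.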

There is currently no agreement on how industry defines, measures, or mitigates bias in AI medical devices or healthcare applications. Innovators in this space need a concerted, collaborative effort to define bias and acknowledge the problems with historical data, and then to work with regulators to ensure consistency and compliance. Regulators should then hold innovators accountable to these agreed-upon frameworks for addressing and controlling bias. While the technology is moving quickly, rational policy agreed upon by both industry and regulators is not an insurmountable problem. But there is urgency: regulators around the world are developing a patchwork of regulations and guidelines to put guardrails around AI medical technology. Failing to organize and harmonize these regulations, particularly when it comes to addressing bias, will create barriers to the technology and increase costs and risks for patients.
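Why does a shared definition matter so much? Because widely discussed fairness metrics can disagree about the same model. The sketch below, using invented data purely for illustration, computes two such measures, demographic parity difference and equal opportunity difference, and shows them pointing in different directions:

```python
import numpy as np

# Illustrative outcomes and predictions for two groups (hypothetical data).
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def positive_rate(mask):
    # Share of the group receiving a positive prediction.
    return y_pred[mask].mean()

def true_positive_rate(mask):
    # Among truly positive cases in the group, share correctly detected.
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

a, b = group == "A", group == "B"

# Demographic parity: do both groups get positive predictions equally often?
dp_diff = positive_rate(a) - positive_rate(b)

# Equal opportunity: are true cases in both groups detected equally often?
eo_diff = true_positive_rate(a) - true_positive_rate(b)

print(f"demographic parity difference: {dp_diff:+.2f}")
print(f"equal opportunity difference:  {eo_diff:+.2f}")
```

On these numbers, equal opportunity sees no disparity at all while demographic parity flags a large one. Until industry and regulators agree on which definitions apply, and when, the same device can be "fair" under one audit and "biased" under another.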

Finally, the AI medical technology industry and regulators will need to build trust with the patient community. Given the history of disenfranchisement of marginalized populations, and the perpetuation of these harms in the data being used to train AI, there is reason for skepticism. This, again, is why industry leaders must make an intentional effort to invest in collecting meaningful data from these populations for AI training models and communicate transparently about their efforts to create equity. Disparities in healthcare have persisted for generations. AI brings to the forefront the potentially catastrophic consequences should these disparities and biases persist in the way we train the future of healthcare technology.