FDA Raises the Bar on Real-World Evidence (RWE) Studies: Is It Worth the Cost?

In December, FDA issued an updated draft guidance on Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. The guidance is hefty and presents a detailed view of what FDA expects to see from industry-sponsored real-world evidence (RWE) studies. You are going to need an epidemiologist, a data scientist, and a biostatistician (and a stiff drink). The bar has been raised high for RWE studies submitted to support FDA regulatory decision-making.

FDA has always required that RWE studies be fit-for-purpose and that the data have quality provenance that can be assessed. FDA’s expectations that RWE studies be of good quality, rigorous, and conducted under a protocol with IRB approval are not new. But the depth of data assessment, documentation, and submission requirements outlined in this guidance has been augmented to the point that FDA had to acknowledge that files currently under review would not necessarily be held to the new standard it sets forth. One has to wonder whether the burden and cost laid down in this guidance are worth the increased certainty FDA might gain from industry ticking all of these new boxes.

That real-world data (RWD) used in RWE studies needs to be “relevant and reliable” is echoed throughout the guidance; by my count, the phrase is repeated 15 times. That FDA feels the need to drill this point home may reflect frustration with previous lackluster RWE submissions. According to FDA, relevance and reliability are the means by which a sponsor can demonstrate fit-for-purpose. Fortunately, FDA does provide ways to ensure study data is both relevant and reliable, rather than relying on industry’s interpretation of the requirement. However, achieving this standard will require a significant investment of time and resources by sponsors.

Sponsors must follow Good Clinical Practice (GCP) guidelines for their RWE studies. These guidelines primarily focus on prospective clinical trials, but even retrospective medical chart review studies can demonstrate compliance with GCP and enhance the reliability of an RWE study. Similarly, sponsors will need to show evidence of good data management for their study. This may seem odd given that much of the data may come from an EMR or claims database, but to the extent that both the data producer and the sponsor can show good data management practices, the perceived reliability of the study will be enhanced.

The need for diverse and representative data is no surprise, but it can be difficult to identify the right study population in the RWD available to sponsors. To the extent possible, the RWD should reflect the target indicated population to demonstrate relevance to the regulatory decision being made. Assessing and mitigating bias in the RWD is important and often not well documented by sponsors; addressing bias and how it may impact the relevance of the RWD is a must for FDA. Finally, RWE studies should always be conducted under a study protocol, and FDA wants to see a statistical analysis plan (SAP) before a study is conducted. Gone are the days of data mining to identify a dataset to support a regulatory submission. Post-hoc analyses are not acceptable to review teams; they want to know you had a plan going into the study, before you conducted your analysis.

The guidance goes on to detail requirements for documenting data availability, including the outcomes of interest to the study. This can be problematic for retrospective RWE studies that want to use EHR, Medicare, or claims data, where the clinical outcomes of interest may not be captured in enough detail to be of value under the new guidance. There are also detailed expectations for covariate evaluation, including potential confounders and effect modifiers, as well as the need to source data that give the study a longitudinal view. Linking datasets effectively to create longitudinal RWD can be tricky and costly. Further, FDA expects documentation of how the data was accrued and whether accrual followed specific procedures that are consistent across data sources. For those who have tried to work with RWD, this is a tall order and likely beyond the reach of some industry sponsors. One of the most interesting recommendations is for industry to ask data owners to provide FDA directly with information about their data. Are we seeing the advent of the RWD Master File?

Some of the guidance reads like a Research Epidemiology 101 textbook. There is little doubt that FDA expects to see GCP-guided, protocol- and SAP-driven, retrospective or prospective observational research conducted at the highest level of rigor when generating RWE. And don’t forget to get FDA’s okay through the pre-submission process before you start an RWE study. While I agree that these studies should be held to a high standard, my concern is that this guidance fails to account for the time, cost, and effort they will take. It effectively puts RWE studies out of reach for small and startup device companies, and may deter larger companies from seeking new indications or investing in innovative products.

A greater desire by regulators for clinical data to support medical device safety and effectiveness is here to stay, not just at FDA but globally. While the motivations are genuinely for the benefit of public health, the reality is that clinical and observational research is expensive, complicated, time-consuming, and burdensome. These costs will ultimately be passed down to payers and patients. I am not sure the incremental confidence FDA gains in its regulatory decisions is worth the cost.