Evaluating the Feasibility of Electronic Health Records and Claims Data Sources for Specific Research Purposes

  • Original Research
  • Published in: Therapeutic Innovation & Regulatory Science

Abstract

Data collected in real-world clinical settings are increasingly being used to evaluate therapeutic options. Although real-world data (RWD) are still in their infancy for research assessing effectiveness, especially comparative effectiveness in the regulatory environment, electronic health records (EHR) and administrative insurance claims data are used extensively by both manufacturers and regulators to evaluate the post-marketing safety of products. Whether these data are feasible for a given research study depends on the specific research question and on the availability, quality, and relevance of the collected data for addressing that question. Because feasibility depends on the elements of the specific research question, it is unlikely that any database could be ‘qualified’ for use across all research questions, even within a single therapeutic area. This paper describes considerations for determining whether EHR or claims data can be used for specific research purposes, and proposes a new structured approach for assessing the feasibility of these data in research. The framework builds on the PICOTS framework (population, intervention, comparator, outcome, timing, setting) for well-structured research questions and considers whether each element is adequately captured to allow viable reliance on EHR and claims data for that specific scientific question. Practical examples and a discussion of the limitations of RWD for research are given, along with approaches for interpreting analyses that use RWD.
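As a rough illustration only (not taken from the article), the element-by-element feasibility assessment the abstract describes can be sketched as a simple checklist: each PICOTS element is judged adequately captured or not in a candidate data source for one specific research question. The element names follow the standard PICOTS expansion; the scoring logic and example values are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class PicotsFeasibility:
    """Whether each PICOTS element is adequately captured in a candidate
    EHR/claims data source for one specific research question."""
    population: bool    # can the study population be identified (e.g., via a validated phenotype)?
    intervention: bool  # is the exposure/treatment of interest reliably recorded?
    comparator: bool    # is an appropriate comparator group observable?
    outcome: bool       # is the outcome captured with acceptable validity?
    timing: bool        # does available follow-up cover the relevant time window?
    setting: bool       # does the care setting in the data match the question?

    def gaps(self):
        """Names of the elements the data source fails to capture adequately."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def feasible(self):
        """Viable reliance on the source requires every element to be captured."""
        return not self.gaps()

# Hypothetical assessment: a claims source that lacks adequate outcome
# validity and setting detail for the question at hand.
check = PicotsFeasibility(population=True, intervention=True, comparator=True,
                          outcome=False, timing=True, setting=False)
print(check.feasible())  # False
print(check.gaps())      # ['outcome', 'setting']
```

In practice each element would be graded against the source's documentation and validation studies rather than a simple boolean, but the structure mirrors the paper's point: a single failed element can make an otherwise rich database unfit for a specific question.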



Author information

Correspondence to Mary E. Ritchey, PhD, FISPE.


Cite this article

Ritchey, M.E., Girman, C.J. Evaluating the Feasibility of Electronic Health Records and Claims Data Sources for Specific Research Purposes. Ther Innov Regul Sci 54, 1296–1302 (2020). https://doi.org/10.1007/s43441-020-00139-x
