What is Clinical Informatics Anyway?

When I began my clinical informatics elective I planned to spend my time taking online courses on how to improve my Epic efficiency. After a day or so of online sessions I realized that (a) I’m already an Epic master (sort of) and (b) I hoped there was more to “clinical informatics” than faster documentation. So let’s begin at the beginning: what is clinical informatics?

As I found out, that's not an entirely straightforward question to answer. The simplest definition of informatics is the use of computational systems to analyze data and help make decisions, nudge behavior, or perform tasks that improve human life. Apply that specifically to the hospital and you've got "clinical informatics." A lot falls under that umbrella. Implementing an order set that encourages adherence to antibiotic guidelines? That's clinical informatics. Analyzing how AI-assisted X-ray interpretation affects resident education and patient time in the department? That's clinical informatics. Testing out a natural language processing system that predicts billing codes? You guessed it: clinical informatics.

In the early aughts, widespread adoption of EMRs created an opportunity for rapid experimentation. Around this time, Dr. Charles Friedman, a big name in the field and then the Chief Scientific Officer at the Office of the National Coordinator for Health IT, put forward his Fundamental Theorem of Biomedical Informatics: "A person working in partnership with an information resource is 'better' than that same person unassisted."(1) In other words, this wasn't a question of machines replacing people but rather of offloading tasks and providing assistance to improve a person's output. Suddenly, a physician could choose a medication from a hard-coded pharmacopeia in her EMR and be shown commonly accepted doses of that medication. When that same physician ordered a T4 level, the system might recommend she order a TSH first. Or perhaps when ordering a CT pulmonary angiogram, a hard stop would remind her of the guidelines for appropriate use of that imaging modality.

Pretty soon, however, clinical informatics researchers began to realize that no matter how logical the intervention, it didn't always have the desired effect. Physicians chafed against anything that added extra steps to their documentation or put a hard stop to their actions without providing reasonable alternatives. And sometimes interventions that seemed like no-brainers ended up being complete duds. Take, for instance, the decision by some healthcare systems to limit the number of simultaneously open charts to reduce wrong-patient medication orders. These decisions were largely based on "expert recommendations," and implementation varied widely depending on the whims of the individual system.
In 2017, a cleverly-designed study demonstrated that for at least one large hospital the switch from a maximum of four charts down to two charts and then back to four had no significant effect on wrong-patient medication errors.(2) Two years later a randomized clinical trial again demonstrated no difference between restricting the number of open charts to one versus allowing as many as four.(3) 

This brings me to two Lemmas of my own, which, after two weeks of studying the field, I feel modestly qualified to propose.

  • Lemma 1: “Appeals to convenience are stronger drivers of behavioral change than appeals to logic or morality.”

  • Lemma 2: “Even if it seems obvious, you can’t know the effect of the intervention until you study it.” 

We’ve come a long way from the dawn of clinical informatics and the first EMR. So, for the remainder of this post I’d like to present a digest of sorts of seven recent articles that illustrate the breadth of clinical informatics and hint towards where we’re going. For each I’ll provide a brief summary, but for those interested I’d recommend reading the entire article. 

Clinical Informatics Digest

Anderson et al. 2024. “Deep Learning Improves Physician Accuracy in the Comprehensive Detection of Abnormalities on Chest X-Rays.” Scientific Reports 14 (1): 25151.

This article by Pamela Anderson and colleagues demonstrated that non-radiologists interpreted chest x-rays more accurately when assisted by deep learning-based computer vision software. The software overlays a box on each region of interest representing a predicted abnormality, which improved correct interpretation of the chest x-ray by Emergency Medicine, Family Medicine, and Internal Medicine physicians compared with physicians not using the software.

Born et al. 2024. "The Role of Information Systems in Emergency Department Decision-Making: A Literature Review." Journal of the American Medical Informatics Association: JAMIA 31 (7): 1608–21.

This systematic review evaluates how information-system interventions have affected decision-making in the Emergency Department. The authors divide interventions into two categories: active systems, which present recommendations directly to physicians based on patient data, and passive systems, which provide access to patient records and current guidelines without any interpretation. They also divide decision-making into two phases: the initial Heuristic Decision-Making phase (first impressions and management of life threats) and the subsequent Analytical Decision-Making phase (interpretation of data and determination of ultimate disposition). The authors conclude that active interventions achieve their desired effect much more often when delivered during the Analytical phase, while passive interventions are more effective during the Heuristic phase.

Dutta et al. 2025. "Result Push Notifications Improve Time to Emergency Department Disposition: A Pragmatic Observational Study." Annals of Emergency Medicine 85 (1): 53–62.

In this study the authors observed a faster time-to-disposition for emergency physicians who elected to use push notifications for imaging and laboratory results than for those who did not. This association doesn't prove that push notifications increase department throughput, but it suggests they may play a role.

Herpe et al. 2024. “Effectiveness of an Artificial Intelligence Software for Limb Radiographic Fracture Recognition in an Emergency Department.” Journal of Clinical Medicine 13 (18). https://doi.org/10.3390/jcm13185575.

In this retrospective study the authors analyzed whether implementing AI software to identify fractures on limb radiographs reduced discrepancies between emergency provider and radiology interpretations and/or decreased length of stay. The results do not demonstrate a clinically relevant difference overall, but for arm radiographs specifically there was a modest reduction in interpretation discrepancy. The authors also note a modest reduction in length of stay after the AI system was deployed.

Jennings et al. 2023. “The Effectiveness of a Noninterruptive Alert to Increase Prescription of Take-Home Naloxone in Emergency Departments.” Journal of the American Medical Informatics Association: JAMIA 30 (4): 683–91. 

The authors used a two-part intervention to increase prescription of take-home naloxone for patients presenting with concern for opioid abuse. First, they inserted a reminder into the basic ED provider note template to use a specific smartphrase when opioid overdose was suspected. Second, they developed a ".opioid" smartphrase containing reminders for appropriate documentation and the discharge treatment plan. There was a significant increase in prescription of outpatient naloxone when physicians employed the smartphrase compared with when they did not.

Nong et al. 2025. “Expectations of Healthcare AI and the Role of Trust: Understanding Patient Views on How AI Will Impact Cost, Access, and Patient-Provider Relationships.” Journal of the American Medical Informatics Association: JAMIA, March, ocaf031.

The success of AI in medical care likely depends at least in part on the willingness of the public to trust the technology. This article presents results of a cross-sectional nationwide survey to evaluate patient expectations of the benefits of AI and how baseline trust in the medical system affects these expectations. Unsurprisingly, they find that patients overall have low expectations for the benefits that AI might provide to their healthcare. Furthermore, lower trust in the medical system predicts lower expectations. 

Ye et al. 2024. “The Role of Artificial Intelligence for the Application of Integrating Electronic Health Records and Patient-Generated Data in Clinical Decision Support.” AMIA Summits on Translational Science Proceedings AMIA Summit on Translational Science 2024 (May): 459–67. 

The ubiquity of health monitors (think smartwatches, smartphones, and those rings that track your sleep and heart rate variability) means that patients have more access to continuously generated personal health data than ever before. This narrative review examines the current state of how personal health data has been integrated into electronic health records and the potential for AI to parse this data and provide recommendations for clinicians.



Authorship

Written by Stanford Schor, MD PhD, PGY-3 University of Cincinnati Department of Emergency Medicine

Review, Editing, and Posting: Jeffery Hill, MD MEd, Associate Professor, University of Cincinnati Department of Emergency Medicine

Cite As: Schor, S., Hill, J. What is Clinical Informatics Anyway? TamingtheSRU. www.tamingthesru.com/blog/electives/clinical-informatics. 5/30/2025