Hyperlactatemia in Critical Illness: History, Mechanisms, and the “Great Debate”

Since the turn of the century, lactate has become a mainstay in emergency medicine and critical care laboratories. Some clinicians may hate it, others may love it, but very few can feign apathy on the subject. The utility of lactate in the emergency department and the ICU in guiding resuscitations, predicting mortality, or identifying occult critical illness continues to be discussed in the literature, most fervently in the realm of sepsis [1-4]. But what are the humble beginnings of this molecule? Most fundamentally, how is lactate generated in the setting of critical illness? And how did it come to be so firmly embedded in our understanding of the pathophysiology of critically ill patients?

Joseph von Scherer circa 1867. Public Domain via Wikimedia Commons - https://upload.wikimedia.org/wikipedia/commons/d/db/J._v._Scherer.jpg

To answer these questions, we have to look back to the 19th century, to Wurzburg, Germany. In the winter of 1842, Johann Joseph Scherer, a physician with a penchant for chemical analysis, was investigating a seasonal epidemic of puerperal fever. He published his exploits in the aptly, if unoriginally, titled “Investigations of pathological substances obtained during the epidemic of puerperal fever which occurred in the winter of 1842-1843 in and around Wurzburg” [5]. In these pages he described several cases of unfortunate young women who had succumbed to sepsis of various sources. When analyzing postmortem samples, Scherer identified high levels of lactate - the first time this molecule had been identified in blood obtained from sick human subjects. And thus the association between lactate and “badness” was born.

Continued observational studies demonstrated elevated lactate in leukemic patients, patients with hemorrhagic shock, and a myriad of other clinical maladies [5]. A proposed mechanism of hyperlactatemia was not available until nearly fifty years later, in 1891, when the Japanese chemist Trasaburo Araki observed that induced anemia in canines precipitated a rise in serum lactate. The association between anemia and elevated lactate led him to suggest reduced oxygen delivery as the cause [5]. In 1907, Fletcher and Hopkins stimulated amphibian muscle fibers under variable concentrations of oxygen, finding that lactate accumulated most rapidly under anaerobic conditions and subsequently dissipated upon return to atmospheric conditions [6]. The conclusions from these studies, amongst others, stood the test of time, and lactate became known as a product of hypoxia.

Table 1 - Classification of lactic acidosis, adapted from Cohen and Woods [7], with examples.

Over the years, the biochemical pathways leading to lactate generation continued to be fleshed out. In 1976, in their expansive volume “Clinical and Biochemical Aspects of Lactic Acidosis”, Cohen and Woods offered up a synthesis of the available literature on the mysterious molecule, including a classification of lactic acidosis (Table 1) [7].

Hypoxia remained the primary explanation for lactate elevation in critical illness, and accumulating data continued to support it. In the late 1980s, Astiz and colleagues found that increasing oxygen delivery led to increased oxygen consumption, both of which were associated with decreased lactate in ten patients with septic shock [8]. As recently as 1998, Friedman and colleagues demonstrated that increased oxygen delivery to the tissues of patients in septic shock, whether from fluid bolus, the addition of vasoactive medications, or augmented ventilatory support via increased positive pressure, correlated with decreased blood lactate levels during the initial phase of resuscitation [9].

With the advent of early goal-directed therapy in the treatment of sepsis, and the subsequent search for a noninvasive biomarker to monitor dynamic resuscitations, came a vigorous new debate. A number of individuals have published opinions on the subject, but for the sake of simplicity they can be grouped into either the traditional or the modern camp. In 2013, Marik and Bellomo published a review that aired several grievances with the traditional thinking surrounding hyperlactatemia in critical illness, specifically sepsis [10]. Their primary claim was that rising lactate in sepsis results from increased stimulation of the sodium-potassium ATPase via catecholamine surge, rather than from tissue oxygen deficit (see Figure 1, pathway 1).

Figure 1 - Mechanisms leading to lactate production in a skeletal myocyte. Pathway 1 – epinephrine binds beta-2 receptors, triggering a rise in intracellular cAMP, which in turn activates the Na/K ATPase (producing ADP and favoring pyruvate production) and increases glycogenolysis as well as glycolytic flux. Pathway 2 – pyruvate dehydrogenase converts pyruvate to acetyl-CoA for the tricarboxylic acid cycle. If cofactors, such as thiamine, are diminished, PDH activity is suboptimal, favoring lactate production. Pathway 3 – oxygen deficit leads to decreased throughput of the electron transport chain, again inhibiting the TCA cycle and shunting pyruvate to lactate. Illustration by Chris Shaw, MD

The authors’ claims are supported by several basic science investigations that have described an association of adrenergic blockade with decreased lactate production in both in vivo and in vitro models of hemorrhagic shock [11-13]. Further, they reference two investigations that demonstrated a direct correlation between cardiac performance and elevated serum lactate, suggesting that lactate production may serve as an evolutionary response to severe physiologic stress, and thus provide some survival advantage during critical illness [14,15].

While pragmatists may shrug at the struggle for mechanistic supremacy, the outcome carries significant implications for clinical medicine. Lactate clearance, the decrease in serum lactate over a given interval of time, has been identified and utilized as a predictor of mortality in critically ill patients [1,2]. The data supporting this trend are relatively strong. If the “modern view” is correct, and lactate is indeed primarily a surrogate of adrenergic tone rather than tissue hypoxia, it may change the way we perceive lactate. Indeed, it seems plausible that lactate is an energy boon for vital tissues during stress. Perhaps this molecule has been misappropriated as the villain when, in fact, it is simply a marker of an appropriate response to physiologic insult.
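For concreteness, lactate clearance is most often reported as the percent change between two serial measurements: (initial − repeat) / initial × 100. A minimal sketch in Python; the function name, units, and the worked example are illustrative, not drawn from any of the cited trials:

```python
def lactate_clearance(initial_mmol_l: float, repeat_mmol_l: float) -> float:
    """Percent lactate clearance between two serial measurements (mmol/L).

    Positive values mean lactate fell over the interval; negative values
    mean it rose. Illustrative only, not a validated clinical tool.
    """
    if initial_mmol_l <= 0:
        raise ValueError("initial lactate must be positive")
    return (initial_mmol_l - repeat_mmol_l) / initial_mmol_l * 100


# Example: an initial lactate of 4.0 mmol/L that falls to 2.4 mmol/L
# on repeat measurement represents a 40% clearance over the interval.
print(lactate_clearance(4.0, 2.4))
```

A patient whose repeat value exceeds the initial one yields a negative "clearance," which is one reason some authors prefer to report the absolute change alongside the percentage.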

Content by Chris Shaw, MD

Peer Editing and post by Ryan LaFollette, MD


  1. Jones, A. E. (2011). Point: should lactate clearance be substituted for central venous oxygen saturation as goals of early severe sepsis and septic shock therapy? Yes. Chest, 140(6), 1406-1408.
  2. Rivers, E. P., Elkin, R., & Cannon, C. M. (2011). Counterpoint: should lactate clearance be substituted for central venous oxygen saturation as goals of early severe sepsis and septic shock therapy? No. Chest, 140(6), 1408-1413.
  3. Zhang, Z., & Xu, X. (2014). Lactate clearance is a useful biomarker for the prediction of all-cause mortality in critically ill patients: a systematic review and meta-analysis. Critical Care Medicine, 42(9), 2118-2125.
  4. Ferreruela, M., Raurich, J. M., Ayestarán, I., & Llompart-Pou, J. A. (2017). Hyperlactatemia in ICU patients: incidence, causes and associated mortality. Journal of Critical Care.
  5. Kompanje, E. J. O., Jansen, T. C., van der Hoven, B., & Bakker, J. (2007). The first demonstration of lactic acid in human blood in shock by Johann Joseph Scherer (1814–1869) in January 1843. Intensive Care Medicine, 33(11), 1967-1971.
  6. Fletcher, W. M., & Hopkins, F. G. (1907). Lactic acid in amphibian muscle. The Journal of Physiology, 35(4), 247-309.
  7. Cohen, R. D., Woods, H. F., & Krebs, H. A. (1976). Clinical and Biochemical Aspects of Lactic Acidosis. Blackwell.
  8. Astiz, M. E., Rackow, E. C., Falk, J. L., Kaufman, B. S., & Weil, M. H. (1987). Oxygen delivery and consumption in patients with hyperdynamic septic shock. Critical Care Medicine, 15(1), 26-28.
  9. Friedman, G., De Backer, D., Shahla, M., & Vincent, J. L. (1998). Oxygen supply dependency can characterize septic shock. Intensive Care Medicine, 24(2), 118-123.
  10. Marik, P. E., Bellomo, R., & Demla, V. (2013). Lactate clearance as a target of therapy in sepsis: a flawed paradigm. OA Critical Care, 1(1), 3.
  11. Halmagyi, D. F. J., Kennedy, M., & Varga, D. (1971). Combined adrenergic receptor blockade and circulating catecholamines in hemorrhagic shock. European Surgical Research, 3(6), 378-388.
  12. Luchette, F. A., Robinson, B. R., Friend, L. A., McCarter, F., Frame, S. B., & James, J. H. (1999). Adrenergic antagonists reduce lactic acidosis in response to hemorrhagic shock. Journal of Trauma and Acute Care Surgery, 46(5), 873-880.
  13. McCarter, F. D., James, J. H., Luchette, F. A., Wang, L., Friend, L. A., King, J. K., ... & Fischer, J. E. (2001). Adrenergic blockade reduces skeletal muscle glycolysis and Na+, K+-ATPase activity during hemorrhage. Journal of Surgical Research, 99(2), 235-244.
  14. Revelly, J. P., Tappy, L., Martinez, A., Bollmann, M., Cayeux, M. C., Berger, M. M., & Chioléro, R. L. (2005). Lactate and glucose metabolism in severe sepsis and cardiogenic shock. Critical Care Medicine, 33(10), 2235-2240.
  15. Levy, B., Mansart, A., Montemont, C., Gibot, S., Mallie, J. P., Regnault, V., ... & Bollaert, P. E. (2007). Myocardial lactate deprivation is associated with decreased cardiovascular performance, decreased myocardial energetics, and early death in endotoxic shock. Intensive Care Medicine, 33(3), 495-502.