|Myeloid-Derived Suppressor Cells Mediate Immunosuppression After Cardiopulmonary Bypass
Objectives: Cardiopulmonary bypass is associated with severe immune dysfunction. In particular, a long-lasting cardiopulmonary bypass–related immunosuppressive state predisposes patients to a higher risk of postoperative complications, such as persistent bacterial infections. This study was conducted to elucidate mechanisms of post-cardiopulmonary bypass immunosuppression. Design: In vitro studies with human peripheral blood mononuclear cells. Setting: Cardiosurgical ICU, University Research Laboratory. Patients: Seventy-one patients undergoing cardiac surgery with cardiopulmonary bypass (enrolled May 2017 to August 2018). Interventions: Peripheral blood mononuclear cells before and after cardiopulmonary bypass were analyzed for the expression of immunomodulatory cell markers by real-time quantitative reverse transcription polymerase chain reaction. T cell effector functions were determined by enzyme-linked immunosorbent assay, carboxyfluorescein succinimidyl ester staining, and cytotoxicity assays. Expression of cell surface markers was assessed by flow cytometry. CD15+ cells were depleted by microbead separation. Serum arginine was measured by mass spectrometry. Patient peripheral blood mononuclear cells were incubated in different arginine concentrations, and T cell functions were tested. Measurements and Main Results: After cardiopulmonary bypass, peripheral blood mononuclear cells exhibited significantly reduced levels of costimulatory receptors (inducible T-cell costimulator, interleukin 7 receptor), whereas inhibitory receptors (programmed cell death protein 1 and programmed cell death 1 ligand 1) were induced. T cell effector functions (interferon γ secretion, proliferation, and CD8+-specific cell lysis) were markedly repressed. In 66 of 71 patients, a previously undescribed cell population was found, which could be characterized as myeloid-derived suppressor cells.
Myeloid-derived suppressor cells are known to impair immune cell functions by expression of the arginine-degrading enzyme arginase-1. Accordingly, we found dramatically increased arginase-1 levels in post-cardiopulmonary bypass peripheral blood mononuclear cells, whereas serum arginine levels were significantly reduced. Depletion of myeloid-derived suppressor cells from post-cardiopulmonary bypass peripheral blood mononuclear cells markedly improved T cell effector function in vitro. Additionally, in vitro supplementation of arginine enhanced T cell immunocompetence. Conclusions: Cardiopulmonary bypass strongly impairs the adaptive immune system by triggering the accumulation of myeloid-derived suppressor cells. These myeloid-derived suppressor cells induce an immunosuppressive T cell phenotype by increasing serum arginine breakdown. Supplementation with L-arginine may be an effective measure to counteract the onset of immunoparalysis in the setting of cardiopulmonary bypass. Drs. Hübner and Tomasi contributed equally to this work. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website. Copyright © 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.
|Factors Associated With Quality of Death in Korean ICUs As Perceived by Medical Staff: A Multicenter Cross-Sectional Survey
Objectives: Facilitating a high quality of death is an important aspect of comfort care for patients in ICUs. The quality of death in ICUs has rarely been reported in Asian countries. Although Korea is currently in the early stage after implementation of the “well-dying” law, the law already seems to have a considerable effect on practice. In this study, we aimed to understand the status of quality of death in Korean ICUs as perceived by medical staff, and to elucidate factors affecting patient quality of death. Design: A multicenter cross-sectional survey study. Setting: Medical ICUs of two tertiary-care teaching hospitals and two secondary-care hospitals. Patients: Deceased patients from June 2016 to May 2017. Interventions: Relevant medical staff were asked to complete a translated Quality of Dying and Death questionnaire within 48 hours after a patient’s death. A higher Quality of Dying and Death score (range, 0–100) corresponded to a better quality of death. Measurements and Main Results: A total of 416 completed questionnaires were obtained from 177 medical staff (66 doctors and 111 nurses) regarding 255 patients. All 20 items of the Quality of Dying and Death questionnaire received low scores. Quality of death perceived by nurses was better than that perceived by doctors (33.1 ± 18.4 vs 29.7 ± 15.3; p = 0.042). Performing cardiopulmonary resuscitation and using inotropes within 24 hours before death were associated with poorer quality of death, whereas using analgesics was associated with better quality of death. Conclusions: The quality of death of patients in Korean ICUs was considerably poorer than reported in other countries. Provision of appropriate comfort care, avoidance of unnecessary life-sustaining care, and permission for more frequent visits from patients’ families may correspond to better quality of death in Korean medical ICUs. It is also expected that the new legislation will positively affect the quality of death in Korean ICUs.
Supplemental digital content is available for this article.
|Neurologic Complications of Infective Endocarditis: A Joint Model for a Septic Thromboembolism and Inflammatory Small Vessel Disease
Objectives: Embolic events from vegetations are commonly accepted as the main mechanism involved in neurologic complications of infective endocarditis. The pathophysiology may imply other phenomena, including vasculitis. We aimed to define the cerebral lesion spectrum in an infective endocarditis rat model. Design: Experimental model of Staphylococcus aureus or Enterococcus faecalis infective endocarditis. Neurologic lesions observed in the infective endocarditis model were compared with three other conditions, namely bacteremia, nonbacterial thrombotic endocarditis, and healthy controls. Setting: Research laboratory of a university hospital. Subjects: Male Wistar rats. Interventions: Brain MRI, neuropathology, immunohistochemistry for astrocyte and microglia, and bacterial studies on brain tissue were used to characterize neurologic lesions. Measurements and Main Results: In the infective endocarditis group, MRI revealed at least one cerebral lesion in 12 of 23 rats (52%), including brain infarctions (n = 9/23, 39%) and cerebral microbleeds (n = 8/23, 35%). In the infective endocarditis group, neuropathology revealed brain infarctions (n = 12/23, 52%), microhemorrhages (n = 10/23, 44%), and inflammatory processes (i.e., cell infiltrates including abscesses, vasculitis, meningoencephalitis, and/or ependymitis; n = 11/23, 48%). In the bacteremia group, MRI studies were normal and neuropathology revealed only hemorrhages (n = 2/11, 18%). Neuropathologic patterns observed in the nonbacterial thrombotic endocarditis group were similar to those observed in the infective endocarditis group. Immunohistochemistry revealed higher microglial activation in the infective endocarditis group (n = 11/23, 48%), when compared with the bacteremia (n = 1/11, 9%; p = 0.03) and nonbacterial thrombotic endocarditis groups (n = 0/7, 0%; p = 0.02).
Conclusions: This original model of infective endocarditis recapitulates the neurologic lesion spectrum observed in humans and suggests that synergistic mechanisms are involved, including thromboembolism and cerebral vasculitis promoted by systemic bacteremia-mediated inflammation. Supplemental digital content is available for this article.
|Pro: Should Patients With Acute Respiratory Distress Syndrome on Veno-Venous Extracorporeal Membrane Oxygenation Have Ventilatory Support Reduced to the Lowest Tolerable Settings?
No abstract available
|Why the Adjunctive Corticosteroid Treatment in Critically Ill Patients With Septic Shock (ADRENAL) Trial Did Not Show a Difference in Mortality?
No abstract available
|Impact on Patient Outcomes of Pharmacist Participation in Multidisciplinary Critical Care Teams: A Systematic Review and Meta-Analysis
Objectives: The objective of this systematic review and meta-analysis was to assess the effects of including critical care pharmacists in multidisciplinary ICU teams on clinical outcomes including mortality, ICU length of stay, and adverse drug events. Data Sources: PubMed, EMBASE, and references from previous relevant systematic studies. Study Selection: We included randomized controlled trials and nonrandomized studies that reported clinical outcomes such as mortality, ICU length of stay, and adverse drug events in groups with and without critical care pharmacist interventions. Data Extraction: We extracted study details, patient characteristics, and clinical outcomes. Data Synthesis: From the 4,725 articles identified as potentially eligible, 14 were included in the analysis. Involvement of critical care pharmacists in multidisciplinary ICU team care was significantly associated with a reduced likelihood of mortality (odds ratio, 0.78; 95% CI, 0.73–0.83; p < 0.00001) compared with no intervention. The mean difference in ICU length of stay was –1.33 days (95% CI, –1.75 to –0.90 d; p < 0.00001) for mixed ICUs. The reduction of adverse drug event prevalence was also significantly associated with multidisciplinary team care involving pharmacist intervention (odds ratio for preventable and nonpreventable adverse drug events, 0.26; 95% CI, 0.15–0.44; p < 0.00001 and odds ratio, 0.47; 95% CI, 0.28–0.77; p = 0.003, respectively). Conclusions: Including critical care pharmacists in the multidisciplinary ICU team improved patient outcomes including mortality, ICU length of stay in mixed ICUs, and preventable/nonpreventable adverse drug events. Drs. Lee and Ryu contributed equally to this work as co-first authors. Supplemental digital content is available for this article.
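The pooled odds ratios reported in meta-analyses such as the one above are typically obtained by inverse-variance weighting of the study-level log odds ratios. A minimal sketch of the fixed-effect version follows; the three input studies are hypothetical and are not the trials included in this review:

```python
import math

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooling of odds ratios.

    studies: list of (odds_ratio, ci_lower, ci_upper) tuples with 95% CIs.
    Returns (pooled OR, pooled 95% CI lower, pooled 95% CI upper).
    """
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        # Back out the standard error of the log OR from the 95% CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log = num / den
    pooled_se = math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical studies (OR, 95% CI lower, 95% CI upper).
studies = [(0.80, 0.70, 0.91), (0.75, 0.60, 0.94), (0.82, 0.68, 0.99)]
or_hat, ci_lo, ci_hi = pooled_or(studies)
```

Larger studies (narrower confidence intervals) get proportionally more weight, which is why the pooled CI is tighter than any single study's.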
|Adjusting for Disease Severity Across ICUs in Multicenter Studies
Objectives: To compare methods to adjust for confounding by disease severity during multicenter intervention studies in the ICU, when different disease severity measures are collected across centers. Design: In silico simulation study using national registry data. Setting: Twenty mixed ICUs in The Netherlands. Subjects: Fifty-five thousand six hundred fifty-five ICU admissions between January 1, 2011, and January 1, 2016. Interventions: None. Measurements and Main Results: To mimic an intervention study with confounding, a fictitious treatment variable was simulated whose effect on the outcome was confounded by Acute Physiology and Chronic Health Evaluation IV predicted mortality (a common measure for disease severity). Diverse, realistic scenarios were investigated where the availability of disease severity measures (i.e., Acute Physiology and Chronic Health Evaluation IV, Acute Physiology and Chronic Health Evaluation II, and Simplified Acute Physiology Score II scores) varied across centers. For each scenario, eight different methods to adjust for confounding were used to obtain an estimate of the (fictitious) treatment effect. These were compared in terms of relative (%) and absolute (odds ratio) bias to a reference scenario where the treatment effect was estimated following correction for the Acute Physiology and Chronic Health Evaluation IV scores from all centers. Complete neglect of differences in disease severity measures across centers resulted in bias ranging from 10.2% to 173.6% across scenarios, and no commonly used methodology—such as two-stage modeling or score standardization—was able to effectively eliminate bias.
In scenarios where some of the included centers had (only) Acute Physiology and Chronic Health Evaluation II or Simplified Acute Physiology Score II available (and not Acute Physiology and Chronic Health Evaluation IV), either restriction of the analysis to Acute Physiology and Chronic Health Evaluation IV centers alone or multiple imputation of Acute Physiology and Chronic Health Evaluation IV scores resulted in the least amount of relative bias (0.0% and 5.1% for Acute Physiology and Chronic Health Evaluation II, respectively, and 0.0% and 4.6% for Simplified Acute Physiology Score II, respectively). In scenarios where some centers used Acute Physiology and Chronic Health Evaluation II, regression calibration also yielded relatively low bias (relative bias, 12.4%); this was not true if these same centers only had Simplified Acute Physiology Score II available (relative bias, 54.8%). Conclusions: When different disease severity measures are available across centers, the performance of various methods to control for confounding by disease severity may show important differences. When planning multicenter studies, researchers should make contingency plans to limit the use of or properly incorporate different disease measures across centers in the statistical analysis. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Dr. Brakenhoff and Dr. Plantinga contributed equally. Supplemental digital content is available for this article.
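The confounding-by-severity problem this simulation study targets can be illustrated with a minimal stdlib sketch: a severity score drives both treatment assignment and mortality, so a naive treated-versus-untreated comparison is biased even though the true treatment effect is null, while stratifying on the score removes most of the bias. All numbers are illustrative and unrelated to the registry data:

```python
import random

random.seed(42)

N = 50_000
severity = [random.random() for _ in range(N)]  # severity score in [0, 1)
# Sicker patients are treated more often; mortality depends on severity only,
# so the true treatment effect is exactly zero.
treated = [random.random() < 0.2 + 0.6 * s for s in severity]
died = [random.random() < 0.1 + 0.5 * s for s in severity]

def rate(xs):
    return sum(xs) / len(xs)

# Naive comparison: biased upward, because severity confounds both variables.
naive = (rate([d for d, t in zip(died, treated) if t])
         - rate([d for d, t in zip(died, treated) if not t]))

# Adjustment by stratification: compare within severity quartiles, then average.
diffs = []
for k in range(4):
    lo_s, hi_s = k / 4, (k + 1) / 4
    stratum = [(d, t) for d, t, s in zip(died, treated, severity)
               if lo_s <= s < hi_s]
    t1 = [d for d, t in stratum if t]
    t0 = [d for d, t in stratum if not t]
    diffs.append(rate(t1) - rate(t0))
adjusted = sum(diffs) / len(diffs)
```

With these parameters the naive mortality difference is roughly ten percentage points, while the severity-stratified estimate is close to the true value of zero; the study above asks what happens when the stratification variable itself differs from center to center.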
|Recommendation of New Medical Alarms Based on Audibility, Identifiability, and Detectability in a Randomized, Simulation-Based Study
Objectives: Accurate and timely identification of existing audible medical alarms is inadequate in clinical settings. New alarms that are easily heard, quickly identifiable, and discernible from one another are needed. Auditory icons (brief sounds that serve as metaphors for the events they represent) have been proposed as a replacement for the current international standard. The objective was to identify the best performing icons based on audibility and performance in a simulated clinical environment. Design: Three sets of icon alarms were designed using empirical methods. Subjects participated in a series of clinical simulation experiments that examined the audibility, identification accuracy, and response time of each of these icon alarms. A statistical model that combined the outcomes was used to rank the alarms in overall efficacy. We constructed the “best” and “worst” performing sets based on this ranking and prospectively validated these sets in a subsequent experiment with a new subject sample. Setting: Experiments were conducted in simulated ICU settings at the University of Miami. Subjects: Medical trainees were recruited from a convenience sample of nursing students and anesthesia residents at the institution. Interventions: In Experiment 1 (formative testing), subjects were exposed to one of the three sets of alarms; identical setting and instruments were used throughout. In Experiment 2 (summative testing), subjects were exposed to one of the two sets of alarms, assembled from the best and worst performing alarms from Experiment 1. Measurements and Main Results: For each alarm, we determined the minimum sound level to reach audibility threshold in the presence of background clinical noise, identification accuracy (percentage), and response time (seconds). We enrolled 123 medical trainees and professionals for participation (78 with < 6 yr of training).
We identified the best performing icon alarms for each category, which matched or exceeded the other candidate alarms in identification accuracy and response time. Conclusions: We propose a set of eight auditory icon alarms that were selected through formative testing and validated through summative testing for adoption by relevant regulatory bodies and medical device manufacturers. This work was performed at the University of Miami. Supplemental digital content is available for this article.
|A 360° Rotational Positioning Protocol of Organ Donors May Increase Lungs Available for Transplantation
Objectives: To evaluate the improvement in lung donation and immediate lung function after the implementation of a 360° rotational positioning protocol within an organ procurement organization in the Midwest. Design: Retrospective observational study. Setting: The Midwest Transplant Network from 2005 to 2017. Rotational positioning of donors began in 2008. Subjects: Potential deceased lung donors. Interventions: A 360° rotational protocol. Presence of immediate lung function in recipients, change in PaO2:FIO2 ratio during donor management, initial and final PaO2:FIO2 ratio, and proportion of lungs donated were measured. Outcomes were compared between rotated and nonrotated donors. Measurements and Main Results: A total of 693 donors were analyzed. The proportion of lung donations increased by 10%. The difference between initial PaO2:FIO2 ratio and final PaO2:FIO2 ratio was significantly different between rotated and nonrotated donors (36 ± 116 vs 104 ± 148; p < 0.001). Lungs transplanted from rotated donors had better immediate function than those from nonrotated donors (99.5% vs 68%; p < 0.001). Conclusions: There was a statistically significant increase in lung donations after implementing rotational positioning of deceased donors. Rotational positioning significantly increased the average difference in PaO2:FIO2 ratios. There was also superior lung function in the rotated group. The authors recommend that organ procurement organizations consider adopting a rotational positioning protocol for donors to increase the lungs available for transplantation. Supplemental digital content is available for this article.
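The PaO2:FIO2 ratio used as the endpoint above is simply arterial oxygen tension (mm Hg) divided by the fraction of inspired oxygen. A minimal illustration with hypothetical donor values (not taken from the study cohort):

```python
def pf_ratio(pao2_mm_hg: float, fio2_fraction: float) -> float:
    """PaO2:FIO2 ratio: arterial oxygen tension / inspired oxygen fraction."""
    return pao2_mm_hg / fio2_fraction

# Hypothetical donor: PaO2 of 300 mm Hg on 100% oxygen at the start of
# management, improving to 380 mm Hg on 100% oxygen after repositioning.
initial = pf_ratio(300, 1.0)
final = pf_ratio(380, 1.0)
improvement = final - initial  # an improvement on the order reported for rotated donors
```

A change of this magnitude, summed across a donor pool, is what drives the difference in ratios reported between rotated and nonrotated donors.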
|Clinician Perception of a Machine Learning–Based Early Warning System Designed to Predict Severe Sepsis and Septic Shock
Objective: To assess clinician perceptions of a machine learning–based early warning system to predict severe sepsis and septic shock (Early Warning System 2.0). Design: Prospective observational study. Setting: Tertiary teaching hospital in Philadelphia, PA. Patients: Non-ICU admissions November–December 2016. Interventions: During a 6-week study period conducted 5 months after Early Warning System 2.0 alert implementation, nurses and providers were surveyed twice about their perceptions of the alert’s helpfulness and impact on care, first within 6 hours of the alert, and again 48 hours after the alert. Measurements and Main Results: For the 362 alerts triggered, 180 nurses (50% response rate) and 107 providers (30% response rate) completed the first survey. Of these, 43 nurses (24% response rate) and 44 providers (41% response rate) completed the second survey. Few (24% nurses, 13% providers) identified new clinical findings after responding to the alert. Perceptions of the presence of sepsis at the time of alert were discrepant between nurses (13%) and providers (40%). The majority of clinicians reported no change in perception of the patient’s risk for sepsis (55% nurses, 62% providers). A third of nurses (30%) but few providers (9%) reported the alert changed management. Almost half of nurses (42%) but less than a fifth of providers (16%) found the alert helpful at 6 hours. Conclusions: In general, clinical perceptions of Early Warning System 2.0 were poor. Nurses and providers differed in their perceptions of sepsis and alert benefits. These findings highlight the challenges of achieving acceptance of predictive and machine learning–based sepsis alerts. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Dr. Ginestra, Dr. Schweickert, Ms. Meadows, Mr. Lynch, and Ms. Pavan helped with data collection; Dr. 
Ginestra helped with analysis and interpretation of the data, and drafting of the article; and all authors helped with conception and design, and critical revision of the article for important intellectual content. Supplemental digital content is available for this article.