391 - Team Resuscitation for Pediatrics (TRAP): Application and Validation of a Pediatric Resuscitation Quality Instrument in Non-Simulated Resuscitations
Sunday, April 24, 2022
3:30 PM – 6:00 PM US MT
Poster Number: 391 | Publication Number: 391.314
Shannon Flood, Children's Hospital Colorado, Denver, CO, United States; Laura Rochford, Children's Hospital Colorado, Westminster, CO, United States; Sarah Halstead, University of Colorado School of Medicine, Golden, CO, United States; Patrick Mahar, Children's Hospital Colorado, Aurora, CO, United States; Beth M. D'Amico, Baylor College of Medicine/Texas Children's Hospital, Houston, TX, United States; Michelle Alletag, -, Denver, CO, United States; Jan Leonard, University of Colorado Anschutz Medical Campus, Aurora, CO, United States; Tara Neubrand, Children's Hospital Colorado/University of Colorado, Denver, CO, United States
Fellow, Children's Hospital Colorado, Denver, Colorado, United States
Background: Resuscitation of pediatric cardiac and respiratory arrest is a high-risk, low-frequency event in the pediatric emergency department (PED). Resuscitation team performance assessment tools have been developed and validated for use in the simulation environment, but no tool currently exists to evaluate clinical performance in non-simulated pediatric resuscitations. The Team Resuscitation for Pediatrics (TRAP) tool is a team performance assessment tool, modified from a validated simulation resuscitation tool, that has demonstrated content validity evidence.
Objective: To determine the inter-rater reliability of the TRAP tool for assessment of clinical performance during video review of non-simulated resuscitations in the PED.
Design/Methods: This is a validation study assessing inter-rater reliability of the TRAP tool. Videos of medical resuscitations at a freestanding, tertiary PED were collected and analyzed over a 6-month period. Trauma resuscitations and resuscitations occurring in rooms without video capability were excluded. Four pediatric emergency medicine attending physicians reviewed the videos and assessed resuscitation team performance using the TRAP tool. Percent agreement and Fleiss' Kappa were calculated to evaluate inter-rater reliability of assessment of clinical performance in 3 subcategories: team communication (TC), cardiac arrest (CA), and respiratory arrest (RA). Percent agreement ranges were established a priori: >80% was considered good, 60–80% moderate, and <60% poor.
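For readers unfamiliar with the reliability statistics named above, the following is a minimal sketch, in Python, of one common way to compute mean pairwise percent agreement and Fleiss' Kappa for four raters scoring binary checklist items. It is illustrative only: the ratings shown are hypothetical, and the study's exact agreement definition may differ.

```python
from itertools import combinations

def percent_agreement(ratings):
    """Mean proportion of concordant rater pairs across all items."""
    pairs = [(a == b) for row in ratings for a, b in combinations(row, 2)]
    return sum(pairs) / len(pairs)

def fleiss_kappa(ratings, categories):
    """Fleiss' Kappa for N items, each scored by the same n raters."""
    N, n = len(ratings), len(ratings[0])
    # n_ij: number of raters assigning item i to category j
    counts = [[row.count(c) for c in categories] for row in ratings]
    # Observed agreement: mean proportion of agreeing rater pairs per item
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n)
           for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical scores from 4 raters on 5 binary checklist items
ratings = [
    ["done", "done", "done", "done"],
    ["done", "done", "done", "not done"],
    ["done", "done", "done", "done"],
    ["not done", "done", "done", "done"],
    ["done", "done", "done", "done"],
]
print(f"Percent agreement: {percent_agreement(ratings):.0%}")  # 80%
print(f"Fleiss' Kappa: {fleiss_kappa(ratings, ['done', 'not done']):.2f}")  # -0.11
```

Note that mean pairwise agreement equals the observed-agreement term of Fleiss' Kappa, so the two statistics differ only in Kappa's correction for chance agreement.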
Results: Of 51 resuscitations occurring during the study period, 24 met inclusion criteria. All videos were scored for TC, 9 were scored for CA, and 20 were scored for RA. All subcategories demonstrated overall moderate agreement; however, individual items showed a wide range of agreement. Kappa scores were low on both individual items and overall. Of four items on the TC tool, three met criteria for good agreement, indicated as green in Table 1. Of 34 items on the CA tool, 12 met good agreement (Table 2), and of 27 items on the RA tool, nine met good agreement (Table 3).

Conclusion(s): This study demonstrated that using a clinical tool to assess resuscitation team performance in non-simulated, video-recorded resuscitations is feasible; however, the TRAP tool did not demonstrate adequate inter-rater reliability. Kappa scores were likely low secondary to low variability in responses. More objective items trended toward better percent agreement. Performance was likely limited by the inherent subjectivity of some of the elements assessed, difficulty visualizing specific elements on video, and limitations in assessor training.

Table 1: Team Communication Subcategory Percent Agreement (low to high) and Fleiss' Kappa [image not reproduced]
*Red: poor agreement: <60%; yellow: moderate agreement: 60–80%; green: good agreement: >80%
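The "low variability" point in the Conclusion above, sometimes called the Kappa paradox, can be illustrated numerically. The hypothetical example below reuses the percent_agreement and fleiss_kappa helpers sketched after Design/Methods: when raters are nearly unanimous, observed agreement is high, but chance agreement is also high, so Kappa is depressed.

```python
# Hypothetical: 4 raters score "done" unanimously on 19 of 20 items
# and split 2-2 on the remaining item.
ratings = [["done"] * 4 for _ in range(19)]
ratings.append(["done", "done", "not done", "not done"])

print(f"Percent agreement: {percent_agreement(ratings):.0%}")  # 97%
print(f"Fleiss' Kappa: {fleiss_kappa(ratings, ['done', 'not done']):.2f}")  # 0.32
```

With one category dominating, the chance-agreement term is roughly 0.95 here, so even 97% observed agreement yields only a modest Kappa, consistent with the pattern the authors report.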