Emergency department quality and safety indicators in resource-limited settings: an environmental survey
International Journal of Emergency Medicine volume 8, Article number: 39 (2015)
As global emergency care grows, practical and effective performance measures are needed to ensure high quality care. Our objective was to systematically catalog and classify metrics that have been used to measure the quality of emergency care in resource-limited settings.
We searched MEDLINE, Embase, CINAHL, and the gray literature using standardized terms. The references of included articles were also reviewed. Two researchers screened titles and abstracts for relevance; full text was then reviewed by three researchers. A structured data extraction tool was used to identify and classify metrics into one of six Institute of Medicine (IOM) quality domains (safe, timely, efficient, effective, equitable, patient-centered) and one of three of Donabedian’s structure/process/outcome categories. A fourth expert reviewer blinded to the initial classifications re-classified all indicators, with a weighted kappa of 0.89.
A total of 1705 articles were screened, 95 received full text review, and 34 met inclusion criteria. One hundred eighty metrics (129 unique) were identified, predominantly process (57 %) and structure measures (27 %); 16 % of metrics were related to outcomes. Most metrics evaluated the effectiveness (52 %) and timeliness (28 %) of care, with few addressing the patient centeredness (11 %), safety (7 %), resource-efficiency (3 %), or equitability (1 %) of care.
The published quality metrics in emergency care in resource-limited settings primarily focus on the effectiveness and timeliness of care. As global emergency care is built and strengthened, outcome-based measures and those focused on the safety, efficiency, and equitability of care need to be developed and studied to improve quality of care and resource utilization.
Background
The increasing burden of trauma and non-communicable diseases in low- and middle-income countries (LMICs) has emphasized the need for effective emergency care to alleviate the morbidity and mortality associated with acute illness and injury [1–4]. This has led international organizations, including the World Bank, World Health Organization, and United Nations Children’s Fund, to place substantial emphasis on the development and strengthening of systems of emergency care in resource-limited settings [4–6].
As emergency care in LMICs expands, there is a growing need to measure and improve the quality and safety of this care. Well-developed quality assurance systems currently exist in high-income countries [7, 8], where the development and use of quality indicators has led to major improvements in the standard of emergency care provided. While systematic performance measurement is the foundation of quality health care, quality and safety indicators used in developed countries may not be appropriate in resource-limited settings [9, 11]. Indeed, little is known about the metrics being used to measure emergency care in LMICs, and to our knowledge, no study has cataloged which metrics are being used. This limits the ability of emergency departments (EDs) in low-resource settings to implement quality assurance programs.
The objective of this systematic review is to catalog and classify existing performance metrics that have been used to measure the quality of ED care in resource-limited settings.
Methods
A medical librarian searched MEDLINE, Embase, and CINAHL from the earliest available date to September 30, 2013. The following search terms were used: quality, quality assurance, quality indicators, utilization review, combined with any of the following: emergency, emergency medical services, emergency service, accident and emergency, emergency department, and any of the following: developing countries, third world, and resource limited (or low or poor). The search was restricted to English language articles.
The gray literature was searched through an internet-based search of the websites of relevant international and emergency medicine organizations such as the World Bank, International Federation for Emergency Medicine, International Medical Corps and United Nations, as well as a Google search using combinations of the following terms: quality, quality assurance, quality indicators (or measures), performance indicators (or measures), safety indicators (or measures), combined with any of the following: emergency, emergency medicine, emergency medical care, emergency services, emergency unit (or department), and any of the following: developing, resource-limited, and low- or middle-income countries.
Studies were eligible for inclusion if they were conducted in a low- or middle-income country, as defined by the World Bank classification system, and addressed quality markers, indicators, or metrics for care in an ED or emergency unit. Studies conducted in multiple countries were included if one or more of the countries was an LMIC. A metric was defined as a performance measure that assessed a predefined quality standard. If an article analyzed the quality of care in the hospital as a whole, including the ED, the article was included only if it separately reported metrics measured in the ED. Studies of prehospital care, emergency obstetrics, and secondary injury prevention were excluded. If a study included both prehospital and in-hospital metrics, it was included only if the in-hospital metrics were separately listed.
Articles were excluded if they were opinions or review articles that did not feature original data. Articles that described potential indicators, but did not implement them or measure them, were also excluded, as the focus of the present review was on indicators that have been previously utilized.
One author (ELA) initially reviewed the titles and abstracts of all articles identified by the search terms to exclude all clearly ineligible articles. The remaining titles and abstracts were re-reviewed by two authors (ELA and SAR), and a consensus was reached to create a list of potentially relevant articles. The full text articles were then reviewed by three authors (ELA, SAR, and RHM) to confirm eligibility. Given the limited literature on this topic, articles were not excluded based on quality of the study or publication.
Data extraction and analysis
Three authors (ELA, SAR, and RHM) reviewed the full text of all relevant articles using a standardized form to extract individual quality metrics and study details. If a study examined both ED and hospital care, only the ED metrics were included. However, if a study looked exclusively at care within the ED, but included metrics or outcomes that occurred after the ED stay, such as mortality, these metrics were included.
For ease of comparison, certain structural metrics were collapsed into predefined categories. For example, metrics examining availability of specific medications were combined into a single metric by predefined medication class.
Once extracted, each metric was categorized using a predefined matrix based on the IOM framework of healthcare quality (Table 1). Metrics were then further classified into the Donabedian framework of structure, process, and outcome, consisting of causally linked and measurable categories (Table 2).
Each quality metric was assigned to only one domain. A fourth author (JDS) then independently reviewed and classified the extracted quality metrics, with a weighted kappa of 0.89.
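Agreement between two raters assigning ordered categories, as in this re-classification step, is conventionally quantified with a weighted kappa. The sketch below is a minimal pure-Python illustration using linear weights; the rater labels are hypothetical, since the study's underlying classification data are not published.

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Cohen's kappa with linear weights over ordered categories."""
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(rater_a)
    # Observed agreement matrix
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(rater_a, rater_b):
        obs[idx[x]][idx[y]] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: 0 on the diagonal, 1 at maximal distance
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * row[i] * col[j] / n for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical labels for two raters classifying ten metrics
cats = ["structure", "process", "outcome"]
a = ["process", "process", "structure", "outcome", "process",
     "structure", "process", "outcome", "process", "structure"]
b = ["process", "process", "structure", "outcome", "process",
     "structure", "outcome", "outcome", "process", "structure"]
print(weighted_kappa(a, a, cats))  # perfect agreement -> 1.0
```

With the single disagreement in `b`, the same function yields 0.875, close to the 0.89 reported in this study; values above roughly 0.8 are conventionally read as near-perfect agreement.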
Articles applying the WHO/IATSIC guidelines
Several studies used the indicators in the World Health Organization (WHO) and International Association for Trauma Surgery and Intensive Care (IATSIC) Guidelines for Essential Trauma Care. These guidelines offer a toolkit of over 200 metrics for the internal assessment of trauma care at the hospital level, focused on human resources (staffing and training) and physical resources (infrastructure, equipment, and supplies). While the guidelines reference emergency care, they are intended specifically to assess the trauma capacity of a hospital as a whole, which was not the focus of this review. Because a number of studies have applied the WHO/IATSIC indicators, including them in the primary analysis would have disproportionately weighted these indicators in our results. Quality indicators found in this group of articles were therefore examined separately, and only the metrics reported in each study were extracted.
Results
The literature search identified 1705 titles (Fig. 1). Of these, 97 were eligible for full text review. Two articles could not be located after an exhaustive search by a trained medical librarian. Of the 95 articles reviewed in full, 30 met inclusion criteria. The references of included articles were also reviewed, yielding an additional 4 articles for inclusion. In total, 34 articles were included, 6 of which reported the implementation of the WHO/IATSIC guidelines [15–49]. The summary characteristics of the non-WHO/IATSIC included articles are listed in Table 3. Detailed descriptions of each included article are listed in Additional file 1.
Excluding the WHO/IATSIC articles that were analyzed separately, 180 quality metrics were extracted from the remaining 28 articles, including 129 unique indicators. The majority of all reported measures were not disease-specific (n = 126; 70 %) but focused on metrics that applied to patients with a variety of diseases. The 54 measures that were disease-specific focused on illnesses related to the following: respiratory (n = 23), systemic states (n = 9), hematologic (n = 7), circulatory and cardiovascular (n = 6), neurologic (n = 4), trauma (n = 2), endocrine, metabolic and nutritional disease (n = 1), fluid and electrolyte disorders (n = 1), and gastrointestinal diseases (n = 1).
Most metrics were process (n = 102; 57 %) and structure measures (n = 49; 27 %). Only 16 % (n = 29) were related to outcomes (Table 4). Regarding the IOM domains, most metrics evaluated the effectiveness of care (n = 94; 52 %). These were predominantly markers of effective processes, such as adherence to a full physical exam and appropriate test ordering, or of effective structures such as the availability of essential supplies. A small number dealt with the effectiveness of outcomes, such as mortality. Metrics assessing timeliness of care (n = 51; 28 %) dealt primarily with processes such as time to provider and outcomes such as length of stay. Few metrics addressed patient-centered care (n = 20; 11 %); those that did looked primarily at patient satisfaction. Seven percent (n = 14) of metrics addressed the safety of care and focused on complications of care and the appropriate use of medications. Resource-efficient (n = 5; 3 %) and equitable (n = 2; 1 %) measures were rare.
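As a quick sanity check, the Donabedian shares reported above can be recomputed from the raw counts (process 102, structure 49, outcome 29 of the 180 extracted metrics):

```python
from collections import Counter

# Counts reported in the review for the 180 extracted metrics
donabedian = Counter({"process": 102, "structure": 49, "outcome": 29})
total = sum(donabedian.values())

# Percentage share of each category, rounded to whole percent
shares = {cat: round(100 * n / total) for cat, n in donabedian.items()}
print(total)   # 180
print(shares)  # {'process': 57, 'structure': 27, 'outcome': 16}
```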
Among the articles that implemented the WHO/IATSIC surveys, 336 metrics were extracted. Many of these were repeated within articles; a total of 153 unique metrics were identified. The majority of the metrics were related to the effectiveness of the structure of care (n = 141; 92 %). The remaining metrics dealt with safe structure (n = 7; 5 %), safe process (n = 4; 2 %), and efficient process (n = 1; 1 %).
Discussion
The EM quality literature provides strong evidence that quality improvement programs can improve quality of care and patient outcomes [7, 8, 49]. Because improvement requires measurement [7, 8], applicable measures for emergency care in LMICs are essential. While there has been dramatic growth in the delivery of emergency care in LMICs over the last decade, little is known about the quality of that care or how to evaluate it.
Through a rigorous search strategy and structured data extraction, this systematic review collected and analyzed published ED quality metrics in LMICs. Our data show that only a limited number of metrics have been reported, the majority of which focus on structures or processes of care, rather than on patient outcomes. The limited metrics suggest a pressing need to develop and implement performance measures that reflect the spectrum of emergency care in LMICs.
Our study shows that, when a structured framework for quality metrics is applied to the more than 150 metrics currently used to measure the quality of emergency care in LMICs, these metrics do not achieve balance across domains. The majority are focused on process and structure, likely reflecting the greater availability of data in these domains. Process metrics, which make up over half of all metrics reported in our study, were predominantly centered on operational measures of the effectiveness and timeliness of ED processes. Although literature from high-income countries suggests that the most successful performance measures for quality improvement are outcome metrics related to ED time intervals (length of stay, arrival to assessment/admission) and patient centeredness (72-h ED returns, patients who left without being seen) [8, 50, 51], we found that only 16 % of metrics in LMICs were outcome based. Even fewer (11 %) were patient-centered. Future work is needed to analyze why patient-centered and outcome indicators have not yet been implemented and to develop contextually appropriate measures.
It is interesting to note that few resource-efficiency metrics were reported. EDs in LMICs increasingly face the pressures of high patient volumes, limited resources, and an ever-growing burden of disease. Given this landscape, it is essential that the care being delivered is resource-efficient and of high quality. Unlike in high-income settings, where individual providers typically do not face resource constraints, in a low-resource setting the use of an expensive medication or laboratory test on one patient may consume the resources needed to treat the next. The limited resources and significant unmet need for health care in LMICs make it essential that care is both efficient and equitable. The ability to provide more with less is inherently tied to the ability to create streamlined processes and efficiency in both operations and supply-chain management. The WHO has noted that health systems in low-income countries have the greatest potential for increasing efficiency with minimal investment [53, 54]. Our study suggests the need to identify metrics that can measure these efficiencies and contribute to their improvement.
Prior efforts have been made to identify feasible quality metrics in emergency care. In addition to the metrics that came out of the International Federation for Emergency Medicine’s Symposium for Quality and Safety, an expert consensus study conducted in South Africa identified 58 performance indicators deemed feasible to measure in low-resource settings (37 structure based, 20 process based, and 1 outcome based) [52, 55]. Our study shows that few of these are currently being reported: only 20 of the proposed measures (34 %) were identified in our review (11 structural and 4 process measures in the non-WHO/IATSIC studies, and an additional 5 structural measures in the WHO/IATSIC studies). This suggests that a large number of potentially feasible quality indicators are not being studied in LMICs. Future research should examine the reasons for this discrepancy and either modify these metrics or support their implementation as needed.
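The overlap described above amounts to a simple set intersection. The sketch below illustrates the calculation; the indicator names are hypothetical placeholders, since only the counts (20 of the 58 proposed indicators) are given in the source.

```python
# Hypothetical placeholder names; only the counts come from the review.
proposed = {f"indicator_{i:02d}" for i in range(58)}   # 58 consensus indicators
reported = {f"indicator_{i:02d}" for i in range(20)}   # 20 also found in our review
reported |= {f"other_metric_{i}" for i in range(160)}  # other reported metrics

overlap = proposed & reported
share = round(100 * len(overlap) / len(proposed))
print(len(overlap), share)  # 20 34
```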
While the metrics identified in this review have all been successfully measured in LMICs, the ease of that measurement was not compared or documented. The well-documented barriers to measuring these metrics in developed countries are likely more significant in LMICs. A lack of senior management with strong commitment to, or training in, quality improvement methods, limited resources to collect and analyze data, and a lack of clarity around which metrics are most important have all been noted to limit institutional abilities to effectively measure the quality of care. These challenges are magnified in LMICs, particularly within EDs, as emergency medicine is in its infancy in many countries.
Limitations
The study was limited to English language articles and may have missed metrics reported in the non-English language literature. Although our search terms were broad, there may be articles using different terms that we did not capture; similarly, our search will not capture unpublished metrics currently in use. There are no standardized definitions for the classification of metrics into the IOM and Donabedian domains, resulting in a degree of subjectivity in their classification. To address this, three authors reached consensus on each metric, and a fourth blinded reviewer re-classified the metrics with a robust weighted kappa (0.89). Finally, the IOM and Donabedian domains were developed for high-resource settings, and their applicability to lower-resource settings is unknown.
Conclusions
As emergency medicine continues to grow as a field in LMICs, there is an increasing need for effective metrics to measure the quality of this care. This systematic review of performance measures suggests that although a number of published quality metrics are currently used to assess emergency care in LMICs, these do not adequately assess all aspects of emergency care. This study has demonstrated that broad metrics have been applied in LMICs; however, it has also identified the need for more comprehensive measures that are locally applicable. As metrics for LMICs are developed, they must be implemented and reported on to develop global standards of quality measurement in emergency care.
References
Hofman K, Primack A, Keusch G, Hrynkow S. Addressing the growing burden of trauma and injury in low- and middle-income countries. Am J Public Health. 2005;95(1):13–7.
Murray CJ, Lopez AD. The global burden of disease: a comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020. Cambridge, MA: Harvard University Press; 1996.
Murray CJL, Lopez AD. Global health statistics: a compendium of incidence prevalence and mortality estimates for over 200 conditions. Cambridge, MA: Harvard University Press; 1996.
Anderson P, Petrino R, Halpern P, Tintinalli J. The globalization of emergency medicine and its importance for public health. Bull World Health Organ. 2006;84(10):835–39.
World Bank. Minimum package of health services: criteria, method and data. Washington (DC): World Bank; 1995.
Gove S. Integrated management of childhood illness by outpatient health workers: technical basis and overview. Bull World Health Organ. 1997;75:7–24.
Graff L, Stevens C, Spaite D, Foody J. Measuring and improving quality in emergency medicine. Acad Emerg Med. 2002;9(11):1091–107.
Sørup CM, Jacobsen P, Folberg JL. Evaluation of emergency department performance—a systematic review on recommended performance and quality-in-care measures. Scand J Trauma Resusc Emerg Med. 2013;21(62):1–14.
Lindsay P, Schull M, Bronskill S, Anderson G. The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med. 2002;9:1131–39.
Scott KW, Jha AK. Putting quality on the global health agenda. N Engl J Med. 2014;371(1):3–5.
Beattie E, Mackway-Jones K. A Delphi study to identify performance indicators for emergency medicine. Emerg Med J. 2004;21:47–50.
The World Bank. World Bank list of economies. 2013. http://data.worldbank.org/about/country-classifications/country-and-lending-groups. Accessed January 8, 2014.
Institute of Medicine. Crossing the quality chasm: a new health system for the twenty-first century. Washington, DC: National Academies Press; 2001.
Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44:166–203.
Achan J, Tibenderana J, Kyabayinze D, Mawejje H, Mugizi R, Mpeka B, et al. Case management of severe malaria—a forgotten practice: experiences from health facilities in Uganda. PLos One. 2011;6(3):e17053.
Adamu A, Maigatari M, Lawal K, Iliyasu M. Waiting time for emergency abdominal surgery in Zaria, Nigeria. Afr Health Sci. 2010;10(1):46–53.
Akoglu S, Topacoglu H, Karcioglu O, Cimrin AH. Do the residents in the emergency department appropriately manage patients with acute asthma attack? A study of self-criticism. Adv Ther. 2004;21(6):348–56.
Borlina LP, Silva EL C e, Ghislandi G, Timi JRR. Emergency-room doctors’ knowledge about oral anticoagulants and its management. J Vasc Bras. 2010;9(2):24–8.
Chadha R, Singh A, Kalra J. Lean and queuing integration for the transformation of health care processes. Clin Gov. 2012;17(3):191–99.
Cinar O, Turkan H, Duzok E, Sener S, Uzun A, Durusu M, et al. Do we know how to use oxygen properly in the emergency department. J Clin Anal Med. 2010;1(3):1–3.
Goel A, Kumar S, Bagga M. Epidemiological and Trauma Injury and Severity Score (TRISS) analysis of trauma patients at a tertiary care centre in India. Natl Med J India. 2004;17(4):186–89.
Hashami Z, Haider A, Zafar SN, Kisat M, Moosa A, Siddiqui F, et al. Hospital-based trauma quality improvement initiatives: first step toward improving trauma outcomes in the developing world. J Trauma Acute Care Surg. 2013;75(1):60–8.
Idro R, Aloyo J. Manifestations, quality of emergency care and outcome of severe malaria in Mulago Hospital, Uganda. Afr Health Sci. 2004;4(1):50–7.
Jalili M, Shalileh K, Mojtahed A, Mojtahed M, Moradi-Lakeh M. Identifying causes of laboratory turnaround time delay in the emergency department. Arch Iran Med. 2012;15(12):759–63.
Kirenga JB, Okot-Nwang M. The proportion of asthma and patterns of asthma medications prescriptions among adult patients in the chest, accident and emergency units of a tertiary health care facility in Uganda. Afr Health Sci. 2012;12(1):48–53.
Loch A, Twin T, Zakaria IM, Abidin I, Ahmad WA, Hautmann O. Failure to improve door-to-needle time by switching to emergency physician-initiated thrombolysis for ST elevation myocardial infarction. Postgrad Med J. 2013;89:335–39.
Salleh FM, Fathil SM, Ahmad Z, Che’Man Z. Early goal-directed therapy in the management of severe sepsis/septic shock in an academic emergency department in Malaysia. Crit Care Shock. 2010;13:91–7.
Nayeri ND, Aghajani M. Patients’ privacy and satisfaction in the emergency department: a descriptive analytical study. Nurs Ethics. 2012;17(2):167–77.
Nguyen HB, Kuan WS, Batech M, Shrikhande P, Mahadevan M, Li CH, et al. Outcome effectiveness of the severe sepsis resuscitation bundle with addition of lactate clearance as a bundle item: a multi-national evaluation. Crit Care. 2011;15(5):R229.
Nolan T, Angos P, Cunha AJ, Muhe L, Qazi S, Simoes EA, et al. Quality of hospital care for seriously ill children in less-developed countries. Lancet. 2001;357:106–10.
Oliveira AC, Marziale MH, Paiva MH, Lopes AC. Knowledge and attitude regarding standard precautions in a Brazilian public emergency service: a cross-sectional study. Rev Esc Enferm USP. 2009;43(2):313–19.
Onwukike M, Olaloye OA, Oni OO. Teaching hospital perspective of the quality of trauma care in Lagos, Nigeria. World J Surg. 2001;25(1):112–15.
Parekh K, Russ S, Amsalem DA, Rambaran N, Wright SW. Who leaves the emergency department without being seen? A public hospital experience in Georgetown, Guyana. BMC Emerg Med. 2013;13:10.
Payal P, Sonu G, Anil GK, Prachi V. Management of polytrauma patients in emergency department: an experience of a tertiary care health institution of northern India. World J Emerg Med. 2013;4(1):15–9.
Razzak JA, Hyder AA, Akhtar T, Khan M, Khan UR. Assessing emergency medical care in low income countries: a pilot study from Pakistan. BMC Emerg Med. 2008;8:8.
Rauf W, Blitz JJ, Geyser MM, Rauf A. Quality improvement cycles that reduced waiting times at Tshwane District Hospital Emergency Department. SA Fam Pract. 2008;50(6):43–43e.
Rehmani R. Emergency section and overcrowding in a University Hospital of Karachi, Pakistan. J Pak Med Assoc. 2004;54(4):233–36.
Shahid M, Hameed K, Iqbal R, Afzal O, Nakeer R, Razzak J. Accuracy of diagnosis and relationship with quality of emergency medicine training program. J Coll Physicians Surg Pak. 2012;22(2):342–43.
Sultana A, Riaz R, Hameed S, Syed Arshad S, Iffat T, Arshia B, et al. Patient satisfaction in emergency department of District Head Quarters Hospital, Rawalpindi. Rawal Med J. 2010;35(1):85–90.
Tamburini G, Di Mario S, Maggi RS, Vilarim JN, Gove S. Evaluation of guidelines for emergency triage assessment and treatment in developing countries. Arch Dis Child. 1999;81(6):478–82.
Waxman MH, Kimaiyo S, Ongaro N, Wools-Kaloustian KK, Fanigan TP, Carter EJ. Initial outcomes of an emergency department rapid HIV testing program in western Kenya. AIDS Patient Care STDS. 2007;21(12):981–86.
Ye L, Zhou G, He X, Shen W, Gan J, Zhang M. Prolonged length of stay in the emergency department in high-acuity patients at a Chinese tertiary hospital. Emerg Med Australasia. 2012;24:634–40.
Aboutanos MB, Mora F, Rodas E, Salamea J, Parra MO, Salgado E, et al. Ratification of IATSIC/WHO’s guidelines for essential trauma care assessment in the South American region. World J Surg. 2010;34(11):2735–44.
Arreola-Risa C, Mock C, Vega Rivera F, Romero Hicks E, Guzmán Solana F, Porras Ramírez G, et al. Evaluating trauma care capabilities in Mexico with the World Health Organization’s Guidelines for Essential Trauma Care publication. Rev Panam Salud Publica. 2006;19(2):94–103.
Hanche-Olsen TP, Alemu L, Viste A, Wisborg T, Hansen KS. Trauma care in Africa: a status report from Botswana, guided by the World Health Organization’s “Guidelines for Essential Trauma Care.”. World J Surg. 2012;36(10):2371–83.
Mock C, Nguyen S, Quansah R, Arreola-Risa C, Viradia R, Joshipura M. Evaluation of trauma care capabilities in four countries using the WHO-IATSIC guidelines for essential trauma care. World J Surg. 2006;30:946–56.
Son NT, Mock C. Improvements in trauma care capabilities in Vietnam through use of the WHO-IATSIC guidelines for essential trauma care. Int J Inj Contr Saf Promot. 2006;13(2):125–27.
Tachfouti N, Bhatti JA, Nejjari C, Kanjaa N, Salmi LR. Emergency trauma care for severe injuries in a Moroccan Region: conformance to French and World Health Organization Standards. J Healthc Qual. 2011;33(1):30–8.
Molyneux E, Ahmad S, Robertson A. Improved triage and emergency care for children reduces inpatient mortality in a resource-constrained setting. Bull World Health Organ. 2006;84(4):314–19.
Alessandrini EA, Knapp J. Measuring quality in pediatric emergency care. Clin Ped Emerg Med. 2011;12(2):102–12.
Beniuk K, Boyle AA, Clarkson PJ. Emergency department crowding: prioritising quantified crowding measures using a Delphi study. Emerg Med J. 2012;29(11):868–71.
Maritz D, Hodkinson P, Wallis L. Identification of performance indicators for emergency centres in South Africa: results of a Delphi study. Int J Emerg Med. 2010;3(4):341–49.
Evans DB, Tandon A, Murray CJ, Lauer JA. Comparative efficiency of national health systems: cross national econometric analysis. BMJ. 2001;323(7308):307–10.
Kruk ME, Freedman LP. Assessing health system performance in developing countries: a review of the literature. Health Policy. 2008;85:263–76.
Lecky F, Benger J, Mason S, Cameron P, Walsh C. The International Federation for Emergency Medicine framework for quality and safety in the emergency department. Emerg Med J. 2014;31:926–29.
We thank Carole Foxman, medical librarian, for her help in executing the comprehensive literature search.
The authors declare that they have no competing interests.
ELA and SAR led the literature search, reviewing titles and abstracts. MG conducted an independent review of the gray literature. ELA, SAR, and RHM reviewed the full text of all articles to confirm eligibility and then extracted individual quality metrics and study details using a standardized form. JDS then independently re-classified all metrics. JDS, ELA, and SAR participated in the design of the study and performed the statistical analysis. JDS, ELA, SAR, and RHM conceived of the study and participated in its design and coordination. JDS, ELA, SAR, RHM, and MG helped to draft the manuscript. All authors read, edited, and approved the final manuscript.
Aaronson, E.L., Marsh, R.H., Guha, M. et al. Emergency department quality and safety indicators in resource-limited settings: an environmental survey. Int J Emerg Med 8, 39 (2015). https://doi.org/10.1186/s12245-015-0088-x