Original Research Article
Open Access
Identification of performance indicators for emergency centres in South Africa: results of a Delphi study
International Journal of Emergency Medicine, volume 3, pages 341–349 (2010)
Emergency medicine is a rapidly developing field in South Africa (SA) and other developing nations. There is a need to develop performance indicators that are relevant and easy to measure. This will allow identification of areas for improvement, create standards of care and allow inter-institutional comparisons to be made. There is evidence from the international literature that performance measures do lead to performance improvements.
To develop a broad-based consensus document detailing quality measures for use in SA Emergency Centres (ECs).
A three-round modified Delphi study was conducted over e-mail. A panel of experts representing the emergency medicine field in SA was formed. Participants were asked to propose potential performance indicators for use in SA, under subheaders covering the various disciplines seen in emergency patients. These statements were collated and sent out to the panel for scoring on a 9-point Likert scale. Statements that did not reach a predefined consensus were sent back to the panellists for reconsideration.
Consensus was reached on 99 of the 153 (65%) performance indicators proposed. These were further refined, and a synopsis of the statements is presented, classified according to whether each was considered feasible under current circumstances.
A synopsis of the useful and feasible performance indicators is presented. The majority are structural and performance-based indicators appropriate to the development of the field in SA. Further refinement and research is needed to implement these indicators.
Within SA, the speciality of emergency medicine is facing pressures from increasing patient numbers, the burden of diseases (such as HIV, AIDS and TB), the burden of trauma and the inevitable resource constraints. The legacy of Apartheid has left a health system that is unable to provide adequate, reliable universal health coverage. The government is attempting to address this through implementation of the National Health Insurance scheme (NHI), which aims to improve access to high-quality health care for the whole population. In order for this to succeed, it is incumbent upon health planners to define quality of care, and to develop ways to assess and measure the quality of the care that we provide.
Emergency Centres (ECs) are constantly striving to provide a higher level of patient care in a cost-effective manner. The challenge is for ECs to be flexible and able to adapt to changing conditions in the socio-political landscape while providing a constant, safe and reliable service.
Health care has been classified into Structure, Process and Outcome: each can be measured or quantified. Traditionally, there has been an emphasis on measuring outcomes. Outcome measures are those events occurring after the patient leaves the EC and typically include mortality, morbidity and quality of life. They are useful in informing patients of the quality of care that they can expect to receive from the local hospital, and also allow purchasers of health care to see that they are getting value for money. Most research on performance indicators (PIs) within the EC has focussed on the process-based measures of quality care (e.g., waiting times, overcrowding trends). Outcome indicators (e.g., mortality) are less common in emergency medicine and difficult to measure due to the limited time that the patient is in the EC [4, 5].
While much has been written about the development of systems of emergency care in developed countries, little is known about similar processes in developing world settings. Various studies have looked at PIs in the EC for the developed world setting in the UK and Canada [4, 5], but these may not apply in the developing world setting. There is a pressing need in SA to develop quality and performance indicators that are relevant and easy to measure. This will allow health care providers to identify areas where improvement is needed, create standards of care and allow inter-institutional comparisons to be made. There is compelling evidence from the international literature that performance measures do lead to performance improvements.
There are few (if any) data on the development of performance indicators for ECs within the SA public health service. Rigorous quality assurance and clinical governance are being introduced into the private sector emergency care system, but this is not the case within most public health institutions. Where such systems are in place, they are not universally applied, which makes direct comparisons between facilities impossible.
Furthermore, the methods for the development of quality indicators within the EC are not well defined. Most of the research into the development of performance indicators within the EC has made use of the Delphi technique [4, 5, 7] or other consensus-based methods. The Delphi method is a structured process for collecting knowledge from a group of experts by means of focussed questionnaires interspersed with controlled opinion feedback. Proponents of the Delphi method recognise human judgement as a legitimate and useful input, and therefore believe that the use of experts, carefully selected, can lead to reliable and valid results.
As South Africa enters into a new age of health care, there will be both an opportunity and pressure for health-care providers to be accountable for the quality of care that they provide. As a result, it will be necessary to re-define and measure the quality of care that we provide in ECs, to ultimately improve the quality of emergency care.
The aim of this study is to develop a broad-based consensus document detailing quality measures for use in SA Emergency Centres.
A modified three-round Delphi study was undertaken. A panel of experts in the field of Emergency Medicine in SA (specialist emergency physicians, trauma surgeons and senior nurses) was invited via e-mail to participate in the study. These SA experts were chosen for their experience in emergency care, and were believed to represent public and private sectors, all geographic regions, and both academic and non-academic institutions; they also represented district, regional and central hospitals.
After they had agreed to participate, an e-mail with information on the Delphi process and instructions on how to proceed was sent to the panel members.
Panel members were contacted via e-mail only and given three reminders to respond at each round.
Delphi process and selection of indicator statements
In round 1, members of the Delphi group were invited to produce a list of statements that they considered important with regard to performance in the EC under the subheaders given in Table 1. All statements were collated and organised into a set of initial indicators; duplicates were omitted and statements that were not applicable were removed.
This document was then sent out as round 2 via e-mail; the Delphi group was asked to rate their agreement with these statements on a 9-point Likert scale (1 = the statement is very poor as a quality indicator; 9 = the statement is very good as a quality indicator). Positive consensus was defined as 80% or more of replies scoring 7 and above; negative consensus was defined as 80% or more of replies scoring 3 and below. Beattie et al. in their study defined positive consensus as 80% or more of replies scoring 6 and above, and negative consensus as 80% or more of replies scoring 4 and below. We decided to use tighter clusters to ensure that statements reaching consensus would be strongly agreed upon.
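The consensus rule above is simple enough to express in a few lines of code. The following is a minimal sketch of that rule; the panel scores shown are hypothetical, not data from the study:

```python
def classify_consensus(scores, threshold=0.80):
    """Classify one statement's panel scores (9-point Likert, 1-9).

    Positive consensus: >= 80% of replies score 7 or above.
    Negative consensus: >= 80% of replies score 3 or below.
    Otherwise: no consensus (statement returns for another round).
    """
    n = len(scores)
    if n == 0:
        return "no replies"
    high = sum(1 for s in scores if s >= 7) / n  # fraction scoring 7-9
    low = sum(1 for s in scores if s <= 3) / n   # fraction scoring 1-3
    if high >= threshold:
        return "positive consensus"
    if low >= threshold:
        return "negative consensus"
    return "no consensus"

# Hypothetical replies for one statement: 9 of 10 panellists score >= 7
print(classify_consensus([7, 8, 9, 7, 8, 7, 9, 8, 7, 6]))  # positive consensus
```

A statement classified as "no consensus" would, in this study's design, be returned to the panel in round 3 alongside a summary of the round 2 scores.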
In round 3, statements from round 2 that had not reached consensus were returned for reconsideration. Scores from round 2 were summarised, allowing panel members to change their responses in light of the group opinion, with the aim of achieving consensus. (Appendix 1 shows the format of the Likert scales used.)
At the end of the process, a list of statements that had reached consensus (as either good or bad indicators of quality of care in the EC) was collated (Supplementary data).
Data from each questionnaire were stored on a password-protected work computer and kept anonymous; data were entered into a Microsoft Excel (Microsoft, Redmond, WA) spreadsheet and tabulated.
Descriptive statistics were calculated for the data, including means and percentages.
Ethical approval for this study was obtained from the University of Cape Town. Panel members consented to participate. Replies from each member were kept anonymous.
Thirty eligible participants were identified. All 30 agreed to participate, and 19 (63%) responded to round 1.
Round 2 was sent out to all 30 originally invited individuals plus 13 newly qualified specialists in emergency medicine; in total, 24 panel members responded.
Round 3 was only sent out to those who had replied to round 2; thus, the same 24-member panel took part in round 3. Of these, 21 (88%) responded.
At the end of round 1, a total of 559 statements were returned under the given subheadings. These statements were refined into a list of 153 statements. These statements were sent out as round 2.
At the end of round 2, 30 (20%) of the 153 statements had achieved positive consensus and none had achieved negative consensus. Thus, a total of 123 statements were sent out again in round 3 to the 24 panel members.
A further 69 of the 123 statements achieved positive consensus, and again none achieved negative consensus. Thus, at the end of round 3 there were 99 positive consensus statements [99 out of 153 (65%)] and no negative consensus statements. Fifty-four statements did not reach consensus (Supplementary Material). Some statements were still felt to be duplications, so further refinement was done; a final list of 77 synopsis statements was produced. These were categorised as follows (Tables 2, 3, 4 and 5):
As process-, structure- or outcome-based statements
As both useful and feasible to measure, or useful but not currently feasible to measure
This study has produced a set of PIs for evaluating and improving emergency centre quality of care in current SA EM systems, most of which are feasible and easily assessed. These indicators may need to be refined for local circumstances, although standardisation of at least some of them will allow direct comparison, audit and future studies. The list is not exhaustive, but provides a useful starting point for pilot studies and further research.
Historically, physicians have not prioritised quality measurement and improvement, but the last few decades have seen the quality improvement movement shift from an external regulatory requirement into an internally driven operation at the core of ECs in the developing world. This transition to a quality-driven revolution remains one of the greatest challenges facing emergency medicine in both the developed and developing world.
In their article on quality improvement in EM, Graff et al. concluded that the definition of medical quality should include the factors that describe medical care important to all stakeholders within the EC, such as doctors, nurses and patients. The most frequently used framework is that of the Institute of Medicine, which highlights the aims of any quality improvement intervention: safety, effectiveness, patient-centredness, timeliness, efficiency and equity. The quality of public services in developing countries has been neglected, with little emphasis placed on quality improvement.
The Delphi technique has been successfully used elsewhere to develop PIs [4, 5]. This multifaceted and heterogeneous Delphi panel, with all members having considerable experience in the SA EM setting, has provided indicators with good generalisability and validity. Consensus methodology is a means of obtaining expert opinion and turning this into a reliable measure. There are no universally accepted or evidence-based criteria to define consensus, and 80% positive (or negative) response was chosen as a reasonable threshold given the nature of the statements in this study [14, 15].
PIs that assess structural components of ECs are more applicable to the current situation in SA, and the study reflects this. These should provide a good baseline and could be developed alongside national guidelines as to what is applicable for what level of EC.
The process PIs are useful and practical. They comprise time-based measures of patient flow and of the performance of vital clinical tasks, together with requirements for documentation providing evidence that clinical processes and protocols have been followed.
Outcome measures are difficult in the emergency environment, where we seldom have information on outcomes beyond the EC. A single measure of missed injuries is proposed as a feasible PI but, as in other studies, it may not be a meaningful global measure of EM outcomes. Patient satisfaction is perhaps a better measure of outcome; it is largely weighted on timeliness and appropriateness of treatment, which should be gauged by the process PIs.
Werner and Asch express concerns that although performance measures do improve performance, many are designed to improve compliance with guidelines, which does not necessarily translate into clinical benefits. This needs to be borne in mind, especially for the process-based PIs. Adherence to PIs should not detract from the priorities of clinical care, which are not necessarily reflected by PIs; for example, there is no prioritisation among the PIs, and clearly not all address life- or even limb-threatening issues. Sheldon notes the importance of having good evidence to back indicators, as well as of integrating PIs with local and national policies on quality initiatives. He also emphasises the importance of considering how the results of PIs will be analysed and what actions will be taken to increase performance. Kruk et al. have emphasised that performance indicators need to be relevant, reliable, feasible and evidence-based before they can be implemented locally. Thus, developed and developing countries may use very different indicators based on local conditions and policies.
Graff et al. have identified a number of barriers to the measurement and implementation of quality improvement programmes. Most important is the lack of reliable, accurate data acquisition and analysis in ECs. This is especially challenging in resource-constrained developing world hospitals. For effective and accurate measurement, data need to be captured in a digital format, yet most ECs in developing world settings have limited, if any, electronic records of EC patients, and data acquisition depends on time-consuming retrieval of paper medical records. Most ECs run a paper-based log book of EC admissions and discharges, which limits the usefulness and quality of the data.

Secondly, the lack of senior administrative and clinical commitment to quality improvement within the EC is a major challenge. Traditionally, the EC has never been a priority within the hierarchical structure of the health care institution, and quality improvement has not formed part of its core aims. Furthermore, a lack of understanding of quality measurement and improvement among senior staff and colleagues does not foster a team approach to prioritising the goal of improving patient care within the EC.

Finally, the burden of diseases such as HIV/AIDS, malaria and trauma, together with the lack of qualified manpower within resource-poor systems, has placed a tremendous strain on already overstretched health care systems. Many critics argue that scarce resources should be directed to solving these problems rather than highlighting further problems through measurement interventions. For performance monitoring to succeed, sound leadership from emergency physicians must be fostered to create a multidisciplinary team approach to improving patient care.
PIs need to be clearly defined, and tested for validity, reliability and responsiveness, before they can be put into common practice. Further refinement and research are needed to guide this process.
Finally, the Centre for Health Economics at the University of York has highlighted some of the main types of unintended consequences of performance indicators that may be detrimental to patient care; these need to be considered when choosing indicators and analysing the results. Firstly, indicators may promote tunnel vision, where managers concentrate on a set of PIs while ignoring other important unmeasured aspects of health care. Secondly, suboptimisation involves pursuing narrow local goals at the expense of the overall objectives of the health system, while myopia is a focus on short-term goals and targets alone. Probably the most detrimental is misrepresentation, the deliberate manipulation of data to satisfy target requirements. Lastly, gaming is the altering of behaviour to obtain a strategic advantage.
Creating a list of proposed indicators is one thing, but rolling them out to the ECs is the most difficult task. The PIs need to be further refined to ensure that all emergency physicians share the same understanding of their definitions, so that an acceptable level of compliance can be achieved. Further research is needed on how to approach and solve this issue.

For example, the process of EC triage has always been a controversial issue in South Africa, and the need to prioritise the care of patients within South African ECs, in response to long waiting times and overcrowding, became obvious [20–23]. A staffed triage area was identified as an essential process indicator of quality in this study. The Cape Triage Group was convened in response to the variable level of triage practised within South African ECs, with the goal of developing and validating a new triage tool for use within South Africa. Using this platform, a multidisciplinary panel of experts in emergency care developed the Cape Triage Score, which was rolled out across the Western Cape in January 2006 and will hopefully extend to the rest of South Africa in the near future. Extensive campaigns and training of health care providers in the use of the triage system have taken place under the auspices of the Cape Triage Group, the Division of Emergency Medicine of the Universities of Cape Town and Stellenbosch (UCT/US), and the Emergency Medicine Society of South Africa (EMSSA). This campaign has shown positive results in terms of waiting times and mortality in many of the units within the Western Cape. It illustrates how, through an umbrella body such as EMSSA and the Division of Emergency Medicine UCT/US, the PIs identified in this study can serve as a starting point for further debate and discussion.
In this way we can create benchmarking standards of good quality of care within our ECs and ensure that all health-care workers in the ECs have a common understanding of these quality indicators. This will ensure compliance with the performance indicators and improve the quality of care delivered. However, this is easier said than done, and we are still a long way off from achieving this goal. A concerted effort will be needed to get all those involved in emergency care under one roof to clarify and further refine these indicators. Governmental legislation and accreditation standards set out by the Department of Health and the Health Professions Council of South Africa will be needed to drive and enforce the process.
Emergency medicine is a rapidly developing speciality within the developing world, but its systems and processes are still largely immature and under development, and clear guidelines are needed to guide the speciality's growth. Recent research here in South Africa has identified key consensus areas for Emergency Medicine (EM) development in the developing world with respect to scope of practice, staffing needs, training and research. The next step is to translate these principles into clear and practical guidelines through focus group discussions, driving policy change, protocols, training and further research into EM development.
The pool from which the Delphi panel was drawn is small, reflecting the numbers in this recently formed speciality in SA. The panel members invited are all either specialists or widely experienced in the SA setting, but the selection was open to the subjectivity of the author, as well as to the availability of e-mail to the panellists during the study period. There is no universally accepted response rate; the response rate in this study was poor, likely because the panel members belong to a small pool of time-pressured individuals, and perhaps because of some miscomprehension of the meaning and importance of the study. The use of e-mail may have contributed to the poor and fluctuating response rates.
Consensus methodology has its shortcomings, the most cited of which are that participants are not able to discuss issues and that the process may encourage participants to change their views to match the majority opinion. The panel in this study, with its common background training, would mitigate this to some extent. It is important to note that Delphi methodology does not necessarily identify agreement: there is a difference between agreement and consensus, which means that these consensus statements are not a set of PIs ready for implementation; they are rather guidelines (which also, by their nature, identify areas for further debate and research).
A consensus-designed set of EC PIs is presented for use in the SA setting. These represent the first attempt at a locally designed and appropriate set of quality-of-care indicators in the emergency medicine arena. The indicators presented are biased towards structure-based indicators, which is appropriate for the currently developing field in SA. Further research and tailoring of these statements may be necessary at a local level, with standardisation to allow comparison and audit of facilities. Despite the limitations mentioned, the proposed framework of indicators could be used to guide further research and allow for comparison across different health care systems.
Further research is currently underway to clearly define these performance indicators so that all ECs have the same working understanding of them. Variations in the level of understanding of, and compliance with, these performance indicators will affect the quality of care delivered. However, within the South African public health sector there are currently no uniformly agreed-upon quality markers or accreditation standards for our ECs. To address this pressing issue, the Emergency Medicine Society of South Africa (EMSSA) and the National Department of Health (NDoH) are developing an accreditation process and NDoH regulations, which will lead to internal and external benchmarking of quality of care within the ECs.
The EMSSA has a pivotal role to ensure that the standard of care delivered within our ECs is improved to safeguard the health and well-being of the most vulnerable in our society.
Ncayiyana DJ (2008) National health insurance on the horizon for South Africa. SAMJ 98:4
Tregunno D, Baker GR, Barnsley J et al (2004) Competing values of emergency department performance: balancing multiple stakeholder perspectives. Health Serv Res 39(4):771–792
Donabedian A (1966) Evaluating the quality of medical care. Milbank Mem Fund Q 44:166–200
Lindsay P, Schull M, Bronskill S et al (2002) The development of indicators to measure the quality of clinical care in emergency departments following a modified-Delphi approach. Acad Emerg Med 9:1131–1139
Beattie E, Mackway-Jones K (2004) A Delphi study to identify performance indicators for emergency medicine. Emerg Med J 21:47–50
Werner RM, Asch DA (2007) Clinical concerns about clinical performance measurements. Ann Fam Med 5:159–163
Ospina MB, Bond K, Schull M et al (2007) Key indicators of overcrowding in Canadian emergency departments: a Delphi study. Can J Emerg Med 9:339–346
Hung G, Chalut D (2008) A consensus-established set of important indicators of pediatric emergency department performance. Ped Emerg Care 24:9–15
Thangaratinam S, Redman CWE (2005) The Delphi technique. Obstet Gynaecol 7:120–125
Likert R (1932) A technique for the measurement of attitudes. Arch Psychol 22:55
Graff L, Stevens C, Spaite D et al (2002) Measuring and improving quality in emergency medicine. Acad Emerg Med 9(11)
Institute of Medicine (2001) Crossing the quality chasm: a new health system for the 21st century. National Academy Press, Washington, DC
Initiative for Sub-District Support, Technical Report No. 3: What really improves the quality of primary health care? A review of local and international experience. Health Systems Trust
Williams PL, Webb C (1994) The Delphi technique: a methodological discussion. J Adv Nurs 19:180–186
Hasson F, Keeney S, McKenna H (2000) Research guidelines for the Delphi survey technique. J Adv Nurs 32:1008–1015
Sheldon T (1998) Promoting health care quality: what role performance indicators? Qual Health Care 7(Suppl):S45–S50
Kruk ME, Freedman LP (2008) Assessing health system performance in developing countries: a review of the literature. Health Policy 85:263–276
Goddard M, Mannion R, Smith P. The NHS performance framework: taking account of economic behaviour. Centre for Health Economics, University of York. Discussion paper 158
Harris DR, Connolly H, Christenson J, et al. Pitfalls of email survey research. Can J Emerg Med 2003; 5. http://caep.ca/template.asp?id=E6946BBBF1804F4AAEF600DAF7F37B63#079. Accessed 27 Oct 2009
Bruijns SR, Wallis LA, Burch VC (2008) A prospective evaluation of the Cape triage score in the emergency department of an urban public hospital in South Africa. Emerg Med J 25:398–402
Bruijns SR, Wallis LA, Burch VC (2008) Effect of introduction of nurse triage on waiting times in a South African emergency department. Emerg Med J 25:395–397
Wallis LA, Gottschalk SB, Wood D et al (2006) The Cape triage score—a triage system for South Africa. SAMJ 96(1)
Gottschalk SB, Wood D, De Vries S et al (2006) The Cape Triage Score: a new triage system for South Africa. Proposal from the Cape Triage Group. Emerg Med J 23:149–153
Hodkinson PW, Wallis LA (2010) Emergency medicine in the developing world: a Delphi study. Acad Emerg Med 17(7)
We would like to thank the following Delphi panel members from South Africa who contributed to this research:
Dr Patricia Marie Saffy, Dr Fraser John Dawson Lamond, Dr Darryl Ross Wood, Dr Charl Jacques Van Loggerenberg, Dr Cleeve Chelmsford Robertson, Dr Andreas Engelbrecht, Dr Gerald Eric Dalbock, Dr Wayne Patrick Smith, Dr Jacques Goosen, Dr Andy Nicol, Dr Tim Hardcastle, Dr Dave Muckart, Ms Mandy Taubkin, Mr Theo Lighthelm, Dr Louis Jenkins, Dr Philip Barker, Dr Denis Allard, Dr Glen Staples, Dr Elize Esterhuizen, Dr Pradeep Navsaria, Dr Paul Kapp, Dr Lee Wallis, Dr Annemarie Kropman, Dr Heike Geduld, Dr Melanie Stander, Dr Japie De Jager, Dr Peter Hodkinson, Dr Charl Carstens, Dr Julian Fleming, Dr Monique Muller
Conflicts of interest
The views expressed in this paper are those of the author(s) and not those of the editors, editorial board or publisher.
This paper was presented in part at the Emergency Medicine in the Developing World Conference, Cape Town, South Africa, November 24-26, 2009.
Author DM conceived and carried out the study. Author DM analysed and summarised the data findings. Authors LW and PH were part of the Delphi panel, and they assisted with writing up and reviewing the manuscript.
Electronic supplementary material
Appendix 1: Examples of the Likert scale as used in rounds 2 and 3
A: Example of the 9-point Likert scale for the round 2 questionnaire

| Proposed indicator/statement | Potential for use as a departmental performance indicator (score 1–9) | Comments |
|---|---|---|
| Time taken to obtain urgent portable CXR | 1 2 3 4 5 6 7 8 9 | Should not be done with obvious tension pneumothorax |

B: Example of the format for the round 3 questionnaire (statements that had not reached consensus)

| Proposed indicator/statement | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Time taken to obtain urgent portable CXR (round 3 score) | | | | | | | | | |
| Number of round 2 responses for each score | | | | | | | | | |

(Figures in the bottom row are the actual panel responses from round 2; the top row records the panel member's score for round 3.)