  • Original research
  • Open access

Clinical care review systems in healthcare: a systematic review

Abstract

Background

Clinical care review is the process of retrospectively examining potential errors or gaps in medical care, aiming for future practice improvement. The objective of our systematic review is to identify the current state of care review reported in peer-reviewed publications and to identify domains that contribute to successful systems of care review.

Methods

A librarian designed and conducted a comprehensive literature search of eight electronic databases. We evaluated publications from January 1, 2000, through May 31, 2016, and identified common domains for care review. Sixteen domains were identified for further abstraction.

Results

We found few publications describing a comprehensive care review system; most focused on individual pathways within the overall systems. Inclusion of the identified domains of care review was inconsistent across publications.

Conclusion

While guidelines for some aspects of care review exist and have gained traction, there is no comprehensive standardized process for care review with widespread implementation.

Background

Clinical care review is the process of retrospectively examining potential errors or gaps in medical care, with a goal of future practice improvement. This goes by many different names, sometimes with different audiences or case types, including peer review, adverse event review, sentinel event review, and root cause analysis. The concept of care review is widely accepted and encouraged among safety and quality healthcare leaders. However, there is a paucity of literature discussing and describing the current state of clinical care review.

The challenges and risks of contemporary medical care are well described. Medical error and its resulting outcomes have been defined and measured in many different ways, leading to varying quantifications of their effects [1]. The Institute of Medicine’s (IOM) 1999 report entitled “To Err is Human” [2] estimated that as many as 44,000 to 98,000 deaths annually in the USA occur as a result of medical error. Publication of “To Err is Human” was a landmark event in the recognition of the role of adverse events in medical care in the USA. It marked a shift in the analysis of adverse events toward systems issues as a cause of error and a call to identify and act to prevent medical error. The National Quality Foundation estimates that, in 2010, medical errors affected 15.5% of Medicare beneficiaries, with nearly half of these errors considered preventable [3]. More recently, Makary and Daniel estimated that as many as 250,000 deaths per year in the USA are due to medical error, making it the third leading cause of death by their estimation [1]. Review of adverse events allows for investigation into, and classification of, the causes of the event and presents an opportunity to modify systems and behaviors to prevent future similar errors. As a part of the strategic approach for increasing safety, the IOM’s “To Err is Human” recommended “Identifying and learning from errors by ... encouraging health care organizations and practitioners to develop and participate in voluntary reporting systems.” The report went on to say, “Such systems can focus on a much broader set of errors, mainly those that do no or minimal harm, and help detect system weaknesses that can be fixed before the occurrence of serious harm, thereby providing rich information to health care organizations in support of their quality improvement efforts” [2].

Given the long-standing call for clinical care review, and the limited literature to inform care review systems, we conducted a qualitative systematic review to identify characteristics discussed in existing models for care review. The objectives are to (1) describe the current state of care review and (2) identify elements from published care review systems that contribute to their success. This systematic review will allow for a more complete evaluation of the current state of clinical care review and will identify areas for future scholarly activity.

Methods

This is a qualitative systematic review of studies describing and evaluating care review systems. This study was exempt from institutional review board review. This report adheres to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [4]. A protocol was written before the beginning of the investigation.

We included original research studies with any methodological design, including cohort studies, case-control studies, and randomized trials, as well as commentaries, narrative reviews, letters to the editor, and abstracts in peer-reviewed journals that reported models for care review. Search results were limited to publications after January 1, 2000, to focus on publications since the release of “To Err is Human” [2], a turning point in the way adverse events are analyzed and regarded. Among relevant publications, some described their care review process as the main purpose of the article, while others described a care review process incidentally while focusing on a specific intervention or aspect of their review mechanism. Either was acceptable, as both shed light on a review system for analysis.

All types of patients and hospital settings were included, as well as recommendations from professional organizations and companies. This study’s investigators are physicians with involvement in quality improvement, adverse event identification and management, patient safety, and leadership of committees for clinical care review.

A senior expert librarian (P.E.) designed and conducted a comprehensive search of eight electronic databases, including Ovid MEDLINE, Ovid EMBASE, EBSCO CINAHL, Ovid CENTRAL, the Ovid Cochrane Database of Systematic Reviews, Web of Science, and Scopus. The search was conducted on June 10, 2016, and covered publications from January 1, 2000, through May 31, 2016. We included published conference abstracts, and there was no language restriction in the search strategy. Bibliographies and reference lists of the articles obtained through the database search were reviewed to identify additional publications for inclusion. The full search strategy can be found in Additional file 1.

Qualitative assessment and data abstraction process

Two investigators (L.W. and D.N.) identified common domains in the initial literature review to determine which data to abstract and included additional variables judged to be clinically important based on their experience in the clinical care review process and practice improvement. The domains included description of systems improvement, educational output and feedback, description of a standardized process and referral mechanism, consideration of the case outcome, deliverables of the review system including a non-punitive process and recognition of excellence, multidisciplinary involvement, dedicated process leadership, reviewer training, case blinding/anonymity, and implementation of improvement recommendations by the investigating group. These are further described in Table 1.

Table 1 Descriptions of the 16 domains of care review

In phase I of the review, one investigator (L.W.) independently screened all titles yielded by the initial search strategy for possible inclusion. In phase II, two reviewers (L.W. and D.N.) independently evaluated the abstracts of the publications identified in phase I. In phase III, the publications retained from phase II were retrieved in full text and assessed by two independent reviewers for inclusion in domain abstraction. The agreed-upon articles were then assessed in duplicate by independent reviewers to abstract the identified domains of care review.

In phase II, disagreement between reviewers was reconciled by discussion and consensus. The investigators were not blinded to the authors, journals, or results of studies. In phase III, disagreements on the data abstraction were resolved by a third independent reviewer who assessed the article and determined if the theme was included in the care review process. Descriptions of the 16 domains were supplied to all reviewers prior to data abstraction for reference.

Critical appraisal is the process of systematically examining research evidence to assess its validity, results, and relevance before using it to inform a decision. Instruments developed to support quality appraisal usually share some basic criteria for the assessment of qualitative research. These include the need for research to have been conducted ethically, the consideration of relevance to inform practice or policy, the use of appropriate and rigorous methods, and the clarity and coherence of reporting, including attention to reliability, validity, and objectivity [5].

In considering the most appropriate instruments for critical appraisal, we considered the Cochrane Collaboration Bias Appraisal Tool [6] and a modified Newcastle-Ottawa Scale tool [7]. The nature of our qualitative data abstraction precluded the use of these tools. While the studies we evaluated may have included randomized controlled trials and been at risk for bias, the results of the publications were not typically relevant to our goal of qualitative domain abstraction. Many publications were narrative in nature, describing a process without presenting data, either qualitative or quantitative. For those publications that did present data relevant to our domains, bias within the study was felt to be unlikely to affect our qualitative data collection because the abstracted domains, as descriptions of processes, were not affected by the results of the studies.

We reviewed the items described in the Standards for Reporting Qualitative Research (SRQR) [8] and the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) [9] statement. The SRQR and ENTREQ aim to improve the transparency of all aspects of qualitative research by providing clear standards for its reporting. When assessing the risk of bias, we decided not to exclude articles based on their quality assessment; all potentially valuable insights were included. From each study, we extracted the domains relevant to care review processes, tabulated the results, and created graphics based on frequencies. No quantitative data were appropriate for abstraction, so we did not perform a meta-analysis.
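
To make the tabulation step concrete, the following is a minimal sketch, assuming a hypothetical data structure in which each consolidated institution is represented by the set of domains its article(s) described; the institution and domain names are illustrative, not the study data.

```python
# Minimal sketch of the frequency tabulation: count how many consolidated
# institutions reported each care review domain. All names are illustrative.
from collections import Counter

institutions = {
    "Institution A": {"systems analysis", "standardized process", "case classification"},
    "Institution B": {"systems analysis", "case blinding/anonymity"},
    "Institution C": {"systems analysis", "standardized process"},
}

domain_counts = Counter()
for domains in institutions.values():
    domain_counts.update(domains)

total = len(institutions)
for domain, count in domain_counts.most_common():
    print(f"{domain}: {count}/{total} institutions ({100 * count / total:.1f}%)")
```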

Results

The initial library search strategy identified 1318 titles, which were screened in phase I. In phase II, 440 abstracts were reviewed, 76 of which were selected for full-text review. Fifteen articles identified from outside sources and bibliography review were also retrieved and reviewed. In total, 91 full-text articles were assessed, and after reconciliation between the two independent reviewers, 47 articles were initially found to be appropriate for inclusion in our analysis of the domains of care review. One article was removed during the abstraction process, as both reviewers independently determined that it did not meet inclusion criteria [10], leaving 46 unique articles for review.

Domains were abstracted by two independent reviewers for each of the 46 articles in phase III. Articles that described a care review process from the same institution were consolidated to reflect the most complete view of that process possible, as aspects may have been reported differently in multiple articles/abstracts. Figure 1 shows the study selection process. Ultimately, we evaluated the care review systems from 35 unique institutions.
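
As a simple check that the selection counts reported above are internally consistent, the arithmetic can be laid out as follows (all numbers are taken directly from the text; the variable names are ours).

```python
# Arithmetic check of the study selection flow described above.
full_text_from_search = 76          # selected for full-text review after abstract screening
full_text_from_bibliographies = 15  # identified from outside sources and bibliography review
full_text_assessed = full_text_from_search + full_text_from_bibliographies
assert full_text_assessed == 91     # total full-text articles assessed

initially_included = 47
removed_during_abstraction = 1      # the article that did not meet inclusion criteria [10]
articles_abstracted = initially_included - removed_during_abstraction
assert articles_abstracted == 46    # unique articles carried into domain abstraction

# The 46 articles were then consolidated by institution, yielding 35 unique institutions.
```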

Fig. 1 Study selection process

Study characteristics

Among the 46 studies, 35 represented unique institutions and 11 were from the same authors or institutions, describing different aspects of the process or domains. The types of articles identified included 14 descriptive [11,12,13,14,15,16,17,18,19,20,21,22,23,24], three editorials [25,26,27], 15 prospective [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], seven quality improvement projects [12, 43,44,45,46,47,48], and ten retrospective [11, 30, 49,50,51,52,53,54,55,56]. Some articles were consistent with more than one article type and were classified as both. The 16 domains of successful care review that were identified for abstraction are presented and defined in Table 1.

The frequency of each component, expressed as a percentage of the 35 unique institutions, is shown in Fig. 2. The most commonly identified component of a care review process was an analysis of the systems issues contributing to the case (32 institutions, 91.4%), followed by use of a standardized process for case review (30 institutions, 85.7%) and use of a structured case classification system (28 institutions, 80.0%). The least common components were recognition of excellence and use of case blinding/anonymity in reviews (5 institutions each, 14.3%).
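
The reported percentages follow directly from these counts over the 35 unique institutions; a quick check (counts and denominator taken from the text above) is shown below.

```python
# Verify the reported percentages against the denominator of 35 unique institutions.
institutions_total = 35
counts = {
    "systems analysis": 32,               # reported as 91.4%
    "standardized review process": 30,    # reported as 85.7%
    "structured case classification": 28, # reported as 80.0%
    "recognition of excellence": 5,       # reported as 14.3%
    "case blinding/anonymity": 5,         # reported as 14.3%
}
for domain, count in counts.items():
    print(f"{domain}: {count}/{institutions_total} = {100 * count / institutions_total:.1f}%")
```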

Fig. 2 Frequency of domain identification

Table 2 shows the distribution of all components in the full-text articles reviewed, with articles from the same institution consolidated. No article or institution included all 16 items evaluated by reviewers. Two institutions, Lehigh Valley and Johns Hopkins, included 14 of the 16 items.

Table 2 Distribution of care review domains

Discussion

Our systematic review shows that, in the more than 16 years since the IOM report calling for improved safety systems, there have been few articles outlining a comprehensive clinical care review process. Additionally, most articles discuss their care review systems in the context of describing one aspect of their process or a corresponding improvement initiative.

Systems analysis—defined as the assessment of the effects of external forces, such as policies, workflows, and software such as the electronic medical record, on the critical event—was the most commonly identified care review process characteristic. Many of the identified articles describe the importance of evaluating how a person works within a system, rather than in isolation, to identify improvement opportunities. Assuming individuals are properly motivated and have benign intent, examining the system surrounding the care avoids an antagonistic approach and supports the IOM’s underlying reason for calling for care review processes: preventing future errors.

Similarly, standardized processes and structured case classification were frequently discussed in the literature. To meet the IOM’s recommendations for creating care review “systems,” a standardized process that uses structured classification is likely necessary. Without standardization, reviews would likely be sporadic, inefficient, challenging to implement, and less able to inform future practice. Without structured classification, conclusions would likely also be difficult to interpret.

Although some of these characteristics were common among reported care review systems, others were only rarely reported. Recognition of excellence and blinding of cases were each reported in just five (14%) of the reports. Institutions that recognized excellence while performing care reviews were supportive of the practice; one can see why this would reinforce the culture needed for an effective care review system, and designers of future care review systems may wish to consider implementing this component. Similarly, anonymous review, or blinding, is intended to reduce bias and may allow a more objective review of each case. However, its infrequent mention may reflect unpublished prior experiences that supported avoiding this practice. In our experience, these are controversial topics, and future work is needed to understand the effects of specific characteristics on the overall care review process.

One additional characteristic, that review processes must be supported by a functioning organizational system, deserves particular attention. Although this was specifically identified by only 19 organizations, the downstream benefit of reviewing an episode of care and making recommendations for change is likely lost in a non-functional system. Key stakeholders in the process (physicians, nurse practitioners, physician assistants, nursing staff, support staff, etc.) are necessary for the care review process, and the administrative and leadership structure must be supportive of recommendations for change after care review is completed. This combination is strongly conducive to a process that engenders trust from the care team, which in turn bolsters the system as cases are referred for review and staff engage in further problem solving.

Limitations

The articles evaluated come from a variety of settings, ranging from consulting firms to in situ care review systems. Some authors strove to describe a comprehensive local practice, while others focused primarily on a particular component of a larger system. This heterogeneity limits generalizability, as the variability from one system to another may reflect institution- or system-specific adaptations to facilitate the process. A solution for one setting may not be a good solution for another. We included articles from institutions and consulting firms describing or self-reporting care review systems, and it is not possible to know the true effectiveness of the processes described when removed from their clinical context. There may be an over-emphasis on some areas of care review believed to be ideal but not practiced as described, and it is also possible that not all aspects of a process are represented. Particularly in articles that discuss the care review process as the context for a specific project, not all details of the over-arching system of care review may be described, resulting in abstraction of domains from an incomplete description.

The domains we used during abstraction were determined by screening the included articles and were supplemented by expert opinion. It is possible that additional variables that are more important, but less common, were not included in our analysis. It is also possible that a care review process we reviewed included some of the 16 characteristics but did not specifically mention them in the articles reviewed. Additionally, the qualitative nature of the abstraction and the interpretation of each item’s definition are complex and may lead to less reliable results.

In an effort to reduce the effects of bias and definition complexity, all articles were reviewed in duplicate, both for inclusion in the study and for abstraction of data. Disagreements were resolved by discussion and consensus for article inclusion and adjudicated by a third reviewer for domain abstraction.

Conclusion

Despite increased discussion among institutions such as the IOM and the National Patient Safety Foundation, there have been relatively few publications in the last 16 years describing clinical care review processes and no clear evidence of a cultural shift toward embracing clinical care review in an organized fashion. We have identified 16 domains of focus in a care review process and found that the approach to care review, as represented in the literature, is highly variable.

Future research

The effects of different aspects of care review processes have not been well studied. This presents an opportunity to evaluate processes that are already present in many hospitals and health systems and to identify truly effective, rather than simply common, practices, such as those identified here.

References

  1. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139.

  2. Kohn LT, Corrigan JM, Donaldson MS (Institute of Medicine). To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.

  3. Golladay KR, Collins AB. Adverse events in hospitals: national incidence among Medicare beneficiaries. Dallas (TX): Department of Health and Human Services (US); 2010. p. 81. Report No.: OEI-06-09-00090.

  4. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

  5. Noyes J, Booth A, Hannes K, Harden A, Harris J, Lewin S, Lockwood C, editors. Supplementary guidance for inclusion of qualitative research in Cochrane systematic reviews of interventions. Version 1 (updated August 2011). Cochrane Collaboration Qualitative Methods Group; 2011. Chapter 4, Critical appraisal of qualitative research.

  6. Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available from www.handbook.cochrane.org.

  7. Wells GA, Shea B, O'Connell D, et al. Quality assessment scales for observational studies. Ottawa Health Research Institute. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp. Accessed 19 Feb 2017.

  8. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

  9. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12(1):181.

  10. Keroack MA, Youngberg BJ, Cerese JL, Krsek C, Prellwitz LW, Trevelyan EW. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med. 2007;82(12):1178–86.

  11. Agee C. Professional peer review committee improves the peer review process. Physician Exec. 2007;33(1):52–5.

  12. Chan LS, Elabiad M, Zheng L, Wagman B, Low G, Chang R, Testa N, Hall SL. A medical staff peer review system in a public teaching hospital—an internal quality improvement tool. J Healthc Qual. 2014;36(1):37–44.

  13. Cosby KS. A framework for classifying factors that contribute to error in the emergency department. Ann Emerg Med. 2003;42(6):815–23.

  14. Edwards MT. Minimizing bias in clinical peer review. Physician Exec. 2011;37(6):50–2, 54.

  15. George V. Peer review in nursing: essential components of a model supporting safety and quality. J Nurs Adm. 2015;45(7–8):398–403.

  16. Helmreich RL. On error management: lessons from aviation. BMJ. 2000;320(7237):781–5.

  17. Hitchings KS, Davies-Hathen N, Capuano TA, Morgan G, Bendekovits R. Peer case review sharpens event analysis. J Nurs Care Qual. 2008;23(4):296–304.

  18. Kadar N. Peer review of medical practices: missed opportunities to learn. Am J Obstet Gynecol. 2014;211(6):596–601.

  19. Lee JH, Vidyarthi A, Sehgal N, Auerbach A, Wachter R. I-CaRe: a case review tool focused on improving inpatient care. Jt Comm J Qual Patient Saf. 2009;35(2):115–9, 61.

  20. McKay J, Pope L, Bowie P, Lough M. External feedback in general practice: a focus group of trained peer reviewers of significant event analyses. J Eval Clin Pract. 2009;15(1):142–7.

  21. Nolan S. Nursing M&M reviews: learning from our outcomes. RN. 2008;71(1):36–40.

  22. Pronovost PJ, Holzmueller CG, Martinez E, Cafeo CL, Hunt D, Dickson C, Awad M, Makary M. A practical tool to learn from defects in patient care. Jt Comm J Qual Patient Saf. 2006;32(2):102–8.

  23. Spath P. The real deal on holding successful case reviews. Hosp Peer Rev. 2007;32(10):145–8.

  24. Vincent C, Taylor-Adams S, Chapman EJ, Hewett D, Prior S, Strange P, Tizzard A. How to investigate and analyze clinical incidents: clinical risk unit and association of litigation and risk management protocol. Ann Fr Anesth Reanim. 2002;21(6):509–16.

  25. Edwards MT. Measuring clinical performance. Physician Exec. 2009;35(6):40–3.

  26. Maddox TM, Rumsfeld JS. Adverse clinical event peer review must evolve to be relevant to quality improvement. Circ Cardiovasc Qual Outcomes. 2014;7(6):807–8.

  27. Wu AW, Lipshutz AK, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA. 2008;299(6):685–7.

  28. Bender LC, Klingensmith ME, Freeman BD, Chapman WC, Dunagan WC, Gottlieb JE, Hall BL. Anonymous peer review in surgery morbidity and mortality conference. Am J Surg. 2009;198(2):270–6.

  29. Calder LA, Forster A, Nelson M, Leclair J, Perry J, Vaillancourt C, Hebert G, Cwinn AA, Wells G, Stiell I. Adverse events among patients registered in high-acuity areas of the emergency department: a prospective cohort study. CJEM. 2010;12(5):421–30.

  30. Carbo AR, Goodman EB, Totte C, Clardy P, Feinbloom D, Kim H, Kriegel G, Dierks M, Weingart SN, Sands K, Aronson M, Tess A. Resident case review at the departmental level: a win-win scenario. Am J Med. 2016;129(4):448–52.

  31. Edwards MT, Benjamin EM. The process of peer review in U.S. hospitals. JCOM. 2009;16(10):461–7.

  32. Edwards MT. Clinical peer review self-evaluation for US hospitals. Am J Med Qual. 2010;25(6):474–80.

  33. Edwards MT. The objective impact of clinical peer review on hospital safety and quality. Am J Med Qual. 2011;26(2):110–9.

  34. Edwards MT. A longitudinal study of clinical peer review’s impact on quality and safety in US hospitals. J Healthc Manag. 2013;58(5):369–84.

  35. Forster A, Rose NG, van Walraven C, Stiell I. Adverse event following an emergency department visit. Qual Saf Health Care. 2007;16(1):17–22.

  36. Jepson ZK, Darling CE, Kotkowski KA, Bird SB, Arce MW, Volturo GA, Reznek MA. Emergency department patient safety incident characterization: an observational analysis of the findings of a standardized peer review process. BMC Emerg Med. 2014;14:20.

  37. Lovett PB, Massone RJ, Holmes MN, Hall RV, Lopez BL. Rapid response team activations within 24 hours of admission from the emergency department: an innovative approach for performance improvement. Acad Emerg Med. 2014;21(6):667–72.

  38. McVeigh TP, Waters PS, Murphy R, O'Donoghue GT, McLaughlin R, Kerin MJ. Increasing reporting of adverse events to improve the educational value of the morbidity and mortality conference. J Am Coll Surg. 2013;216(1):50–6.

  39. Reznek MA, Barton BA. Improved incident reporting following the implementation of a standardized emergency department peer review process. Int J Qual Health Care. 2014;26(3):278–86.

  40. Reznek MA, Kotkowski KA, Arce MW, Jepson ZK, Bird SB, Darling CE. Patient safety incident capture resulting from incident reports: a comparative observational analysis. BMC Emerg Med. 2015;15:6.

  41. Spigelman AD, Swan J. Measuring clinical audit and peer review practice in a diverse health care setting. ANZ J Surg. 2003;73(12):1041–3.

  42. Pierluissi E, Fischer MA, Campbell AR, Landefeld CS. Discussion of medical errors in morbidity and mortality conferences. JAMA. 2003;290(21):2838–42.

  43. Calder LA, Kwok ESH, Cwinn AA, Worthington J, Yelle J, Waggott M, Frank J. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314–21.

  44. Corcoran S. The quality review—asking staff and patients to inform the quality strategy in Central Manchester. Clinical Risk. 2015;21(1):3–6.

  45. Pagano LA, Lookinland S. Nursing morbidity and mortality conferences: promoting clinical excellence. Am J Crit Care. 2006;15(1):78–85.

  46. Strayer RJ, Why BD, Shearer PL. A novel program to improve patient safety by integrating peer review into the emergency medicine residency curriculum. J Emerg Med. 2014;47(6):696–701.e2.

  47. Thielen J. Failure to rescue as the conceptual basis for nursing clinical peer review. J Nurs Care Qual. 2014;29(2):155–63.

  48. Woloshynowych M, Neale G, Vincent C. Case record review of adverse events: a new approach. Qual Saf Health Care. 2003;12(6):411–5.

  49. Bowie P, McCoy S, McKay J, Lough M. Learning issues raised by the educational peer review of significant event analyses in general practice. Qual Prim Care. 2005;13(2):75–83.

  50. Berk WA, Welch RD, Levy PD, Jones JT, Arthur C, Kuhn GJ, King JJ, Bock BF, Sweeny PJ. The effect of clinical experience on the error rate of emergency physicians. Ann Emerg Med. 2008;52(5):497–501.

  51. Branowicki P, Driscoll M, Hickey P, Renaud K, Sporing E. Exemplary professional practice through nurse peer review. J Pediatr Nurs. 2011;26(2):128–36.

  52. Diaz L. Nursing peer review: developing a framework for patient safety. J Nurs Adm. 2008;38(11):475–9.

  53. Katz RI, Lagasse RS. Factors influencing the reporting of adverse perioperative outcomes to a quality management program. Anesth Analg. 2000;90(2):344–50.

  54. Meeks DW, Meyer AN, Rose B, Walker YN, Singh H. Exploring new avenues to assess the sharp end of patient safety: an analysis of nationally aggregated peer review data. BMJ Qual Saf. 2014;23(12):1023–30.

  55. Stekelenburg J, van Roosmalen J. The maternal mortality review meeting: experiences from Kalabo District Hospital, Zambia. Trop Doct. 2002;32(4):219–23.

  56. Thompson M, Ritchie W, Stonebridge A. Could sequential individual peer reviewed mortality audit data be used in appraisal? Surgeon. 2005;3(4):288–92.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Contributions

All the authors have contributed substantially to this manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to M. Fernanda Bellolio.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Search strategy. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Walker, L.E., Nestler, D.M., Laack, T.A. et al. Clinical care review systems in healthcare: a systematic review. Int J Emerg Med 11, 6 (2018). https://doi.org/10.1186/s12245-018-0166-y
