Methodology

Data collection
Data management
Presentation of the data
Quality Indicator Bar Charts
Funnel plots
Funnel plots for Standardised Mortality Ratios
APACHE II
CUSUM Methodology
Level of care
Delayed discharges
References

Data collection

Data were collected prospectively from all general adult ICUs, Combined Units and the majority of HDUs using the WardWatcher system developed for this purpose. In February 2015, an initial extract of 2014 data was sent to Information Services Division (ISD) servers. Validation queries relating to discharges, outcomes, ages and missing treatment information were then issued and fed back to individual units for checking by local and regional audit coordinators. A final validated extract was submitted to ISD in March 2015, and has been used for this report.

Along with the measures taken to ensure data validity, the comprehensiveness of the data, which cover all patients receiving care in participating units during 2014, gives the findings included in this report a high degree of reliability at national, health board and individual unit level.

Data management

SICSAG data have undergone an extensive review. All SICSAG data from 1995 onwards are now stored within a rationalised set of databases, and variables and values have been made consistent. SICSAG is constantly striving to improve data quality through ongoing validation; the SICSAG database should therefore be regarded as dynamic, and the data may be subject to change.

All SICSAG data from 1998 to 2014 have been through a linkage process that aims to match SICSAG Critical Care episodes to the Public Health and Intelligence (PHI) (formerly Information Services Division) SMR01 scheme, which collects data on all general / acute inpatient and day case admissions. All patients recorded in the SICSAG database should have SMR01 records relating to the same hospital stay; 96% of all SICSAG episodes have been matched to an SMR01 stay. This provides an alternative source of information on hospital, ultimate hospital, discharge dates and outcomes. Where one of these fields is not documented in SICSAG, it has been populated with the value derived from linkage to SMR01.

Presentation of the data

The analysis of the data and the presentation of the findings follow the approach adopted in previous annual reports.

Additional Tables, along with more detailed data on subject areas that are not included in the annual report, are available on the SICSAG website www.sicsag.scot.nhs.uk. Further information on the interpretation of funnel plots is also published on this website.

WardWatcher had a major upgrade during 2008/2009, with 2010 the first complete year of data based on the upgraded version of WardWatcher. In 2014 there was a further minor upgrade to the HAI variables. Changes that will affect trend data have been referred to in the text.

Quality Indicator Bar Charts

The Quality Indicator bar charts include significance testing to estimate the difference between two population proportions at the 99% confidence level. For each unit, the confidence interval for the difference between this year’s and last year’s percentage was calculated. A bar is shown in green if the unit’s percentage is significantly different from last year’s and the unit has improved on the indicator, and in red if it is significantly different but the unit has performed worse against the indicator.

This section presents the performance of hospitals against the Quality Indicators in a Red, Amber, Green (RAG), or traffic light, chart format. Performance (decline, no change or improvement against the previous year) is measured as a statistically significant difference between the latest year’s performance and the previous year’s at the 99% confidence level (if the measurement were repeated 100 times, the confidence interval would be expected to include the true proportion in 99 of them).
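As a sketch, the test described above can be expressed as a comparison of two independent proportions. The function name and colour strings here are illustrative, the usual normal approximation is assumed, and a higher percentage is assumed to mean better performance (for indicators where lower is better, the colours would be swapped).

```python
from math import sqrt

def rag_status(x_prev, n_prev, x_curr, n_curr, z=2.576):
    """Classify a unit's year-on-year change on a quality indicator.

    Builds a 99% confidence interval (z = 2.576) for the difference
    between two independent proportions. Returns 'green' for a
    significant improvement, 'red' for a significant decline and
    'amber' for no significant change at the 99% level.
    """
    p_prev, p_curr = x_prev / n_prev, x_curr / n_curr
    se = sqrt(p_prev * (1 - p_prev) / n_prev + p_curr * (1 - p_curr) / n_curr)
    diff = p_curr - p_prev
    if diff - z * se > 0:
        return "green"  # whole interval above zero: significantly better
    if diff + z * se < 0:
        return "red"    # whole interval below zero: significantly worse
    return "amber"      # interval straddles zero: no significant change
```

For example, a unit moving from 50 of 100 compliant admissions to 90 of 100 would show green at the 99% level, while a move from 50 to 52 would show amber.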

Funnel plots

A number of the clinical indicators within this report are presented in graphs called control charts. A control chart is a simple way of presenting data that can help guide quality improvement activities by flagging up areas where there appears to be marked variation and where further local investigation might be beneficial. Control charts have been used widely in the manufacturing industry, and have more recently been applied in healthcare settings. While presenting clinical indicators as league tables is advised against, the use of control charts has become increasingly popular.

Within this report funnel plots (a type of control chart) have been used to allow comparisons to be made between different service providers, in this case Critical Care Units. A performance indicator is shown on the y-axis, while generally the number of admissions is shown on the x-axis, and there is a data point for every unit. There are five key lines in the funnel plots used in this report. The first is the average for the type of Critical Care Unit (either ‘ICU or Combined Units’ or ‘HDU’). Plotted on either side of the average are two sets of warning limits, at 2 and 3 standard deviations from the mean. Each of the five key lines is depicted in red on the charts.

Data points within the control limits (the red lines) are said to exhibit common cause variation, or to be ‘in control’. Data points outwith the control limits are said to exhibit ‘special cause variation’ (such units are sometimes referred to as ‘outliers’).

SICSAG will always highlight units outside 2 standard deviations from the mean as “might be different” and outside 3 standard deviations as “are different”. It should be recognised that in a comparison of 25 units there is a considerable chance of an outlier at the 2 SD (5%, or 1 in 20) level: with 25 units, at least one would be expected to fall outside these limits by chance alone. Differences may arise from many sources: differences in data accuracy, case-mix, service provision or practice. Sometimes a difference will be just a random difference caused by chance alone. SICSAG would encourage readers to use the data to examine practice in the context of the factors listed.

For some performance indicators, more than a few units lie outside the outer control limits. This typically arises when the units are heterogeneous, for instance ICU versus Combined Units, or surgical versus medical HDUs; small institutional factors then contribute more variability than would be expected by chance alone. These differences may not be particularly important, nor point to real differences in the performance indicators: although the positions of the units differ in the statistical sense, they might not be of any clinical significance. To account for this excess variability the control limits can be adjusted in several ways; in this report they are calculated with a procedure derived from Spiegelhalter1.
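For a proportion indicator, the funnel limits can be sketched with the usual binomial approximation. The function name is illustrative, and the optional phi factor illustrates one common (multiplicative) overdispersion adjustment in the spirit of the Spiegelhalter approach; it is a sketch, not necessarily the exact procedure used in this report.

```python
from math import sqrt

def funnel_limits(p_bar, n, z, phi=1.0):
    """Control limits for a proportion indicator at z standard deviations.

    p_bar: overall average proportion for the unit type (the centre line)
    n:     number of admissions in the unit (the x-axis value)
    phi:   optional overdispersion factor; phi = 1 gives the plain
           binomial limits, phi > 1 widens them multiplicatively
    Uses the normal approximation to the binomial; limits are clipped
    to the interval [0, 1].
    """
    se = sqrt(phi * p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)
```

With an average proportion of 0.5 and 100 admissions, the 2 SD limits are 0.4 and 0.6; smaller units get wider limits, which produces the characteristic funnel shape.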

Funnel plots for Standardised Mortality Ratios

Over the time that the audit has been in existence, various units have been outliers at the 2 SD level. We have sought reasons why they might be different, and have informed and supported individual units in seeking an explanation. Being an outlier at this level may be explained by data quality, questions over standards of care, different referral patterns, admission policies or resources, but it may also be due to random variation. We are therefore applying a very stringent definition of variation. For comparison, the Hospital SMRs2 produced for the SPSP by PHI, and also the Intensive Care National Audit & Research Centre (ICNARC), use 3 SD to identify outliers.

APACHE II

The outcome measure used by SICSAG is the patient’s survival status (alive or dead) on final discharge from an acute hospital (even if this is not the original hospital). Patients admitted to ICU are at significant, but varied, risk of death, so simply comparing the proportion of patients who die in each unit can give a misleading impression because the severity of illness differs between units. To overcome this, we use the APACHE II system to adjust for case-mix3. This is a validated scoring system4 that takes account of both the patient’s acute condition and their chronic health.

Certain groups of patients are excluded:

  • Aged less than 16 years
  • Unit stay of less than 8 hours
  • Readmitted to the unit during the same hospital admission
  • Primary diagnosis for which the system was not developed: burns, coronary artery bypass graft, and liver transplant

WardWatcher provides corresponding codes as reasons for excluding unit admissions from APACHE II scoring. Taking into account non-response, these were re-coded to reflect the hierarchy of decision-making within units. Automatic exclusions such as ‘diagnosis’, ‘patient under 16’ and ‘patient stayed for less than eight hours’ were applied first, and existing codes were changed to reflect this prioritisation. Readmissions were excluded next, followed by ‘other’ cases where no rationale for automatic exclusion was provided. The remaining exclusions were optional, where it was possible to generate a score but this was not done (e.g. HDU patients).
If unit admissions are scored, case-mix adjusted mortality estimates can only be calculated where an appropriate diagnosis is available. All exclusions, and cases with missing or excluded diagnoses (e.g. liver transplant), are shown schematically in the decision tree.
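The exclusion hierarchy described above can be sketched as a small decision procedure. The field names and reason strings below are illustrative rather than WardWatcher’s actual codes.

```python
# Illustrative diagnosis set, per the exclusion list above
EXCLUDED_DIAGNOSES = {"burns", "coronary artery bypass graft", "liver transplant"}

def apache_exclusion_reason(age, unit_stay_hours, primary_diagnosis,
                            is_readmission, other_reason_given):
    """Return the highest-priority APACHE II exclusion reason, or None.

    Applies the hierarchy described above: automatic exclusions first
    (age, short stay, diagnosis), then readmissions, then 'other'.
    """
    if age < 16:
        return "patient under 16"
    if unit_stay_hours < 8:
        return "stayed less than eight hours"
    if primary_diagnosis in EXCLUDED_DIAGNOSES:
        return "diagnosis"
    if is_readmission:
        return "readmission"
    if other_reason_given:
        return "other"
    return None  # eligible for APACHE II scoring
```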
APACHE II produces an expected mortality rate for a unit, which can be compared with the actual observed mortality rate to give a standardised mortality ratio (SMR). An SMR significantly greater than 1 suggests that mortality is higher than expected; a value of less than 1, that it is lower than expected. It is important to interpret SMRs with caution: whilst the APACHE II scoring system adjusts for case-mix, it does not do so perfectly, and the scoring system is now nearly 30 years old. Many units admit a relatively small number of patients each year, so the confidence intervals around their SMRs are wide. Exact confidence intervals for the SMR are calculated by the method described by Ulm5.
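An exact interval of this kind can be computed from the Poisson distribution of the observed deaths. The sketch below (illustrative function names) finds the limits by bisection on the Poisson tail probabilities, which is numerically equivalent to Ulm’s chi-square formulation.

```python
from math import exp, lgamma, log

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), terms computed on the log scale."""
    return sum(exp(i * log(lam) - lam - lgamma(i + 1)) for i in range(k + 1))

def smr_exact_ci(observed, expected, alpha=0.05):
    """Exact confidence interval for SMR = observed / expected deaths.

    The limits are the Poisson means placing the observed count at the
    alpha/2 tail probabilities, divided by the expected deaths.
    """
    def bisect(f, lo, hi):
        # f is decreasing over [lo, hi] and changes sign exactly once
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    hi = 10.0 * observed + 10.0  # generous bracket for the search
    lower = 0.0
    if observed > 0:
        # lower limit: P(X >= observed | lam) == alpha / 2
        lower = bisect(lambda lam: alpha / 2 - (1 - poisson_cdf(observed - 1, lam)),
                       1e-12, hi)
    # upper limit: P(X <= observed | lam) == alpha / 2
    upper = bisect(lambda lam: poisson_cdf(observed, lam) - alpha / 2, 1e-12, hi)
    return lower / expected, upper / expected
```

For example, 10 observed deaths against 8 expected gives an SMR of 1.25 with 95% limits of roughly 0.60 to 2.30; the interval comfortably includes 1, illustrating how wide the limits are for small units.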

The standard APACHE II model has been consistently over-predicting mortality for patients admitted to Scottish ICU and Combined Units, which meant it was not as useful for calculating SMRs for the Scottish population. It has therefore been recalibrated using data from Scottish ICU and Combined Units between 2009 and 2011. The standard APACHE II model will continue to be available, and can be used to produce trend information and for international comparison; WardWatcher will continue to calculate predicted mortality based on the standard model at this time.

CUSUM Methodology

Level of care

Level of care is calculated on a daily basis from the Augmented Care Period (ACP) page of WardWatcher. WardWatcher scores levels of care based on support of five organ systems: respiratory, cardiovascular, renal, neurological and dermatological.

  • Level 3: advanced respiratory support (connected to a ventilator via ETT or tracheostomy) OR two or more organ systems supported (excluding basic respiratory and basic cardiovascular support)
  • Level 2: one organ system supported
  • Level 1: epidural and/or general observations requiring more monitoring than can be provided on a general ward
  • Level 0: none of the above, i.e. no organ support and adequate monitoring could be provided on a general ward

Level of care is based on the Intensive Care Society guidelines.
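The rules above amount to a small daily decision procedure, sketched here with illustrative argument names.

```python
def level_of_care(advanced_resp_support, organs_supported, epidural_or_extra_obs):
    """Daily level of care from the rules above (a sketch).

    advanced_resp_support: ventilated via ETT or tracheostomy
    organs_supported: number of organ systems supported, excluding
        basic respiratory and basic cardiovascular support
    epidural_or_extra_obs: epidural and/or observations beyond what a
        general ward could provide
    """
    if advanced_resp_support or organs_supported >= 2:
        return 3
    if organs_supported == 1:
        return 2
    if epidural_or_extra_obs:
        return 1
    return 0
```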

Delayed discharges

Delayed discharges are calculated using the gap between the ‘Ready for Discharge’ and ‘Actually Discharged’ date and time fields. In previous years the field ‘Gap Considered’ was also used in the calculation, and data were only included if the discharge was recorded as ‘Abnormal’, for example due to bed shortages. However, during recent training it was discovered that some delays were being recorded as ‘Normal’ because delays were routine in the unit, and these were therefore being excluded from the reported figures. The current definition therefore looks only at the date and time fields and the gap between them. This definition gives a better reflection of delays across the units.
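The current definition can be sketched directly; the field names follow the text.

```python
from datetime import datetime

def discharge_delay(ready_for_discharge, actually_discharged):
    """Delay as the gap between the 'Ready for Discharge' and
    'Actually Discharged' date/time fields (the current definition;
    the 'Gap Considered' field is no longer consulted).
    """
    return actually_discharged - ready_for_discharge

# e.g. a patient ready at 10:00 but discharged at 16:30 the same day
# has a delay of 6.5 hours
```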

References

1        Spiegelhalter DJ. Handling over-dispersion of performance indicators. Quality and Safety in Health Care, 2005, 14: 347-51.
3        Knaus WA, Draper EA, Wagner DP, Zimmerman JE. APACHE II: a severity of disease classification system. Critical Care Medicine, 1985, 13(10): 818-29.
4        Livingston BM, MacKirdy FN, Howie JC, Jones R, Norrie JD. Assessment of the performance of five intensive care scoring models within a large Scottish database. Critical Care Medicine, 2000, 28(6): 1820-7.
5        Ulm K. A simple method to calculate the confidence interval of a standardized mortality ratio (SMR). American Journal of Epidemiology, 1990, 131(2): 373-5.