Quality control and its statistics are large subjects, and the pages on this site can cover only the small part of them most frequently used in health care.

Quality control itself encompasses many activities. In planning, these include securing the resources and structure of the organisation, setting up proper policies and protocols, establishing lines of management control, providing training, selecting staff, and so on. In management, there are the detection of unexpected events and their subsequent analysis, the review of outcomes, and the cultivation of attitudes and culture. Statistics in quality control consists of statistical tools that support these activities. StatsToDo provides 4 sets of statistical tools that can be used in quality control.
The measurement of quality is often made against a benchmark, and the important issue is how that benchmark is established. One way is to define it arbitrarily, setting it against an ideal one hopes to achieve. Examples are "50% of those on the waiting list will be seen within 2 weeks", "the Caesarean Section rate will be less than 25%", and so on. Unfortunately, arbitrarily set benchmarks are often unrealistic and may be unachievable.
The following programs and statistical tables in StatsToDo support the estimation of current population parameters, which form the basis for defining benchmarks that are achievable. The program in the Precision of Measurements Program Page implements a standard algorithm for evaluating the precision of a particular laboratory measurement. Each sample is measured two or more times, the values are subjected to Analysis of Variance, and the within-sample variance is estimated. From this, the 95% confidence interval of a measurement and the coefficient of variation can be obtained. The Sample Size for Population Parameters Explanation Page explains and links to the following programs and statistical tables.
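StatsToDo's own program is not reproduced here, but the calculation it describes (the within-sample variance as the error mean square of a one-way Analysis of Variance over repeated measurements) can be sketched in a few lines of standard-library Python. The function name `precision` and the output keys are illustrative choices, not StatsToDo's interface.

```python
import math

def precision(samples):
    """Precision of a laboratory measurement from repeated measurements.

    `samples` is a list of lists: each inner list holds the two or more
    measured values for one sample.  The within-sample variance is the
    error (within-group) mean square of a one-way ANOVA.
    """
    n_total = sum(len(s) for s in samples)
    grand_mean = sum(sum(s) for s in samples) / n_total
    # Sum of squared deviations of each value from its own sample mean
    ss_within = sum(
        sum((x - sum(s) / len(s)) ** 2 for x in s) for s in samples
    )
    df_within = n_total - len(samples)
    sd = math.sqrt(ss_within / df_within)
    return {
        "sd_within": sd,
        "cv_percent": 100 * sd / grand_mean,      # coefficient of variation
        "ci95_halfwidth": 1.96 * sd,              # 95% interval for one measurement
    }
```

For example, three samples each measured in duplicate, `[[10, 12], [20, 22], [30, 28]]`, give a within-sample standard deviation of about 1.41 and a coefficient of variation of about 7%.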
Once a benchmark is established, it is important to know whether an institution, a group, or an individual conforms to it. The idea is to sample the institution's, group's, or individual's performance, with the aim of minimising the number of samples needed so that a decision can be made as soon as possible.
Examples in industry are whether a batch of bullets delivered to the army has a defective rate below specification, or whether a batch of eggs delivered to a store meets the minimum size required. In medical care, it may be whether a newly introduced procedure exceeds a prescribed failure rate, or whether the blood loss in a particular operation exceeds that expected. The need to sample an unknown batch, and to draw conclusions with confidence yet as quickly and cheaply as possible, drove the development of quality control statistics. The following programs and descriptions in StatsToDo support these evaluations.
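The classic tool for deciding with as few samples as possible is Wald's sequential probability ratio test (SPRT), which underlies much of acceptance sampling. It is not named on this page, so the sketch below is an illustration of the general idea rather than StatsToDo's own procedure: items are inspected one at a time, a log likelihood ratio is accumulated, and sampling stops as soon as either boundary is crossed.

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.10):
    """Wald's sequential probability ratio test for a defect rate.

    H0: defect rate = p0 (acceptable); H1: defect rate = p1 (> p0).
    `observations` yields 1 for a defective item, 0 for a good one.
    alpha/beta are the tolerated false-reject / false-accept risks.
    """
    a = math.log(beta / (1 - alpha))        # lower boundary: accept H0
    b = math.log((1 - beta) / alpha)        # upper boundary: accept H1
    llr = 0.0
    for n, defective in enumerate(observations, 1):
        if defective:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return ("accept", n)            # batch conforms; stop sampling
        if llr >= b:
            return ("reject", n)            # batch fails; stop sampling
    return ("continue", len(observations))  # boundaries not yet crossed
```

With an acceptable defect rate of 2% against an unacceptable 10%, a run of good items lets the test accept the batch after 27 inspections, while two consecutive defectives are already enough to reject it.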
Once a benchmark is established and complied with, it is important to monitor outcomes continuously, so that any departure from the benchmark can be identified. The major methods of doing this are the Shewhart and CUSUM charts, as explained in the CUSUM Explained page. Shewhart charts are designed to detect a major departure from the benchmark quickly, while CUSUM is designed to detect small but persistent departures.
In both Shewhart and CUSUM charting, the sensitivity of detecting a departure from the benchmark is traded against the frequency of false alarms, so the user must set the level of sensitivity according to the needs of the situation. Excessive sensitivity results in constant false alarms, which disrupt services and production and reduce the credibility of the monitoring system. Insufficient sensitivity, however, results in delays in investigation and remedial action.

The CUSUM model to use depends on the nature of the numbers being monitored. StatsToDo provides models supporting the Normal, Poisson, Bernoulli, Binomial, Inverse Gaussian and Exponential distributions. Explanations and links to the different models are available in the CUSUM Explained page.

The Exponentially Weighted Moving Average (EWMA) is a time series method in which each measurement is weighted to reduce random variation, so that departure from the in-control mean can be better detected. Details of EWMA are discussed in the Exponentially Weighted Moving Average Explanation Page and will not be repeated here.
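The trade-off between sensitivity and false alarms can be made concrete with a minimal sketch of the tabular (two-sided) CUSUM for Normal data and of the EWMA recursion. The reference value `k` and decision interval `h` are the usual tuning parameters, here in units of the in-control standard deviation; the defaults below (k = 0.5, h = 4) are common textbook choices, not values taken from StatsToDo's pages.

```python
def cusum_normal(xs, target, sigma, k=0.5, h=4.0):
    """Two-sided tabular CUSUM for Normally distributed measurements.

    Returns the indices at which the chart signals.  Larger h (or k)
    means fewer false alarms but slower detection of real drift.
    """
    hi = lo = 0.0
    alarms = []
    for i, x in enumerate(xs):
        z = (x - target) / sigma          # standardised deviation
        hi = max(0.0, hi + z - k)         # accumulates upward drift
        lo = max(0.0, lo - z - k)         # accumulates downward drift
        if hi > h or lo > h:
            alarms.append(i)
            hi = lo = 0.0                 # restart the chart after a signal
    return alarms

def ewma(xs, target, lam=0.2):
    """Exponentially weighted moving average, seeded at the in-control target.

    Smaller lam gives heavier smoothing: random variation is damped, so a
    small persistent shift of the mean stands out sooner.
    """
    z = target
    out = []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out
```

Feeding the CUSUM twenty in-control values followed by a persistent shift of one standard deviation, the chart signals nine observations after the shift begins; a Shewhart chart with 3-sigma limits would never signal on such data, which is exactly the "small but persistent departure" case CUSUM is built for.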
Much of quality control statistics is intended to assist rather than judge the user: it evaluates a situation, triggers alarms that may lead to investigation and remedy, and helps ensure conformity to minimal standards.
Every now and then, the situation is complicated by disputes: individuals who performed poorly may argue that the data collected did not truly reflect their outcomes, or remedial measures may be so time consuming or expensive that there is reluctance to accept that something needs to be done. In these situations, it is important to have methods that are statistically robust and reliable, and that produce probability estimates that can be relied upon. The following methods are available from StatsToDo.

The Binomial Test Explained Page provides a probability estimate of whether the number of positives (k) in a sample (n) conforms to a prescribed proportion (prop). A typical example is a cardiac surgeon who had 5 deaths in 8 consecutive operations (62.5%), when the expected death rate (benchmark) is 14%. Using the binomial test, the probability that 5 deaths in 8 cases conforms to 14% is 0.002, very unlikely, so a confident decision can be made that this death rate is excessive and cannot be ignored.

The Poisson Test Program Page provides a probability estimate of whether the number of events in a defined environment conforms to a benchmark. A typical example is an increase in falls in an aged care institution, which jump from the expected 4 per month to 6 following structural and staff changes. The Poisson test shows that the probability of 6 events conforming to the expected 4 is 0.1, which can be considered not statistically significant; a confident decision that no further action is necessary at this point can therefore be made. However, if the trend continues into the next month, and there are 12 falls when the benchmark would be 8, the probability that 12 falls conform to a benchmark of 8 is 0.048, now worth investigating and closer monitoring.

The Paired Difference Explained Page provides a probability estimate of whether a mean and Standard Deviation conform to a benchmark mean. A typical example is when the average blood loss from a particular major operation is 500 mls, and a particular surgeon, evaluated over 10 operations, has an average blood loss of 700 mls with a standard deviation of 100 mls. The difference is 200 mls (diff = 700 - 500 = 200), to be compared with the benchmark difference of 0. The paired difference test shows that the 95% confidence interval is approximately 128 to 272 mls, significantly greater than the null value of 0. The conclusion that blood loss over these 10 operations significantly exceeded 500 mls can therefore be made.
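The worked examples above can be reproduced with standard-library Python. Note two assumptions in this sketch: the binomial figure is the upper-tail probability P(X >= k), which reproduces the quoted 0.002, while the quoted Poisson figures (0.1 and 0.048) correspond to the exact point probability P(X = k); the paired-difference interval uses the tabled Student's t value for 9 degrees of freedom, t = 2.262, giving roughly 128 to 272 mls. The function names are illustrative, not StatsToDo's interface.

```python
import math
from math import comb, exp, factorial

def binom_upper_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more positives."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def poisson_point(k, mean):
    """P(X = k) for X ~ Poisson(mean): exact probability of k events."""
    return exp(-mean) * mean**k / factorial(k)

def paired_ci(diff_mean, diff_sd, n, t_crit):
    """95% CI for a mean difference; t_crit is the tabled t for n-1 df."""
    half = t_crit * diff_sd / math.sqrt(n)
    return (diff_mean - half, diff_mean + half)

# 5 deaths in 8 operations against a 14% benchmark -> about 0.002
surgeon_p = binom_upper_tail(5, 8, 0.14)

# 6 falls against an expected 4 -> about 0.104; 12 against 8 -> about 0.048
falls_p_month1 = poisson_point(6, 4)
falls_p_month2 = poisson_point(12, 8)

# Blood loss: mean difference 200 mls, SD 100 mls, n = 10, t(9) = 2.262
blood_loss_ci = paired_ci(200, 100, 10, 2.262)   # about (128.5, 271.5)
```

Because the lower confidence limit (about 128 mls) lies well above the null difference of 0, the conclusion that this surgeon's blood loss exceeds the 500 mls benchmark follows directly.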