Related Links:
R Explained Page
CUSUM Charts Explained Page
Introduction
This page provides explanations and example R code for CUSUM quality control charts, for detecting changes in counts that conform to the Negative Binomial Distribution.
CUSUM Generally
CUSUM is a set of statistical procedures used in quality control. CUSUM stands for Cumulative Sum of Deviations.
In any ongoing process, whether manufacturing or the delivery of services and products, once the process is established and running, the outcome should be stable and within defined limits near a benchmark. The situation is then said to be In Control.
When things go wrong, the outcomes depart from the defined benchmark. The situation is then said to be Out of Control.
In some cases, things go catastrophically wrong, and the outcomes depart from the benchmark in a dramatic and obvious manner, so that investigation and remedy follow. For example, a gear in an engine may fracture, causing the machine to seize. An example in health care is the employment of an unqualified fraud as a surgeon, followed by a sudden and massive increase in mortality and morbidity.
The detection of catastrophic departure from the benchmark is usually by the Shewhart Chart, not covered on this site. Usually, some statistically improbable outcome, such as two consecutive measurements outside 3 Standard Deviations, or 3 consecutive measurements outside 2 Standard Deviations, is used to trigger an alarm that all is not well.
In many instances, however, the departures from the outcome benchmark are gradual and small in scale, and these are difficult to detect. Examples are changes in the size and shape of products caused by progressive wear of machine parts, reduced success rates over time as experienced staff are gradually replaced by novices in a work team, and increases in client complaints to a service department following a loss of adequate supervision.
CUSUM is a statistical process of sampling outcome, and summing departures from benchmarks. When the situation is in control, the departures caused by random variations cancel each other numerically. In the out of control situation, departures from benchmark tend to be unidirectional, so that the sum of departures accumulates until it becomes statistically identifiable.
The mathematical process for CUSUM is in two parts. The common part is the summation of departures from the benchmark (CUSUM) and its graphical display. The unique part is the calculation of the decision interval, abbreviated as DI or h, and the reference value, abbreviated as k, which continuously adjusts the CUSUM and its variance. The two values h and k depend on the following parameters (a conceptual sketch of the accumulation step follows this list):
- The in control values
- The out of control values
- The Type I Error or false positive rate, expressed as the Average Run Length, abbreviated as ARL: the number of samples expected before a false positive decision when the situation is in control. ARL is the inverse of the false positive rate; a false positive rate of 1% corresponds to ARL = 100
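As a conceptual sketch only (the measurements, k, and h below are hypothetical, and the negative binomial design used on this page comes later), an upward CUSUM accumulates departures from the reference value k, resets at zero while in control, and triggers an alarm when the sum crosses the decision interval h:
# conceptual sketch of an upward CUSUM with hypothetical values
x = c(10.1, 9.8, 10.3, 11.2, 11.5, 11.9) # sampled measurements (hypothetical)
k = 10.5 # reference value (hypothetical)
h = 2 # decision interval (hypothetical)
s = 0
for(i in 1 : length(x))
{
  s = max(0, s + x[i] - k) # sum departures from k, reset at 0
  if(s > h)
  {
    cat("Alarm at sample", i, ": CUSUM =", s, "\n")
  }
}
While the process is in control, the random departures cancel and the sum stays near zero; once the mean shifts upwards, the departures become unidirectional and the sum climbs past h.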
Proportions
Proportions can be handled under 3 common types of distribution
- The Binomial Distribution, where the measurement is the number of positive cases in a group of set sample size. The advantage of this approach is that the results tend to be stable, as short term variations are evened out across many cases. The disadvantage is that evaluation can only take place when the planned sample size per group has been reached, so conclusions tend to take a long time.
- The Negative Binomial Distribution, where the measurement is the number of negative cases between a set number of positive cases. Evaluation can take place each time the set number of positive cases is reached, so conclusions can be reached sooner. However, the results tend to be more variable, as they are influenced by short term variations.
- The Bernoulli Distribution, where the measurement is either positive or negative for each case. Evaluation therefore takes place after each observation, so conclusions can be reached very quickly, but the results tend to be more chaotic, as they vary with each observation.
This page describes the Negative Binomial Distribution; the three schemes are contrasted in the short simulation below. CUSUM for the Binomial Distribution is discussed in the CUSUM for Binomial Distributed Proportion Explained Page, and that for the Bernoulli Distribution in the CUSUM for Bernoulli Distributed Proportion Explained Page.
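To make the contrast concrete, here is a small simulation sketch of the three sampling schemes for the same hypothetical 20% positive rate (the rate, group size, and seed are made up for illustration):
# hypothetical 20% positive rate under the three sampling schemes
set.seed(1)
rbinom(5, size=50, prob=0.2) # Binomial: positives in each of 5 groups of 50 cases
rnbinom(5, size=3, prob=0.2) # Negative Binomial: negatives before 3 positives, 5 samples
rbinom(5, size=1, prob=0.2) # Bernoulli: outcome (1 or 0) of 5 individual cases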
CUSUM for Counts based on the Negative Binomial Distribution
The Negative Binomial Distribution is based on the number of outcome-negative cases between a set number of positive cases, and each sample is completed when the defined number of positive cases has been reached. An example is the Caesarean Section rate in many obstetric units, say 20%, which is 4 normal deliveries between each Caesarean Section, 8 between 2 Caesarean Sections, 12 between 3 Caesarean Sections. The number of positive cases (e.g. Caesarean Sections) is constant, and the number of negative cases (e.g. normal deliveries) is the measurement.
Negative Binomial Distribution is an alternative to the Binomial distribution for CUSUM of proportions. It is sometimes preferred because each sample is quicker, and the results can be obtained when the defined number of positive cases is reached, rather than waiting for results from all the cases in a defined sample size to be completed. Negative Binomial Distribution can also be an alternative to Poisson distribution for CUSUM on counts, particularly if the assumptions of Poisson (variance=mean) cannot be met.
The parameters required are
- The number of positive cases in each sample. This remains constant throughout a CUSUM project
- The expected number of negative cases in each sample before the set number of positive cases is reached (the arithmetic linking these counts to the underlying rates is sketched after this list).
- The Average Run Length (ARL). This depends on a balance between the importance of detecting deviations and the cost of disruption in case of a false positive. The ARL in the Negative Binomial Distribution is based on the number of samples (groups) and not on the number of cases. Please note that the algorithm on this page is intended for one tail monitoring, detecting either an increase or a decrease in the value. If the user intends two tail monitoring, to detect both an increase and a decrease, two CUSUM charts should be created; as running two charts roughly halves the combined ARL, each chart should be designed with double the intended overall ARL.
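As a worked check of how such counts follow from the underlying rates (using the 20% and 30% figures from the example later on this page): with r positives per sample and a positive proportion p, the expected number of negatives is c = r(1 - p) / p.
# expected negatives per sample: c = r * (1 - p) / p
r = 3
r * (1 - 0.2) / 0.2 # in control, 20% positives: 12 negatives
r * (1 - 0.3) / 0.3 # out of control, 30% positives: 7 negatives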
Details of how the analysis is done and the results are described in the panel Example Explained. Conceptually, the algorithm is as follows:
- The statistics are based on the odds ratio. If r = number of positive cases, and c = number of negative cases:
- mean (mu, μ) = r / c
- variance (v) = μ(1 + 1/c)
- μ(in control), v(in control), μ(out of control) and ARL are used to obtain the reference value (k) and the decision interval (h), both expressed as odds (these formulas are evaluated numerically below)
- The negative outcome count (n) obtained during monitoring is converted into odds, odds = r / n, which is then used to calculate the CUSUM
- The CUSUM chart is therefore one of cumulative changes in odds. If the negative count increases, the odds decrease; if the count decreases, the odds increase.
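Evaluated with the counts used in the example below (r = 3 positives, c = 12 negatives in control), the mean and variance work out as:
# odds mean and variance from counts (Hawkins, p.147)
r = 3
c0 = 12
mu = r / c0 # mean odds = 0.25
v = mu * (1 + 1 / c0) # variance = 0.2708333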
References
Hawkins DM, Olwell DH (1997) Cumulative Sum Charts and Charting for Quality Improvement. Springer-Verlag, New York. ISBN 0-387-98365-1. p 47-74, 147-148.
Hawkins DM (1992) Evaluation of average run lengths of cumulative sum charts for an arbitrary data distribution. Communications in Statistics - Simulation and Computation 21(4):1001-1020.
https://cran.r-project.org/web/packages/CUSUMdesign/index.html
https://cran.r-project.org/web/packages/CUSUMdesign/CUSUMdesign.pdf
Example R Code
This section presents the R code in full so that it can be copied and pasted into RStudio as a template. Detailed explanations for each step are in the next panel.
For those unfamiliar with R, basic information on setting up R and common R procedures can be found in the R Explained Page.
# CUSUM Negative Binomial Distribution
# Step 1: parameters and data
nPos = 3 # number of positives as decider (r)
icNeg = 12 # in control number of negatives between nPos (c0)
oocNeg = 7 # out of control number of negatives between nPos (c1)
arl = 100
theModel = "F" #F for FIR, Z for zero, S for steady state
dat = c(12,14,13,14,11,14,12,9,12,9,9,10,7,10,5,7,9,10,8,8,7,5,6,7,6,
10,9,6,10,6,9,9,9,9,6,7,6,7,8,7,8,7,9,8,7,5,8,7,10,10) # number of negative cases between each set of 3 positive cases
# Step 2: Calculate k and h
# Step 2a: convert counts to odds and variance ref: Hawkins, p.147
icMu = nPos / icNeg # in control mean = r/c0
icVar = icMu * (1 + 1 / icNeg) # in control variance = icMu * (1 + 1/c0)
oocMu = nPos / oocNeg # out of control mean = r/c1
# Step 2b: Calculate k and h
#install.packages("CUSUMdesign") # if not already installed
library(CUSUMdesign)
result <- getH(distr=5, ICmean=icMu, ICvar=icVar, OOCmean=oocMu, ARL=arl, type=theModel)
k <- result$ref
h <- result$DI
if(oocMu<icMu)
{
h = -h
}
cat("Reference Value k=",k,"\tDecision Interval h=", h, "\n")
# Step 3: Create and plot CUSUM
# Step 3a: Create vector of cusum value
cusum <- vector()
cusumValue = 0
if(theModel=="F")
{
cusumValue = h / 2
}
for(i in 1 : length(dat))
{
mu = nPos / dat[i]
cusumValue = cusumValue + mu - k
if(oocMu>icMu) # mu Up count down
{
if(cusumValue<0)
{
cusumValue = 0
}
}
else # mu down count up
{
if(cusumValue>0)
{
cusumValue = 0
}
}
cusum[i] = cusumValue
}
# Step 3b: Plot CUSUM
plot(cusum,type="l")
abline(h=h)
# Step 4: Optional export of results
#myDataFrame <- data.frame(dat,cusum) #combine dat and cusum to dataframe
#myDataFrame #display dataframe
#write.csv(myDataFrame, "CusumNegBin.csv") # write dataframe to .csv file
Example Explained
The example is a made-up one to demonstrate the numerical process, and the data were generated by the computer. It purports to be from a quality control exercise in an obstetric unit, using the Caesarean Section rate as the quality indicator.
- From records in the past, we established the benchmark Caesarean Section rate to be 20% (0.2), and that it can be kept at this level if the junior staff and midwives are well trained and closely supervised.
- With time, however, experienced staff leave and are replaced by less experienced and less trained staff. The standard of supervision gradually deteriorates, resulting in an increase in the Caesarean Section rate.
- We would like to trigger an alarm and reorganize the working and supervision framework when the Caesarean Section Rate increases to 30% (0.3) or more.
- As reorganizing the working framework is time consuming and disruptive, we would like any false alarm to occur no more frequently than once every 100 samples, so the average run length ARL = 100 (a rough sense of this scale in case terms is sketched below).
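As a rough, illustrative calculation of what ARL = 100 means at the case level (assuming the in control rate, so about 15 deliveries per sample):
# rough scale of ARL = 100 in terms of individual deliveries
arl = 100
casesPerSample = 3 + 12 # positives plus expected negatives while in control
arl * casesPerSample # about 1500 deliveries between false alarms on average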
The data are entered in Step 1. This is the only part of the program that needs any editing.
# Step 1: parameters and data
nPos = 3 # number of positives as decider (r)
icNeg = 12 # in control number of negatives between nPos (c0)
oocNeg = 7 # out of control number of negatives between nPos (c1)
arl = 100
theModel = "F" #F for FIR, Z for zero, S for steady state
dat=c(12,14,13,14,11,14,12,9,12,9,9,10,7,10,5,7,9,10,8,8,7,5,6,7,6,
10,9,6,10,6,9,9,9,9,6,7,6,7,8,7,8,7,9,8,7,5,8,7,10,10) # number of negative cases between each set of 3 positive cases
Step 1 contains the parameters and the data. This is the part the user can edit, changing the values to those required in his/her own analysis.
The first four lines set the numeric parameters required for the analysis. The logic is as follows:
- The number of positive cases (Caesarean Sections) remains the same for in control and out of control, while the proportions differ (20% and 30% in this exercise).
- In this exercise, 3 is the smallest count that can be used. For in control, we can have 3 as 20% positives and 12 as 80% negatives. For out of control, we can have 3 as 30% positives and 7 as 70% negatives.
The fifth line, theModel, has 3 options, which set the first value of the CUSUM:
- F means Fast Initial Response (FIR), where the initial CUSUM value is set at half of the decision interval h. The rationale is that, if the situation is in control, the CUSUM will gradually drift towards zero, but if the situation is already out of control, an alarm will be triggered early. The downside is that a false alarm is slightly more likely early in the monitoring. As FIR is recommended by Hawkins, it is set as the default option.
- Z is for zero, and the CUSUM starts at the baseline value of 0. This lowers the risk of a false alarm in the early stages of monitoring, but detects an out of control situation more slowly if it already exists at the beginning.
- S is for steady state, intended for when monitoring is already ongoing and a new plot is being constructed. The CUSUM starts at the value where the previous chart ended.
- Each model makes minor changes to the value of the decision interval h. The setting of the initial value is mostly intended to determine how quickly an alarm can be triggered if the out of control situation exists from the beginning, as sketched in the snippet below.
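A minimal sketch of how the three options set the starting value, assuming h has already been calculated in Step 2 (previousCusum is a hypothetical carry-over from an earlier chart):
# starting CUSUM value under each model (sketch)
theModel = "F"
h = 3.666667 # decision interval from Step 2 of this example
previousCusum = 0 # hypothetical; only relevant when theModel == "S"
startValue = switch(theModel, "F" = h / 2, "Z" = 0, "S" = previousCusum)
startValue # 1.833333 for the FIR model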
The last part, c(), is a function that creates a vector (array) containing the comma separated values within the brackets. In this case, dat is the name of the vector, which contains the number of normal deliveries between each set of 3 Caesarean Sections.
The remainder of the program does not require any editing or change by the user, unless he/she wishes to alter the program for specific purposes.
Step 2 calculates k and h from the input parameters, and this is divided into 2 parts.
# Step 2a: convert counts to odds and variance ref: Hawkins, p.147
icMu = nPos / icNeg # in control mean = r/c0
icVar = icMu * (1 + 1 / icNeg) # in control variance = icMu * (1 + 1/c0)
oocMu = nPos / oocNeg # out of control mean = r/c1
# Step 2b: Calculate k and h
#install.packages("CUSUMdesign") # if not already installed
library(CUSUMdesign)
result <- getH(distr=5, ICmean=icMu, ICvar=icVar, OOCmean=oocMu, ARL=arl, type=theModel)
k <- result$ref
h <- result$DI
if(oocMu<icMu)
{
h = -h
}
cat("Reference Value k=",k,"\tDecision Interval h=", h, "\n")
Step 2a converts the counts into means and a variance in terms of odds.
Step 2b performs the statistical calculations using the odds parameters. The package CUSUMdesign needs to be installed beforehand, and the library loaded each time the program is used.
result is the object that contains the results of the analysis. The results required for this program are the reference value (k) and the decision interval (h). Please note that h is calculated as a positive value; if the CUSUM is designed to detect a decrease from the in control value, then h needs to be changed to a negative value.
The last line displays the results we need:
Reference Value k= 0.3305118 Decision Interval h= 3.666667
Please note that, although the parameters are entered as counts, k and h are in terms of the odds of Caesarean Section. This means that, as the count of negatives increases, the odds decrease, and as the count of negatives decreases, the odds increase. In other words, the CUSUM chart will be the inverse of the negative counts.
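As a sanity check, the reference value k should lie between the in control and out of control mean odds:
# k from getH() should fall between the two mean odds
3 / 12 # in control mean odds = 0.25
3 / 7 # out of control mean odds = 0.4285714
# k = 0.3305118 reported above lies between the two, as expected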
Step 3 is divided into 2 parts. Step 3a calculates the cusum vector, and 3b plots the vector and h in a graph.
# Step 3a: Create vector of cusum value
cusum <- vector()
cusumValue = 0
if(theModel=="F")
{
cusumValue = h / 2
}
for(i in 1 : length(dat))
{
mu = nPos / dat[i]
cusumValue = cusumValue + mu - k
if(oocMu>icMu) # mu Up count down
{
if(cusumValue<0)
{
cusumValue = 0
}
}
else # mu down count up
{
if(cusumValue>0)
{
cusumValue = 0
}
}
cusum[i] = cusumValue
}
The first 6 lines of code in Step 3a create the empty cusum vector and set the initial CUSUM value. The 10th line converts the negative count into the odds (positive/negative) before it is used to calculate the CUSUM. The remaining code calculates the CUSUM value for each measurement and places it in the cusum vector. The arithmetic of the first pass through the loop is traced below.
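Tracing the first loop iteration by hand, using k and h from Step 2:
# first loop iteration by hand
h = 3.666667
k = 0.3305118
cusumValue = h / 2 # FIR start: 1.833333
mu = 3 / 12 # first observation: 12 negatives, odds = 0.25
cusumValue = cusumValue + mu - k # 1.752822
# oocMu > icMu here and the value is positive, so no reset: cusum[1] = 1.752822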
# Step 3b: Plot the cusum vector and h
plot(cusum,type="l")
abline(h=h)
In Step 3b, the first line plots the cusum vector, and the second line draws the decision interval h as a horizontal line, producing the CUSUM chart for this data. A labelled variant is sketched below.
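For a more readable chart, the same plot can be given axis labels and a highlighted decision interval; the labels below are suggestions only:
# optional: labelled version of the same plot
plot(cusum, type="l", xlab="Sample number", ylab="CUSUM (odds scale)",
  main="CUSUM: Negative Binomial counts")
abline(h=h, col="red", lty=2) # decision interval as a dashed red line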
# Step 4: Optional export of results
#myDataFrame <- data.frame(dat,cusum) #combine dat and cusum to dataframe
#myDataFrame #display dataframe
#write.csv(myDataFrame, "CusumNegBin.csv") # write dataframe to .csv file
Step 4 is optional and is in fact commented out, included as a template only. Each line can be activated by removing the leading #.
The first line places the two vectors, dat and cusum, together into a dataframe.
The second line displays the dataframe, along with row numbers, in the console, from which it can be copied and pasted into other applications for further processing.
The third line saves the dataframe as a comma delimited .csv file. This is useful if the data are too large to handle by copy and paste from the console.
Please note: RStudio writes files to the User/Documents folder by default. The path needs to be reset if the user wishes to save files to a specific folder; this is discussed in the file I/O panel of the R Explained Page. A minimal sketch follows.
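A minimal sketch of checking and changing the output folder (the path shown is hypothetical and should be edited):
# check or set the folder that write.csv() will use
getwd() # show the current working folder
# setwd("C:/Users/yourname/Documents/cusum") # hypothetical path; uncomment and edit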