Monday, March 24, 2008

An Example of Zeroing In on a Workflow Issue

Today I’ll show a hard-data example of how to measure workflow and visualize how it can be analyzed. I’ve previously described how a new, bigger office actually hurt workflow, but what was the specific problem? Our office tries to keep the time someone waits during an appointment under 50 minutes. But when we moved offices in June 2007, the number of people waiting longer than 50 minutes increased. The number of “errors” (waiting longer than 50 minutes) went from an average of 9% of people to 12.5% (a 39% increase).
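To be concrete about the metric: the error rate is simply the fraction of visits whose total wait exceeds the 50-minute target. A minimal sketch of the calculation (the waits array is hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical total wait times in minutes, one entry per visit.
waits = np.array([42.0, 55.5, 31.0, 63.2, 48.9])

# Error rate: fraction of patients waiting longer than 50 minutes.
error_rate = (waits > 50).mean()
print(f"{error_rate:.1%} waited longer than 50 minutes")

# The relative change quoted above: going from 9% to 12.5% is
# (12.5 - 9) / 9 = 39%, even though the absolute jump is 3.5 points.
print(f"relative increase: {(12.5 - 9) / 9:.0%}")
```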

To find the problem, we made a patient flow map of the consultation process.

Using data mining, the times between each stage of an appointment could be retrieved for 596 patients. Three intervals within each appointment were defined to analyze the patient flow: the time from when the patient arrived to when their medical history was completed (this includes registration and review of medical history – “Arr to MedHx”), the time from when the medical history was reviewed to when the doctor completed the consultation (“MedHx to Doc”), and the time from when the doctor completed the consultation to when the patient left (“Doc to Out”). Whenever doing a process flow map, start globally; a more detailed analysis can be added if necessary. In this process flow map we used four time stamps to create three time frames.
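As a sketch of what that data-mining step might look like (the file and column names here are assumptions, not our actual system), the four time stamps can be turned into the three time frames and averaged by quarter:

```python
import pandas as pd

# Hypothetical export of the four time stamps per appointment.
df = pd.read_csv(
    "appointments.csv",
    parse_dates=["arrived", "medhx_done", "doc_done", "left"],
)

def minutes(delta):
    """Convert a timedelta column to minutes."""
    return delta.dt.total_seconds() / 60

# Three time frames from four time stamps, plus the overall wait.
df["Arr to MedHx"] = minutes(df["medhx_done"] - df["arrived"])
df["MedHx to Doc"] = minutes(df["doc_done"] - df["medhx_done"])
df["Doc to Out"] = minutes(df["left"] - df["doc_done"])
df["Arr to Out"] = minutes(df["left"] - df["arrived"])

# Average of each time frame by quarter, to see where a spike shows up.
quarterly = df.groupby(df["arrived"].dt.to_period("Q"))[
    ["Arr to MedHx", "MedHx to Doc", "Doc to Out", "Arr to Out"]
].mean()
print(quarterly.round(1))
```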

In 2007 Qtr 2 and Qtr 3 there is a spike in the “Average of Arr to Out” (total time). Just prior to the move (Qtr 2) it is because of an increase in the average time from arrival to medical history (registration and medical history review), but after the move it is because of the doctor consultation time.

In the new office the doctors’ offices are located further from the consultation area (one short stairwell away but a world apart), so doctors were constantly running downstairs for chart work (and to surf the net). By adding dictation, phones, and internet access in the consultation area, the problem seems to be resolving. We could have done more studies to look at the times from arrival to registration, registration to x-ray, x-ray to medical history, etc., but the possibilities are endless. Always start with a more global view to scan for the problem, then drill down as necessary.

For those who are mathematically inclined, keep reading. The proportion of people waiting longer than 50 minutes is our error rate, and it maps directly onto a sigma value (a sketch of that conversion follows the figures below). The sigma value went from 3 to 2, which prompted the intervention. When we compare the period before the move to the problem period (2007 Qtr 2 and 3), look at what changed:

Error Rate (% waiting > 50 min): Up 39% (9% to 12.5% of patients)
Mean Wait: Up 10% (from 34 to 37 min)
StDev (standard deviation) of Overall Wait: Up 18% (from 10 to 12 min)
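Here is a minimal sketch of the error-rate-to-sigma conversion mentioned above. I’m assuming the conventional six-sigma 1.5-sigma shift; the exact sigma figures depend on which convention and rounding you use:

```python
from scipy.stats import norm

def sigma_level(error_rate, shift=1.5):
    """Sigma level for a given error rate, using the conventional
    1.5-sigma long-term shift (an assumption in this sketch)."""
    return norm.ppf(1 - error_rate) + shift

for rate in (0.09, 0.125):
    print(f"error rate {rate:.1%} -> sigma level {sigma_level(rate):.2f}")
```

Under that convention the two rates work out to roughly 2.8 and 2.7; the point is the direction of the drop, not the exact labels.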

But drill down on two of the three time frames, “Arr to MedHx” and “MedHx to Doc”:
Arrival to Medical History: Mean Wait Up 11% and StDev Up 1%
Medical History to Doc Consult: Mean Wait Up 25% and StDev Up 40%

The take-home message is that it is not enough to know the average wait time; you have to know the amount of variation in the wait time as well. In this case, the increase in mean wait was a smaller part of the problem than the increase in variability (in how long the doctor took). To put it into an ER wait-times scenario, a hospital may claim that the average wait from arrival to initial assessment is only up 10%, but the variation may have changed far more. High variability drives the error rate up quickly while barely moving the mean. In ER wait times, the more critical value is how many people are waiting too long for assessment/triage (see the CTAS scale for Canadian standards). In our case, the mean was up modestly, but the “error rate” was catastrophically elevated by the high variability of one segment of the appointment. Luckily, the problem was easily rectified once we realized what was happening.
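To make the mean-versus-variability point concrete, here is a sketch that treats the overall wait as normally distributed and plugs in the before/after numbers quoted above. This is an idealization (real wait times are skewed, so the modeled rates won’t match the observed 9% and 12.5% exactly), but it shows how the tail beyond 50 minutes responds to each parameter:

```python
from scipy.stats import norm

THRESHOLD = 50  # minutes; waits beyond this count as "errors"

def error_rate(mean, sd):
    """P(wait > THRESHOLD) under a normal model of the wait time."""
    return norm.sf(THRESHOLD, loc=mean, scale=sd)

scenarios = [
    ("before the move (mean 34, sd 10) ", 34, 10),
    ("after the move (mean 37, sd 12)  ", 37, 12),
    ("mean change only (mean 37, sd 10)", 37, 10),
    ("sd change only (mean 34, sd 12)  ", 34, 12),
]
for label, mean, sd in scenarios:
    print(f"{label}: {error_rate(mean, sd):.1%} wait > {THRESHOLD} min")
```

Even in this idealized model the combined shift more than doubles the tail probability, and widening the spread alone moves it nearly as much as raising the mean alone; at the segment level, where the StDev of “MedHx to Doc” rose 40%, that is why variability was the bigger part of the problem here.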
