I know it's been quiet around these parts lately, but a recent article caught my attention this morning.
High Performance EMS posted "Does Response Time Matter?" and it got me thinking.
The author cites an example of a patient being "treated" by fellow citizens at an airport while waiting 20 minutes for an ambulance to arrive, and goes on to describe how we need to arrive quickly to save the public from themselves. After 30 years of telling them to call 911 for anything and convincing them that "seconds count!" what did we expect? While I agree that a delayed response to certain patient presentations could result in an adverse outcome, the story has a glaring omission: the patient outcome. The outcome is what lets us marry all the data from the response and answer the question in the author's headline.
The short answer is no, response times don't matter. And no, I don't have to pee. I have data that shows no correlation between quality of treatment, outcome, and response time. From my perch here at the data hub of a quite busy EMS system, we have been trying to determine the quality of our EMS system, and we rarely look at response times.
Don't get me wrong, we look: our Department statistician collects, quantifies, qualifies, and reports to regulators the 90th percentile of all code 2 and code 3 calls to meet their requirements. We report it, they receive it. The document says nothing about the quality of care or patient outcome. The reason is that we cannot guarantee a positive patient outcome, but we can measure when we left and when we arrived. Imagine if we had to treat 90% of symptomatic asthmatics with oxygen within 5 minutes of arrival and document an improvement in condition. Can your system guarantee that? Why aren't EMS systems measured by the quality of their care instead of the quality of their response?
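The gap between those two kinds of measurement is easy to see in code. Here's a minimal sketch, with made-up field names rather than our actual CAD or PCR schema, of the percentile the regulators get next to the asthma measure imagined above:

```python
# Illustrative sketch only -- field names are hypothetical, not our CAD/PCR schema.
from statistics import quantiles

def response_time_90th(calls):
    """90th percentile of dispatch-to-arrival time, in minutes, for code 2/3 calls."""
    times = [(c["arrived"] - c["dispatched"]).total_seconds() / 60
             for c in calls if c["priority"] in ("code 2", "code 3")]
    # quantiles(n=10) returns 9 cut points; the last one is the 90th percentile.
    return quantiles(times, n=10)[-1] if len(times) > 1 else None

def asthma_oxygen_measure(calls):
    """Fraction of symptomatic asthmatics given oxygen within 5 minutes of arrival
    and documented as improved -- the kind of care measure nobody asks us for."""
    cohort = [c for c in calls
              if c.get("chief_complaint") == "asthma" and c.get("symptomatic")]
    if not cohort:
        return None
    met = sum(1 for c in cohort
              if c.get("oxygen_given_min") is not None
              and c["oxygen_given_min"] <= 5
              and c.get("condition_improved"))
    return met / len(cohort)
```

The first number is what gets reported; the second is the kind of question nobody requires us to answer.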
Apply this metric to any other industry and it fails. Industries are measured by their quality and efficiency, not the speed with which they complete their tasks. So long as we only look at one metric with any regularity, we will continue to shuffle ambulances 2 blocks at 5-minute intervals to meet an average instead of realizing that simply leaving them in place would yield the same outcome.
That's where I come in. My Medical Director and I, unhappy with the lack of actual patient care quality metrics, created our own in an effort to determine the quality of care being provided. We learned very quickly that our ambulances do not respond in a vacuum. Each patient gets a call taker, a dispatcher, a first response, an ambulance response, an assessment, and treatment, and some get transported. Once at the hospital they receive a whole new level of care and review until they are finally sent home. It is hard to argue that the time it took to get an ambulance from point A to point B had an impact on this outcome without reviewing everything from the call taker's coding of the call and the dispatcher's assignment of the ambulance all the way to the destination hospital's capabilities and location.
We can all sit at the Pratt Street Ale House in Baltimore and discuss short times that had a bad outcome and long times that had a good outcome, but the worst part of all of this discussion is that so few systems measure anything more than response time.
If you consider response time your metric of success, you have already failed. You have failed the patient who improves when you arrive "late" by discounting that response as a failure, yet you trade high fives when a 2-minute response yields a call to the Medical Examiner's Office.
We all know the stories of companies staffing ghost cars near the end of the month to bring down the monthly response metric to meet guidelines. It happens. But I also wonder whether that flood of extra ambulances, suddenly available to help more people, had any other impact.
The complication in tracking outcomes is the relationship your agency has with local hospitals. We may never have a seamless transfer of data but what we can do is pull data from the PCR to determine if the patient received the indicated treatments for the recorded chief complaint and observed complications. By reviewing your policies and protocols as well as your patient demographics you can quickly spot your core performance indicators and design tools to track them.
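Even without a hospital data feed, that kind of check can be sketched in a few lines. The indication table and PCR fields below are placeholders for illustration, not anyone's actual protocols:

```python
# Rough sketch assuming a simplified PCR record -- the indication table below
# is illustrative, not a real protocol set.
INDICATED = {
    "chest pain": {"aspirin", "12-lead ecg"},
    "asthma": {"albuterol", "oxygen"},
    "hypoglycemia": {"blood glucose check", "dextrose"},
}

def missing_treatments(pcr):
    """Return indicated treatments not documented for this PCR's chief complaint."""
    indicated = INDICATED.get(pcr.get("chief_complaint", "").lower(), set())
    documented = {t.lower() for t in pcr.get("treatments", [])}
    return indicated - documented

def compliance_rate(pcrs, complaint):
    """Share of PCRs with the given complaint where every indicated treatment was documented."""
    cohort = [p for p in pcrs if p.get("chief_complaint", "").lower() == complaint]
    if not cohort:
        return None
    return sum(1 for p in cohort if not missing_treatments(p)) / len(cohort)
```

Run against a month of records, a compliance rate per complaint says far more about the system than an average drive time ever will.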
It may be nice to know that we make our 90th percentile in 8 of 10 districts on a regular basis, but what if those 2 districts happen to have the highest number of cardiac arrest survivals to discharge? Are they still a failure?
Widen your view to include more than how quickly you can put the ambulance in park. This goes far beyond the lights-and-sirens System Status Management debate and speaks to the core of the reason we're out there to begin with:
To make someone's bad day better
Delays can hurt, but you won't know unless you look deeper into your system to find out whether that is the case...or not.
Comments
It's not your emergency, and if 15 seconds makes that much of a difference, chances are you wouldn't have made much of a difference anyway.
When we say "response time doesn't matter" we set a bad default standard of care. All aspects of a response matter - including time - but the degree to which each effects outcome varies tremendously by case. Patient outcome is a difficult metric for many reasons and to say the data does not show it is a self serving interpretation of failing to prove a negative. If I could upload a graphic, I would submit a comparison of response times at Jersey City Medical Center correlated with ROSC rates. According to Mt Sinai, the trend lines cross somewhere around 4 minutes. Is it strictly a short arrival time that makes people better? Definitely not. My point was to simply acknowledge something we want to deny or minimize. Not that I want it to become the focus of our attention (too many systems already do that,) but so we can be freed from one of the controllable factors in order to focus on improved patient care.
I don't want any system toying with the numbers to make a specific measure work out. My hope instead is to make response time even less of a consideration so attention can be put toward reducing whatever is the next barrier to positive patient outcomes. Discounting the effect of time is not helpful with any outcome you do care to measure. Thanks for the opportunity to further the discussion.
Response time guidelines will always be altered, changed, and redefined to match our capabilities, and each community will choose the arbitrary number they think best fits their community. Like I said, some choose 8, some 12; mine chose 10. Not for any particular reason or with reference to data on the subject, it just seemed like a good target to aim for. So far, the data we have point to having ALS within 4 minutes' reach in cases of cardiac arrest. Many communities choose to focus on community CPR to shorten the time to BLS interventions to buffer their response.
A wider conversation needs to be had and I'm glad you are bringing it back from the shadows it hides in.
Thanks for reading,
-HM
That said, for every run my questions are: 1) Under typical traffic and road conditions, were we close enough to SAFELY respond to the patient (not the scene) within an acceptable time? (8 minutes seems to be the nationally accepted number.) And 2) Did we employ the most effective techniques known to medical science?
My answer to #2 (and sometimes #1) often raises question 3): What bureaucracy is standing in our way? Sometimes it's just a matter of the FDA and DOH dragging their feet on allowing a new procedure or drug, but more often than not it's a budget issue. Many times the same people who criticize the department for being slow, under-staffed, and under-equipped fight attempts to raise funds to improve the service.
And still we fight death.
We want to make sure we are getting the best value for our local tax dollar. The money we spend on anything else does not matter, it seems, as long as we are squeezing every last penny out of that local tax dollar. And we are frequently “penny wise but pound foolish”.
In reviewing call reports that my system generates, I look at travel times a lot. Not so much response times, but transport to the hospital. There are numerous factors that go into those numbers: at what point in heading to the hospital did the unit actually ‘check en route’? Why does it take one unit 48 minutes to make a certain trip that we do several times a month, while another unit completes it in 53 minutes and another in 39?
Are there some crews that are actually driving that much faster (or slower), or are there different times when the actual contact is made to say “en route to Big Hospital”, or is there a difference in the time it takes for the telecommunicator to process and enter the information into the software?
So what does that say about response times if there is such a variance in transport times?
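One rough way to start untangling that, using nothing but CAD timestamps (the field names here are made up), is to compare each unit's median and spread of transport times for the same destination run; a wide spread within a single unit points more toward when the status gets keyed in than toward driving habits:

```python
# Back-of-the-envelope sketch with hypothetical CAD fields: group transport
# times by unit for one destination and compare medians and spread.
from collections import defaultdict
from statistics import median, pstdev

def transport_times_by_unit(trips, destination):
    """Collect transport minutes (en route to at hospital) per unit for one destination."""
    by_unit = defaultdict(list)
    for t in trips:
        if t["destination"] == destination:
            minutes = (t["at_hospital"] - t["en_route"]).total_seconds() / 60
            by_unit[t["unit"]].append(minutes)
    return by_unit

def summarize(by_unit):
    """Print one line per unit: trip count, median transport time, spread."""
    for unit, times in sorted(by_unit.items()):
        print(f"{unit}: n={len(times)}, median={median(times):.1f} min, "
              f"spread={pstdev(times):.1f} min")
```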
Still, response time is something easy to measure, no matter the different factors. It gives us a ‘number’. This at the same time that getting actual feedback on patient outcomes can be difficult at best. My system regularly transports patients to twelve different hospitals in six different counties. And of course, only two of those hospitals are in our county, and beyond that, each hospital in the other counties has its own computer system and software, nothing is compatible, and providing feedback to us is a distant afterthought.
But getting back to the original concept proposed by THM that I rambled away from, I think that outcomes should be the deciding factor in what an EMS system looks like and how it is measured, but things like public perception rear their ugly head. After all, aren't we the ones who have beaten it into their heads that every second counts in their emergency?
Most of the complaints that my former managers had to deal with involved delayed or perceived-delayed responses. If the ambulance wasn't there when the person expected it to be there, they called city hall, and then city hall called my chief. And then incident reports were written, even if the response time was short and the medical problem minor.
I agree that EMS system design should be based on outcomes, but sadly I don't see it happening absent a major change in human nature and expectations.
Crazy, but true.