
Why Cognitive Bias is Ruining Your Clinic’s Outcomes Data Quality—and How to Fix it

Objective data isn't always used objectively.

Zach Walston | 5 min read | June 26, 2020

How do we know if we are providing high-quality care? The answer to this question is sought by a multitude of parties: patients, clinicians, educators, legislators, and insurance companies. Unfortunately, it’s not easy to determine. There is no single score or report that provides a definitive benchmark of quality, but various measures can help paint the picture. One set of measures garnering a lot of attention is outcome tools.

The Value and Benefits of Outcomes Data

Outcome tools provide value in many areas. We can use them to:

  • track information and then disseminate it to employees;
  • integrate quality data into the culture of our organizations;
  • develop continuing education courses and seminars;
  • facilitate conference participation and research projects;
  • guide mentorship, residency, and fellowship programs;
  • support marketing efforts; and
  • develop quality improvement initiatives to improve recruitment and employee retention.

Outcome tools have practice-wide benefits as well. At PT Solutions, for example, we use our outcomes data to:

  • foster research collaborations with universities;
  • negotiate with payers; and
  • educate the public.

Imagine how much influence we could have if the whole profession joined together and used outcomes data with a unified approach.

While outcome tools are valuable and undoubtedly represent a piece of the care quality puzzle, they are often met with resistance by clinicians. Outcome tools can elicit negative emotional reactions, as clinicians may feel attacked and scrutinized. As Danielle Ofri wrote in What Doctors Feel, “the desired practical outcome is smothered by the emotional cost.” While this is certainly true when outcomes are treated as the only indicator of clinical quality, writing them off completely removes objective data and accountability from the care equation.

Bias in Data Analysis

Unfortunately, we are bound to be heavily biased in our assessments of care quality. I experienced this challenge at PT Solutions, where I serve as the National Director of Quality and Research. In 2015, we rolled out Focus on Therapeutic Outcomes (FOTO), a system of outcome measures, to track clinical outcomes and patient satisfaction. It was not a smooth undertaking.

I immediately sought to discover and correct the issue, as I knew we provided high-quality care. While we always have room for improvement, the results we were obtaining were not consistent with our other quality metrics, such as cancellation rates, online reviews, and patient-generated referrals.

What I found was that the rapid rollout of FOTO had resulted in a poor understanding of how to use the tool. Clinics were failing to complete surveys at regular intervals, especially on or near discharge. This resulted in a large “days between status and discharge” value: we would treat a patient for 10 visits, but the final score was captured at visit six. Essentially, we were treating patients and improving them without getting proper credit in our outcome scores. I spent months beating the drum that the tool was being used incorrectly. Enter the law of unintended consequences.
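
Incidentally, this gap is easy to quantify from your own records. Below is a minimal sketch of the calculation; the data structure and field names are hypothetical (not FOTO’s actual export format), and the seven-day flag is an arbitrary threshold.

```python
from datetime import date

# Hypothetical episode records: the date of the last completed outcome
# survey ("status") and the actual discharge date. Field names are
# illustrative, not FOTO's export format.
episodes = [
    {"patient": "A", "last_survey": date(2020, 3, 2), "discharge": date(2020, 3, 20)},
    {"patient": "B", "last_survey": date(2020, 3, 18), "discharge": date(2020, 3, 19)},
]

for ep in episodes:
    gap = (ep["discharge"] - ep["last_survey"]).days
    flag = "  <-- outcomes likely under-credited" if gap > 7 else ""
    print(f"Patient {ep['patient']}: {gap} days between status and discharge{flag}")
```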

By placing a large emphasis on our incorrect use of the tool, I created a narrative that quickly became a convenient excuse when results plateaued—any poor outcomes or satisfaction values from that day forward were attributed to an issue with the surveys themselves. At this point, I started seeking alternative explanations and strategies to improve our outcomes processes.

I started researching and reading about decision-making and critical thinking. Soon, I stumbled across one of the most influential books I have ever read. Thinking, Fast and Slow by Daniel Kahneman opened my eyes to the world of biases and cognitive fallacies. The pieces fell into place, and I started to understand many of the issues impeding the use of quality data.

Now, I’ll walk through the key cognitive fallacies our practice has encountered in our attempt to improve our use of outcomes and satisfaction data. I have fallen victim to each of these and can attest to their power. The goal is not to abolish these biases—after all, it’s impossible to rid ourselves of them completely—but rather to generate awareness so we can more effectively recognize and navigate these cognitive traps.

Confirmation Bias: Supporting Established Beliefs

The first bias to address is arguably the most common. Confirmation bias is when people seek data that are likely to support the beliefs they currently hold. We focus on what we know and neglect what we do not—which makes us overly confident in our beliefs. In the clinical world, this commonly occurs when supporting our treatment decisions. A quick PubMed search often reveals a host of studies supporting my viewpoint. So, I just conveniently ignore the ones refuting it. We see similar approaches regarding quality metrics.

As clinicians, we will naturally defend our treatment decisions and clinical quality. We don’t go into work thinking, “Today my aim is to be exceptionally mediocre at treating.” Therefore, when looking at quality data, we are likely to focus on the information that supports our current beliefs: “I don’t care what FOTO may say; look at how low my cancellation rate is!” This is exacerbated when adding new measures—like our practice did with the rollout of FOTO—as it invites additional cognitive fallacies.

Theory-Induced Blindness: Clinging to One Way of Thinking

Theory-induced blindness, a term coined by Daniel Kahneman, essentially means that once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. Our theoretical beliefs are robust, and it takes much more than one embarrassing finding for established theories to be seriously questioned. If I have accepted certain quality measures (like the cancellation rate or the number of patient-generated referrals), I am more likely to cling to those. This remains true even if new data comes to light that calls into question previous beliefs. We see this when clinicians fail to adopt new treatments or assessments because they have had success with previously accepted approaches. Changing how we conduct and assess our job is uncomfortable and difficult. This is exacerbated if we are working with limited information.

Availability Bias: Using Only the Information that Comes Easily

Compounding the issues of confirmation bias and theory-induced blindness is a failure to procure all relevant information. It’s very easy for us to cherry-pick information, both intentionally and unintentionally. Availability bias is the tendency to pass judgment based on the ease with which instances come to mind. If we constantly assess our cancellation rate or our visits per referral, then those numbers will more easily come to mind. If practice leaders frequently refer to those same metrics, then their perceived importance will increase. Another piece of the availability bias is failing to recognize when we have enough information to draw conclusions.

I employed an availability bias when I was hyper-focused on the FOTO data for one of our flagship clinics. Following months of training and no discernible improvement in the outcomes data, I was certain there was an issue in the quality of care. However, once I looked at the entire picture and recognized our great cancellation rate, visits per referral, self-discharge rate, patient-generated referrals, Net Promoter Score® (NPS®), online reviews, and patient satisfaction numbers, I had to concede that clinical quality was not a concern. As it turned out, a single clinician was artificially impacting the scores by prematurely closing cases in FOTO to avoid the hassle of collecting and monitoring outcomes data. So, we must always ask ourselves, “Do I have all the information necessary to draw a conclusion?”
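
Catching an artifact like premature case closure doesn’t require anything sophisticated, either. As a rough sketch (the records and the two-visit threshold are my own assumptions, not a FOTO feature), you can compare the visits each clinician actually delivered against the visit at which each outcomes episode was closed:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records pairing actual visits delivered with the visit
# number at which the outcomes episode was closed. Illustrative only.
cases = [
    {"clinician": "Smith", "visits_delivered": 10, "closed_at_visit": 9},
    {"clinician": "Smith", "visits_delivered": 12, "closed_at_visit": 12},
    {"clinician": "Jones", "visits_delivered": 11, "closed_at_visit": 4},
    {"clinician": "Jones", "visits_delivered": 9, "closed_at_visit": 3},
]

gaps = defaultdict(list)
for c in cases:
    gaps[c["clinician"]].append(c["visits_delivered"] - c["closed_at_visit"])

for clinician, g in gaps.items():
    avg = mean(g)
    flag = "  <-- review episode-closure habits" if avg > 2 else ""
    print(f"{clinician}: closes episodes {avg:.1f} visits early on average{flag}")
```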

The Halo Effect: Relying Too Heavily on First Impressions

The second issue we faced regarding available information was the halo effect. The halo effect is when we increase the weight of first impressions, sometimes to the point that subsequent information is mostly wasted. It can affect subsequent performance assessments. How does this pertain to outcomes?

An example of the halo effect is assuming all clinicians in your practice—simply because you hired them—provide high-quality care, even if you have not seen them treat or reviewed their metrics. Even as we gain more information through co-treating and training, the initial conclusion drawn during the hiring process heavily influences our perception. Assuming we are great and the tool is wrong will stunt clinical development.

The Sunk-Cost Fallacy: Using Previous Investment to Justify Future Efforts

As you will notice, this progression is akin to the five stages of grief. Let’s now explore three biases that make up the search for excuses. The first is the sunk-cost fallacy. This occurs when individuals continue a behavior or endeavor as a result of previously invested resources (time, money, or effort), regardless of the outcome. In rehab therapy, we often see this when clinicians cling to a particular treatment approach—regardless of its efficacy and literature support—because of previously invested resources. It can also bias our acceptance of outcomes data, as we assume the quantity of resources invested should be proportional to the quality of care.

Outcome Bias: Focusing Too Heavily on Negative Results

The second fallacy is the outcome bias. The outcome bias occurs when we blame decision-makers for good decisions that worked out poorly and give them too little credit for successful moves that appear obvious only after the fact. For example: “My cancellation rate is great because I have implemented more open and frequent communication with patients. The poor satisfaction score is outside my control. I have had a large influx of Medicaid patients and my new clinician is struggling.” Or perhaps: “Of course Sarah has great outcomes; her clinic is in an affluent area, and all of her patients are young, athletic, and seeking care during the acute stage. Meanwhile, my clinic is in a different part of town, and all my patients are in chronic pain and have been using opioids.” While these facts may be true, there is a lack of ownership, and no actionable tasks are being developed to address the outlined issues.

The Loss Aversion Fallacy: Favoring the Status Quo

Lastly, we have the loss aversion fallacy. Loss aversion is a powerful conservative force that favors minimal departure from the status quo. We experience stronger regret when a poor outcome results from action rather than inaction. For example: “I knew I shouldn’t have discussed recent cancellations with my patient. Now they are uncomfortable and won’t come back. I should have stayed quiet.” We are more prone to seek excuses and reasons to avoid action than to pursue it. When presented with poor outcomes or satisfaction data, loss aversion makes us less likely to implement strategies to improve the results.

The Connection Between Culture and Performance

At this point in my journey, buy-in started to take hold, but the results were inconsistent and region-specific. Ultimately, we found that culture had a substantial influence on results. The regions that were more stable, more open and honest with feedback, and more driven by internal competition consistently outperformed the others. Now that we had sufficient training in FOTO usage and cognitive bias awareness, we needed to determine which tool to use for promoting improved performance: carrots or sticks.

Rewarding Positive Results

The research is clear that reward for improved performance works better than punishment for mistakes. Note that I say reward and not financial incentive. The evidence is mixed regarding the effectiveness of financial incentives compared to other types of rewards, which could include:

  • verbal or written recognition,
  • promotion,
  • awards, and
  • additional responsibility.

However, the question is not simply a matter of how large a bonus the high-performing clinics should receive; it is whether the focus should be on the high-performing clinics or low-performing ones.

Some of you may disagree with focusing on reward, citing your experiences with the opposite effect, but that is often a result of the regression to the mean phenomenon. If someone exceeds expectations, future results are likely to be relatively worse, despite the initial praise. The reverse is seen with unexpected dips in performance and subsequent punishment. We need to assess trends before drawing conclusions on the effectiveness of our interventions.
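
If you want to see regression to the mean in action, a short simulation makes the point. In this sketch (all numbers invented), each clinic’s monthly score is a stable underlying quality plus random noise; no one is praised or punished, yet last month’s top performers “decline” and the bottom performers “improve”:

```python
import random

random.seed(42)
clinics = [{"quality": random.gauss(70, 5)} for _ in range(1000)]

def monthly_score(clinic):
    # Observed score = stable true quality + month-to-month noise.
    return clinic["quality"] + random.gauss(0, 10)

month1 = [(monthly_score(c), c) for c in clinics]
month1.sort(key=lambda pair: pair[0], reverse=True)

top = month1[:100]      # the clinics we would praise
bottom = month1[-100:]  # the clinics we would punish

for label, group in [("Top 10%", top), ("Bottom 10%", bottom)]:
    before = sum(score for score, _ in group) / len(group)
    after = sum(monthly_score(c) for _, c in group) / len(group)
    print(f"{label}: {before:.1f} -> {after:.1f}")
# The top group falls and the bottom group rises with no intervention at all.
```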

Getting Brutally Honest

Once the biases have been addressed and the method of feedback has been established, it’s time to improve our outcomes assessment process and plan for the future. We need to be both reactive and proactive. An interesting consideration is the difference in motivation between meeting goals and exceeding them.

We work a lot harder to meet goals than to exceed them. One is the difference between success and failure while the other is the difference between success and, well, more success. We saw this with our quality data. If we meet the goal of 98% patient satisfaction, it is no longer discussed. We no longer develop plans to improve it further—despite 2% of patients remaining unhappy—and we instead focus on other issues. If the goal is not a stretch goal—something difficult but still attainable—we often fall short of our potential. Additionally, we will have a handful of clinics simply meeting the goal and a handful missing the mark. So, on average, the group is still below the goal.
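
The arithmetic behind that last point is worth spelling out. If the clinics that reach the goal coast at exactly 98% while the rest fall short, the practice average must land below the goal, as this toy example (with invented numbers) shows:

```python
# Hypothetical satisfaction scores: four clinics coast at the 98% goal,
# three miss it. No one exceeds the goal because effort stops at "met."
scores = [98, 98, 98, 98, 95, 94, 96]
goal = 98

average = sum(scores) / len(scores)
print(f"Practice average: {average:.1f}% (goal: {goal}%)")  # ~96.7%, below goal
```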

Being Proactive Versus Reactive

“Using distributional information from other ventures similar to that being forecasted is called taking an ‘outside view’ and is the cure to the planning fallacy.” – Bent Flyvbjerg

The planning fallacy was the last major hurdle we needed to clear while reshaping our culture on clinical outcomes. We needed to shift from strictly reacting to data from past treatments and start looking forward to proactively address potential concerns. The planning fallacy is the tendency to underestimate potential complications and time needed for future projects. This often occurs in patient care. We may not consider the impacts of a patient’s diet or sleep hygiene. We may not account for the influence of understaffed front offices or the time it will take a new graduate to assume a full caseload. To combat this fallacy, we need to aggregate multiple perspectives.

Aggregating assessments adds to the collective pool of information. This allows the decision-maker to take “the wisdom of the crowd” and formulate a more well-informed opinion. This can be accomplished at the treatment level and at the overall practice assessment level. Ask your colleagues to review a recent evaluation to look for any potential missed data, incorrect assessments, or likely future pitfalls in the prognosis and plan of care. Work as a leadership team to assess all relevant data, bringing your individual perspectives to determine why a clinic or region is struggling with clinical outcomes. Regardless of the level of the issue—be it individual patient care or clinic-wide—look at the big picture. Looking at cases in isolation leads to quicker, more emotionally driven reactions.
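
The statistical case for aggregating assessments is simple: independent errors partially cancel. Here is a minimal simulation, assuming each reviewer’s estimate is the true value plus independent noise; averaging five reviewers cuts the typical error by roughly the square root of five:

```python
import random

random.seed(7)
true_prognosis = 50.0  # the true value we are trying to estimate

def one_assessment():
    # Each reviewer sees the truth through independent noise.
    return true_prognosis + random.gauss(0, 10)

trials = 10_000
solo_error = sum(abs(one_assessment() - true_prognosis) for _ in range(trials)) / trials
crowd_error = sum(
    abs(sum(one_assessment() for _ in range(5)) / 5 - true_prognosis)
    for _ in range(trials)
) / trials

print(f"Average error, one reviewer:   {solo_error:.1f}")
print(f"Average error, five reviewers: {crowd_error:.1f}")  # roughly 1/sqrt(5) as large
```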

Closing Thoughts

As Ofri pointed out in her book, outcome measures and similar quality metrics don’t provide a full understanding or measure of quality. They provide valuable information, but we cannot determine whether quality care was provided solely from the outcomes data. The same can be said for patient satisfaction measures. Emotion and bias can influence a patient just as they do a clinician. For example, if a patient’s primary issue is a long wait time prior to the appointment, the patient is likely to give a 1-star rating across the board, even though the wait time has no influence on the care provided.

In my role, quality assessment is a key responsibility. Previously, I homed in on problems and expected excellent results. I also heavily weighed outcomes data and assumed that if the numbers were poor, then the care was poor. Through reflection, feedback, and assessment, I now have a more refined understanding of the value outcomes and satisfaction data provide. I also realize I was previously mistaken in my weighing of their value, and that I caused undue stress, frustration, and potentially even shame among my colleagues. We must hold one another accountable to high-quality care, and we have a variety of tools to make assessments of said quality. But we must also ensure we do not pull each other down when making those assessments.

Outcome tools can provide substantial value for your practice and the profession. While we should not rely solely on outcomes to tell the story, we can maximize their value by using the tools correctly and obtaining employee buy-in. The biases I covered are not the only cognitive fallacies you will run across in clinical care, but they are among the most prevalent. As stated earlier, the goal is not to eliminate them, but to maintain an awareness that helps us navigate potential cognitive traps. It will take practice. There will be struggles and resistance, but it will be well worth the effort.

Dr. Zach Walston, PT, DPT, OCS, serves as the National Director of Quality and Research at PT Solutions. He currently oversees research efforts at PT Solutions, develops and teaches continuing education courses, and creates quality improvement initiatives throughout the practice. Zach also serves as the Program Coordinator of the PT Solutions Orthopedic Residency Program and the director of the Clinical Mentorship Program. Connect with Zach on Twitter (@zachwalston) and LinkedIn.
