Better Lucky than Smart
An age-old bit of wisdom can help you overcome outcome bias to improve your decision making as a leader.
My father has long been fond of the old expression “Better lucky than smart.” He spent years in the drilling and blasting business, an industry fraught with uncontrollable variables and unintended consequences despite rigorous calculations. More recently, I have come to see this adage as a simple tool for improving decision analysis and overcoming outcome bias — particularly among those not formally trained in decision science.
Also known as the “outcome effect,” outcome bias is a cognitive bias that leads individuals to evaluate a decision based on its final result, whether that outcome was achieved by chance or through a sound process. When the outcome is good, the entire effort is judged positively. Conversely, a sound decision process may be condemned if the end product is negative for reasons unrelated to the process. Ideally, process and outcome would be evaluated separately, but this is rarely the case.
For example, the coaching staff of the Seattle Seahawks was blasted for the play call at the end of Super Bowl XLIX in February 2015. Near the goal line with seconds to go and a four-point deficit to overcome, the Seahawks attempted a pass rather than hand the ball to their star running back, Marshawn Lynch. This call, made by offensive coordinator Darrell Bevell, will be “debated for decades to come,” according to NFL.com. Why? The pass was intercepted by Patriots rookie Malcolm Butler, and New England won the championship. (Sorry, Seahawks fans.)
However, based on the logic of rational analysis, the pass actually may have been the right call to make. Andy Benoit of Sports Illustrated and San Francisco 49ers head coach Jim Harbaugh were among those who defended Bevell and Seahawks head coach Pete Carroll. The pass-versus-run percentages were in the Seahawks’ favor: The play was unexpected, and the team was set up for it to work. Only it didn’t.
Outcome bias doesn’t surface only in sports — it affects all of our decision analysis. My colleagues at the National Preparedness Leadership Initiative and I conducted what we believe is the most extensive set of interviews examining the leadership of the response to the Boston Marathon bombings. We spoke with everyone from the governor to the mayor to first responders and executives from the private-sector companies affected. It was one of the most successful responses we have ever researched: Multiple agencies at the federal, state, and local levels worked extraordinarily well together. Ordinary citizens did not hesitate to help. The 264 people injured in the bombing were evacuated within 22 minutes. They were evenly dispersed among area hospitals. All survived. After an unprecedented shelter-in-place request later in the week, met with near-universal compliance, the suspects were apprehended. Citizens cheered. The heads of law enforcement and emergency medical services, as well as rank-and-file responders, were hailed as heroes. They got it right!
Yes, they did. I do not for a moment want to discount the planning and preparation or the exceptional valor demonstrated that day. There were indeed many acts of selflessness and heroism. However, there was also a great deal of luck. Luck has come up almost every time I have presented our findings. At a recent discussion of the marathon bombings case, I decided to put the question to the participants: Were responders lucky or smart?
The discussion was lively. On the “smart” side, participants listed extensive planning as well as the drills and exercises conducted over the previous decade. Trust-based relationships between leaders, built intentionally over those years, fostered extraordinary cooperation and collaboration between organizations and across sectors. Ample equipment was well positioned. Numerous medical personnel were on hand. EMS chief Jim Hooley knew to distribute patients evenly. Later in the week, law enforcement officials were able to comb through vast amounts of video footage to identify the suspects.
Then came the “lucky” list. The weather was perfect, so the medical tents were not overly burdened by marathon-related injuries before the bombing. The bombs were placed low to the ground, which resulted in fewer fatalities than would have been the case had shrapnel hit people in the head and chest. The detonations came when crowds were sparser — well after the peak gathering to see the elite runners. Had the bombs been placed a mile west in Kenmore Square, the more crowded part of the course at that time, it would have been far more difficult for medical personnel to reach the injured. Many officials were seasoned in their positions. The list went on and was soon longer than the “smart” tally.
Introducing luck to the analysis, and acknowledging that it played some role in the outcome, made it possible for the group to separate process from outcome. It made them think — and in some cases debate — what stemmed from chance and what resulted from in-depth planning and sound execution. It opened the opportunity to explore how future plans could be improved should Lady Luck not be so kind in a future event.
This is where I see the potential of “smart or lucky” as a simple tool to catalyze more rigorous decision analysis, particularly among those of us who are only casual analysts. Starting with “smart” allows proper credit to be allocated to those who have truly done well. Following with “lucky” gives people a way to challenge elements of any process and counter outcome bias. It may stimulate a dive into the data for a more objective view. When examining a bad result, I advise reversing the order; this keeps people from piling on and creating unnecessary acrimony.
Use luck to get smart about improving decision analysis. You are more likely to take away true lessons learned from high-consequence events.