It’s hard – if not impossible – to like being wrong. However, being wrong is sometimes inevitable. In Right Kind of Wrong: The Science of Failing Well, Amy Edmondson encourages us to fight our urge to hide our failures or berate ourselves for coming up short. Instead, she encourages us to distinguish between the intelligent failures that help us make progress in new territory and the basic and complex failures that can be wasteful and even destructive – but fortunately are also often preventable. In her previous work, The Fearless Organization,
Edmondson encouraged us to create organizations where people feel free to speak up openly about the work, are willing to experiment, and are able to report failure quickly. Here, she expands on the need to understand errors and failures – and what to do with them.
Learning
To understand why failure is always a possible outcome, we must first accept a fundamental truth that we’ve all heard before: to err is human. Said differently, there’s no way of avoiding human error entirely. Even those disposed to perfectionism cannot be perfect. (See Perfectionism and The Paradox of Choice for more.)
Once we accept that errors will happen, we are better able to disrupt the relationship between errors and failure. Far too many failures happen because an error isn’t corrected. Failure can end in shame and a lack of learning – or it can end in learning.
Learning opens the opportunity to prevent similar errors in the future and to reduce preventable failures. It has the potential to prevent harm and loss. At their best, failures are small, meaningful, and instructive. Failures can be a good thing when they lead to learning and actions that result in better long-term outcomes.
We can learn from all kinds of failures, but what Edmondson calls “intelligent” failures are the only ones that provide genuinely new knowledge that helps advance progress in new territory.
Failure is Not Fatal
I concluded my review of The Fearless Organization with a key observation: failure is inevitable if you try. I explained that I failed all the time. Years later, as I write this, I’ve got a number of 3D printed part iterations on my desk. Each one teaches me how not to make something. They’ll be thrown away soon enough to make room for more parts with different errors. The beauty of the 3D printer is that it allows me to test designs with minimal cost and risk. I can play with an idea, test it, learn, and move on. That is the very essence of what Edmondson calls “intelligent failure” – it’s small, the stakes are low, and the learning is real.
Edmondson suggests that we need to be “learning to dance with failure.” We need to not just embrace failure but also learn how to pursue intelligent failures so that they’re not fatal – nor too costly. It’s important to take risks and be courageous – but within limits. (See Find Your Courage for more on courage.)
Failure Types
Not all failures are the same. There’s a mantra in startups, “Fail fast,” which also exists in agile software development. However, in both contexts, it often omits the important second half: “…to succeed sooner.” That second half is the piece that matters. No one needs to fail for its own sake – we need what failure offers us in the way of learning so that we can succeed.
Edmondson proposes that there are three categories of contexts in which failure can occur:
- Consistent – There is well-developed knowledge about how to achieve the desired results. Think recipe.
- Novel – Creating something new. There is no roadmap or recipe that reliably leads to results. Think exploration.
- Variable – Situations where existing knowledge appears consistent or within our skill set, but the conditions have changed enough to make that knowledge insufficient. Think COVID-19.
Before I expand on these, I need to acknowledge that these contexts are reminiscent of Dave Snowden’s work on Cynefin.
Though Snowden describes more contexts and liminal areas between them, both frameworks echo the same core truth: some things are knowable, and some are not.
For me, the contexts boil down to those where I know how to solve the problem and those where I don’t. Variable contexts are where I mistake one for the other: I believe I have what I need, then discover I don’t.
We often fail to realize the limits of our knowledge and the conditions under which something we know works. In my review of The Cult of Personality Testing, I commented on the narrow bands under which chemical reactions will occur. Without an awareness of the limitations, we can be surprised when a reaction doesn’t occur. I find this particularly troubling when we seek to get good feedback. We’re seeking feedback from others whose experience shapes how they model and simulate (or, more simply, view) the world. (See Sources of Power and Seeing What Others Don’t for more on modeling.)
However, if our conditions are too far outside their experience, their feedback may be less useful or even harmful. (More on that in the section Feedback Revisited below.)
Failures in the category of consistent contexts can be intelligent – that is, filled with learning – but only if we use the opportunity to change the system so that it detects and corrects errors before they become failures. (See Thinking in Systems for how to make changes to systems.)
Appropriate risks in the novel territory often lead to learning. Failures in the variable space can lead to intelligent failures if we discover the limitations of our knowledge.
Liking to Fail
Edmondson says, “Nobody likes to fail. Period.” By default, I agree. I’ve never met anyone who has volunteered a desire to fail. No one likes to be wrong. However, I diverge from her thinking in that I believe people can condition themselves to like failure. In Think Again, Adam Grant shares a story from a talk he gave with Daniel Kahneman in the audience. Grant was explaining findings that contradicted Kahneman’s beliefs. Grant says, “His eyes lit up, and a huge grin appeared on his face. ‘That was wonderful,’ he said. ‘I was wrong.’” While this is far from the standard response, it’s clearly a response that is possible. It’s what Edmondson hopes to make easier for the rest of us mere mortals, who may not yet have developed Kahneman’s wisdom and genuine joy in discovery.
Careful readers will notice my substitution. Kahneman was happy that he was wrong – not that he had failed. This is where it gets tricky. Edmondson defines failure as “an outcome that deviates from desired results.” If learning is always one of the desired results, then even an intelligent failure achieves at least some of the desired outcomes – thereby undercutting the claim that it’s a failure in the first place.
This brings us back to learning as the chief purpose. Inherent in Kahneman’s response is that he learned something. While no one likes to fail by default, if you can elevate learning to the chief purpose, you can learn to like – or at least better accept – failure.
Barriers to Failing Well: Aversion, Confusion, and Fear
Edmondson explains that failing well is hard because of our aversion to failure, our confusion about what type of failure we’re experiencing, and fear of social stigma and excessive consequences.
While aversion is natural, there are ways to minimize it by reframing failures as opportunities for learning. Confusion is addressed with clarity about the types of contexts and the types of failure. Fear often looms largest of all – but it, too, can be addressed, not just by using techniques to limit the amount of risk taken but by better understanding fear itself.
Focus on Fear
When discussing fear, it’s important to recognize the relationship between stress and fear. Though often treated as distinct entities, they are the same phenomenon. With fear, we are afraid of something specific; if we weren’t, we’d call it anxiety – fear without a specific, targeted concern. With stress, we’ve encountered a stressor, and we’re afraid of the impact we believe is possible or probable.
If we want to reduce fear, we can take what we know about stress and fear to clarify its sources and adjust for our cognitive biases. (See Thinking, Fast and Slow for a primer on cognitive biases.) In Emotion and Adaptation, Richard Lazarus shares a model in which stressors are evaluated, and from that evaluation we become stressed. This is consistent with other researchers, such as Paul Ekman, who separates the startle response from other emotions because it’s unprocessed. (See Nonverbal Messages and Telling Lies.) Simply put, we evaluate a potential threat based on its degree of impact and its probability, then mitigate – in effect, divide – that evaluation by our coping resources, both internal and external. The result is our degree of stress or fear.
Biases exist in all three of these variables. We often systematically underestimate both our own resources and the resources others are willing to provide in support. We often overestimate the degree of impact: a failed experiment, company, or attempt doesn’t make us a failure; hopefully, it means that we’ve learned. Finally, we often overestimate the probability.
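To make the shape of that appraisal model concrete, here’s a deliberately toy sketch in Python. The function name, the 0-to-1 scales, and the simple divide-by-resources arithmetic are my own illustration of the “impact times probability, mitigated by coping resources” idea – not a formula from Edmondson or Lazarus.

```python
# A toy sketch of the appraisal model described above. All names, scales, and
# the divide-by-resources arithmetic are illustrative assumptions, not a
# formula from Edmondson or Lazarus.

def perceived_fear(impact: float, probability: float, resources: float) -> float:
    """Estimate felt fear/stress on an arbitrary scale.

    impact      -- how bad we believe the outcome would be (0..1)
    probability -- how likely we believe the outcome is (0..1)
    resources   -- internal and external coping resources (1.0 means "enough")
    """
    threat = impact * probability        # evaluate the stressor
    return threat / max(resources, 0.1)  # coping resources mitigate (divide) the threat


# The biases in miniature: underestimating resources while overestimating
# impact and probability inflates the felt fear well beyond a sober appraisal.
sober = perceived_fear(impact=0.4, probability=0.2, resources=1.5)
biased = perceived_fear(impact=0.7, probability=0.5, resources=0.8)
print(f"sober appraisal: {sober:.2f}, biased appraisal: {biased:.2f}")
```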
It’s important to acknowledge that failures are common. The failure rate of businesses in the US is roughly 20% within one year, 30% within two years, 48% within five years, and 65% within ten years. Failure in change projects (and all large projects) runs around 70%. (See Why the 70% Failure Rate of Change Projects is Probably Right for more.) It’s quite possible that failure is the natural result – and even with percentages this high, our biases tend to amplify them further. (See How We Know What Isn’t So for more.)
The net of this is that we can reduce our fear of failure if we’re willing to delve into what we’re afraid of – and why.
The Relationship Between Effort and Success
Edmondson shares how effort to reduce errors and success at reducing errors aren’t the same; the relationship is “imperfect.” This is true. Just ask hospitals, which are in a constant battle to increase handwashing rates. No one is startled to learn that handwashing reduces the spread of disease. Providers and clinicians working in hospitals are educated people who know how germs work and that handwashing is an effective strategy for preventing their spread. However, in most organizations, the best we get for sustained handwashing at appropriate times is around 80%. Decades of research and hundreds of millions of dollars haven’t produced a material change in a behavior that should be natural and automatic.
Similarly, seatbelt use in the United States isn’t 100% (it’s slightly over 90%) despite all the marketing campaigns, laws, and pressure. Effort alone doesn’t always drive behavior – and it doesn’t drive behavior consistently. When we’re working on reducing errors, we can’t expect that effort alone is enough. (See Change or Die for more on the difficulty of changing behaviors.)
Underground Failure
One of the riskiest things in an organization is a broken feedback system that deprives leaders of the signals they need to make adjustments. Like the Titanic steaming into an ice field at night, a lack of visibility can lead to tragic consequences. Leaders who state unequivocally that failure is off-limits don’t prevent failures. They prevent hearing about failures.
In Antifragile, his book about growing from challenges, Nassim Taleb explains our need for feedback and the opportunity to make many compounding changes to improve. Deprived of feedback, we must make wild – and therefore riskier – changes. If we want to create the conditions for our probable survival and growth, we need constant feedback.
“A stitch in time saves nine” is a very old saying with a simple meaning. If you can make the right corrective actions at the right time, you can save a lot of work. That means knowing about errors, mistakes, and failures quickly so you can address them – not when they’re so large they can no longer be hidden.
Intelligent Failure
Edmondson qualifies a failure as intelligent if it has four key attributes:
- It takes place in new territory.
- The context presents a credible opportunity to advance toward a desired goal (whether that be scientific discovery or a new friendship).
- It is informed by available knowledge (one might say “hypothesis driven”).
- The failure is as small as it can be to still provide valuable insights.
Here, I have a slightly different view. While Edmondson describes a set of conditions, I think the emphasis should be on results (as I implied earlier). I believe a failure is intelligent if:
- There is the possibility of real learning.
- It informs future work or results.
The shift is subtle but important. I allow for stupid errors in consistent contexts – as long as the failure is used to change the system so that such errors become less frequent. Consider the fate of TWA Flight 800 on July 17, 1996. It was a routine flight on a well-established route, flown by a standard Boeing 747-100 with an excellent safety record. There is no doubt that the failure was tragic. However, the resulting investigation focused on how the fuel-air mixture in the center wing fuel tank could have been ignited, triggering an explosion. The results of this tragedy – and the learning – are more frequent inspections of fuel tanks, revised anti-spark wiring, and the injection of inert gas (nitrogen) into empty or partially empty fuel tanks.
While Edmondson’s categories are useful for designing situations in which failure can be intelligent, they rule out the opportunity to convert an unplanned failure in a routine operation into something from which good can come.
Designing Failures
No one would ever want to design their failures – or would they? Economists have long defined the entrepreneur as a bearer of risk. Edmondson is encouraging us to fail in the right way – a way that encourages learning. That means designing experiments that are most likely to result in learning – and in ways that aren’t overly impactful. In short, failure is okay, but if you expect it may happen, design your trials so that failure is an option you can live with.
Persistence and Stubbornness
Move too quickly to accept failure, and you’ll be told that you don’t have enough Grit (Angela Duckworth’s term that encompasses persistence). Linger too long, and you’ll be told that you’re too stubborn to accept what the market has been telling you. Finding the balance between the two is perhaps the most difficult thing that we must navigate.
Edmondson shares the story of the Eli Lilly drug Alimta, which failed its Phase III trial. It could have ended there, except that a physician noticed that the patients for whom the drug was ineffective also had a folic acid deficiency. When the trial was rerun with folic acid supplements for those with the deficiency, efficacy was established. In this case, the dogged pursuit of the goal of getting the drug to market worked – but that isn’t always the case.
Jim Collins in Good to Great describes the Stockdale paradox – of knowing when to stick to your guns and when to listen to the market. Adam Grant leads us over this familiar ground in Think Again and Originals. Robert Stevenson addresses it in Raise Your Line. It’s a challenge for Irving Janis and Leon Mann in Decision Making. The conceptual challenge surfaces repeatedly in dozens of books and contexts. Knowing when to accept failure and walk away – and when to persist – is a central challenge for all of us.
Feedback Revisited
Getting quality feedback is perhaps the most challenging aspect of life. Learning when to listen and when to say thank you and move on is a puzzle for the ages. In The Power of Habit, Charles Duhigg explains how Febreze was blown off track by bad feedback. The truth is that feedback can fall into a few basic categories:
- No Feedback – This vacuum makes one wonder if anyone is listening.
- Good Feedback – Specific, actionable, grounded in experience and data, and validated.
- Bad Feedback – Unclear, unvalidated, or offered from limited experience, this kind of feedback leads you away from your goals without being malicious.
Unfortunately, the norm for the world today isn’t good feedback. It’s either no feedback or bad feedback. Most people provide no feedback – even when asked – and those who do often fail to recognize the limits of their experience and whether the feedback could be useful.
When we ask for feedback, we’re often not asking for feedback that’s clear enough to be actionable. Even in training, we default to measuring satisfaction or sentiment instead of whether we changed behavior. (See Kirkpatrick’s Four Levels of Training Evaluation.) Similarly, when we’re looking for feedback on our failures, we fail to create safety or to conduct after-action reviews that lead to real insights and learning about what happened. (See Collaborative Intelligence for more.)
Vulnerability
Feedback leaves both the giver and the receiver vulnerable. The giver is always worried about how the receiver will react. We’ve all encountered people who ask for feedback only to shun or attack the person who gives it. We’re naturally wary of giving it.
I was walking with a friend and her co-presenter after they gave a talk at a national conference. My friend said, “I’d love your feedback.” I knew I could be honest with my friend – but with the co-presenter, I wasn’t so sure. I asked what feedback they wanted, to make sure I could speak to the specific area they cared about. They wanted feedback on a scenario they had demonstrated on stage, in which my friend played a difficult person. The co-presenter had responded (admittedly) harshly in the scenario. I explained that I always start soft and move to harshness only if required. That was the end of the conversation, and we finished the walk to the co-presenter’s book signing in uncomfortable silence.
Here’s the funny part. Objectively, the co-presenter agreed. That didn’t stop her from having her feelings hurt. Given the situation, she didn’t lash out – but we’ve all seen that happen even when we’ve given good feedback.
The receiver’s vulnerability is, of course, more obvious. Opening up to feedback leaves us exposed to whatever the giver wants to say. They can use it as an opportunity for a personal attack, or they can gently nudge us toward better results – and with most people, we sincerely don’t know which we’ll get.
Vulnerability has a curious property – one that stays hidden from most. The people who are the least vulnerable are the most likely to make gestures of vulnerability. Said differently, the person most likely to take an investment risk is the person for whom losing the investment doesn’t matter. The more secure people are in who they are, the more likely they are to invite feedback, to put themselves in appropriately vulnerable situations, and to allow their real selves to be seen.
Perhaps that’s why we read open vulnerability as a sign of power and strength. It’s paradoxical that those who appear the most vulnerable are the least likely to be harmed – but when you recognize that these are the people who make themselves only appropriately vulnerable, the pieces fall into place.
Blame
In a world of probabilities, where no single thing is solely responsible for an outcome, accusation, blame, and criticism make little sense. (See The Halo Effect.) Still, we want the simplicity of attributing a failure to a bad actor or a bad behavior. The truth is much more complicated.
The quality movement started by W. Edwards Deming was constantly seeking root causes, and root cause analysis remains part of many cultures – even very good, high-performing ones. The problem is that, at its core, it’s flawed. One could easily cite the O-rings on the Space Shuttle Challenger as the root cause of the tragedy. However, that’s only one of hundreds of technical design issues that led to its destruction. Different choices for propulsion, the shape of the booster rockets, and innumerable other things all played a part – as did the weather on that fateful day. The decision to launch in unusually cold Florida weather was a factor, as was the failure to listen to the engineers who warned of a potential problem.
Blame, of course, lands on people. It’s not the O-ring that’s to blame; it’s the manager who failed to delay the launch when concerns were raised. The Tacoma Narrows Bridge failed because its deck was too light and flexible to cope with unexpected aerodynamic forces, so the engineer is blamed for not anticipating those forces. He got off better than the engineers behind the Hyatt Regency walkway collapse in Kansas City. That engineer, Jack Gillum, accepted the blame for the failure. However, the truth of the situation was that a change had been made to his original design – one whose true impact he didn’t recognize until after the disaster.
The process of design change reviews and the urgency of the project both factored into the failure. Gillum accepted responsibility, but there’s more to learning than the fact that someone made a mistake – and that “more” is something Gillum has spent the rest of his life working on. How do we find and correct errors so that we can fail with fewer consequences and better learning? We need to fight the urge to attribute everything in a failure to a single factor – or person – and instead focus on extracting the maximum learning from every failure.
Accepting responsibility for a failure is different than someone assigning blame. We find the Right Kind of Wrong when we’re willing to learn – but not blame.