
Decision Making: A Psychological Analysis of Conflict, Choice, and Commitment


It was a different time, 1977. Back then, publishing was harder, and the focused energy that went into creating a book was greater. When Irving Janis and Leon Mann wrote Decision Making: A Psychological Analysis of Conflict, Choice, and Commitment, they were writing something designed to comprehensively cover everything known about decision making at the time. As it turns out, there hasn’t been that much added since to our knowledge of how we make decisions – and there’s a great deal of their work that we’ve lost in the sound-bite world we live in today.

Groupthink

I picked up the book because people still quote Janis when they speak of “groupthink.” Of those who reference Janis when they say the word, few have read his work, and I wanted to understand the nuances and implications of groupthink. To understand it, we’ve got to travel back a few more years to the work of Solomon Asch on conformity. The short version is that Asch figured out you could make someone claim that two lines were the same length when they clearly weren’t. All it took was a few confederates willing to make the claim. (See Unthink for more on Asch’s work.)

In the context of working groups, it means that the group’s perception has a strong pull. Asch’s work was replicated later, and it was discovered that people who were coerced into thinking two lines were the same length felt no conflict over it. Their brains had accepted the two different lines as the same, and there was nothing left to reconcile. For groups, this is challenging, because it means we can unconsciously and progressively bias our answers in a direction without either conflict or awareness.

That’s the groupthink that Janis was talking about: the gradual adjustments that lead to conformity of thought without the group’s knowledge. It’s why Hackman, in Collaborative Intelligence, encouraged the right rotation of external influences on a team to keep the progression from going too far. Janis’ recommendations were:

  1. Leaders should be impartial – at least at first.
  2. Every member should be assigned the role of critical evaluator.
  3. Someone should be assigned the role of devil’s advocate, intentionally poking holes in the existing plans.
  4. From time to time, divide the group and then have the groups merge, comparing their results.
  5. Survey all warning signals arising from rivals.
  6. Hold a second-chance meeting for everyone to restate their residual doubts and concerns.
  7. Invite non-core members on a staggered basis.
  8. Discuss the group’s deliberations with trusted associates.
  9. Set up multiple groups working on the same problem – when the decision is critical.

Espoused and Actual Behaviors

Janis and Mann are quite clear that their goal wasn’t to document the things that people said they did to make a decision. Instead, they were focused on how people actually behaved. They recognized, like Chris Argyris in Organizational Traps, Peter Senge in The Fifth Discipline, and William Isaacs in Dialogue, that what people say they believe and what they actually believe aren’t always the same.

Many of us are unconscious of the constant balance between exhaustive evaluation and the need for expediency. Barry Schwartz in The Paradox of Choice builds on Janis’ work and the work of Herbert Simon to explain the process of decision making – specifically, that we can maximize the utility of our decisions (maximizing), but only at the risk of expending too much effort and creating anxiety. Satisficing, on the other end of the spectrum, looks to quickly discharge a decision and move on, but it does so with the awareness that we will make some mistakes. Neither extreme is good, and no one exclusively picks one strategy. We’re constantly shifting our position on the degree to which we’re willing to invest in the decision – and this is something that Janis and Mann make clear.

Our beliefs and behaviors are bounded by the limits of our rationality – our bounded rationality. It was John Gottman in The Science of Trust who introduced me to the Nash equilibrium, the impact of which I wouldn’t fully appreciate until I saw its implications for evolution. When we can see more broadly, we realize that there are gains to be had when we work together instead of against each other. (See The Evolution of Cooperation for more on how we might have learned to cooperate and The Righteous Mind for how shared intention and Mindreading led to this.) When we operate with only our own concerns in mind, we often find that we don’t achieve the best that working together could have produced.

However, considering others and their needs is exhausting. We may find that we’ve depleted our internal resources before we’re able to consider others – no matter how loudly we might proclaim our desire to do so. (See Willpower for more on exhaustion and Destructive Emotions for more about whether we’re fundamentally wired towards considering others or ourselves.)

College Lab Rats

One of the concerns expressed about how research was being done on decision making was that decisions were often placed in front of college students because they were easy to get as subjects. This had the tendency to focus research on situations with trivial consequences. It didn’t really matter whether you picked poster A or poster B. Janis and Mann correctly surmised that the way that we make decisions when it matters is very different than the way we make decisions when it’s a simulation.

Gary Klein in Sources of Power shares his journey to discover how rational decision making worked. In the end, he discovered that people didn’t often make rational decisions. Instead, they made recognition-primed decisions that relied upon their ability to predict the outcomes of their interventions. These sorts of decisions couldn’t be made in the sterile environment of an office on a college campus.

Building the Balance Sheet

If we sidestep, for a moment, the gap between the rational decision making we believe we do and the recognition-primed decision making that Klein found, we need a way to tabulate and measure before we can even attempt to decide which path is best. That requires both an ability to foresee the future and a method of collecting the pros and the cons of each proposed decision – including doing nothing.

Janis and Mann recommend keeping the balance sheet despite the awareness that it is likely not the final arbiter of the decision. The objective is simply to create a structure that makes the process of making the decision easier for the individual.

The columns for positive and negative consequences for a given choice are easy, but there is also the issue of the kind of positive or negative consequences to address. Janis and Mann believe that there are four categories for positive and negative anticipations:

  • Utilitarian gains and losses for self
  • Utilitarian gains and losses for others
  • Self-approval or disapproval
  • Approval or disapproval from significant others

In addition to the content of the balance sheet there’s a recommended process to follow:

  1. Open-ended interview
  2. Introducing the balance sheet grid
  3. Using a list of pertinent considerations
  4. Identifying the most important considerations
  5. Exploring alternatives
  6. Ranking alternatives

Here’s where I believe the experience of the last 40 years would change things substantially. First, we’ve honed our ethnographic interviewing techniques so that we can better understand the situation. (See The Ethnographic Interview.) We’ve also learned how to build better relationships with those we’re trying to support and assist using Motivational Interviewing techniques. Before someone can begin to come up with a schema for the challenges they’re facing and the alternatives available to them, they must be allowed to explore the topic without too much rigid structure. Ultimately, the goal is to enable creativity and innovation in the responses, since this enhances the potential choices. (See Unleashing Innovation and The Innovator’s DNA for more on innovation.)

The process as it was laid out rests on a fundamental assumption that brainstorming works – but it doesn’t. (See Quiet.) There are lots of reasons, but in short, creating a list and then coming back to figure out which of the items on the list are useful is wasteful. There’s some unspoken bar below which it isn’t worth capturing an idea only to decide later to discard it. Instead of processing items and then assigning weight to them, we should assign rough weights to the items as we go. (Another issue is the single-threaded nature of traditional brainstorming, which can be mitigated with technology that allows the conversation to become multi-threaded again.)

Another aspect that more recent research reveals is what Philip Tetlock and his colleagues discovered about forecasting. In Superforecasting, they explain that revising predictions and keeping track of the predicted probability of the outcome are also important. So, in addition to an impact number, we should also record a probability of the outcome occurring.

Collectively, this creates an opportunity to lay out the foreseeable consequences, both positive and negative, for each choice in the decision. The permutations, options, and ideas can quickly become overwhelming if one attempts to truly run down every possibility, and that is why it’s important to triage the situation down to only those options that appear viable – knowing that it’s possible, but not likely, that you’ll exclude the best option.
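
To make the mechanics concrete, here’s a minimal sketch in Python of what such a balance sheet might look like – the option names, weights, and the expected-value scoring are my illustration and my assumptions, not Janis and Mann’s method. It records their four categories of anticipations along with an impact number and, per Superforecasting, a probability for each consequence.

```python
from dataclasses import dataclass

# The four categories of anticipations that Janis and Mann describe.
CATEGORIES = (
    "utilitarian_self",       # utilitarian gains and losses for self
    "utilitarian_others",     # utilitarian gains and losses for others
    "self_approval",          # self-approval or disapproval
    "approval_of_others",     # approval or disapproval from significant others
)

@dataclass
class Consequence:
    description: str
    category: str             # one of CATEGORIES
    impact: float             # rough weight: negative = loss, positive = gain
    probability: float        # 0.0-1.0 chance the consequence actually occurs

@dataclass
class Option:
    name: str
    consequences: list        # list of Consequence entries

    def expected_value(self) -> float:
        # Weight each anticipated consequence by how likely it is to occur.
        return sum(c.impact * c.probability for c in self.consequences)

# Hypothetical example: deciding whether to take a new job, including "do nothing".
options = [
    Option("take the new job", [
        Consequence("higher salary", "utilitarian_self", impact=6, probability=0.9),
        Consequence("longer commute for the family", "utilitarian_others", impact=-3, probability=0.8),
        Consequence("pride in taking on a new challenge", "self_approval", impact=4, probability=0.6),
    ]),
    Option("stay put (do nothing)", [
        Consequence("stability", "utilitarian_self", impact=2, probability=0.95),
        Consequence("regret over a missed opportunity", "self_approval", impact=-3, probability=0.5),
    ]),
]

for option in sorted(options, key=lambda o: o.expected_value(), reverse=True):
    print(f"{option.name}: {option.expected_value():+.1f}")
```

As with the paper version, the numbers are only a structure for thinking; the expected-value ranking isn’t the final arbiter of the decision.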

Dialogue Mapping

An alternative to the approaches proposed by Janis and Mann is the process of dialogue mapping. In this approach, positives and negatives are mapped to items, but the hierarchy of possible options and ideas is maintained. This can sometimes be a more efficient process, as there will generally be clusters of choices that share the same positives and negatives. (See Dialogue Mapping for more.)

Serial Decision Making

While we create the balance sheet as if every option is weighed against the other options, and we make a decision among multiple options, the truth is that we rarely decide like this. Instead, we serially evaluate each potential option and do pairwise comparisons to see which of two options seems better. We continue this process only until we believe we’ve reached a point where additional comparisons won’t add value.

In effect, we all settle for satisficing in one way or another. We do this either because of the amount of information for each choice or because we simply believe that the effort we’re putting into the decision is no longer warranted.
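
A rough sketch of that serial, pairwise process might look like the following – the scoring function, the shuffle standing in for the order we happen to encounter options, and the patience threshold standing in for “additional comparisons won’t add value” are all my assumptions for illustration.

```python
import random

def serial_choice(options, score, patience=3):
    """Pick an option by serial pairwise comparison, stopping when further
    comparisons stop changing our mind (a crude stand-in for satisficing)."""
    candidates = list(options)
    random.shuffle(candidates)       # we encounter options serially, not all at once
    best = candidates[0]
    unchanged = 0
    for challenger in candidates[1:]:
        if score(challenger) > score(best):
            best = challenger        # the challenger wins this pairwise match
            unchanged = 0
        else:
            unchanged += 1
        if unchanged >= patience:    # further effort no longer seems warranted
            break
    return best

# Hypothetical usage: choosing among vendors by a rough composite score.
vendors = {"A": 6.1, "B": 7.4, "C": 7.2, "D": 5.0, "E": 7.3}
print(serial_choice(vendors, score=vendors.get))
```

Depending on the order in which the options are encountered, this can return a good-enough vendor rather than the best one – which is exactly the risk satisficing accepts.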

Toss Up

One challenging observation is that when confronted with obviously irrelevant information, decision makers were more likely to regard the probabilities as 50:50. From Superforecasting, we know that 50:50 means that the person doesn’t know. In the presence of irrelevant information, we begin to wonder if we’re assessing the situation correctly or if we’ll ever have enough information.

The lack of faith in our ability to come to a clear conclusion has the effect of decreasing our interest in doing any further research to find the right answer. Whether we consider the information unattainable or are concerned with our ability to differentiate, we stop caring.

Simple Decision Rules

The truth is that decisions of any complexity are so fraught with uncertainty and details that we can’t possibly handle all the raw data. This is perhaps in part why David Snowden developed the Cynefin decision framework. It describes the degree of complexity and volatility of a situation and how those factors lead to radically different adaptive responses.

We often use single-rule methods for evaluating the right decision. Whether the criterion is “best,” “right,” or “compassionate,” the decision is simplified by constraining it to a single criterion – or a few criteria. Even when we don’t simplify to this degree, we frequently find people – particularly politicians – rallying around simple messages with easy solutions when we know that the proposed solutions won’t work or are at least unlikely to work.

If the problem has been encountered before and the last strategy was successful, the strategy is tried again. If the problem has been encountered before and the last strategy wasn’t successful, the opposite strategy is often employed. There is little thought given to the changing circumstances and the impact this should have. We blindly follow formulas whether they’re for the right problem or not.

By some estimates, seventy percent of findings in journal articles can’t be replicated. Much of that is likely related to the fact that the effective criteria and constraints for the results aren’t articulated in the article. The article says, “Here are the results we got,” but rarely is it possible for a study to isolate the factors which led to the results – no matter how they may claim differently.

Reducing Anxiety and Conflict

Much of the internal psychodrama that happens as a part of the decision-making process is an attempt on the decision maker’s part to reduce their anxiety, stress, and conflict about the decision. Sometimes this will find the decision maker bolstering their perception of the decision they’ve selected; other times, it will take the form of others trying to calm the decision maker.

Consider for a moment the degree of impact of negative consequences that were expected compared to those that weren’t anticipated. Those which were anticipated have a substantially lower psychological impact. It’s as if the decision maker has already prepared their defenses and is therefore less impacted when the negative consequences do appear.

Another way that decision makers manage their anxiety is to defer the consequences to the dim future, allowing them to focus on the here and now instead of on consequences that have no immediate impact.

Leading Lean

One of the benefits of having grown up with a mother who did production and inventory control is that I was exposed to new approaches to manufacturing and managing inventory early. Cellular manufacturing and lean manufacturing were topics around the dinner table. It was fascinating to me how different ways of structuring work were more efficient. That, plus my experience in software development, helped me to understand the fundamentals of lean manufacturing. One of those fundamentals is eliminating activities that don’t add value – or sufficient value – for the customer. The other is the awareness that some decisions can be changed and some cannot.

Fundamental to lean is the idea that you delay decisions that cannot be changed, and you expedite decisions that can be changed. The simple criterion of reversibility is powerful. It can prevent spending too much time focused on making the best decision when the decision probably doesn’t matter that much – because it’s changeable later.
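
As a sketch of how that criterion might be applied – this is my illustration, not a formula from lean or from Janis and Mann, and the 80% information threshold is an arbitrary assumption – reversible decisions get decided immediately with whatever information is at hand, while irreversible ones are deferred until enough information arrives or a deadline forces the choice.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    reversible: bool          # can we cheaply change course later?
    information: float        # 0.0-1.0: how much of the needed information we have
    deadline_reached: bool = False

def triage(decision: Decision) -> str:
    if decision.reversible:
        # Cheap to undo: decide now with whatever we know and move on.
        return "decide now"
    if decision.information >= 0.8 or decision.deadline_reached:
        # Irreversible: decide only when we know enough or when we must.
        return "decide now"
    return "defer and gather more information"

print(triage(Decision("pick a meeting room", reversible=True, information=0.2)))
print(triage(Decision("sign a five-year lease", reversible=False, information=0.4)))
```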

Goal Striving

The degree of anxiety associated with the decision-making process is driven in part by the degree to which the decision maker feels invested in the decision. The more attached the decision maker is to the outcomes of the decision, the more anxiety they will feel. This anxiety will inhibit the options that the decision maker can consider, as Daniel Pink points out in Drive.

There is a healthy balance between a concern for the decision and an unhealthy level of attachment. Perhaps this is one of the reasons that Buddhists recommend detachment – and not disengagement. There’s still an interest and concern for the decision without being too attached to the outcomes that are at least partially outside of the decision maker’s control. (See The Happiness Hypothesis and Resilient for more on detachment.)

Addressing Challenges

Invariably, there will be challenges to a decision once it’s been made. Janis and Mann suggest the following process for considering challenges:

  1. Appraising the Challenge – What’s the risk?
  2. Surveying Alternatives – How can I address this challenge?
  3. Weighing Alternatives – Which activity is best?
  4. Deliberating about Commitment – Should I commit to this new course of action?
  5. Adhering Despite Negative Feedback – I’m going to hold the course.

This process is a rational view of how people address challenges, but because of the degree of ego involvement in the decision, there’s a high degree of rejection of the potential challenges, and thus decision makers may never go through this process.

Prior Commitments and Sunk Cost

Perhaps the most difficult decision to make is when to pull the plug on something. Kahneman calls it the sunk cost fallacy in Thinking, Fast and Slow. Janis calls it a bias toward prior commitments. Either way, it’s our tendency to continue to invest in decisions and projects despite the fact that there’s clear evidence that what we’re doing isn’t working… or is there? Jim Collins in Good to Great speaks of the Stockdale paradox: the unwavering belief that what we’re doing will work, paired with the willingness to listen – and adapt. The problem with all this – no matter what term you want to use – is that there is almost never clear evidence.

In 2008, I released The SharePoint Shepherd’s Guide for End Users. For a year, it did almost nothing. Since then, it’s become a more than $1 million business for me. Had I quit after the first year of dismal sales, I would have lost out on almost all the revenue the book and derivative products have generated.

That experience haunts me. On the one hand, I need to find a time to cut the cord on investments. On the other hand, had I not spent a few thousand dollars on a mailing campaign, I would have lost out on so much. Because of experiences like mine and just general human experience, people hesitate to make the difficult decision to shut things down.

Expunging

In the case of decisions that are reversed, the process is often so painful that people begin to expunge memories of the bad decision. For instance, after a divorce, pictures of the former spouse are removed and often destroyed. Any gifts of significant meaning are similarly destroyed to free the psyche from the painful reminders of the decision that is perceived negatively.

Easy For Me, Hard for Others

There’s some classical wisdom that says that a woman should be hard to get if she wants to get a man. (The Betty Crocker cookbook has a similarly dated perspective that you must be able to cook a good pie to get a man.) The problem with the “hard to get” wisdom is that it’s not supported by research. In a study whose primary actor was a prostitute, some clients were told that she was going to restrict her clientele in the future, and others weren’t given this information. Those who were told she would be hard to get didn’t call back as often for a future appointment.

While this research has challenges with a selective sample (those men who paid for a prostitute’s services), it is a confusing result if it truly is better to be hard to get. Janis and Mann reconcile this by accounting for fear of rejection and, with additional research (by Walster and associates), concluding that a woman should be perceived as hard to get for others but easy to get for the man she is interested in.

Unpredictable Boomerangs

Some messages are multifaceted to the point that they can have a strong positive impact on one group and a strong negative impact on other groups. Consider an inducement towards a different brand than is normally purchased. Women, who presumably felt responsible for and knowledgeable about their purchases, actively resisted the inducement; whereas men, who were presumably less responsible for and less knowledgeable about the purchases, responded very favorably.

This means that we must be careful when we work to engage a new group of people or try strategies that can be divisive. It may be that we sacrifice our core audience in the service of finding additional audiences.

Boomerangs occur in other situations as well. Someone signs a petition without much involvement in a cause, and when attacked about being a part of the movement the petition was about, they may become emboldened to take a more active stance. The act of being attacked for a relatively mildly-held belief causes the person to become more involved and committed to the cause.

Hidden Requirements

Perhaps one of the greatest tricks in causing people to make decisions is to hide the real requirements when they make the commitment. (See The Hidden Persuaders for more on this kind of deceptive practice.) Take Billy Graham’s call for people to pledge to be a member of the crusade. The motivated person steps forth, makes a public commitment to the cause, and shortly thereafter signs a pledge card. Before they know it, they’ve committed to being a part of something without really understanding what that means.

Resistance to Change

Janis and Mann explain in the context of smoking the kinds of rationalizations that people have when confronted with the fact that smoking kills. The same core rationalizations can be used for anything:

  1. It hasn’t really been proven.
  2. You don’t see a lot of that (consequences).
  3. It’s too late for me to change.
  4. I’ll just compensate with an equally bad problem.
  5. I need this.
  6. I’m only hurting myself.
  7. It’s a risk, but life is full of risks.

What’s striking about this list is that these statements can be made about any bad habit or poorly considered decision. I’ve heard all these objections in conjunction with COVID-19 vaccine hesitancy. (See When You Should Not Get The COVID-19 Vaccine.)

Optimal Fear

Some look at stress from the point of view that stressors are necessary to drive us towards some sort of action. The argument is that, without any stressors, we’d sit around and do nothing. (Netflix and chill?) It’s the introduction of stressors – and therefore some degree of fear – that drives us toward action and keeps us motivated enough to do something. However, on the other side of the equation, there’s something to be said for not having too much fear, because too much leaves us frozen and equally ineffective.

Amy Edmondson speaks about the need for psychological safety in The Fearless Organization, and Find Your Courage and A Fearless Heart speak to the need to overcome fear to be courageous enough to do things. Drive cites research on how even moderate amounts of stress (in the form of compensation pressure) can inhibit performance. Frederic Laloux in Reinventing Organizations explains how the lowest-functioning organizations are those that motivate through fear. In short, there’s no one easy answer to the right amount of stressors to place in front of people. Generally speaking, you want as little fear as possible while maintaining enough to keep people motivated not to quit. Morten Hansen in Collaboration explains the problem of social loafing and some of what can be done to prevent individuals from deciding that they don’t need to work while others do.

Decision Making in Information Overload

Decision making is necessarily a process in which we never have enough of the right information and yet have too much information overall. As was discussed earlier, we must choose to satisfice or maximize for each decision, but there’s a broader context that we live in today. Daniel Levitin in The Organized Mind explains how we’re not just making individual decisions under information overload; our lives have become continuous information overload. (The Information Diet is another good source of information about how we’re inundated with information.)

As a result of our continuous bombardment with information, our reticular activating systems (RAS) have become more aggressive at filtering out information (see Change or Die for more on the RAS). That’s one of the reasons why marketing has moved to attention marketing. (See Got Your Attention? for more.) The more we continue to operate in an environment of constant noise and pressure, the more important it becomes that we consciously apply our best decision-making skills to minimize our effort and maximize our efficacy.

Optimal Decision Making

To optimize decision making, Janis and Mann offer up this selection of criteria for vigilant decision making.

The decision maker, to the best of his ability and within his information-processing capabilities:

1. thoroughly canvasses a wide range of alternative courses of action;

2. surveys the full range of objectives to be fulfilled and the values implicated by the choice;

3. carefully weighs whatever he knows about the costs and risks of negative consequences, as well as the positive consequences, that could flow from each alternative;

4. intensively searches for new information relevant to further evaluation of the alternatives;

5. correctly assimilates and takes account of any new information or expert judgment to which he is exposed, even when the information or judgment does not support the course of action he initially prefers;

6. reexamines the positive and negative consequences of all known alternatives, including those originally regarded as unacceptable, before making a final choice;

7. makes detailed provisions for implementing or executing the chosen course of action, with special attention to contingency plans that might be required if various known risks were to materialize.

Maybe it’s time that you make the decision to read more about Decision Making.