Years ago I came to understand how learning worked in the trades model. The apprentice was – literally – following the instructions of a journeyman or master to complete small, repeatable tasks. The journeyman would start to detach from those small, repeatable tasks, realizing that there are multiple ways to get to the goal. The master became fluent in multiple approaches and could move fluidly between completely different techniques, recognizing the limits of each approach and picking the tool out of the toolbox that perfectly fits the situation. (See my review of Presentation Zen for a more detailed discussion of following, detaching, and fluency.)
I’ve mentioned in my previous reviews of The Heretic’s Guide to Best Practices and Dialogue Mapping that Paul Culmsee and I have a long-term friendship, despite living nearly half a world away from one another. I’ve still not had the pleasure of meeting his coauthor Kailash Awati, but I look forward to that day. What I know is that Paul thinks deeply about complex, unsolvable problems – the kind that Horst Rittel would call “wicked.” That work gives you the perspective that there are no solutions to such problems, only factors that lead to more or less success.
It’s this ambiguity about what works and what doesn’t that underlies Paul and Kailash’s latest book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity.
Harassing Ambiguity
Before I get to the meat of the review, I have to admit that somehow in my head the title says harassing ambiguity and not harnessing ambiguity. I can’t explain where that comes from other than to say that I can see Paul and I poking an amorphous ambiguity with sticks trying to get it to form into something that we can get our arms around. Whether this is an indication that I don’t get enough sleep and my dreams have become weird, or it’s a statement that I see Paul as liking to poke at ambiguity to get it to reveal itself, I can’t really say.
What I know is that harnessing ambiguity directly is sort of like trying to hold on to Jell-O. It doesn’t really work out all that well. The tighter you hold it, the more liquid the Jell-O becomes. There’s a light touch needed to guide discussions to reduce ambiguity. There’s a dedication required to try the same thing over and over and expect a different result. (See The Halo Effect for more about probabilities and how you can expect different results from doing the same thing.) Consider a batter in baseball. Will every swing connect? The best players have batting averages around .300 – a hit in roughly one third of their at-bats. They literally try the same thing over and over and succeed only about every third time.
We believe that we live in a certain world where A+B=C, but ambiguity creeps in, and A+B only sometimes equals C. Sometimes it equals D. The problem is that we can’t understand things to the level necessary to predict exactly what is going to happen. Ambiguity in the input variables means we can’t know the results.
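A minimal sketch of this – with a completely invented 80/20 split, purely for illustration – shows how the same inputs stop producing the same outputs once ambiguity creeps in:

```python
import random

def process(a, b):
    """A world we believe is certain: A + B should always equal C."""
    # Assumption for illustration only: 80% of the time the expected
    # result occurs; 20% of the time ambiguity shifts the outcome.
    if random.random() < 0.8:
        return "C"  # the outcome we predicted
    return "D"      # the outcome ambiguity handed us instead

results = [process("A", "B") for _ in range(1000)]
print(results.count("C"), "times C;", results.count("D"), "times D")
```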
The Need for Certainty
The problem with ambiguity from a human perspective is that our brains are literally not wired for ambiguity. We’re cause-and-effect engines. In the study of learning, we know that things like delaying the outcome or inserting even the smallest amount of randomization have a huge negative impact on our learning. (See Efficiency in Learning for more on approaches to learning and their impacts.) There are primitive regions of our brain that are responsible for handling ambiguity, emotion, and pattern-matching – but our executive functions are formed in the neocortex. Our rational, conscious selves seem to be centered in the neocortex, far from the land where everything is ambiguous.
Jonathan Haidt modeled this as the Rider-Elephant-Path model, where our rational rider keeps the illusion of control. (See The Happiness Hypothesis for more.) Daniel Kahneman describes it as System 1 and System 2 in Thinking, Fast and Slow. Gary Klein talks about recognition-primed decisions in Seeing What Others Don’t and Sources of Power. Knowledge management speaks of explicit knowledge, which can be codified, and tacit knowledge, which is known but can’t be articulated. Tacit knowledge can’t be codified because it lacks the small, finite set of rules – and the freedom from ambiguity – necessary to describe it in language. (See Lost Knowledge and The New Edge in Knowledge for more on knowledge management.)
Reiss stumbled across this need for closure in the development of his 16-desire model for predicting behavior, as discussed in Who Am I? and The Normal Personality, with the “order” dimension. Order includes more than just organization, though that is the obvious outcome. Order is the need for everything to have a place. It’s also associated with a need for black-and-white thinking. The need for closure means that you need to be able to label people and situations with something that alleviates the cognitive complexity of viewing each one as unique.
Discussions of certainty – and of the different ways we think, the different models for thinking that exist – abound in the literature.
Commodore 128
It was years ago that I was playing with my first computer, a Commodore 64. It was fun, I learned BASIC, and I wanted to do more. The natural progression was to a Commodore 128. The Commodore 64 had a 6502-family CPU (the 6510) at its core. The Commodore 128 kept a 6502-family CPU (the 8502) but had a Z80 CPU as well. The Commodore 128 would boot up in 6502 mode, and you could tell it to transition to the Z80 to run a completely different operating system, CP/M.
I’m reminded of this because the more we know about neurology, the more we realize that there are relatively distinct systems in operation – those systems can message each other (as the Commodore 128’s CPUs could) but can’t both be active at the same time.
Ambiguous Risk
As it happens, some risks are different from others. Some risks have known probabilities, and they become a math problem for our brains to solve. We can engage our executive function and solve for the best possible outcome. Other risks, however, are unknown risks for which there is no math problem to solve. The equation isn’t known, and there are no good methods for computing probability. These risks are processed differently in our brains. We don’t engage our executive function – our System 2. In these situations, we rely on our basal brains. We rely on our differencing engines.
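To make the difference concrete, here’s a small sketch with invented gambles: when the probabilities are known, picking the best option is just arithmetic; when they’re unknown, there’s nothing to compute:

```python
# Known risk: the probabilities are given, so choosing is arithmetic.
# Both gambles below are invented purely for illustration.
gamble_a = [(0.9, 100), (0.1, -50)]    # (probability, payoff) pairs
gamble_b = [(0.5, 300), (0.5, -150)]

def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

print(expected_value(gamble_a))  # 85.0 -- take gamble A
print(expected_value(gamble_b))  # 75.0

# Ambiguous risk has no such table: the probabilities themselves are
# unknown, so there's no expected value to compute -- and no math
# problem for System 2 to solve.
```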
Relating to Certainty
When you’re faced with a need to delay gratification – to wait for something to come in the future – how do you manage it if you’re not able to accept uncertainty? When ambiguity creates stress in your psyche, you’re driven to quell that anxiety. Imagine the stress that you can create in the mind of a small child when you ask them to delay their gratification. That’s exactly what Mischel did when he asked children to forego the marshmallow in front of them for a short time in exchange for the promise of two marshmallows. From a logical point of view, this is 100% interest for a small delay – a pretty big reward. (For more on the marshmallow experiment see Emotional Intelligence, Willpower, and How Children Succeed.)
It’s a simple test, and observing it revealed some of the strategies used by children who are good at delaying gratification – but that’s not the important part of this test. The important part is that, when Mischel followed up with these children, this simple test early in life predicted their success later in life. More interestingly, he learned that he could teach the strategies of the children who waited for the marshmallow to others, and could make a dramatic change in their lives as well.
This delayed gratification experiment offered something else of value: ambiguity. The children weren’t given a fixed time when the experimenter would be back. They were told “soon” or “shortly.” This is ambiguous at best. Did they have to wait 30 seconds or 30 minutes?
Delayed gratification, when the delay isn’t well known, is ambiguity. Learning to accept delayed gratification is just one way that learning to accept and manage ambiguity can impact your life.
Innovation
The real problems in life are the ones that have no specific timeline and no predefined formula. Something as simple as picking a college may appear straightforward, but with no predefined criteria – and no known relationships between the criteria – it becomes a problem for which there is no one right solution, only probabilities of future success. These are the problems that confront us when we live life fully. Our innovations have some probability of being hyper-successful and some probability of being laughably bad. Those probabilities are neither known nor fixed.
However, those who are better able to innovate may ultimately be more successful through their acceptance, and sometimes even embrace, of the ambiguity of how things will end up. Innovation relies on an acceptance of ambiguity as a basic building block.
The Need for Cognitive Closure
The Heretic’s Guide to Management does a great job of explaining how to work with folks who have a lower tolerance for ambiguity. It explains how to use familiar concepts to help them cope with a situation where their ambiguity tolerance is pushed beyond its edge. These facilitation techniques for temporarily extending the amount of ambiguity a participant can handle are what the authors call “Teddies.” Teddies are useful tools – sometimes used inappropriately – that soothe the participants to the point where they can relate to the other participants and the problems.
The gap in coverage in The Heretic’s Guide to Management is that it doesn’t help you understand how to improve participants’ overall capacity to handle ambiguity. I am hoping that the authors follow up with some coverage of this important topic. Too many times, the people I’m working with have a strong need for cognitive closure. They aren’t able to cope with ambiguity at all. They’re concrete, sequential learners – and actors. If they can’t see the direct outcome, they don’t do it. Effectively, they want to become unfeeling and just do what has to happen.
Feeling Electrons
Richard Feynman acknowledged the advantage that hard sciences like physics have over “soft” sciences like psychology when he said, “Imagine how much harder physics would be if electrons had feelings!” In other words, electrons behave the same way whether they’re having a good day or a bad day. Electrons follow the same rules without complicating factors like feelings. Or do they? I remember a high school science project (not in MY high school) which showed that electrons don’t flow in one continuous stream, as was commonly accepted (even by electrical engineers). I won’t pretend to understand this discovery, since it requires quantum mechanics and it was a long time ago. The reason it came to mind is that our understanding of physics relies upon very large averages of things happening.
We’re talking massive quantities of atoms and particularly electrons. As was mentioned in The Black Swan, the differences tend to average out. However, in the study of psychology, we’re generally interested in only one person or a very small number of people. Even organizational psychology looks at the interactions of a few thousand people. As a result, the differences that get factored out in physics don’t get factored out through averaging in psychology, organizational psychology, or leadership. You have to deal with all of the peculiarities of each person. Perhaps someday we’ll find out that electrons really do have feelings – we just haven’t cared about their feelings before.
Universal Solutions
In The Heretic’s Guide to Best Practices, a great deal of time was spent debunking the idea that there was one best practice that could be applied universally to any problem and would magically address the need. Obviously, this one best practice doesn’t exist. This time, it’s less about individual best practices; instead, the focus is squarely on the mistaken belief that there’s one business management model – or optimization model – that works best for every organization.
Taylor started the movement with Scientific Management, which at its core had the same goal as every model since: get more productivity and less waste. Taylor had consultants walking around with stopwatches, timing operations and reorganizing people into better – more productive – spots. Backlash ensued as people resented being rearranged like cogs in a machine.
Total Quality Management (TQM) followed Scientific Management after a roughly 40-year delay. The idea here was that quality wasn’t an add-on to manufacturing but an integrated part of the system. This is at the heart of the ISO 9000 certification – and its derivatives – that manufacturers seek to achieve. Ironically, despite the general understanding that the certification drives quality, it actually only says that you do what you say and you say what you do. It says that you document your processes, not that you produce quality products, and in more than one organization the development of the quality system actually caused quality to go down.
As TQM started to lose favor, Lean Manufacturing started to gain prominence. We moved from the ideas of W. Edwards Deming to copying the Toyota Production System (TPS), with its much-touted acceptance of ideas from every level of the organization. Despite the promotion, there are reports that even in the Toyota Production System not everyone was listened to. Still, the system worked. There were enough sound psychological constructs to allow progress over the prevailing management approaches of the day.
Lean has been (I believe, incorrectly) simplified to the elimination of anything that doesn’t add value to the customer. Sometimes the approach is that if the customer won’t pay for it then we shouldn’t do it. Of course, this is an oversimplification because there’s always a need to sharpen your saw – though the customer won’t pay for it. Lean transformation projects often focus on the same sorts of things that systems thinking would tell us are important: flows and stocks/buffers. (See Thinking in Systems for more on systems thinking.)
Lean is interesting because lean concepts have been leveraged in industries outside of manufacturing, with some success and some notable issues. Like its application to manufacturing, it works when it works, and it doesn’t when it doesn’t. Thinking in Systems explains that, by removing balancing loops, reducing stocks, and doing the other optimizations lean calls for, you necessarily make the system more vulnerable to wild swings – in the name of performance.
I remember a manufacturer working towards lean that sourced some components from China. Everything worked well until the Chinese manufacturer missed a few deadlines and there weren’t sufficient buffers in the system to absorb the delays. There were some very high freight bills as components had to be shipped by air instead of by sea, just to keep production lines from shutting down.
Making Maps
There’s an interesting point about map-making, and one that’s not obvious. When making a map, we believe that the importance lies in what we add to the map. We look to see whether we’re adding roads, businesses, rivers, etc. However, the art of map-making isn’t in what you add; it’s in what you leave out. The value of a map comes from eliminating the unimportant. Great map-makers create beautiful representations of reality that contain only what you need and none of what you don’t.
When we run studies to create new ways of doing things and document their successes, we’re using a sort of map-making process. The objective is to identify those things in the experimental condition that are different from the control condition. However, controlling for other variables and trying to eliminate them can be difficult. As a result, most study designers don’t really know whether the items they identified as important truly are the important items.
It’s only when the map is complete with the things that changed that someone else can replicate your results – and replication is the way science is tested. Far too few research papers published in well-respected, peer-reviewed journals can be replicated. In most of these cases, it’s assumed that they can’t be replicated because some important aspect of the experimental condition has been omitted.
Fermi and Drake
Enrico Fermi was a college professor who demonstrated the wisdom of crowds. By using some well-ranged guesses, his students were able to estimate, relatively accurately, the number of piano tuners in Chicago. By guessing at the size of the market and the frequency with which piano tuners are needed (or at least used), the resulting number was roughly right. The only conditions for success? An ability to make reasonable guesses, and enough people to factor out biases.
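Here’s roughly what that estimate looks like as a quick sketch. Every number below is an invented round-number guess – rough guessing is the whole point of a Fermi estimate – not real data:

```python
# A classic Fermi estimate for piano tuners in Chicago.
population           = 2_700_000            # people in Chicago (rough)
people_per_household = 2.5
households           = population / people_per_household
pianos               = households * 0.05    # guess: ~1 in 20 households owns a piano
tunings_per_year     = pianos * 1           # guess: each piano tuned about once a year
tunings_per_tuner    = 4 * 5 * 50           # 4 tunings/day, 5 days/wk, 50 wks/yr
tuners               = tunings_per_year / tunings_per_tuner
print(round(tuners))                        # ~54 -- the right order of magnitude
```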
Compare this to the results of the Drake equation, which is used to estimate the number of detectable intelligent civilizations in the galaxy. In other words, it predicts how many alien species we might find. The Drake equation is different in that we have no framing context for what the right values may be, and so the results vary widely.
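For the curious, the Drake equation multiplies seven factors: N = R* × fp × ne × fl × fi × fc × L. A quick sketch with two arbitrary parameter sets – neither of which anyone can really defend – shows just how widely the answers swing:

```python
# The Drake equation: N = R* * fp * ne * fl * fi * fc * L
# Unlike the Fermi problem, we have no frame of reference for several
# of these terms, so the "pessimist" and "optimist" values below are
# arbitrary -- and the answers differ by many orders of magnitude.
from math import prod

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    return prod([r_star, f_p, n_e, f_l, f_i, f_c, lifetime])

pessimist = drake(1.0, 0.2, 0.1, 0.001, 0.001, 0.01, 100)
optimist  = drake(3.0, 1.0, 5.0, 1.0, 0.5, 0.5, 1_000_000)
print(pessimist)   # ~2e-08 civilizations -- effectively none
print(optimist)    # ~3,750,000 civilizations
```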
On the one hand, we can believe that we can factor out the uncertainty and ambiguity in the things that surround our lives and our businesses. However, there are times when it’s simply not possible to factor ambiguity out, because we have no context for what a life without ambiguity would really look like.
Shoot the Messenger
The predecessors of the Pony Express had a hard job. When a messenger arrived with good news for a king, he might be rewarded. When he arrived with bad news – well, there’s a reason that there’s the saying, “Don’t shoot the messenger.” Messengers literally lost their lives delivering bad news to kings who didn’t like it – which, by the way, didn’t change the news.
However, practitioners of various methods often get blamed for the method’s failure. “You’re not doing it right” gets the blame for a lack of success, rather than an acceptance that the model itself has holes, that it doesn’t work in certain circumstances, or that it was just an unfortunate set of circumstances. The beauty of being a model-maker is that you can always blame the practitioner for the failure of the model – unless, of course, you’ve not managed to get some other sucker to be the practitioner and you’re doing it yourself.
Many programs have dismissed their failures as the failings of well-meaning practitioners who may in fact have executed the model flawlessly. That’s what happens when a model fails as a result of a gap – but what about when something succeeds because of the people?
Agile Software Development as a Management Fad
I’ve had the pleasure of watching the growth of agile development over the course of my career. I’ve seen what amounts to the entire hype cycle of the approach. Agile development is built on a few solid psychological principles. It relies on iteration. It insists on personal commitment. It has real value in many situations – and some limits where it’s not effective.
Early in the hype for agile methodologies – of which there were several – the criticism from traditional developers was that agile projects weren’t succeeding because agile was a better approach. Instead, they were succeeding because it was the better developers who were attracted to agile and who were executing those projects.
The criticism is appropriate. The developers who were at the top of their craft were also the ones trying new things and trying to build software better, so those who wanted to give agile development a try were the better developers. The question, however, is whether agile succeeded because of good developers or whether it worked on its own. It’s a chicken-or-egg problem. Were they better because they wanted to try it, or did they become better because they did? In truth, the answer is probably a little of both.
Self-selection is a problem in statistical research. The people who volunteer tend to be the ones who are most interested. Thus, their responses don’t represent people at large; they represent the people who are interested. In political polling, this bias may factor out – those interested enough to answer the survey are also interested enough to show up and vote. In many other cases, however, the self-selection problem can invalidate research.
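To see how strong the effect can be, here’s a toy simulation – the uniform skill distribution and the volunteer rule are both invented purely for illustration:

```python
import random

# Toy illustration of self-selection bias. Assume (invented numbers)
# that skill in the population is uniform, but that the probability of
# volunteering for a study rises with skill.
random.seed(42)
population = [random.uniform(0, 100) for _ in range(100_000)]

volunteers = [skill for skill in population
              if random.random() < skill / 100]  # more skilled people opt in more often

print(sum(population) / len(population))   # ~50: the true average
print(sum(volunteers) / len(volunteers))   # ~67: the self-selected average
```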
I anticipate that in the future there will be an agile management model which will leverage the same core tenets of agile development for non-software projects. It will be the latest management fad (like lean is), and it will work in some cases and not in others.
Agile development is a model (or really a set of models) that is designed to solve a range of problems with good people most of the time. Someone will decide it’s the one model to rule them all, and practitioners will ultimately be frustrated when projects fail and the failure is blamed on them.
Getting What You Want by Pursuing Something Else
There’s a concept that surfaced in A Philosopher’s Notes about indirectly getting something you want by seeking something else. It references Hindu gods:
Lakshmi is the traditional Goddess of Wealth. The problem is, if you go straight after her (by constantly chasing the bling) she’ll tend to avoid you. Saraswati’s the Goddess of Knowledge. If you go after her (by pursuing self-knowledge, wisdom and all that goodness), an interesting thing happens. Apparently, Lakshmi’s a jealous Goddess. If she sees you flirting with Saraswati she’ll chase after you.
This indirect access to the things you want – wealth or wisdom – occurs in The Heretic’s Guide to Management as well. Here, the anchors are clearer. When you’re willing to work hard (or do purposeful practice, as the book Peak would say), you can achieve the success that you want. Perhaps you can even harness ambiguity, and you won’t need The Heretic’s Guide to Management.