All posts by Larry



Here is an essay I wrote in 1998. In these days of continuous change, it seems even more appropriate than it did then. 

Ever notice how often people say, “I’m really looking forward to getting closure on this topic”? Closure is a good thing. It provides a sense of finality; of a chapter closed, a mission accomplished. In a world where we juggle so many balls at the same time, each act of closure is a small victory. And victories are important.

Have you noticed that its opposite, “opensure,” is not a word that’s used (if it is a word at all!)? In the spirit of equal time and of stretching paradigms, I’d like to present a case for opensure.

“Opensure,” as the name suggests, implies a sense of openness; a continuing receptivity to new input. The opposite of closure, opensure is about “hanging out for a while to see what develops.” Opensure is about curiosity. Opensure is about learning.

Chaos theory, complexity theory, and psychology all deal with opensure issues, although they call them something else. Margaret Wheatley, in her book Leadership and the New Science, discusses the (very normal) drive for closure. Wheatley argues that the challenge of the 21st-century leader is to resist the urge for closure, recognizing that the confusion, ambiguity, and uncertainty of an unresolved situation provide a tremendous opportunity for creative, paradigm-changing solutions.

Denise Easton and I created the Complexity Space Model to provide a map and tools for navigating complex adaptive systems. These systems (think organizations) do not operate like linear, “well-oiled machines.” Rather, they are constantly emerging, self-organizing, and adapting. In such an environment, the ability to remain agile, nimble, and always learning is essential to survival and prosperity. “Opensure” becomes a critical skill.

The Myers-Briggs Type Indicator (MBTI) provides awareness and a vocabulary to help individuals distinguish between different personality types. One of the four dimensions the instrument measures places individuals along a continuum between “judging” (J) and “perceiving” (P). “Js” tend to make strong and rapid evaluations of data and experience and to articulate those positions strongly. They want closure. “Ps” tend to resist making decisions, preferring instead to seek and analyze additional data and experience. The theory underlying the MBTI states clearly that both preferences are “right.” “Ps” would seem to prefer opensure. Myers and Briggs point out that being sensitive to one’s own preference, and to the possibility and potential of a different preference in others, helps to promote mutual understanding. When the two different preferences are melded together constructively, significant synergies are possible.

“Deciding to decide — or not to . . .  that is the question.”  As individuals and organizations become increasingly aware of the critical need for continuous learning and growth, cultivating a healthy sense of opensure will become one of the success strategies of the 21st century.


Cause and Corrective Actions — Too late!

“Houston, we have a problem.” That statement, radioed by the Apollo 13 astronauts, galvanized people from all over the world into action. The heroic efforts that followed resulted in the safe return of the astronauts.

Once the astronauts returned home, a different kind of problem-solving effort ensued – to figure out what happened and what to do so it would never happen again. This effort – known as “cause and corrective action” (CCA) – is practiced with varying degrees of formality by most problem-solving entities. After all, who wants to deal with the same catastrophes over and over again?

While CCA is laudable and necessary, I propose that too much focus on it is misguided. The problem? The damage is already done! The short-term efforts to contain the immediate crisis and the long-term efforts to correct the root cause are, in the parlance of Lean thinking, non-value-added activities. That is, customers do not pay for mistakes that should never have occurred in the first place. If a restaurant burns the first steak you ordered, how many of you would pay for two steaks when presented with the check?

As an alternative, consider “symptom and preventive action,” or SPA for short. The focus here is on crisis prevention rather than mitigation. It is a forward-looking rather than backward-looking measure.

An illustration. Imagine you were a coal miner in the 1800s. You knew there was sometimes gas in the mine shaft that killed miners without warning. This was, of course, catastrophic at every level. What was needed was a way to detect the presence of the noxious gas before it did damage to the miners. Someone recognized that the respiratory system of canaries was far more sensitive to what would later be identified as carbon monoxide than human respiratory systems. The miners began bringing caged canaries into the mine shafts with them. If the canary showed signs of respiratory distress, the miners would exit the mine shaft immediately.

What is your “symptom and preventive action” process? What are your “canaries in the coal mine”? What are the early warning signs, the “shots across the bow,” the “Danger, Will Robinson!” (for all of you “Lost in Space” aficionados) indicators of potential future problems?

While each person/department/function/organization’s canaries will need to be customized based on their specific situation, here are some generic ideas to jump start your thinking:

  • Ask, “What would a ‘little bit wrong’ look like? Sound like?”
  • Introduce statistical process control to distinguish “common cause variation” (the “natural,” inherent variation found in every process) from “special cause variation” (the “out of the ordinary,” “assignable cause” variation that signals something has changed).
  • Introduce “rainbow charts.” These charts enable a process output to be plotted against known customer requirements. The chart itself is colored red outside the accepted specification limits, yellow at some pre-defined level just inside the specification limits, and green when the process output is deemed to be safely within the customer’s requirements.
  • Conduct periodic “pulse surveys” with your internal and/or external customers. These brief (3-5 questions) surveys can provide valuable insights as to how well you are meeting your customers’ present and anticipated needs.
  • “Look back – look forward.” You may be familiar with the “frog in boiling water” analogy. If a live frog is placed into a pot of already hot water, it will immediately jump out. However, if the same frog is put into a pot of water that is warmed gently, the frog will acclimate to it and eventually cook to death. Taking a longitudinal, over-time view of performance may uncover subtle trends that indicate preventive action is needed.
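The statistical process control idea in the list above can be sketched in a few lines of code. The three-sigma control limits are the standard Shewhart rule; the baseline period, the defect counts, and the function names below are hypothetical illustrations, not a full SPC implementation:

```python
# Minimal sketch of "canary" detection with control limits:
# points inside mean +/- 3 sigma are treated as common cause
# variation; points outside are flagged as likely special causes.
# The defect counts below are made-up illustration data.

def control_limits(baseline):
    """Three-sigma control limits computed from a stable baseline period."""
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def special_causes(values, limits):
    """Indices of points that fall outside the control limits."""
    lower, upper = limits
    return [i for i, x in enumerate(values) if x < lower or x > upper]

baseline = [4, 5, 3, 6, 4, 5, 4, 5, 6, 5]   # stable daily defect counts
limits = control_limits(baseline)
new_days = [5, 4, 12, 5]                     # day with 12 is out of the ordinary
print(special_causes(new_days, limits))      # flags index 2 (the spike)
```

A point flagged this way is only a signal to investigate – the canary in the coal mine – not proof of a root cause.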

Most organizations reward problem solvers. When and how does your organization reward its problem preventers?


You Get ALL of What You Measure

 “Be careful what you ask for – you just might get it.” This saying, and others like it, have been around for a long time.

A number of different maxims attest to the power of establishing goals. “What gets measured gets done.” “You get what you measure.” “What is the difference between a practice and a game? In a game you keep score.” The intent of goals is to focus attention, and they do that very well. It becomes critical, therefore, to pay close attention to where that focus is applied.

A mentor once shared his “universal measures” to provide a language to think about potential areas in which to set goals. Those included:

  • Quality
  • Cost
  • Timeliness
  • Compliance
  • Customer issues
  • Employee issues
  • Support of divisional or organizational strategy, and in some cases
  • Innovation/pattern shifting

The universal measures are extremely valuable because of the precision they offer in goal setting. The kinds of improvements needed to reduce defects in a process are different from those needed to satisfy an employee complaint. Using a process or value stream map to identify wait-time reduction opportunities would highlight different elements of the process than using the same map to focus on reducing defects.

The downside of this precision and the differences it highlights is that there are often interactions between the different universal measures. Reducing defects might require the addition of new technology, which would negatively impact cost. Reducing costs through layoffs will almost certainly increase the number of employee concerns and issues.

So what is a goal setter to do? One option, I suppose, is to throw in the towel – to admit that goal setting is “just too hard” and retreat to a softer expectation of “do better.” While an option, I don’t find that one very satisfying – and hope you don’t either.

A more rigorous second option is to build a “goals matrix.” There are several versions of this. One version lists the universal measures across the columns of a spreadsheet. The primary measure of interest is given a quantitative target. Each of the other measures is then evaluated in light of its potential interaction with the primary goal. In the case of a positive interaction, the complementary measure might have its own increased quantitative target entered into the cell, or, in a slightly less rigorous approach, the words “while also increasing” would be entered there. In the case of a potential negative interaction, the words “while not negatively impacting” might be entered into the appropriate cell.
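The spreadsheet version of the goals matrix can also be expressed as a small data structure. The measure names, the sample target, the interaction notes, and the “monitor” default for untouched cells below are all hypothetical, assumed only for illustration:

```python
# Sketch of a "goals matrix" row: one primary measure gets a
# quantitative target; every other universal measure gets an
# interaction note. All names and targets here are examples.

UNIVERSAL_MEASURES = [
    "quality", "cost", "timeliness", "compliance",
    "customer issues", "employee issues", "strategy support", "innovation",
]

def goals_matrix(primary, target, interactions):
    """Build one row of the matrix.

    `interactions` maps a secondary measure to "+" (positive
    interaction) or "-" (potential negative interaction); any
    measure not listed is simply monitored (an assumed default).
    """
    row = {}
    for measure in UNIVERSAL_MEASURES:
        if measure == primary:
            row[measure] = target
        elif interactions.get(measure) == "+":
            row[measure] = "while also increasing"
        elif interactions.get(measure) == "-":
            row[measure] = "while not negatively impacting"
        else:
            row[measure] = "monitor"
    return row

# Primary goal: cut defects 50%; cost may suffer, timeliness may improve.
row = goals_matrix("quality", "reduce defects 50%",
                   {"cost": "-", "timeliness": "+"})
print(row["cost"])  # "while not negatively impacting"
```

Writing the matrix out this way forces the interaction question to be answered for every universal measure, not just the one being targeted.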

A third approach is based on the assumption that humans act to maximize their own self-interest when confronted with a challenging performance situation. Maximizing self-interest might take the form of increasing positive outcomes, or decreasing the chance of negative outcomes. To the extent you believe this dynamic might be at work in your situation, it might be useful to ask yourself, “What would a person interested in maximizing his or her own self-interest start, stop, or do differently to maximize the metric that is being proposed?”

Here is an example that illustrates this dynamic. Seeking to maximize efficiency, leaders in a customer service call center established a new incentive program based on the number of calls each phone representative was able to handle in an hour. The intent was that more customers would be served in a given time and the percentage of value added work would increase.

In the first weeks of implementation, two significant results were seen. First, the representatives received significant increases in their compensation, based on dramatic increases in the number of customers served per hour. The second outcome, though, was that reports of terrible customer service also increased dramatically! The leaders were perplexed. They had accomplished their stated goal – yet had produced an unintended consequence they could not understand.

Put yourself into the position of a phone representative. How would you maximize your own interests given this new goal? The reps learned quickly that the way to maximize their income was to wait approximately 30 seconds into any given customer interaction and then ask a question or make a request that they believed the customer could not answer at the time of the call. The rep would then very politely ask the customer to obtain that information and call back. The rep would mark a completed call on their log and move on to the next interaction, where they would repeat the cycle: a short conversation, then a request for more information. At first, customers felt the requests were reasonable and would call back with the additional data. After another 30 to 60 seconds, the rep would request yet another piece of information the customer was unlikely to have immediately available, ask them to retrieve it, and invite another callback. Another completed call for the rep.

Not considered in this scenario was how the customer felt about calling in, being sent off to retrieve additional information, waiting in the call queue again, retrieving still more information, waiting in the queue yet again … the customers became very angry. After the second, third, or sometimes fourth callback, customers became very agitated and asked to speak to a supervisor. The rep politely obliged, marking yet another call in their completed-calls log and leaving their supervisor to deal with the disgruntled customer.

Once this dynamic was understood, it became clear that a different metric was required to drive the desired behavior of better and more efficient customer service. After deliberation, the leadership team established a new metric – the number of customer issues resolved on the first call. The reps’ behavior changed overnight.

What measures and metrics are you focused on in your organization? Which of the universal measures do they emphasize? Have you consciously considered the potential interactions or checks and balances that might exist between your metrics and other potential measures? Have you asked the question, “How would someone looking to maximize their own interests based on this measure behave?”

Remember, “you get all of what you measure.”