Thinking Systems Need Systems Thinking

Published online in Wiley InterScience, October 2007

Deane Waldman, MD, MBA

ABSTRACT

After describing machine and complex adaptive systems, we define a thinking system by two unique characteristics: having goal(s) separate from survival and the capability to innovate purposefully. Thinking systems always learn and are the only systems that can structure their own learning. Healthcare is a paradigm of a thinking system and is repeatedly plagued with unintended, adverse outcomes, particularly after "fixes that fail." Systems thinking can dissolve such dysfunction in healthcare and, by extension, in any thinking system. Specific recommendations follow from this rationale.

INTRODUCTION AND RATIONALE

In the broadest sense, the thesis of this article is that our society must move beyond the industrial revolution and embrace the implications of living in an information age. Unfortunately, our mental models, organizational structures and management philosophies are shackled by Newtonian (linear) ideation, an assembly-line mentality, and hierarchical, command-and-control attitudes. Most results in the modern world derive from interactions within systems composed of machines, computers and people. These are thinking systems, and thinking systems need systems thinking.

We reason as follows. A thinking system is a complex whole populated by humans. Humans have diverse and frequently contradictory purposes, but they always learn. Through learning, thinking systems can intentionally improve. Healthcare is a thinking system plagued by fixes that fail. Systems thinking is an approach designed to prevent unintended consequences and is highly applicable to thinking systems. This rationale leads to specific recommendations for the improvement of the healthcare system and, by extension, of other service industries. To improve clarity of communication, we define several common-parlance terms using the 1987 Random House Dictionary.

TYPES OF SYSTEMS

A system [“assemblage or combination of things or parts forming a complex or unitary whole”] is created when two or more components interact. The core of a nuclear reactor, where billions of elemental particles are interacting in chain reactions, is a very simple machine-type system. The pattern is repetitive and the interacting elements are all the same: protons, electrons and neutrons. A bicycle is also a machine system, but given the variety of its parts and their interactions, a bicycle would be called a complex system. Simple or complex, machine systems do not change and are intended to achieve zero variability in outputs.

When there is feedback within a system, adaptation is possible. Such a system, called a complex adaptive system (CAS), has three defining characteristics: self-organization, co-evolution and emergence. (Ashmos, Duchon & McDaniel 2000; Beinhocker 1997; Johnson 2001; Kauffman 1995; McDaniel 1997; McDaniel & Driebe 2001; Peirce 2000) A CAS develops its own internal organizational structure, that is, it self-organizes. Co-evolution refers to interactive changes between system parts, one being feedback, that in turn can affect other parts of the system or other systems. Co-evolution can produce a non-linear dynamic called the Butterfly effect, where “small interventions can have a great effect and great interventions can have little effect.” (Peirce 2000: 9)
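This disproportion is easy to demonstrate with a toy model. The sketch below is our illustration, not a model from the complexity literature cited above: it iterates the classic logistic map from two starting points that differ by one part in a billion, and within a few dozen steps the trajectories bear no resemblance to each other.

```python
# Butterfly effect in a minimal nonlinear system: the logistic map.
# A perturbation of one part in a billion eventually dominates the outcome.

def logistic(x: float, r: float = 3.9) -> float:
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 3.9."""
    return r * x * (1.0 - x)

x_base = 0.500000000   # baseline state
x_pert = 0.500000001   # "small intervention": one part in a billion

for step in range(1, 51):
    x_base = logistic(x_base)
    x_pert = logistic(x_pert)
    if step % 10 == 0:
        print(f"step {step:2d}: baseline={x_base:.6f} "
              f"perturbed={x_pert:.6f} gap={abs(x_base - x_pert):.6f}")
```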

Emergence — delineated by Steven Johnson in a worthy book of the same title (2001) — describes “movement from low-order rules to higher-order sophistication.” Local interactions, such as between ants and pheromone trails, produce macro-behaviors like an ant colony. Pattern-recognition software can produce the emergent result (Table 1) of Amazon.com emailing personalized recommendations for books to buy. When the outcome is greater than the rules, when the results are not predictable from study of the system parts or rules in isolation, there is emergence. In a sense, emergence is a consequence of learning. Learning can occur without consciousness. Plants learn to adjust position to optimize acquisition of sunlight, and birds have learned how to fly in a paceline. White blood cells learn how to kill an invading bacterium. A weightlifter’s arms acquire muscle memory by repetition, but neither the plant nor the lymphocyte nor your biceps thinks.
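The leap from low-order rules to higher-order sophistication can be demonstrated in a few lines. The sketch below is our illustration, not an example from Johnson: every cell of a one-dimensional cellular automaton follows one fixed rule about itself and its two neighbors, yet the global pattern that scrolls out is not predictable from reading the rule table in isolation.

```python
# Emergence from low-order rules: Wolfram's Rule 30 cellular automaton.
# Each cell consults only itself and its two neighbors, yet the grid as a
# whole develops intricate, effectively unpredictable structure.

RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

width = 64
row = [0] * width
row[width // 2] = 1                      # a single "on" cell to start

for _ in range(32):
    print("".join("#" if cell else "." for cell in row))
    # every cell applies the same three-neighbor rule simultaneously
    row = [RULE30[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
           for i in range(width)]
```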

Thinking can be defined as having “a conscious [sic, self-aware] mind, to some extent of reasoning, remembering experiences, making rational decisions.” Thinking involves volition. Many biologic systems have free will (ants, lions, fish), but only humans think with purposes beyond survival. Thinking humans can create things never seen before, and humans can kill for purposes other than survival. A CAS that includes purposeful humans is what we call a thinking system.

To understand a thinking system, consider a machine with cogs, wheels, levers, and pulleys or microscopic linkages on a computer chip.  The parts of a machine system interact in predictable, repetitive and controllable ways. Piled in a “heap” (Kauffman 1980), the parts of a bicycle have no function, but fitted together into a sequential process or system, their interaction produces a useful machine. (Stankard 2002) The same is true of an assembly line—a manufacturing system—where people function as cogs or pulleys or programs producing output that is predictable, controllable and subject to variability reduction techniques such as six sigma.

Even when the human in the machine model uses judgment on how much to tighten the lug nut or how much solution to add to the flask for a chemical reaction, there is an established right way, and the system expects the human to behave the same, consistent, pre-determined right way every time to produce the same output repeatedly. In the twenty-first century, more and more of these machine-like activities are actually done by machines. Thinking humans are necessary  for actions that require decisions where there is no established right answer. What the journalist reports is a matter of his/her judgment. A manager must try to juggle a series of inconsistent and often incompatible directives. A doctor constantly deals with uncertainty and unpredictable individual outcomes.

Thinking systems — populated by humans — are unique because they can have diverse intentions and can envision preferred outcomes in the far future. Animals have free will and they think. Ants build complex structures but do not create art. Birds respond to environmental changes but do not write books. A critical difference between thinking systems and all other complex adaptive or “chaordic” (Hock 1999) systems is purposefulness. The humans in a thinking system have individual, multiple and sometimes contradictory purposes, which they must juggle and prioritize in order to achieve a specific intended result.


Table 1: Types of Systems

System   | Key Attribute       | Output                     | Purpose
Machine  | Consistency         | Predictable                | Zero variability
CAS      | Adaptability        | Emergent                   | Survival
Thinking | Purposeful learning | Both intended & unintended | Self-generated

CAS = complex adaptive system

In addition to purposefulness, thinking systems have a second unique characteristic: creativity, which can be defined as the ability to “figure out how to use what you already know in order to go beyond what you currently think.” (Bruner 1983: 183) This is both “man’s only basic virtue” (Rand 1957: 935) and the source of evil, as when people poison themselves with cigarettes and illicit drugs or kill one another over philosophical, even semantic, differences. The “thinking” in thinking systems refers to the ability to create new ideas, to make things never seen before, to behave emotionally and/or illogically, and to act purposefully, sometimes without regard to survival: selflessly, as with the firefighters on 9/11, or destructively, as with the 9/11 hijackers.

THINKING SYSTEMS ALWAYS LEARN

The Process of Learning

Thinking systems include humans and humans always learn. They may learn the wrong things or things that the organization prefers they not learn, but humans by their nature learn. If the environment is risk-averse and mistakes are punished, then workers learn not to take chances, to follow rules and procedures rigidly, and to hide or minimize adverse outcomes or mistakes. (Edmondson 1996) Learning is a necessary precursor to change and, hopefully, to improvement. Activities that enable learning can enhance the quality of outcomes. (Baker 2001; Berwick 1989; Leape 1991 & 1994; Waldman, Yourstone & Smith 2003)

Learning—the acquisition of “knowledge or skill by study, instruction or experience”—can occur at the individual, group, and even national level. (Senge 1990) Prerequisites for learning include appropriate incentives; discretionary energy; enabling environments, both internal and external; necessary substrate(s); a plan; and time. While the opposites of these prerequisites can be impediments to learning, there are at least four special constraints in service industries such as healthcare: 1) lack of established truth or fact; 2) barriers to testing; 3) substrate issues; and 4) the regulatory/legal milieu. Possibly the strongest suppressor of individual learning is the difficulty associated with un-learning: changing or giving up an already-established mental model.

In healthcare, mental models (Leape 1994), individual labels or shorthands (Cameron & Freeman 1991), and effective tags (Ashmos 1998) are called diagnoses. They are based on statistical probability, not mathematical certitude, and should always be considered provisional. One must be prepared to revise or even reverse the diagnosis and corresponding treatment—to unlearn what we “know”—based on new information, changes in conditions, and response (or lack thereof) to therapy.

In the natural sciences and the manufacturing world, learning leads to accurate information and ultimately to high quality, efficiently produced products. In the social and biologic sciences as well as service industries, learning should produce judgment, adaptability, problem-solving ability and coping skills.

Healthcare has unique impediments to learning. (Waldman & Schargel 2003; Waldman, Yourstone & Smith 2003) Learning means moving from a state of lesser to a state of greater and certainly different knowledge or skill. This is unnerving to a patient who assumes that there is an established answer for their problem and expects their physician to know it. In addition to the individual problems associated with learning and unlearning, there are cultural issues. Among doctors and nurses, admitting ignorance or incomplete understanding is considered weakness or a personal failing and therefore is avoided, even rejected. Furthermore, healthcare culture is highly risk-averse and strongly protective of the status quo: “Different is bad even when it is good.” (Waldman, Hood, Arora & Smith 2004)

Thinking Systems Learn With Intent

Machines do not learn. Complex adaptive systems can learn. Thinking systems always learn, and they can do what other systems cannot: direct their own learning, both the process and the intended outcome. Computers (at present) cannot program themselves. Bees do not consider alternative, more aesthetically pleasing structures for the hive. Thinking systems can use stochastic reasoning, apply inductive as well as deductive logic, structure the learning process itself, apply hypothesis testing, and innovate ideas or objects that never existed before for a specific purpose.

Figure 1: Learning by Machines, Complex Adaptive Systems and Thinking Systems

Legend for Figure 1

A machine does what it is constructed to do. An airplane does not learn. It flies when turned on and piloted. Airplanes alone cannot improve. Complex adaptive systems engage in random trials until something improves their survival. Thus, by evolution, birds have learned over millions of years that those who fly in a rotating “V” [a paceline for bicyclists] are more likely to reach their destination and procreate.

Lance Armstrong and his team for the Tour de France reviewed films of previous races to see what worked and did not. They compared the effect of changing the order and the intervals between rotations of the riders. They tested body position, drag created by clothing and type of handlebars to optimize outcomes. In short, the thinking system called Team Discovery knew what it wanted to accomplish, set up methods to study how to achieve the desired outcomes, tested various alternatives and compared the results, and then refined the process with more tests. This was purposeful, structured learning, a capability unique to thinking systems. Note also that there is feedback — planned and unplanned — throughout the learning process as well as the actual race. The creative capacity of a thinking system offers unique opportunities but also poses special challenges: how to direct and organize the learning of self-directed and self-organizing system parts (called people); how to produce the desired outcomes (when system parts may prefer a different outcome); and how to diffuse innovations that are proven to be effective. [Coleman 1957; Rogers 1983; Ritchie & Hammond 2005]

What should result from the process of learning in a thinking system? The answer depends on the responder: the individual wants to learn how to maximize personal utility while the organization wants workers to learn how to achieve organizational goals.

Four hundred years ago, descriptors like reliable, resolute, respectful, constant, and unchanging were considered positive attributes. In the modern, generally chaordic 21st century, the following would be considered compliments: flexible, self-starter, competitive, innovative, and adaptable. To survive in today’s turbulent environment, one must be able to change, with intent and continuously. Two questions of immediate relevance follow: 1) How to effect change, and 2) Change to what?

Weick (1993) suggested that no one can survive without continuous adaptation. This demands an effective strategy, which in turn requires good sense-making. As the external environment is constantly changing, so sense-making and strategy must be continuously revised. This translates into continuous learning. Weick also showed how the “non-disclosive intimacy” (1993: 644) of smoke-jumpers was adequate until the situation changed drastically and suddenly; then the lack of trust between workers caused their deaths. “Non-disclosive intimacy” describes many situations in healthcare, such as the operating room or the ER, where people who do not really know each other are literally intimate with someone else’s vital organs. As long as the circumstances are predictable and within the team’s past experience, the patient survives. However, a team whose members do not know and trust each other cannot adapt to unknown conditions: it does not learn. Finally, Weick showed how small events can have great or disproportionate [Butterfly] effects. These are all common features of learning by thinking systems.

To survive requires change, and effective change requires learning. “Education is the most commonly used change facilitator, and is one of the least successful.” (Miller 1998: 369) Though thinking systems always learn, they often refuse to be educated (educate: “to provide schooling in or training for”) or to be taught (teach: “to impart knowledge of or skill in; to give instruction in”). Education and teaching leave the student passive. People learn best heuristically, by trial and error, by doing. The “education” (Miller, above) most commonly used as a change lever is persuasion by data or by authority. It is imposed from the outside, taught to but not necessarily learned by the intended students. Didactic learning plays little role in the co-evolution of the components of a thinking system. In the risk-averse culture of healthcare, trial-and-error and co-evolution are actively discouraged, sometimes with disastrous consequences. (Baker 2001)

HEALTHCARE: A PARADIGM OF A THINKING SYSTEM

When a process is rigidly structured with interactions exactly defined and outcome tolerances precisely measured, there is no need for creative thinking or judgment while doing the work, only while designing the system. This applies to most manufacturing activities and some service occupations. However, increasingly in modern activities, outcomes are variable; the interactions are emergent (Hock 1999; Johnson 2001), chaotic but not random (McDaniel 1997), and subject to repeated adjustment. System components must deal with ambiguity and uncertainty. This last description is particularly apropos of modern medicine.

There is no established right way—proven to work in all patients—to treat obesity or heart disease or breast cancer or diabetes or asthma. Therefore, the nurse or doctor must constantly use judgment rather than simply apply a cookbook approach to care. Data is often ambiguous and decisions are always made using imperfect information. Furthermore, given the number of complicating factors and the lead-lag between cause and effect in medicine, it may be impossible to know what worked and what did not. Healthcare professionals live in a world of uncertainty, ambiguity and increasing frustration and hostility, largely due to unrealized expectations.

Both the internal and external environments of healthcare have become adversarial: “most systems have formed as a defense against an increasingly hostile environment.” (Shortell 1993: 447) Patients expect answers for their problems and expect properly trained healthcare professionals to have these answers. These expectations often remain unfulfilled. The provider may not have the answer for a specific patient’s problems because an absolutely correct answer with a guaranteed outcome for that patient does not exist. “When the [doctor] has to defend himself against a charge of malpractice by a plea of ignorance and fallibility, his plea is received with flat incredulity; and he gets little sympathy.” (Shaw 1913: 13) What Ashmos wrote about business managers is even more applicable to doctors and nurses: “When managers try to impose order and eliminate ambiguity, it is because ambiguity is seen as a form of ignorance and acknowledging ignorance brings out insecurities.” (Ashmos 1998: 591) Finally, healthcare workers are socialized to do good things for their patients, but if outcomes are unclear or delayed, how can they be sure they are doing good, and worse, how can they improve in a culture that discourages innovation? (Deal & Kennedy 1982; Collins & Porras 1997)

“Begin with the end in mind,” a widely touted management adage (Covey 1989), warns us that we must know where we are going; otherwise, we will never get there or even know whether we have arrived. It is instructive to examine this aphorism in the context of a thinking system. Those who take the adage literally expect to define in advance the precise and specific outcomes of a strategic plan. However, the actions of a thinking system are influenced by the system components (humans), and therefore results emerge rather than follow inexorably and predictably. Moreover, business adages cannot be simply applied to a thinking system without significant adjustment. All too often, companies or institutions take a management idea and apply it, without modification, to a thinking system. This rarely produces the result intended. Many companies jumped on TQM (Total Quality Management) as a solution to a near-term fiscal crisis only to be disappointed with the results. TQM involves cultural transformation. Without such fundamental change, TQM cannot succeed. With such change, success will be seen over many years to decades, not by the next budget cycle.

HEALTHCARE: A DYSFUNCTIONAL THINKING SYSTEM

We believe that the system-wide problems in healthcare can be linked to the fact that it is a thinking system acting on a thinking system, while both are viewed as and managed like machines. We list below some root causes for dysfunction. (Waldman & Schargel 2003)

1) Timeline of Causality: There is a long delay—a large lead-lag or stock-and-flow problem—between action and effect, especially in preventative medicine. (A minimal stock-and-flow sketch of this lead-lag appears after this list.)

2) Improper outcome measures: Currently we track what we do not want (death and cost) rather than what we desire: life, function and productivity.

3) The Substrate: The primary input to the system—the raw material—is people: thinking, feeling, willful humans. Healthcare is a people-processing system. There are limited numbers of patients with rare but severe conditions and one cannot make more to study. Patients often behave in ways that are not in their own best interests.

4) Micro-economic disconnection: In medicine, payments are fixed (regulated); demand is variable and is not linked—micro-economically—to reimbursement.

5) Contradictory incentives: In 1975, Kerr asked what you get if you want “A” but reward “B”. (Kerr 1975) In healthcare, patients want care but the system rewards productivity (patients per hour); is anyone surprised that efficient doctors spend little time with their patients?

6) Organizational structure: Shortell et al. have argued that healthcare systems have failed “to behave as systems”. (Shortell 1993: 447) Martin Hickey, former CEO of Lovelace, said, “There is really no such thing” [as a health care system], just “‘silos’—individual interest groups ranging from insurance companies to physicians fighting for increasingly scarce resources.” (Quigley 2002: A2)

7) Cosmology Episode: The worldview of most healthcare workers is what Weick called a “cosmology episode” (1993: 633) “when people suddenly and deeply feel that the universe is no longer a rational, orderly system.” Weick related this phenomenon to “vu jàdé –the opposite of déjà vu: I’ve never been here before, I have no idea where I am, and I have no idea who can help me.” (Weick 1993: 633)
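As promised under root cause 1, here is a minimal stock-and-flow sketch of the lead-lag problem. Every number is invented for illustration: the prevention program halves the inflow of new chronic cases the year it starts, yet the stock of existing cases, and therefore treatment spending, takes well over a decade to approach its new, lower level.

```python
# Stock-and-flow sketch of root cause 1 (timeline of causality).
# Hypothetical numbers: prevention cuts the INFLOW immediately,
# but the STOCK of cases responds only over many years.

cases = 1000.0          # stock: patients living with the chronic condition
NEW_PER_YEAR = 100.0    # inflow of new cases without prevention
RECOVERY_RATE = 0.10    # outflow: fraction of cases resolving each year

for year in range(20):
    prevention_on = year >= 5                 # program starts in year 5
    inflow = NEW_PER_YEAR * (0.5 if prevention_on else 1.0)
    outflow = RECOVERY_RATE * cases
    cases += inflow - outflow                 # the stock integrates the flows
    if year % 5 == 4:
        status = "on" if prevention_on else "off"
        print(f"year {year + 1:2d}: cases={cases:7.1f} (prevention {status})")
```

Fifteen years after the program begins, the caseload is still far above its eventual equilibrium; a budget process that looks only at the next cycle will conclude, wrongly, that prevention "did not work."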

UNINTENDED CONSEQUENCES

Logical, well-intentioned actions often result in unintended and frequently adverse outcomes. Low-flush toilets increase water usage. The cost of Medicare was more than 800% above projection and helped create the impression that health care is a Right. The recently implemented Health Insurance Portability and Accountability Act has guidelines without protections that will create lawsuits over confidentiality issues, either for withholding clinically important data or for inappropriate transmission of protected medical information. Unintended consequences are so common they have become a law. Dee Ward Hock, founding CEO of VISA International and author of Birth of the Chaordic Age (1999), memorialized “The Sheep’s First Institutional Law of the Universe: Everything has both intended and unintended consequences. Intended consequences may or may not happen; unintended consequences always do.” (Peirce 2000: 11)

In the 1980s, managed care (or “managed cost” per Kleinke 1998) was adopted as a means to reduce the escalation of health care costs. Managed care was based on the sound concept that, in the long term, it is cheaper to prevent ailments than to treat them. However, the timeline between cause and effect in medicine—years, even decades—was forgotten or ignored, and the focus became next month’s budget, not patient well-being in twenty or thirty years and the expenses thereby avoided. Short-term, linear thinking led to the following rationale. Preventative medicine is very expensive in the short term and most patients change health care plans every 2-3 years. Why should we spend current dollars on a patient who will not be in our plan when the complications of the present condition and the attendant costs, viz., of diabetes or hypertension or smoking, are years in the future? Therefore, it is not in our economic best interest to pay for patient education or in-home screening or smoking cessation programs.

The least care is the best care (financially). Therefore, the system is efficient when it: a) erects barriers for patients to enter the system, b) prevents doctors from ordering tests and consultations, and c) discourages coverage of expensive procedures. The net effect of managed care is to reduce access, restrict tests and procedures, and frustrate both care-providers and patients. Managed care has changed the doctor-patient relationship from fiduciary [“a person to whom power is entrusted for the benefit of another”] to “accidental adversaries”. (Aronson 1996-8)

“When managers take explicit actions to solve one problem they sometimes create another.” (Ashmos 1998: 44) Indeed, it seems to be the norm that healthcare managers and regulators create new, unexpected and worse problems when attempting to solve the current, immediate issue, as exemplified by an actual case study shown in Figure 2 below.

Figure 2: Fixing The Operating Room — A Systems Analysis

Legend for Figure 2

The Figure shows how silo (non-systems) thinking can induce actions that lead to an effect opposite of the desired one, viz., fixing the operating room (OR) budget can cause closure of the OR.

Panel A: A hospital finds an excess number of nurses in the OR for the number of surgeries done. As the hospital wants a balanced budget, there is a Gap (#1). Remedial action is taken by reducing the number of OR nurses, thus reducing waste (intended effect).

Panel B: Reducing the number of OR nurses creates a delay or constraint in the throughput of surgical patients. In turn, patients wait in hospital for operations (greater length of stay), doctors refer their patients to other facilities and patients seek care elsewhere. Hospital staff and physicians become frustrated and seek work at other hospitals, increasing turnover. These effects form feedback loops; because each one amplifies the original budget problem, they are reinforcing (positive feedback) loops rather than balancing ones.

Panel C: The downstream consequences of the effects seen in Panel B include fewer referrals to the index hospital, fewer surgeries producing less revenue, and higher turnover generating additional costs.

Panel D: The net result of the intermediate effects in Panels B and C is an imbalance, again, between the number of OR nurses and the OR budget, creating a new Gap (#2). Using the original reasoning, more nurses would be terminated, causing a repetition of the first series of interrelated events and ultimately leading, quite logically and inexorably, to closing the OR.
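The loop in Figure 2 reduces to arithmetic. The following sketch is ours, and every number in it is invented for illustration; it shows how each budget-balancing layoff shrinks throughput and referrals, re-opening the very gap it was meant to close.

```python
# Figure 2 as a toy simulation (all quantities hypothetical).
# Closing Gap #1 by layoffs slows the OR; delays send referrals elsewhere
# and raise turnover costs, producing Gap #2, and so on toward closure.

nurses, demand = 20, 500.0      # staff; surgical referrals per period
CASES_PER_NURSE = 30            # throughput one nurse supports
REV_PER_CASE = 5.4              # $K of OR revenue per case
COST_PER_NURSE = 150.0          # $K per nurse per period
TURNOVER_COST = 60.0            # $K extra cost per layoff (Panel C)

layoffs = 0
for round_no in range(1, 8):
    capacity = nurses * CASES_PER_NURSE
    cases = min(demand, capacity)
    cost = nurses * COST_PER_NURSE + layoffs * TURNOVER_COST
    gap = cost - cases * REV_PER_CASE               # the budget shortfall
    print(f"round {round_no}: nurses={nurses:2d} "
          f"cases={cases:5.1f} gap=${gap:6.1f}K")
    layoffs = max(1, int(gap // COST_PER_NURSE)) if gap > 0 else 0
    nurses = max(0, nurses - layoffs)               # "remedial action"
    if layoffs:
        demand *= 0.93          # Panel B: delays drive referrals elsewhere
```

Each round the "fix" balances the books on paper, and each round the gap reappears despite the shrinking staff.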

 

A New Mexico newspaper headline stated, “Governor Richardson’s administration is proposing a 3.4 percent reduction in payments to health care providers in order to help slow the growth of Medicaid spending.” (Albuquerque Journal, January 3, 2004) While this was intended to reduce costs, it will drive doctors away, force more people into emergency rooms and increase the need for acute care, and thereby ultimately increase costs.

Unintended consequences occur for several reasons. In addition to the long delay between cause and effect, many problem solvers believe that good intentions and logic are sufficient to achieve positive, predictable outcomes. There is no obligation among either managers or legislators/regulators to produce—in advance of action—evidence of desired effect. (Axelsson 1998) Silo thinking is common, and many do not consider the broader context of narrowly drawn decisions. Most important, the nature of a thinking system is to innovate, to co-evolve and self-organize, to modify the process, and to produce results different from those initially intended. It is as though the auto assembly line decided to reorganize the car, putting the steering wheel in the trunk and the headlights on top of the hood.

WHAT IS SYSTEMS THINKING?

Systems thinking embodies an approach to understanding how things work first described by Ludwig von Bertalanffy. (Bertalanffy 1975; Davidson 1983) Numerous strategists have modified and expanded the concepts over the years. (Ackoff 1999; Aronson 1996-8; Ashmos 2000; Beinhocker 1997; Kauffman 1980 & 1995; Laszlo 1972; McDaniel 1997 & 2001; Senge 1990; Sterman 2002) The central thesis is that the effects or outputs of any system depend on the interactions of its parts and that studying the parts in isolation will not provide an accurate picture of the system. Network analysis (Wellman & Berkowitz 1988; Scott 1991; Ibarra 1993; Marsden & Friedkin 1994; Stephenson 1996; Rowley 1997) starts from a similar assumption: one must study the multiple, cross-level interactions over time rather than simple dyadic interplay. Measuring wheel size and crankshaft length provides no understanding of what a car can do. Furthermore, optimizing the function of parts in isolation (rather than in relation to all other parts) often degrades net system outcomes. (Ackoff 1999)

Systems thinkers distinguish analysis, with its root lysis (“to cut or break apart”), from anasynthis, with its root synthis (“to make whole or put together”; Aronson 1996-8), in which one studies an intact whole rather than the system parts. Detailed study of hydrogen and oxygen separately provides no understanding of the wetness of water. Management theorists make an analogous distinction in how organizations deal with complexity. (Alexander 1986; Ashmos 2000; McDaniel 2001) Those using silo thinking and analysis seek to simplify, to reduce complexity. Those employing systems thinking absorb complexity and diversity, making the whole stronger (anasynthis).

Most current management approaches are short-term, silo, linear, and binary (StSLB). Consider healthcare. St) We track 30-day survival after surgery rather than functional status ten years later; S) silo thinking refers to solving problems in isolation, such as giving a drug to improve kidney function without regard to its effect on the liver; L) as elucidated by Sterman (2002), most analysts think in terms of flow charts and interactions in a straight, linear sequence rather than multi-layered feedback interactions over time (see Figure 3 below); and B) only positive or negative effects are considered, rather than variable degrees of effect. While life and death are mutually exclusive, they do not exhaust the possible outcomes: one can be alive and well, or alive but incontinent or blind. Thinking in binary outcomes is incomplete. Some examples of StSLB thinking in healthcare are shown in Table 2 below.

Table 2: StSLB Reasoning and Unintended Consequences in Healthcare

Time  | Action              | Intended Effect                                            | Actual Effect
Local actions
1982* | Out-patient cath    | Better quality; reduced cost                               | Not implemented when reported because the enabling bureaucracy did not exist
2001* | Fix OR budget       | Improve hospital finances                                  | ↓↓ Net revenue; ↓↓ quality
Federal actions
1965  | Medicare            | Health care for those under or over the age of employment | Health care became a right
1980  | Managed care        | Control costs                                              | Revenue shifting; change in governance; ↓ access; ↓ workforce
1996  | HIPAA               | Coverage when changing jobs, later altered to protecting the confidentiality of medical information | ↓ Communication; ↑ errors; ↑ costs
1997  | Balanced Budget Act | Reduce national medical expenditures                       | Financial crisis for most doctors and all academic medical centers (Beller 2000)
2003  | Rx benefit          | Provide drugs to needy seniors                             | Your guess is as good as ours (or theirs)

In systems thinking, the distinction between reason and sense has important practical implications. Reason is defined as “mental powers concerned with forming conclusions, judgments or inferences”, while sense refers to “grasp the meaning of; to understand.” Reason uses logic to comprehend an order or sequence, while sense-making may use logic and/or perceptual senses with the goal of interpretation and valuation within a context. There are behaviors that defy logic and results that seem incomprehensible in any Newtonian or logical pattern. It is not a matter of just knowing more or compiling more data in order to see the logic in an outcome. Heisenberg proved that some things are simply unknowable. This does not provide an excuse to stop seeking understanding; it simply requires the systems thinker to abandon predictability and control in order to accept sense-making and to facilitate emergence.

Application Of Systems Thinking To Thinking Systems

The application of systems thinking forces planners and strategists to focus on processes, interactions and causes of poor outcomes, rather than on individual players, isolated components of a system or interim results. “When only the superficial symptoms of complex problems are addressed, the underlying problem typically remains unsolved, and even can be exacerbated if the solution feeds into a vicious cycle (such as providing food as direct aid, which relieves starvation but perpetuates the problem of population growth in inhospitable climates).” (Edmondson 1996: 9, quoting Senge 1990) Managed care was touted as the answer to the ills of US healthcare, but it has both exacerbated the old problems and created new ones.

Using systems thinking first requires consideration of scale or boundaries. Is the system under consideration the left coronary artery, the heart, the patient, the operating room, or the hospital? There is no such thing as a completely closed, self-contained system. “A system in one perspective [viz., healthcare] is a subsystem in another” [viz., the US nation state]. (Laszlo 1972: 14) This obligates the planner to consider the broader context of decisions.

Systems thinkers consider three inters: the interactions of components within a process (viz., heart surgery), the interrelationships of processes within a system (viz., healthcare), and the inter-connections between systems and across time. To explicate these inters, systems thinkers apply archetypes like “accidental adversaries”. They identify characteristics such as self-stabilizing, goal-seeking, self-programming, program-following, anticipatory, environment-modifying, self-replicating or self-maintaining. They organize using loops, such as balancing or reinforcing, and internal processes like escalation. Finally, the systems analyst projects all possible outcomes, including one particularly applicable to healthcare: “fixes that fail”. (Aronson 1996-8) Despite the number of seeming imponderables and the apparent unpredictability of the outcomes, with modern computer technology it is possible to model behaviors and outcomes of willful, thinking systems. (Sterman 2002)
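For readers unfamiliar with the loop vocabulary, the sketch below (ours, with purely illustrative dynamics) contrasts the two elementary loops from which such models are composed: a balancing loop seeks a goal and settles, while a reinforcing loop compounds on itself, the structure underlying escalation and "fixes that fail."

```python
# The two elementary feedback loops of system dynamics modeling.
# Balancing: the state is pulled toward a goal (gap-closing).
# Reinforcing: the state feeds back on itself (compounding).

GOAL = 100.0         # target for the balancing loop
balancing = 20.0     # state pulled toward the goal
reinforcing = 20.0   # state that grows in proportion to itself

for t in range(0, 25, 4):
    print(f"t={t:2d}  balancing={balancing:6.1f}  reinforcing={reinforcing:9.1f}")
    for _ in range(4):
        balancing += 0.3 * (GOAL - balancing)   # close 30% of the gap per step
        reinforcing *= 1.15                     # grow 15% per step
```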

Figure 3: The Broader Context of Medical Events: Linear versus Systems Analysis

Legend for Figure 3

State and Federal governments must deal every day with budgetary constraints. In the upper left panel, a linear thinker finds that requests for funds are greater than funds available: the budget is out of balance. Therefore, the legislator cuts allocations, viz., by reducing reimbursements. (Albuquerque Journal 2004) Balance is restored. Problem solved.

The other panels, from lower left to upper right, display a systems analysis of a reduction in funding for asthma prevention. Cessation of asthma prevention increases the frequency of acute asthma attacks and perforce the need for hospital admissions. Beyond the boundaries of the medical system (MED), i.e., in the economy (ECON), more hospital admissions produce increased expenditures that cause budgetary imbalance, “solved” by cutting allocations or reimbursements (viz., dropping asthma prevention programs). This is the immediate or direct feedback loop for asthma, but not the only effect of cutting asthma prevention.

The child who is ill at home or in hospital is absent from school (educational system, EDUC). This requires the parent to be absent from work. Over time, from Childhood to Adolescence, the child’s school performance drops and the parent can lose his or her job. Both will lead to increased unemployment (in the “Society” system, SOC), which in turn increases expenditures and forces budget cuts. Ultimately, in the Adulthood panel, the interactions and loops described previously lead to a reduced GDP.

Actions taken at one time and in one subsystem have ripple effects in other systems and downstream over time.

 

A contrast between the application of systems thinking and the more common linear approach is instructive. In Figure 3, an StSLB thinker (upper left graphic) might observe that the provisional budget is out of balance and reason that by reducing allocations, viz., for asthma prevention programs, one can balance the budget. Problem solved. The systems thinker would see it differently.

Cutting funds for asthma prevention (lower left-hand grid) is an action taken in the economic system that causes an effect in the medical system, i.e., more acute asthma and more hospital admissions, increasing expenses (back to the economic system). Resulting school absences would reduce school performance (education system) and would also require the parent to take off work, perhaps losing his/her job (economic system), negatively impacting productivity (national system). Over time, increased unemployment and lack of education increase crime (social system). Systems analysis shows that the long-term net effect of cutting asthma prevention is a decrease in GDP.
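The divergence between the two readings of Figure 3 can be shown with back-of-the-envelope arithmetic. Every figure below is hypothetical, chosen only to make the structural point: the linear view books the cut as savings, while the systems view nets out the downstream costs in the MED, ECON, EDUC and SOC loops.

```python
# Linear vs. systems accounting of cutting an asthma prevention program.
# All dollar figures are invented for illustration.

PREVENTION_BUDGET = 2.0          # $M/yr: the "savings" the linear view books

# Downstream effects the systems view adds (assumed magnitudes):
extra_admissions = 400                       # attacks no longer prevented (MED)
COST_PER_ADMISSION = 0.008                   # $M per hospital admission
parent_workdays_lost = extra_admissions * 5  # missed workdays per admission (ECON)
COST_PER_WORKDAY = 0.0002                    # $M of lost output per day
dropout_gdp_loss = 0.5                       # $M/yr from reduced schooling (EDUC/SOC)

linear_view = PREVENTION_BUDGET
systems_view = (PREVENTION_BUDGET
                - extra_admissions * COST_PER_ADMISSION
                - parent_workdays_lost * COST_PER_WORKDAY
                - dropout_gdp_loss)

print(f"linear 'savings':   ${linear_view:+5.2f}M per year")
print(f"systems net effect: ${systems_view:+5.2f}M per year")
```

With these assumptions the $2.0M "saving" becomes a $2.1M annual loss once the feedback paths are counted.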

Referring to the previous section titled “Healthcare: A Dysfunctional Thinking System”, the use of systems thinking and system dynamics (Ackoff 1999; Aronson 1996-8; Kauffman 1980 & 1995; McDaniel 2001; Sterman 2002) seems particularly appropriate to “dissolve” (definition per Ackoff 1999) the seven root causes of healthcare malaise. It is the “inters”, not individual actions, that determine outcomes. Our current healthcare system is malfunctioning because it is a thinking system that is viewed as and managed like a machine. In medical parlance, we have the wrong diagnosis and therefore are administering an incorrect treatment. We think (erroneously) that we manage cost by constraining one system input—money—while root causes of escalating expenses remain unaffected or are exacerbated. Healthcare planners and decision-makers need to: a) recognize the nature of the system and its parts; b) define what results they really want; and c) consider how these outcomes might emerge from a properly designed system.

SPECIFIC RECOMMENDATIONS

The application of systems thinking to thinking systems requires acceptance of the inherent nature of a thinking system: it has free will, it learns and it innovates. This means the planner or leader must develop a process for change, NOT a detailed strategic plan. Thinking systems will reject the strategic plan for two reasons. First, the system is composed of people who must be engaged and must own the change; otherwise, they will subvert it to preserve the status quo. Second, thinking systems self-organize, the parts co-evolve and results emerge from these interactions. If a strategist tries to implement his or her plan, the thinking system will still self-organize, thereby changing the planned process, will still co-evolve, altering the initially designed structure and function, and the results that emerge are unlikely to correspond to those originally intended. Examples in Table 2 above demonstrate such unintended consequences in healthcare, and while the focus of this article has been healthcare, the recommendation to utilize systems thinking applies to any thinking system, such as education.

To apply systems thinking successfully, one must first define in general terms the preferred long-term outcomes or results. This requires dialogue with and education of the customer base. In healthcare or education or the environment, this means the entire populace. With the current information exchange capabilities of our nation, this is possible. Achieving consensus on long-term outcomes requires making the consequences of decisions explicit to the population. Do we want healthcare as an entitlement? Without an appropriate balancing loop, as at present (see Aronson 1996-8), what will be the financial consequences to our country? Can we agree on the real purpose of education? How will we determine that the desired outcomes are achieved?

Experts, legislators, and leaders at all levels tend to lecture the populace, the governed, the workers, rather than engage them in dialogue. In national-level problems such as healthcare and education, engagement of the public is a necessary first step. “The patient and the community [the nation] must be invited to participate in clinical quality improvement work.” (Shortell 1998: 615) Regarding our educational system: “The right to know is like the right to live. It is fundamental and unconditional in its assumption that knowledge, like life, is a desirable thing.” (Shaw 1913: 39) The public must decide what the desired outcomes and priorities are. Mobilization of public opinion and development of a consensus are necessary to have the force required to overcome political inertia and self-serving decision-making. If the public decides, the public will be more willing to accept the consequences of their decisions, including their responsibilities (Kite 2003) in these matters and the costs. Without a national consensus and a national will, radical change and implementation of systems thinking will not be possible.

Changes will be necessary in management philosophy, worker culture and the external environment, viz., the tort system for medical malpractice and the governance structure of school boards or medical centers. (Kotter & Schlesinger 1979; Brass & Burkhardt 1993) They are needed in order to allow the system parts to interact properly rather than contend with conflicting incentives and contradictory value systems. Again, this recognizes that the system interacts with the external environment: these are open, not closed, systems. To accomplish such sweeping change, even with popular support and therefore political clout, requires a Champion-with-power. Left to a committee or to politicians who spend every waking hour seeking reelection, nothing substantive will happen.

Ways to achieve desired outcomes will require testing. For instance, what approaches actually work to prevent obesity or school dropouts? Taking a lesson from the pediatric oncologists who made such great strides in curing leukemia through national collaborative trials, we need to perform large pilot projects to determine how to achieve the desired results. This returns us to a national database, risk taking, and long-term trials. Finally, the interactivity of system parts must always be kept in mind. Healthcare and education are not isolated systems but interdependent ones. The solution to obesity is likely to be found at least partly in the educational process, and vice versa. The school dropout rate will go down when children learn personal responsibility for their own well-being.

The current measures, viz., 30-day survival after heart surgery and end-of-year test scores for No Child Left Behind, do not correspond to what we think the public wants: in healthcare, long-term well-being, and in education, productive and globally competitive citizens. Measuring these outcomes requires very long-term outcome measures and, given the mobility of our citizens, both physical and in changing schools or health plans, this leads to national databases. With such a database, epidemiologists and clinicians could assemble statistical populations large enough to answer questions of causality and best practice. They could see developing trends before national outbreaks or environmental health disasters. With such a database, educators could assess the best approaches to pre-school development, the optimal third grade education program and higher education choices. Without such a database, we will continue to face multiple, fervently held, and contradictory opinions as well as unintended consequences.

CONCLUSION

In the modern world, machines manipulate physical objects, computers manipulate data, and people think. Because results are achieved by willful interactions among people as well as between people and machines (including computers), modern outcomes derive from processes and systems, not from individuals in isolation. We focused on healthcare as a thinking system but assert the general applicability of our rationale. In a 2004 opinion piece, Arend and Kreeps used similar logic to urge a change in the “mindset” of our national intelligence system—to one of innovation. Creativity and purposefulness are unique attributes of thinking systems. To realize their full potential, thinking systems need systems thinking.

 

REFERENCES

1. Ackoff RL. 1999. Ackoff’s Best: His Classic Writings on Management. Wiley & Sons, New York.

2. Ackoff RL, Rovin S. 2003. Redesigning Society. Stanford Business Books: Stanford, CA.

3. Albuquerque Journal, January 3, 2004, No.3, E3

4. Alexander JA, Fennell M (1986) “Patterns of decision making in multihospital systems.” Journal of Health and Social Behavior 27(1): 14-27

5. Arend AC, Kreeps SE (December 30, 2004) “New threats must forge new mindset.” Albuquerque Journal No. 365, A9.

6. Arndt M, Bigelow B. 2000. The transfer of business practices into hospitals: history and implications. Advances in Health Care Management Vol. 1: 339-368

7. Aronson D. 1996-8. Systems thinking website and specific pages as: www.systems-thinking.org.  Accessed April 2004.

8. Ashmos DP, McDaniel RR. 1991. Physician participation in hospital strategic decision making: The effect of hospital strategy and decision content. Health Services Research 26(3): 375-401.

9. Ashmos DP, Huonker JW, McDaniel RR. 1998. The effect of clinical professional and middle manager participation on hospital performance. Hlth Care Mgmt Rev 23(4): 7-20.

10. Ashmos DP, Duchon D, McDaniel RR. 2000. Organizational response to complexity: the effect on organizational performance. Journal of Organizational Change 13(6): 577-594

11. Axelsson, R. 1998. Toward an evidence-based health care management. International Journal of Health Planning and Management 13; 307-17

12. Baker, E. 2001. Learning from the Bristol Inquiry. Cardiology in the Young 11: 585-587

13. Beinhocker ED. 1997. Strategy at the edge of chaos. The McKinsey Quarterly Winter #1: 24-40.

14. Bertalanffy L. 1975. Perspectives on General Systems Theory: Scientific-Philosophical Studies. Braziller, New York.

15. Berwick, DM. 1989. Continuous Improvement as an ideal in health care. New England Journal of Medicine 320(1): 53-56.

16. Brass DJ, Burkhardt ME (1993) Potential power and power use: An investigation of structure and behavior. Academy of Management Journal 36(3): 441-470.

17. Cameron KS, Freeman SJ (1991) “Cultural congruence, Strength, and Type: Relationships to Effectiveness” Research in Organizational Change and Development 5: 23-58.

18. Champy J (1995) Reengineering Management. HarperBusiness: New York, NY.

19. Chassin MR. 1998. Is health care ready for six sigma quality? The Milbank Quarterly 76(4): 565-591.

20. Christensen CM, Bohmer R, Kenagy J (2000) “Will disruptive innovations cure health care?” Harvard Business Review 78(5): 102-112

21. Coleman J, Katz E, Menzel H (1957) “The diffusion of an innovation among physicians.” Sociometry 20(4): 253-270

22. Collins JC, Porras JI. Built to Last. HarperBusiness, New York, 1997

23. Conger JA, Kanungo RN (1988) The empowerment process: Integrating theory and practice. Academy of Management Review 13: 471-482.

24. Coutu DL. 2002. The Anxiety of Learning. [Interview with Edgar Schein]. Harvard Business Review March pp 100-106

25. Covey S (1989) The Seven Habits of Highly Effective People. Simon and Schuster: New York City.

26. Davidson M. 1983. Uncommon Sense — The Life and Thought of Ludwig von Bertalanffy (1901-1972), Father of General Systems Theory. Tarcher, Inc., Los Angeles.

27. Deal TE, Kennedy AA (1982) Corporate Culture: Rites and Rituals of Corporate Life. Perseus Publishing, Cambridge, MA.

28. Edmondson AC. 1996. Learning from mistakes is easier said than done: Group and organizational influences on the detection and correction of human error. Journal of Applied Behavioral Science 32(1): 5-28.

29. Hock D. 1999. Birth of The Chaordic Age. Berrett-Koehler Publ., San Francisco.

30. Ibarra H. 1993. Network centrality, power, and innovation involvement: Determinants of technical and administrative roles.  Academy of Management Journal, 36: 471-501.

31. Johnson S (2001) Emergence — The connected lives of ants, brains, cities and software. Scribner: New York.

32. Jung CG. 1973. Four archetypes: Mother/Rebirth/Spirit/Trickster. Princeton, NJ, Princeton University Press.

33. Kauffman SA (1980) Systems One: Introduction to Systems Thinking. SA Carlton, Minneapolis, MN

34. Kauffman SA. 1995. At Home in the Universe. Oxford University Press, New York

35. Kerr S. 1975. On the folly of rewarding A, while hoping for B. Academy of Management Journal 18: 769-783

36. Kite M (June 03, 2003) “Fat people will have to diet if they want to see the doctor.” London Times A14

37. Kleinke JD. 1998. Bleeding Edge: The Business of Health Care in the New Century. Aspen Publishers, Gaithersburg, MD

38. Kotter JP, Schlesinger LA (1979) “Choosing strategies for change.” Harvard Business Review Mar-Apr 57(2): 106-114

39. Laszlo E. 1972. The Systems View of the World. George Braziller, New York.

40. Leape LL, Brennan TA, Laird N, et al. 1991. Nature of adverse events in hospitalized patients: Results of Harvard Medical Practice Study II. N Engl Jrnl Med 324(6): 377-84.

41. Leape LL. December 21, 1994. Error in medicine. JAMA 272(23): 1851-1857.

42. Marsden, PV, & Friedkin, NE. 1994.  Network studies of social influence.  In S. Wasserman & J. Galaskiewicz (Eds.), Advances in Social Network Analysis: Research in the social and behavioral sciences. 3-25.  Thousand Oaks, CA: Sage.

43. McDaniel RR. 1997. Strategic leadership: A view from quantum and chaos theories. Health Care Management Review 22(1): 21-37.

44. McDaniel RR, Driebe DJ. 2001. Complexity Science and Health Care Management. Advances in Health Care Management 2: 11-36

45. McFadden KL, Towell ER, Stock GN. 2004. Critical success factors for controlling and managing Hospital Errors. Quality Management Journal 2004; 11(1) 61-73.

46. Millenson, M. 2003. The Silence. Health Affairs 22(2): 103-112.

47. Miller WL, Crabtree BF, McDaniel R, Stange KC. May 1998. Understanding change in primary care practice using complexity theory. Jrnl Fam Practice 46(5): 369-376

48. Miller MM. December 25, 2003. Don’t look for responsible leadership under tree. Albuquerque Journal Vol 358, A12

49. Neumann E. 1955. The great mother: An analysis of archetype. Princeton, NJ: Princeton University Press.

50. Peirce JC (2000) The paradox of physicians and administrators in health care organizations. Health Care Management Review 25(1): 7-28.

51. Pfeffer J. 1995. Producing sustainable competitive advantage through the effective management of people. Academy of Management Executive. February 9(1): 55-69.

52. Quigley W. October 28, 2002. The health of health care. Quoting Martin Hickey, former Lovelace CEO. Albuquerque Journal Outlook, pp. 3, 9

53. Rand A (1957) Atlas Shrugged. Signet Books: New York.

54. Ritchie JB, Hammond SC (2005) “We (still) need a world of scholar-leaders: 25 Years of reframing education.” Journal of Management Inquiry 14(1): March, pp. 6-12

55. Rogers EM. 1983. Diffusion of Innovations, 3rd Ed. Free Press, New York.

56. Rowley TJ. 1997. Moving beyond dyadic ties: A Network theory of stakeholder influences. Academy of Management Review 22(4): 887-910.

57. Scott, J. 1991. Social network analysis: A Handbook.  Thousand Oaks, CA: Sage.

58. Senge PM. 1990. The Fifth Discipline-The Art and Practice of the Learning Organization. Currency Doubleday, New York.

59. Shaw GB. 1913. Preface to The Doctor’s Dilemma, Penguin, Baltimore 1954

60. Shortell SM, Gillies RR, Anderson DA, Mitchell JB, Morgan KL. Winter 1993. Creating organized delivery systems: The barriers and facilitators. Hospital & Health Services Administration 38(4): 447-466

61. Shortell SM, O’Brien JL, Carman JM, Foster RW, Hughes EFX, Boerstler, O’Connor EJ. June 1995. Assessing the impact of continuous quality improvement/total quality management: Concept versus implementation. Health Services Research 30(2): 377-401

62. Shortell S. March 1997. Commentary on: “Physician-Hospital integration and the economic theory of the firm” by JC Robinson. Medical Care Research and Review 54:3-24

63. Shortell SM, Bennett CL, Byck GR. 1998. Assessing the impact of continuous quality improvement on clinical practice: What it will take to accelerate progress. Milbank Quarterly 76(4): 593-624

64. Stankard M. 2002. Management Systems and Organizational Performance. Quorum Books, Westport, CT.

65. Stephenson K, Lewin D. 1996. Managing workforce diversity: macro and micro level HR implications of network analysis. International Journal of Manpower 17(4/5): 168-196.

66. Sterman JD. 2002. System dynamics modeling: Tools for learning in a complex world. IEEE Engineering Management Review First Quarter: 42-52.

67. Tuchman B (1984) The March of Folly. Alfred A. Knopf: New York.

68. Waldman JD, Young TS, Pappelbaum SJ, et al. 1982. Pediatric cardiac catheterization with 'same-day' discharge. American Journal of Cardiology 50:800-804.

69. Waldman JD, Smith HL, Hood JN. 2003. Corporate culture: The missing piece in the healthcare puzzle. Hospital Topics 81(1): 5-14.

70. Waldman JD, Yourstone SA, Smith HL. 2003. Learning Curves in Healthcare. Health Care Management Review 28(1): 43-56.

71. Waldman JD, Schargel F. October 2003. Twins in trouble: The need for system-wide reform of both healthcare and education. Total Quality Management & Business Excellence 14(8): 895-901

72. Waldman JD, Hood JN, Arora S, Smith HL. 2004. Changing the approach to our workforce, with focus on healthcare. Journal of Applied Business & Economics Fall, 24(2): 38-60.

73. Wellman, B, Berkowitz, SD. 1988. Social structures: A Network Approach. New York: Cambridge University Press.

74. Weick KE. 1993. The collapse of sense-making in organizations: The Mann Gulch disaster. Administrative Science Quarterly 38: 628-652.

75. Westley, FR. 1990. Middle managers and strategy: Microdynamics of inclusion. Strategic Management Journal 11: 337-351.

 

 
