
Chicken Little, Cassandra, and the Real Wolf

Somewhere during the fracas that followed the publication of our book, The Limits to Growth, I remember finding one of my co-authors, Jørgen Randers, pacing the office in frustration. In his lilting Norwegian-English, he lamented, "People just don't know how to think about the future!"

His complaint was that our book, which contained twelve computer graphs charting out twelve different possible paths for the human economy up to the year 2100, was being received as an absolute prediction. A prediction of doom, at that, though at least one of the graphs showed a future in which eight billion people maintain a European standard of living in a way that does not undermine the earth's resource base—probably one of the most optimistic forecasts anyone has ever made. We were trying to say that the future is a matter of CHOICE, and that sustainable, equitable, wonderful choices were possible. But we were heard through a cultural filter that apparently saw the future as PREDETERMINED, to be predicted, but not changed—certainly not chosen. That culture also clearly expected—or at least found thrills and excitement, headlines and newspaper sales, in the thought—that the predetermined future will be a disaster.

Disaster—what could be more fascinating? Think of the content of the nightly news. The undying story of Titanic. The movies about volcanic eruptions and asteroid crashes. The slight edge of glee in some of the more extreme Y2K fanatics. There is something utterly delicious about the thought of the End-Of-The-World-As-We-Know-It.

Back when Jørgen was pacing the floor, we were honestly shocked by the reaction to our scenarios. We had not thought much about the culture in which we were speaking, though we ourselves were part of that culture. But we were at MIT; we had been trained in science. The way we thought about the future was utterly logical: if you tell people there's a disaster ahead, they will change course. If you give them a choice between a good future and a bad one, they will pick the good. They might even be grateful.

Naive, weren't we.

We ignored thousands of years of crystal balls, Delphic oracles, tea leaves, astrology, prophets—all of which are still remarkably alive and well in the subconscious of the computer age. Mythology gives us few examples of the CONDITIONAL forecast: if you do A, the result will be B; if you do X, the result will be Y. You choose. Even when the ancient forecasts did happen to be conditional, somehow the hero (never, that I can remember, a heroine) inevitably made the disastrous choice. Orpheus can lead Eurydice out of the underworld as long as he doesn't turn around to look at her—which he does. Lord Krishna tells Yudhishthira that if he goes on gambling, there will be terrible consequences—and he goes on gambling. Siegfried can return the Ring to the Rhine maidens and bring peace to heaven and earth, or keep it and bring down himself, his bride Brünnhilde, and all Valhalla—guess which he does?

You know, I love that last scene of Götterdämmerung, where Brünnhilde charges into the funeral pyre and Valhalla crumbles and the Rhine rises to swallow up everything. Let's admit an inborn, irresistible attraction to catastrophe and move on, because we are also formed by other myths.

There's Chicken Little, the sincere but silly forecaster of hysterical nonsense. Decades later some of our critics still put us in that box. I would prefer to be associated with the tale of the boy who cried "wolf"; at least there was a real wolf.

But the legendary prophet with whom I most feel a connection is Cassandra, to whom the god Apollo gave the ability to foresee the future, and then, after she displeased him, the terrible curse that no one would ever believe her. That story gives me shudders.

It also shows the ancient Greeks' sophistication about the perverse logic of prognostication. If people had believed her, then Cassandra WOULDN'T have been able to foretell the future, because action would have been taken to avoid foreseen disasters. The Cassandra legend must be one of the earliest recorded human recognitions that there is a basic contradiction between prediction and choice. A predictable world has no room for choice; a choosable world is not predictable.

Predictability and Choice

Of course the world must be made up of a complicated mixture of BOTH predictability AND choice; otherwise we wouldn't have been able to maintain for millennia such a rich legendry of predictions and inevitable tragedies, and simultaneously a belief in free will. In a brilliant essay on foretelling the future, E.F. Schumacher wrote:

When the Lord created the world and people to live in it...I could well imagine that He reasoned with Himself as follows: "If I make everything predictable, these human beings, whom I have endowed with pretty good brains, will undoubtedly learn to predict everything, and they will thereupon have no motive to do anything at all, because they will recognize that the future is totally determined and cannot be influenced by any human action. On the other hand, if I make everything unpredictable, they will gradually discover that there is no rational basis for any decision whatsoever and, as in the first case, they will...have no motive to do anything at all. Neither scheme would make sense. I must therefore create a mixture of the two. Let some things be predictable and let others be unpredictable. They will then, amongst many other things, have the very important task of finding out which is which." (E.F. Schumacher, Small Is Beautiful)

It isn't all that difficult to begin, at least, to get a handle on what kinds of things are predetermined and what can be chosen.

System dynamics—for instance, the sort of computer modeling we used in Limits to Growth—keeps careful separate track of physical things, which have to obey physical laws (e.g., material objects age and take time to construct; they cannot appear from or disappear to nowhere; they cannot be in two places at the same time), and goals and decisions. Physical things are, most of the time, predictable. Goals and decisions fall into the realm of information. Information is often subject to choice, change, rearrangement, improvement, deterioration, bias, utter derangement, or total transformation.

That distinction between physical stuff and mental stuff sounds simple and obvious, until you put the two realms together and have human choice interacting with, influenced by, and trying to influence physical things. Then can come surprises, for many reasons. Something in the physical realm may take a lot longer to move or change or unfold than anyone expects—or something may blow up. Something in the information realm (such as a concerted response to reduce greenhouse gas emissions) may stay stuck far longer than it needs to, because of denial, paradigm blindness, lack of imagination, or entrenched opposition. Or something in the information realm that has been stuck for a long time (such as the legitimacy of the Soviet Union) may suddenly shift overnight.

A nuclear power plant, once built, generally operates predictably for fifteen to thirty years, but now and again human error produces a Three Mile Island or a Chernobyl. Or human choice produces a Shoreham and a Zwentendorf, fully built plants on Long Island and in Austria respectively, which by political choice were never started up.

President Nixon's "Project Independence," dreamed up after the 1973 oil embargo, promised that the United States would be free of imported oil by 1980. System dynamicists saw immediately (and later demonstrated with a computer model) that, given the expected lifetime of installed oil-burning furnaces and cars and inevitable delays in finding and gearing up domestic oil wells, that goal was physically impossible. (An amazing amount of political discussion is directed toward goals that are physically impossible.)
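The impossibility argument is just exponential turnover arithmetic: equipment already installed retires only as fast as its average lifetime allows. A minimal sketch of the reasoning (the fifteen-year lifetime is an illustrative assumption, not a figure from the system dynamicists' actual model):

```python
import math

def fraction_remaining(years: float, avg_lifetime: float) -> float:
    """Fraction of an installed stock still in service after `years`,
    assuming exponential retirement at rate 1/avg_lifetime."""
    return math.exp(-years / avg_lifetime)

# Suppose every furnace and car sold after the 1973 embargo burned no
# imported oil at all (wildly optimistic). With an illustrative average
# equipment lifetime of 15 years, most of the 1973 oil-burning stock
# would still be in service by the 1980 deadline, 7 years later:
still_in_service = fraction_remaining(7, 15)
print(f"{still_in_service:.0%} of the 1973 stock remains in 1980")  # about 63%
```

No plausible policy changes that number much: the bulk of the 1980 oil demand was already welded, bolted, and parked in place in 1973, which is why the goal was physically, not just politically, out of reach.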

Mix physical beings with mental models, and choice becomes—maddeningly—a matter of risk. The fifteen-year-olds in the current population will fairly predictably start to vote in three years, have children over the next five to twenty-five years, retire in fifty years, and die in sixty-five years. The exact numbers are mushy, of course, because now we are talking human behavior and genetics. Some of those fifteen-year-olds, exercising "choice," will already have had children; some, mostly male, will have children when they're sixty. Some will never vote. Nevertheless, put enough of us together, and our collective behavior is predictable enough for insurance companies to make a lot of money betting on it.

As Schumacher also said, "...most people, most of the time, make no use of their freedom and act purely mechanically....When we are dealing with large numbers of people many aspects of their behaviour are indeed predictable; for out of a large number, at any one time only a tiny minority are using their power of freedom, and they often do not significantly affect the total outcome."
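The insurance-company point is the law of large numbers, and it can be demonstrated in a few lines. A sketch, with a made-up 2-percent annual probability standing in for any individual behavior (all the numbers here are illustrative assumptions):

```python
import random

def relative_spread(n_people: int, p: float = 0.02, trials: int = 400) -> float:
    """Simulate `trials` populations of n_people, each person independently
    'acting' with probability p; return the standard deviation of the count
    divided by its mean -- a measure of aggregate unpredictability."""
    random.seed(42)  # fixed seed so the sketch is reproducible
    counts = []
    for _ in range(trials):
        counts.append(sum(1 for _ in range(n_people) if random.random() < p))
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return (var ** 0.5) / mean

# Any one person is a coin flip, but the relative spread of the aggregate
# shrinks roughly as 1/sqrt(N): predictability emerges from sheer numbers.
for n in (100, 2_500):
    print(n, round(relative_spread(n), 3))
```

The individual remains free; it is only the sum that the actuaries can bet on.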

System dynamicists boil down the difference between predictability and choice to some simple rules of thumb:

- The larger the aggregation (of people, nuclear power plants, trees, whatever), the more predictability.

- In the short term, while infrastructure facilities remain in place, while pipelines under construction or materials in transit discharge their contents, while people age, while trees grow, while existing pollutants work their way out of the groundwater or the bottom mud, a great deal (but not everything) cannot be changed and therefore can be predicted.

- In the long term, almost everything can change. Infrastructure facilities may have been replaced (solar-powered? informed by whole-system thinking?). There may be a new generation of people (with new mindsets and cultures?) and trees (tightly-controlled plantations? a slow ecological return to whatever nature chooses?). Therefore, not much can be predicted, but a great deal can be chosen.

- In the middle term, there is a messy combination of predictability and choice.

The actual duration of the "short," "middle," and "long" term depends on the average turnover rates of materials in the system under discussion. Turnover rates are orders of magnitude different between mayflies and mountains, between computers and cathedrals, between easily degraded and recycled pollutants such as human sewage and nearly immortal pollutants such as PCBs, CFCs, and plutonium. It is often, but not always, true that entities with similar time constants (lifetimes in years or decades, say) interact more strongly with each other than with entities having wildly different time constants (lifetimes in nanoseconds, say, or centuries or millennia). Some of the biggest unpredictabilities come, however, when that rule is broken. A new virus hits a hitherto-unexposed human population. Emissions from the industrial economy start turning the ponderous flywheels of the global climate. All hell breaks loose.
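For an exponentially turning-over stock, the time constant sets everything: after one time constant about 37 percent of the stock remains, and clearing 95 percent of it takes about three time constants. A sketch of that arithmetic (the specific time constants below are illustrative assumptions, not measured values):

```python
import math

def years_to_clear(fraction_cleared: float, time_constant_years: float) -> float:
    """Years for an exponentially decaying stock to shed the given fraction:
    solves 1 - exp(-t/tau) = fraction_cleared for t."""
    return -time_constant_years * math.log(1 - fraction_cleared)

# Illustrative time constants: sewage degrading in a few weeks versus a
# persistent industrial pollutant lingering for on the order of a century.
for name, tau in [("sewage", 0.05), ("persistent pollutant", 100.0)]:
    print(f"{name}: 95% gone after ~{years_to_clear(0.95, tau):.1f} years")
```

Same law, four orders of magnitude apart, which is why "short term" for one pollutant is "longer than recorded history" for another.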

The information realm is usually more fluid than the physical realm, more open to choice, less predictable. But even within this realm, there are some useful guidelines for sorting out the predictable from the choosable. Garrett Hardin laid out some of them once in a clever essay about three kinds of truth:

- Always-True Truth: This truth remains true no matter what anyone thinks or says about it. For example, burning fossil fuels creates carbon dioxide; the carbon dioxide concentration in the atmosphere has increased by more than 30 percent in the last century; global surface temperatures in 1998 were the warmest in recorded history.

- Truth-by-Repetition Truth: This truth is more likely to become true the more you say it. I can run a marathon; every child wants a Furby for Christmas; the stock market is about to crash; the government can't do anything right; Social Security will go bankrupt. This kind of truth is the stock-in-trade of the public relations people and the politicians. Say it often enough, however absurd it is, and you might be able to gin up enough shared belief to create it as reality. (Unless it violates an Always-True Truth.)

- Doubt-by-Repetition Truth: This truth may become less true the more you say it. I'm about to sneeze; there will be a surprise attack on Baghdad tomorrow; the stock market is not overextended; I am not an alcoholic; the economy can grow forever. These truths distract attention or reveal secrets or stoke up false confidence or divert action by denying and demoting the kind of thinking that can lead to problem solving. They are often purposeful thought stoppers.

Always-True Truths deal with the physical realm; Truth-by-Repetition and Doubt-by-Repetition Truths deal with the information realm, where what we say can influence the beliefs and behaviors of others and ourselves—these are slithery truths, to be used with great care. Confusing one type with another (for example, trying to make global warming go away by emphatically denying its existence) can be fatal.

I tend to get especially infuriated by the Truth-by-Repetition Truth when it is articulated with absolute certainty, as if it were an Always-True Truth; especially when it purports to tell me what is feasible in human affairs—or, more often, what is infeasible. The US political system will never permit a carbon tax. Or campaign reform. The global population will reach fourteen billion. Half the species on earth will go extinct in the coming century. There will be runaway climate change.

These are not only predictions, they border on self-fulfilling prophecies. They sweep away the possibility of choice, though there is in fact plenty of latitude for choice. They aren't based on physical impossibilities, they are based on the speaker's limited imagination about political or social possibilities. And of course they are a direct invitation to inaction. Well, if it's hopeless, why try? Let's just sit around and wait for disaster.

When I hear statements like these, I'm tempted to ask whether that's the future the speaker WANTS. That question is usually brushed away. The future isn't about wanting. It's about battening down the hatches, preparing for the worst, not getting your hopes up. The surest way to disaster is to declare it inevitable, do nothing to prevent it, and mock and demoralize anyone who tries.


Which brings me to my favorite approach to the future: vision. Brigham Young declaring "This is the place." Babe Ruth pointing to the outfield stands and plunking a home run just there. John Kennedy asserting that there would be a man on the moon within a decade. Martin Luther King's dream of a future in which his four children would be judged not by the color of their skin, but by the content of their character. Mikhail Gorbachev ripping away the straitjacket of Soviet thinking and announcing perestroika.

Visionary statements and actions come from a completely different place in the human psyche from predictions, forecasts, scenarios, or cynical, downer assertions of political impossibility. They come from commitment, responsibility, confidence, values, longing, love, treasured dreams, our innate sense of what is right and good. A vision articulates a future that someone deeply wants, and does it so clearly and compellingly that it summons up the energy, agreement, sympathy, political will, creativity, resources, or whatever to make that future happen. It is a Truth-by-Repetition Truth, but of a special, powerful kind.

I know that the very topic of vision instantly pushes a warning button in most of us, so I need to stop here for a definition. I am only interested in RESPONSIBLE visions, by which I mean statements about the future that:

1. State how someone actually wants it to be—no mushy concessions to assumed political feasibility, no settling for something less;

2. Violate no Always-True Truths (break no physical laws); and

3. Express desires and values that are widely shared (break no moral laws).

We tend to distrust visionaries, because we have such bad experience with irresponsible ones. Hitler's vision was morally irresponsible. Nixon's vision of energy independence was physically irresponsible. Bill Clinton's vision of a future health care system was half-assed, laced through with concessions to political infighters—not really what he or anyone else wanted, just what he was willing to settle for, so uninspiring it was not worth fighting for.

Another reason we are uncomfortable in the realm of vision is that, if it's a vision that truly moves us, one we deeply share, we're afraid of disappointment. The visionary automatically puts himself or herself on the line; commits to something that hasn't happened yet; takes a visible stand. That kind of action brings up fear. What if it doesn't come off? Then not only will that vision look foolish, ALL visions will look foolish.

It's much safer to mire ourselves in cynicism. We'll never look foolish.

A Futurist? OK. What Species? When?

If you can stand one more categorization of ways of thinking about the future, here's one from Russell Ackoff (Redesigning the Future: A Systems Approach to Societal Problems, Wiley-Interscience, 1974) that has stuck in my brain ever since I first read it:

Inactivists are satisfied with the way things are. They believe that any intervention in the course of events is unlikely to improve things and is very likely to make them worse. Inactivists work hard to keep changes from being made. Inactivists have a greater fear of doing something that does not have to be done (errors of commission) than of not doing something that should be done (errors of omission).

Reactivists prefer a previous state to the one they are in. They believe things are going from bad to worse. Hence they not only resist change; they try to unmake previous changes and return to where they once were. Reactivists dislike complexity and try to avoid dealing with it. They reduce complex messes to simple problems that have simple solutions—solutions that are "tried and true." They are panacea-prone problem solvers, not planners looking into the future. They try to recreate the past by undoing the mess they believe the planning of others has wrought.

Preactivists believe that the future will be better than the present or the past, how much better depending on how well they get ready for it. Thus they attempt to predict and prepare. They want more than survival—they want to grow, excel, become better, bigger, more affluent, more powerful, more many things. Preactivists are preoccupied with forecasts, projections, and every other way of obtaining glimpses into the future. They believe the future is essentially uncontrollable, but they can control its effects on them. They plan FOR the future; they do not PLAN THE FUTURE. They seek neither to ride with the tide nor to turn it backward, but to ride in front of it and get to where it is going before it does. That way they can take advantage of new opportunities before others get to them.

Interactivists are not willing to settle for the current state or to return to the past or to get to the future ahead of everyone else. They want to design a desirable future and invent ways of bringing it about. They try to PREVENT, not merely prepare for, threats and to CREATE, not merely exploit, opportunities. Interactivists seek self-development, self-realization, self-control; an increased ability to design their own destinies. They are not satisficers, not optimizers, but idealizers. To them the formulation of ideals and visions is not an empty exercise in utopianism, but a necessary step in setting the direction for development. Interactivists are radicals; they try to change the foundations as well as the superstructure of society, institutions, and organizations. They desire not to resist, ride with, nor ride ahead of the tide; they try to redirect it.

Well, it's obvious that both Ackoff and I are biased in the interactive direction, but Ackoff was actually making the point that all four of these approaches to the future can be appropriate in different situations, and that all of us can and do play all these roles from time to time. When it comes to seeds for my garden, I'm an inactivist—I already have great varieties and know how to grow them; I resist purple beans and super-sweet corn and bioengineered potatoes. When it comes to nuclear power or the global economy, I'm a reactivist—I wish I could roll back the clock. Like many farmers, I'm preactive about the weather, tuning into the forecasts many times a day, always peering at the western sky from which the weather comes, trying to transplant just before the rain and harvest just before the frost.

But for most activities in my life, and in all my efforts to help bring about a sustainable society, I'm an interactivist, a visionary, a learner, a radical. I don't run scenarios; I articulate visions. I see no reason why there can't be a carbon tax—or even better a strong, inviolable carbon emission quota—if it will stave off climate disaster. I'm not willing to believe that we can't reclaim our democracy from the moneyed special interests. What's to stop us, other than our own timidity? We don't have to bring fourteen billion people into the world unless we choose to; we could switch to solar power just as fast as the turnover times of our existing capital plant allow; we could return half the planet to nature and create good, sufficient, joyful lives for ourselves from the other half. Why not? Really, why not?

What a huge difference it makes in world view, in empowerment, in responsibility, in self-identity, in the qualities of imagination and courage we draw forth from ourselves, if we think of the future as something not to be predicted, but to be chosen! If we throw off that ancient remorseless myth that we will always choose foolishly!

There are real wolves out there. I happen to believe my computer model when it says that the End-Of-The-World-As-We-Know-It is not only a possibility, but a high probability. As the Chinese proverb says, "If you don't change direction, you will end up where you are headed." I think we are headed for disaster. But that thought does not thrill me. And it does not panic me into trying to fashion a world so controlled that it is actually predictable. Rather it energizes me to work toward a vision of a World-That-Works-For-Everyone, including all the nonhuman Everyones, a world in which eight billion people (or preferably fewer) maintain a European standard of living in a way that does not undermine the resource base, a world that evolves and learns and dances and operates from generosity and joy.

The worst wolves, really, are the imaginary ones inside our own heads.