Published: March 1, 2005

Supermodels to the Rescue

Practical Matters
New-generation simulations are easy to overuse. Even with the decline in the price of computing power, the cost of modeling a relatively simple problem — such as SCA’s box manufacturing process — can run beyond $100,000; more complex models like Nasdaq’s, which need to capture human behavior and learning, can cost $500,000 or more to develop. Moreover, companies should not reject basic strategic and operational thinking in favor of computer simulations any more than elementary school students should ignore their multiplication tables because calculators are available. Agent-based models should be a tool for better thinking, rather than a replacement for thinking.

In rough terms, there is a threshold for deciding when agent-based modeling and related forms of simulation should be used: the point at which a problem exhibits what is known as “high computational complexity.” The idea of computational complexity is that some systems or problems simply cannot be simplified; any model that accurately mimics such a system has to be roughly as complex as the system itself. Complexity theorists believe that systems made up of many interacting parts, including ecosystems, the global climate, and business and government organizations, are often of this type. And although very recent mathematical research suggests that some of these systems may have “hidden” structure that would allow them to be simplified and predicted on a more theoretical basis, the scientific tools for understanding that structure may take decades to develop. Fortunately, the vast processing power of the computer offers a way to meet computational complexity head on and defeat it, by creating models that run in a world where time is greatly compressed and where history can be repeated many times.
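To make the idea of compressed time and repeated history concrete, the sketch below (a toy model of our own, written in Python, and not drawn from any of the projects described in this article) runs a simple imitation model hundreds of times and examines the distribution of outcomes rather than any single run. The model, its parameters, and the summary statistic are all illustrative assumptions.

```python
# A minimal sketch of "repeating history": a toy agent-based model is run
# many times under compressed time, and the analyst studies the spread of
# outcomes rather than trusting any single run. All parameters are illustrative.
import random
import statistics

def run_history(n_agents=100, n_steps=250, seed=0):
    """One simulated 'history': each step, a random agent either imitates
    another agent's choice or switches independently."""
    rng = random.Random(seed)
    choices = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        if rng.random() < 0.9:                      # imitate a randomly chosen agent
            choices[i] = choices[rng.randrange(n_agents)]
        else:                                       # switch independently
            choices[i] = 1 - choices[i]
    return sum(choices) / n_agents                  # final share choosing option 1

# Repeat history many times, each run a separate compressed "world."
outcomes = [run_history(seed=s) for s in range(500)]
print(f"mean share: {statistics.mean(outcomes):.2f}, "
      f"std dev across histories: {statistics.stdev(outcomes):.2f}")
```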

Doing it well, of course, requires more than a little practice. An ongoing project at Argonne National Laboratory, near Chicago, illustrates the process nicely.

The State of Illinois is legally committed to deregulating its electricity market in the year 2007. Given the recent disaster associated with deregulation in California — with the power “shortage” during the summer of 2000, later traced to the market manipulation of energy traders at Enron and elsewhere — the Illinois Commerce Commission would like to identify potential trouble beforehand and take wise steps to avoid it.

At the request of the state, Charles Macal, a researcher at Argonne National Laboratory, and his colleagues have developed an agent-based model, in an effort to ensure that this deregulation comes with no surprises. They have focused on good modeling practice from the outset. The team first carried out a “participatory simulation,” with knowledgeable people playing the roles of the agents that would ultimately appear in the computer model. “This helped greatly to identify likely strategies the agents would use in the real world, and the kinds of information the agents would want to use in their decisions,” says Dr. Macal.

Based on their observations, the team built a model with agents to represent companies that generate, consume, transmit, and distribute electrical power, as well as individual consumers and regulators. These agents explore various market strategies, and on the basis of their experience, adapt their behavior as time goes on, constantly searching for new strategies that perform better. The agents also learn how they can potentially influence the market for their own benefit. To make the model credible, Dr. Macal and his team found they also had to model the underlying electric grid, following the flow of electricity through each and every one of the 2,000 physical nodes in the Illinois system.
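A model of that scope is far beyond what can be shown here, but the following sketch suggests the general shape of such a simulation: adaptive generator agents bidding into a uniform-price auction and adjusting their strategies in light of realized profit. Every class, parameter, and rule here is a simplified assumption of ours; the sketch omits consumers, regulators, and the transmission grid entirely, and it is not the Argonne team’s code.

```python
# A hypothetical, highly simplified sketch of an agent-based power market:
# generator agents adjust their bidding strategies in response to realized
# profit. Illustrative only; not the Argonne model.
import random

class Generator:
    def __init__(self, capacity, cost):
        self.capacity = capacity            # MW the agent can offer
        self.cost = cost                    # production cost in $/MWh
        self.markup = 1.1                   # current bidding strategy
        self.profit = 0.0                   # profit earned in the last round

    def bid(self):
        return self.cost * self.markup      # offer price under the current strategy

    def adapt(self, cleared, price, rng):
        """If profit fell relative to the last round, experiment with a new
        markup; otherwise keep the current strategy."""
        earned = cleared * (price - self.cost)
        if earned < self.profit:
            self.markup = max(1.0, self.markup + rng.uniform(-0.1, 0.1))
        self.profit = earned

def clear_market(generators, demand):
    """Uniform-price auction: dispatch the cheapest offers until demand is met;
    the marginal (last dispatched) unit sets the market price."""
    remaining, price, dispatched = demand, 0.0, {}
    for g in sorted(generators, key=lambda g: g.bid()):
        take = min(g.capacity, remaining)
        if take > 0:
            dispatched[g] = take
            price = g.bid()
            remaining -= take
    return dispatched, price

rng = random.Random(1)
gens = [Generator(capacity=rng.uniform(50, 200), cost=rng.uniform(20, 60))
        for _ in range(20)]
for day in range(365):                      # compressed time: a simulated year
    demand = 1500 + 300 * rng.random()
    dispatched, price = clear_market(gens, demand)
    for g in gens:
        g.adapt(dispatched.get(g, 0.0), price, rng)
print(f"clearing price after one simulated year: {price:.1f} $/MWh")
```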

With a model of this complexity, validation through extensive tests is absolutely crucial. But it is not enough just to look at the totality of what the computer spits out. Agent-based models often yield surprises and explore realms of behavior where no one knows quite what to expect. So there is no way to confirm a model’s validity by comparing its output to “known” results. To validate their model, Dr. Macal and his colleagues instead scrutinized the model’s guts, checking as many factors as possible to be sure it was handling the details correctly — that is, reproducing the behavior of individual agents accurately, whether these agents represented power producers or industry regulators. After an exhaustive study of each component under many conditions, they had the confidence to use the model as a tool for exploration and discovery.
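In code, that kind of component-level validation looks less like comparing a headline number against history and more like a battery of unit tests on each agent type. The sketch below is written against the hypothetical toy market model above, not the Argonne validation suite; it checks that a generator agent never bids below its cost no matter how it adapts, and that the market-clearing routine dispatches enough supply and prices at the marginal unit.

```python
# A sketch of component-level validation: test each piece of the model under
# controlled conditions instead of comparing aggregate output to "known" results.
# These tests target the hypothetical toy model above, assumed to be saved as
# power_market.py; they are not the Argonne team's validation procedure.
import random
import unittest

from power_market import Generator, clear_market   # hypothetical module name

class TestGeneratorAgent(unittest.TestCase):
    def test_bid_never_falls_below_cost(self):
        rng = random.Random(0)
        g = Generator(capacity=100, cost=40)
        for _ in range(1000):
            # Alternate profitable and unprofitable rounds to force adaptation.
            g.adapt(cleared=100.0, price=80.0, rng=rng)
            g.adapt(cleared=0.0, price=0.0, rng=rng)
            self.assertGreaterEqual(g.bid(), g.cost)

class TestMarketClearing(unittest.TestCase):
    def test_dispatch_meets_demand_and_marginal_unit_sets_price(self):
        cheap = Generator(capacity=500, cost=30)
        dear = Generator(capacity=500, cost=50)
        dispatched, price = clear_market([cheap, dear], demand=600)
        self.assertAlmostEqual(sum(dispatched.values()), 600)
        self.assertEqual(price, dear.bid())   # the expensive unit is marginal

if __name__ == "__main__":
    unittest.main()
```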

 
 
 
Resources

  1. Keith Oliver, Leslie H. Moeller, and Bill Lakenan, “Smart Customization: Profitable Growth Through Tailored Business Streams,” s+b, Spring 2004.
  2. Michael Schrage, “Here Comes Hyperinnovation,” s+b, First Quarter 2001.
  3. Eric Bonabeau, “Agent-Based Modeling: Methods and Techniques for Simulating Human Systems,” PNAS, Vol. 99 (May 14, 2002), 7280–7287.
  4. Joshua Epstein and Robert Axtell, Growing Artificial Societies: Social Science from the Bottom Up (MIT Press, 1996).
  5. Navot Israeli and Nigel Goldenfeld, “On Computational Irreducibility and the Predictability of Complex Physical Systems,” unpublished.
  6. Proceedings of the Agent 2004 Conference on Social Dynamics: Interaction, Reflexivity, and Emergence (University of Chicago Press, forthcoming).
 