strategy+business is published by PwC Strategy& Inc.
Published: March 1, 2005


Supermodels to the Rescue

It sounded like a good idea. But the exchange thought it best to investigate further before going ahead. It developed a model of the activities of traders and market makers, the firms that maintain liquidity for particular securities by their commitment to buy and sell them at their quoted prices. The “agents” were programmed to act like participants in the real market, using common strategies, but they were also endowed with artificial intelligence and so were able to adapt and alter their strategies on the fly as they identified trends or patterns in the market.

This may seem almost like magic, but it can be done. One of the most powerful elements of human behavior is our ability to cope with uncertainty by learning from experience. When faced with some altogether novel task, we almost never work out what to do using strict logic. Rather, we try something, and if it doesn’t work, we try something else. We tend to stick with any approach that works, and, often by accident, we stumble over minor variations of older ideas that work even better. In this way, through an evolutionary process, we learn.

Nasdaq endowed the agents in its model with a similar capacity to learn by trial and error. In the guts of the model, each agent in the market kept tabs on a handful of possible “strategies” by which it might make decisions. Each such strategy was a mathematical recipe for taking reality — in the form of price movements and the actions of other traders in the recent past — and deciding on some specific action. Each agent would monitor its own handful of strategies, seeing which would have earned it the most money in recent trades, and, at any moment, would use this “best” strategy to make its next decision. In other words, like humans, the agents kept track of successful ideas and used them, while paying less attention to less successful ideas.
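The bookkeeping described above can be sketched in a few lines of code. This is a hypothetical illustration, not Nasdaq’s actual model: here a “strategy” is simply a function mapping recent price history to a buy, sell, or hold decision, and each agent scores its strategies by the profit each would have earned on recent moves.

```python
class Agent:
    """A market participant holding several candidate strategies.

    Each strategy is a rule mapping recent prices to a decision
    (+1 buy, -1 sell, 0 hold). The agent always acts on whichever
    strategy would have earned the most over recent trades.
    """

    def __init__(self, strategies):
        self.strategies = list(strategies)          # candidate decision rules
        self.scores = [0.0] * len(self.strategies)  # hypothetical recent profit

    def update_scores(self, price_history):
        # Credit each strategy with the profit it *would* have made
        # on the last observed price move, had it been followed.
        last_move = price_history[-1] - price_history[-2]
        for i, strategy in enumerate(self.strategies):
            decision = strategy(price_history[:-1])
            self.scores[i] += decision * last_move

    def decide(self, price_history):
        # Act on the currently best-scoring strategy.
        best = max(range(len(self.strategies)), key=lambda i: self.scores[i])
        return self.strategies[best](price_history)
```

For example, an agent might hold a trend-following rule and a contrarian rule; as prices evolve, whichever rule has performed better lately drives its next order.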

Some of the strategies were those identified through interviews with real market participants, but the model also contained an evolutionary mechanism: Every so often, each agent would try out a completely new strategy obtained by introducing minor, random variations into one of its older strategies. In this way, the agents had the potential to discover new ways of behaving that might be superior to anything they had done in the past. They could even discover possible ways of making profits that the model’s designers themselves could not have foreseen.
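The evolutionary mechanism can be sketched the same way. In this hypothetical version (again, an illustration rather than the model’s actual code), a strategy is represented by a list of numeric parameters, and every so often an agent replaces its worst performer with a randomly perturbed copy of its best one:

```python
import random

def mutate(strategy_params, scale=0.05):
    """Create a new strategy by adding small random perturbations
    to an existing one's parameters (a stand-in for the model's
    variation mechanism)."""
    return [p + random.gauss(0, scale) for p in strategy_params]

def evolve(strategies, scores, mutation_rate=0.1):
    # Occasionally replace the worst-performing strategy with a
    # mutated copy of the best one, letting the agent discover
    # behaviors its designers never anticipated.
    if random.random() < mutation_rate:
        best = max(range(len(scores)), key=lambda i: scores[i])
        worst = min(range(len(scores)), key=lambda i: scores[i])
        strategies[worst] = mutate(strategies[best])
        scores[worst] = 0.0   # the newcomer starts with a clean slate
    return strategies
```

Because the perturbations are random, occasional mutations explore the space of possible behaviors rather than merely refining known ones.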

In this case, this small element of “artificial intelligence” turned out to be crucial to the model’s success. Once the model was working like the real market — reproducing price fluctuations in a mathematically accurate way — the company could use the virtual market as a laboratory. Nasdaq found a surprise. In repeated experiments, reducing the tick size beyond a certain point actually increased the bid–ask spread. As it turned out, so-called parasitic strategies, which make quick profits for individual market makers at the expense of overall market efficiency, grew less risky and more profitable with a smaller tick size. The artificially intelligent market makers naturally learned this and responded, as live market makers surely would in reality, driving up not only the bid–ask spread but also the overall volatility of price fluctuations. (See Exhibit 1.) When Nasdaq went ahead with its plans and actually changed the tick size from 1/16 to 1/100 in 2001, it was able to anticipate this effect. The exchange slowed the introduction of the new system and developed a new “Super Montage” system for displaying buy and sell orders and their execution, which it hoped would ameliorate problems.
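The “virtual laboratory” use of such a model amounts to a parameter sweep: run the simulation repeatedly at each candidate tick size and compare the resulting spreads. A minimal sketch, in which `simulate_market` is a hypothetical callable standing in for the full agent-based model:

```python
import statistics

def run_experiment(simulate_market, tick_sizes, trials=20):
    """Sweep tick sizes through a market simulation and report the
    average bid-ask spread observed at each setting.

    `simulate_market(tick)` is assumed to run one simulation at the
    given tick size and return the list of spreads it observed.
    """
    results = {}
    for tick in tick_sizes:
        spreads = []
        for _ in range(trials):
            spreads.extend(simulate_market(tick))
        results[tick] = statistics.mean(spreads)
    return results
```

Comparing `results` across tick sizes is how a counterintuitive effect such as the one Nasdaq found, spreads widening as the tick shrinks past a certain point, would show up in the data.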



