Miklos Sarvary ([email protected]), Luk N. Van Wassenhove ([email protected]), and Atalay Atasu ([email protected]), “Remanufacturing as a Marketing Strategy,” INSEAD Working Paper No. 58.
For consumers, the attraction is clear. Remanufactured items provide a low-cost alternative to new products and usually work just as well. Moreover, they’re better for the environment because they reuse materials.
For companies, however, the value is more ambiguous, argue Miklos Sarvary, an associate professor of marketing, Luk N. Van Wassenhove, the Henry Ford Chaired Professor of Manufacturing and professor of operations management, and Atalay Atasu, a Ph.D. student, all at INSEAD. When considering remanufacturing, companies must carefully assess product life cycles and market growth, and they must forecast to what degree remanufactured products can eat into their own and their competitors’ market share.
Indeed, environmental issues, competition, and product life cycles are the primary interlinked elements in deciding whether to pursue remanufacturing, according to the researchers, who created a mathematical model to examine the interplay of these factors.
The most intriguing conclusion of their research is that remanufacturing is likely to be more profitable in more competitive marketplaces. This is because remanufactured products help the manufacturer target price-conscious and environmentally concerned consumers, two lucrative segments that are often difficult to attract.
The researchers also confirmed concerns that remanufactured products would cut into the higher-priced market sectors of the original products. Consequently, this downside must be managed with smart pricing strategies. If customers are indifferent to the distinction between new and remanufactured products, a higher-priced strategy for both items may be possible.
This model suggests that remanufacturing is a balancing act. In other words, the authors say, remanufacture with care, as Bosch Tools in the United States has: The company remanufactures certain products but only because it has relatively small market shares in these items and remanufacturing represents extremely high cost savings.
Biases in Forecasting
Rogelio Oliva ([email protected]) and Noel Watson ([email protected]), “Managing Functional Biases in Organizational Forecasts: A Case Study of Consensus Forecasting in Supply Chain Planning,” Harvard Business School Working Paper No. 07-024.
How many products will your company sell in the next budget period? Ask the sales manager and you will get one answer. Ask the CEO and you’ll likely get another. The marketing VP? The head of manufacturing? Still different answers.
Forecasting of sales figures is prone to biases, many of which are caused by differences in organizational or individual incentives. What’s good for the CEO is not necessarily what’s good for the factory chief. And when personal prejudices are not at fault, planning can be adversely influenced by problems with the flow of information or in the processes that created the forecasts in the first place.
To better understand the potential trouble spots that underlie such a fundamental facet of business life, Rogelio Oliva, an associate professor at the Mays Business School of Texas A&M University, and Noel Watson, an assistant professor at Harvard Business School, examined forecasting at an anonymous California-based electronics firm. Through 25 interviews with people involved in the sales forecasting process, they sought to understand the elements of power (such as formal role-based authority, charisma, and external reputation), processes, and politics involved.
The authors found that the company’s planning process had, historically, been driven largely by the sales function. Sales directors responsible for regional markets made initial forecasts, which they then passed on to operations and finance. The process was ad hoc, with important communication as likely to take place in hallways as in formal meetings. Armed with these forecasts, the finance department created plans and monitored results. Finance tended to pressure the sales team to hike up its forecasts so that the company could meet its financial goals. Meanwhile, because people in the operations group were generally skeptical of the forecasts from the sales team, they made their own forecasts to put the best light on potential inventory shortages for which they might be blamed. Similarly, the marketing director took the forecasts from sales and factored in the possible effects of promotions and other activities.
This flawed system eventually contributed to an inventory write-off equaling about 10 percent of revenues and the recruitment of a new CEO and executive group. One of the new arrivals was given the task of improving the forecasting process. A fresh approach was launched in 2002 and was fully in place a little over a year later. This new method demanded that the company’s functional groups agree on an overall forecast, rather than continue to produce their own biased predictions.
This brand of consensus forecasting was led by an independent group called the Demand Management Organization, which managed, synthesized, challenged, and created projections of likely demand. Freed from functional self-interest, forecasting quickly became more robust and useful. In the summer of 2002, the accuracy of sell-through forecasts was only 58 percent. By the fall of 2003, it was 88 percent. Moreover, inventory turns increased, on-hand inventory decreased, and obsolescence costs were slashed.
Although feeding accurate information into the forecasting process is clearly critical, this research suggests that companies also need to consider social and political agendas when they are designing forecasting processes. And the authors contend that the simple tactic of giving responsibility for planning to an independent group within the organization can reap impressive dividends.
The Value of Innovation
Michael G. Jacobides ([email protected]), Thorbjørn Knudsen ([email protected]), and Mie Augier ([email protected]), “Benefiting from Innovation: Value Creation, Value Appropriation and the Role of Industry Architectures,” Research Policy, vol. 35, no. 8, October 2006.
It is not always the innovator who captures value from an innovation. In the 1980s IBM developed the first mass-market personal computer, which rapidly became the industry standard, but financially, Microsoft and Intel benefited from it the most. A decade later, Apple coined the term personal digital assistant to describe its innovative Newton; however, it was the Palm Pilot PDA that enjoyed the first mass-market success.
According to Michael G. Jacobides, an assistant professor at the London Business School; Thorbjørn Knudsen, a professor of marketing at the University of Southern Denmark; and Mie Augier, an assistant professor at Copenhagen Business School, these examples illustrate the profound risks inherent in innovation, even for those able to overcome the larger challenge of bringing a new product or technology to market.
How can innovators ensure they enjoy the fruits of their labors? The authors describe the traditional strategy as “attempts to fortify the fortress.” Innovators erect barriers by asserting their intellectual property rights through the use of patents and trademarks. But, although this approach can slow down imitators, say the authors, it never eliminates them because patents expire and trademarks do not completely prevent other firms from copying. Consequently, the authors offer an alternative strategy, one that takes advantage of “industry architecture” (an idea first introduced by David Teece, a professor at the Haas School of Business, University of California at Berkeley).
Architecture describes the organization of an industry and the relationships among its players. At the birth of a new technology, the innovator can shape the architecture around the invention. However, the authors argue, most do not.
IBM’s infamous decision to license the operating system for its PC from Microsoft, for instance, created an architecture that favored Bill Gates’s startup and, by association, Intel. Had IBM developed its own operating system or bought one outright, the architecture would have been quite different — and so would the division of value.
Today, we see a very public battle for industry architecture in Sony’s attempts to dominate the high-definition DVD market, with its Blu-Ray technology going head-to-head with HD-DVD, the standard developed by a group of companies led by Toshiba. The eventual winner will occupy a prominent place in the resultant industry architecture.
Many factors can affect an industry’s architecture over time. These include natural evolution (the outcome of Blu-Ray versus HD-DVD, for instance), new production methods (as when the advent of lean manufacturing in the automobile industry shifted control of the architecture to Japanese companies), new laws and regulations (for example, in the pharmaceutical industry, the way patent rules affect the production of generic drugs), and quality validation (such as when a brand like Intel, through its Intel Inside strategy, convinces consumers that its products alone are the standard of excellence for microprocessor chips).
Industry architecture strategies can take a number of forms. For example, innovators can position themselves as a bottleneck in the industry. By licensing its operating system to the makers of new PCs, Microsoft blocked all other software companies from competing in this sector and achieved a global market share in excess of 95 percent.
Indeed, this approach could be mimicked in any industry and by companies of any size, the authors claim. To illustrate, they offer the hypothetical example of a restaurateur “who knows how to create value both by inventive cooking and a talent for spotting trendy, industrial post-modern properties that can be turned into a restaurant.”
By opening a chic restaurant, the innovator makes the location more fashionable, which will attract other fashionable eateries to the area. Instead of regarding these as competition, he or she could choose to invest in suitable local real estate. In this way, every potential restaurant owner would have to go through the bottleneck created by the innovator before opening up a site in the neighborhood that he or she controls.
By broadening their thinking in this way, the authors say, innovators can be rewarded with a greater share of the value from their innovations.
Meeting the Problem Head On
Paula Jarzabkowski ([email protected]) and David Seidl ([email protected]), “Meetings as Strategizing Episodes in the Social Practice of Strategy,” Advanced Institute of Management Research Working Paper No. 37.
Given time pressures and short-term horizons, it is tempting to see each and every meeting you attend as an end in itself. But if you regard meetings in this light, you are failing to come to terms with their real nature. As a result, the meeting — defined as “a planned gathering of three or more people who assemble for a purpose that is ostensibly related to some aspect of organizational or group function” — often remains misunderstood. So say Paula Jarzabkowski of the U.K.’s Aston Business School and the Advanced Institute of Management Research, and David Seidl of the University of Munich’s Institute of Business Policy and Strategic Management.
The objectives of meetings vary. The intention may be for participants to make decisions, set agendas, build commitment, provide information, reduce complexity, or simply converse. But, Jarzabkowski and Seidl suggest, the purpose of a meeting often transcends an individual session. In fact, one aspect of meetings that is not fully comprehended is that one organizational powwow tends to produce the need for another. Depressingly, meetings are not particularly valuable ways to reach consensus or accomplish a great deal in an organization.
To explore this topic further, the authors examined 51 strategic-level meetings at universities in the United Kingdom over seven years. Universities were thought to be fertile ground because of their “ostensibly democratic” and open approach to discussions, their diffuse power relationships, and their autonomous professional employees. University meetings should be a place where many points of view are represented and listened to. Unfortunately, the authors found that democracy proved something of an illusion; in the 51 meetings, participants voted on only two occasions.
This research provided highly interesting insights into the process by which strategies are shaped and changed in organizations. Certain types of meetings encourage the suspension of organizational structures. In doing so they allow participants to shape new directions. These meetings are characterized as “closed,” and are attended solely by the top management team. “Open” meetings, those not limited to top managers, tend to reassert existing organizational structures overtly or more subtly and thus are less receptive to new ideas from those on the lower rungs of the group.
If strategic development is to involve more than an organization’s senior management team, the way meetings are conducted and structured must be carefully altered. Otherwise, existing organizational mores will dominate, and the status quo will not change.

Des Dearlove ([email protected]) is a business writer based in the U.K. He is the author of a number of management books and a regular contributor to strategy+business and The (London) Times.
Stuart Crainer ([email protected]) is a business writer based in the U.K. and a regular contributor to strategy+business. He and Des Dearlove founded Suntop Media, a publishing and training company providing business content for online and print publications.