A CIO's View of the Balanced Scorecard

Defining the data and the timing of reporting is the hardest task, says a scorecard veteran.

Illustration by Lars Leetaru
When Robert Kaplan and David Norton published “The Balanced Scorecard: Measures that Drive Performance” in the Harvard Business Review in 1992, the idea of measuring business performance from financial and nonfinancial perspectives was novel. Their original balanced scorecard significantly advanced the notion that effective performance measurement must provide a view of both financial and nonfinancial results.

Today, the balanced scorecard is one of the most widely used and hotly debated management tools in the executive arsenal. Countless permutations of the original Kaplan and Norton framework, in an equally varied multitude of applications, have been tried by all kinds of organizations. Although the original balanced scorecard concept was meant to assess the health of an entire business, the basic concept has been adapted to fit business units and support organizations. When it works, the scorecard is a powerful resource to help executives understand past and current performance, and plan for the future.

Scorecards can be a great resource for managing the IT function. First, considerable numeric data is available to measure systems performance. Second, IT scorecards can be designed to measure end-user benefits and satisfaction. Third, scorecards can be a powerful vehicle to bridge the communication gap between IT professionals and the business customers they serve. So, when I took over as CIO of Booz Allen Hamilton in 2000, I wanted to set up a scorecard for my own management team.

Whether the cause is information systems’ mystique, executives’ technophobia, or poor marketing by information technology managers, delving into IT reports has limited appeal for most senior business executives. IT performance reports presented in a form comparable to the reports of other business functions can offer businesspeople a clearer window into the IT domain, particularly when they show corporate senior management and business unit leaders the value they receive from IT services.

Why Scorecards Fail
As is the case with any business tool, however, the scorecard is not a magic wand; its value depends heavily on how it is implemented. I had the advantage of having worked for Booz Allen as a management consultant advising CEOs, CIOs, and other senior executives, so I knew what we were taking on.

Over the years, I had witnessed many attempts by large corporate IT departments — some successful, some not — to help implement scorecards or scorecard-like systems. Although I am a strong advocate of scorecards, I’ve seen enough unsuccessful ones in my time to have learned from those that didn’t work.

Take the difficult and error-prone processes of defining the data (perhaps the hardest task of all) and the timing of reporting; many a scorecard has failed because the designers didn’t get these two elements right.

The CEO of a U.S.-based health insurer I worked with wanted to create a business scorecard that provided more timely information than he was receiving from the traditional reporting system. The existing systems produced business unit performance data for review once a week. And although he received detailed reports from his four different business units, he didn’t have consolidated information showing, at a high level, how the entire business was doing.

The CEO thought if he could follow performance trends daily, he would have a better chance of turning problems into opportunities. He concluded he wanted information that was no more than 24 hours old for all the business units, listed side by side on one report so it would be easy for him to compare data.

Even with the CEO requesting the scorecard, the project to create it failed. Initially, managers blamed the IT systems for technical problems with integrating data from the different business units to create the consolidated report the CEO had requested. I was on the team that was brought in to assess the situation and recommend a solution. As it turned out, the problem in producing the new scorecard had nothing to do with technology. Rather, it was related to defining the data. In this case, we found that no two business units defined data in the same way. In fact, none of the business units even shared the same definition of revenue, so comparisons were nearly meaningless.
