A CIO's View of the Balanced Scorecard

At another company we worked with, a book and magazine publisher, the problem lay in defining the reporting period.

When the company decided to implement a scorecard, corporate leaders required business unit heads to submit their data to the scorecard team by the fifth day of each calendar month. To meet this deadline, the unit leaders, who wanted to review the data before forwarding it, asked their division heads to send them the previous 30 days’ data by the 25th of each month. In turn, the division heads had their department leads supply data by the 20th, and so on. The approach made some sense, since many of these dates corresponded to the periods when data was sent to or received from external printers and distributors.

Although the scorecard accurately recorded events, it still confused users. Because the end of the reporting period varied by reporting level, there was no single frame of reference: junior staff would see an event in one report, while the same event might not appear in senior managers’ reports until a month later. This “when exactly did this occur?” problem strained communications and made comparisons difficult. The scorecard continued to be produced for a few years, but it was soon marginalized by newer systems that shared a common reporting cycle.

Then there’s the ever-present problem of “dueling data.” Database administrators (DBAs) know that nothing damages confidence in a database more than inconsistency. Two copies of the same data are worse than a single incorrect value, because copies can diverge; any DBA worth his or her salt knows to keep one authoritative copy and delete the rest. (One wrong employee number can be blamed on HR. Two employee numbers for one individual spells trouble for IT.) The same is true of scorecards.
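To make the duplicate-data point concrete, here is a minimal sketch, in Python with SQLite, of the kind of check a DBA might run; the employees table, its columns, and the sample rows are hypothetical, invented purely for illustration:

```python
import sqlite3

# Build an in-memory database with a hypothetical employees table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_no TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [
        ("E100", "Ann Lee"),
        ("E101", "Bob Ray"),
        ("E205", "Bob Ray"),  # the same individual under a second number
    ],
)

# Flag anyone recorded under more than one employee number: the
# redundant copies a DBA would reduce to a single authoritative one.
duplicates = conn.execute(
    """
    SELECT name, COUNT(DISTINCT emp_no) AS id_count
    FROM employees
    GROUP BY name
    HAVING COUNT(DISTINCT emp_no) > 1
    """
).fetchall()

for name, id_count in duplicates:
    print(f"{name} appears under {id_count} employee numbers")
```

Run against a scorecard’s source tables, a check like this can surface inconsistencies before users find them and confidence in the numbers erodes.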

And what if your scorecard data isn’t consistent with the data in other, more established reporting systems? You’re in trouble: the backers of the established systems, regardless of whose data is correct, will attack the newer scorecard application. Even if both sets of data are correct, two sources that appear to contradict each other call the veracity of at least one system into question. Ultimately, the credibility of all the reporting systems, not just the scorecard, can be damaged.

Another recurring reason scorecards fail should be the most obvious, but it often isn’t: Some people just don’t want a lot of attention placed on their performance.

Consider an industrial products manufacturer that was having a difficult time implementing an Executive Information System (EIS), an older, broader concept similar to a scorecard. The software, as well as the analysis performed by the company’s IT department, was blamed. But a basic investigation turned up a different culprit: The head of one business unit never allowed data about his department to be sent to the chief executive officer. Instead, he took the reports to the CEO personally, reserving 90 minutes to walk the chief through the numbers. This way, the business unit leader could emphasize and spin the data in all the right places. He did not need, or want, a simple EIS, which would have given the CEO an unescorted promenade through the data. He even told me that the EIS would be used “over my dead body.”

When the marketing director for a building products manufacturer wanted to quash his company’s scorecard project, he threw numerous roadblocks in its way and waited for enthusiasm to die. I watched the marketing director go in for the kill, and then saw him formally dissolve the abandoned project.

 
 
 