For the past three decades, researchers in information systems have been marching to the sound of their own footsteps, pursuing technology independently of the larger human and business context in which it is embedded.
We have seen a parade of new technologies and techniques: databases, fourth-generation languages, expert systems, graphical user interfaces, structured development, object-oriented development, computer-aided software engineering and intranets. Corporations have invested billions of dollars and millions of hours in the development and implementation of these tools.
But despite this enormous commitment of time and money, the results have been decidedly mixed. Just why this is so raises questions of a seemingly esoteric nature. Yet the answers could not have more important practical consequences for any chief executive who wants to have a future in a world that increasingly turns on applying information technology to the marketplace.
Our rationale for adopting new I.T. systems has been twofold. First, we believe, almost religiously, that each new technology somehow makes things better, although we are never quite sure what "better" means. And second, if we fail to embrace a new technology, it is possible that everyone else will embrace it, and we will be left behind. So we have no choice but to jump on the bandwagon.
Thus, the pervasive influx of information systems has taken on a life of its own. We do not control the evolution of information technology; it controls us. As Thoreau said, "Men have become the tools of their tools."
The drawback of this technological determinism is that I.T.'s evolution defines both the practice and the research agenda in information systems. When problems arise in the field, both researchers and practitioners look to the technology for answers. Yet we are increasingly realizing that the technology is not up to providing solutions. It is therefore becoming ever more important to look outside I.T.'s boundaries to see if these problems have been encountered before. Most information systems professionals, for example, would be surprised to know that if we could conjure up the ghost of Socrates, he might very well be able to provide us with valuable insights into the problems and disappointments of many information systems implementations. Plato, Aristotle, Locke, Hume and Wittgenstein would also have much to contribute.
The work of these and other philosophers can apply directly to our understanding of information systems in many important ways, shedding light on everything from the social and ethical impacts of I.T. to the metaphysical assumptions underlying information systems development. It is possible to employ the concepts of these philosophers as we mine for reasons to explain why I.T. projects overrun budgets, underdeliver against expectations and leave more questions than answers about return on investment.
Indeed, the key reason why so many efforts to develop information systems are considered failures can be traced back to an erroneous metaphysical assumption. And this error can be seen in the answer to the seemingly heavy ontological question: Do information systems requirements exist in the world? That is, does the blueprint for a particular system -- from software to hardware -- exist outside the thoughts of the requirements analyst, waiting to be discovered, or is it constructed in the mind of the analyst based upon the search for data gathered in the analysis process? Bear with me, this is not a scene from a Woody Allen movie.
Those whom I call the realists assume that the requirements exist and are waiting to be found. Conceptualists, by contrast, believe that the requirements are constructed in the mind
of the analyst. These two philosophical positions lead to quite different approaches to information systems development.
The realists, believing that the requirements are already out there in the world, proceed to discover them by asking users what they want. The process is usually disciplined by a structured methodology, and computer-aided software engineering tools are used to record the discoveries. If two requirements analysts examine the same technology need, they should come up with exactly the same results. The requirements are then validated by comparing them against the real world to see whether they match. The realist's assumption simplifies requirements analysis by making it a discovery process rather than a construction process. With the real world as the standard for correctness, the process for validating requirements is made simple, but potentially wrong.
To illustrate this, consider the railroad paradox. A railroad company, in deciding whether to put a train station in a town currently without one, sends market researchers to the town to see if there is any demand for a station. Upon arriving in the town, the researchers see no one waiting for a train and therefore assume there is no demand.
This seemingly silly response actually serves to illustrate that demand doesn't always exist until the service is available. And with I.T., the requirements of an information system often do not exist until people know what is possible with the technology. So to ask people what they want when they are unaware of what is possible is to increase the chances of gathering incorrect requirements and building the wrong system.
Conceptualists proceed quite differently. They believe that they must first clearly define the problem that needs a solution and set out the objectives of that solution before they can proceed with a construction process. Accordingly, conceptualists are more likely to emphasize interpersonal skills, including interviewing and consensus building, over analytical skills.
Using the railroad example, the conceptualist would talk to the people in the town about their transportation objectives and then would develop a solution to meet those objectives. And, in fact, the solution might not be a railroad; it might be a subway or a new highway. The key is that the process is not driven by the technology. In a corporate I.T. scenario, a conceptualist would talk to the users about what their problems are, attempt to define those problems, set the objectives of the solution and only then look at the ways to meet those objectives.
Though it seems clear that a conceptualist view is far more appropriate for information systems development, there are still cases in which a realist's view can work. Development efforts can be divided, rather roughly, into two categories. In the first, the requirements are well understood, because a stable and reasonably standard technology is being employed. Thus, a model of the system that needs to be developed currently exists either in an information system application or in well-defined business procedures. In this scenario, a realist's assumption may be appropriate.
But in the second category, in which requirements are not well understood and a new technology is being applied, the conceptualist's approach is crucial.
If the problem is not well understood, or the objectives are not well defined, then using the realist's structured methodology or computer-aided tools will likely solve, very efficiently, the wrong problem. One could argue that in today's rapidly evolving technological environment, most major development efforts fall into the second category. Yet, time and again, a realist's metaphysical approach is employed, leading to failed development efforts.
At the heart of the conceptualist view of information systems is the need to define objectives to guide the development process. The use of objectives is not at all new. As far back as the late 1960's, research in the area began to identify the need to set objectives prior to doing systems analysis. But though the idea has emerged from time to time, it has surprisingly never managed to catch on. Indeed, many of the popular textbooks on systems analysis and design mention the idea only in passing.
From a philosopher's perch, the problem can be simplified into this paradox: On the one hand, we have ontological assumptions that simplify the information systems development process, yet produce unsatisfying results. On the other hand, we have ontological assumptions that make the development process much more difficult, yet are far more likely to produce better products.
The conclusion is clear: the conceptualist's choice, to define objectives, even in the face of more difficult upfront planning and design, is the only way to ensure successful information systems projects in the future.
For the chief executive who is concerned about the high costs and mixed results of information systems, that conclusion now offers a way out of the forest.
The next time your chief information officer suggests that the company invest in a new technology, just ask these three questions:
1) What problem are you trying to solve?
2) What are the objectives of the solution?
3) How can this technology be used to meet those objectives?
If you do not get satisfactory answers to those questions, you will undoubtedly get unsatisfactory results from your technology investment.