Global spending on enterprise IT could reach US$3.7 trillion in 2018, according to Gartner. The scale of this investment is surprising, given the evolution of the IT sector. Basic computing, storage, and networking have become commodities, and ostensibly cheaper cloud offerings such as infrastructure-as-a-service and software-as-a-service are increasingly well established. Open source software is popular and readily available, and custom app development has become fairly straightforward.
Why, then, do IT costs continue to rise? Longtime IT consultant Dave McComb attributes the growth in spending largely to layers of complexity left over from legacy processes. Redundancy and application code sprawl are rampant in enterprise IT systems. He also points to a myopic view in many organizations that enterprise software is supposed to be expensive because that’s the way it’s always been.
McComb, president of the information systems consultancy Semantic Arts, explores these themes in his new book, Software Wasteland: How the Application-Centric Mindset Is Hobbling Our Enterprises. He has seen firsthand how well-intentioned efforts to collect data and translate it into efficiencies end up at best underdelivering — and at worst perpetuating silos and fragmentation. McComb recently sat down with s+b and described how companies can focus on the standard models that will ultimately create an efficient, integrated foundation for richer analytics.
S+B: What inspired you to write Software Wasteland?
McComb: When I started my career, I became a part of the problem without realizing it. I built a lot of enterprise systems and thought I’d done a pretty good job. But the longer I worked with large clients, the more it started bothering me how much waste there really was.
It wasn’t until I sat down to write the book that I realized that the information technology industry is now twice the size of the petroleum industry. And unlike the manufacturing sector, which has now had 30 or 40 years of quality and productivity improvement, the IT industry hasn’t even started to make improvements.
S+B: Many company leaders complain about the high cost and low quality of software development projects.
McComb: We hear our clients complain all the time, but then they turn right around and do things that make it worse. It’s not like anybody is intentionally screwing these projects up. I think they just don’t realize what they’re doing.
Many [executives] are so excited about and proud of the huge amount of data they have now. Yes, it’s a great boon; we have more data, and we can do more with it. But that data growth increases the complexity of what we’re dealing with. In a lot of ways it’s the data complexity that’s driving the cost.
S+B: Yet companies continue to spend more and more on software, without stopping to address the complexity problem. What’s the root of this problem?
McComb: Part of the problem has to do with beliefs that are no longer true, if they were ever true to begin with. I list seven of these fallacies in the book. One of the fallacies has to do with overspecifying requirements. Although it’s true you won’t get exactly what you want without detailed requirements, the converse is even truer: Your detailed requirements will drive your project costs up 10- to 100-fold, increase your risk, and greatly prolong the project.
Another fallacy has to do with the belief that software development costs way more than it actually does when done correctly. I know so many companies and state agencies that somehow became convinced over the last couple of decades that a fairly ordinary information system, such as a simple inventory system or a customer relationship management system, should cost them several hundred million dollars to implement. Yet when you study the system and what it’s designed to do, it’s very hard to figure out where that acceptance of high costs comes from, other than habit.
S+B: So the people doing the procurement all think they need to spend this much?
McComb: They’ve become convinced because all their peers spend this much. Let me give you an example. Each of the 50 U.S. states has its own child support enforcement system. About 10 or 15 years ago, these agencies started to replace their old systems, funded by the federal government. The first few systems had contract values of $70 million to $90 million, and then these projects ran over budget.
One of the more recent contracts started out at $130 million, and then grew to $300 million. The state became quite irritated and was trying to sue its contractor, but instead decided to appeal to the federal department — which gave it another $100 million to finish the project. After that, I learned, still another state spent $1.7 billion on its child support enforcement system.
A child support enforcement system isn’t complicated. There are only three parts to these systems. First, a simple case-management function tracks the noncompliant parents. There are only tens of thousands or maybe hundreds of thousands of these parents in any given state. Then a very simple accounting function takes the checks as they arrive and distributes payments to whoever is due them. Usually one person gets the check, but occasionally the payments are split between foster care and another party. The third function enables the state to garnish wages, lottery winnings, and other forms of income.
How in the world you spend hundreds of millions of dollars on a system like that is beyond me. In reality, it should cost between $6 million and $10 million. The U.S. Department of Health and Human Services, which ultimately funds these projects, requires states to consider transferring a system from a state that previously implemented one. Thus if software construction were the main cost, each subsequent state would have lower and lower implementation costs. But the reality is that each state adds to the code, increasing the complexity, and the cost of each subsequent implementation goes up.
S+B: What’s another part of the software complexity problem?
McComb: Companies are allowing their data to get too complex by independently acquiring or building applications. Each of these applications has thousands to hundreds of thousands of distinctions built into it. For example, every table, column, and other element is another distinction that somebody writing code or somebody looking at screens or reading reports has to know. In a big company, this can add up to millions of distinctions.
But in every company I’ve ever studied, there are only a few hundred key concepts and relationships that the entire business runs on. Once you understand that, you realize all of these millions of distinctions are just slight variations of those few hundred important things.
In fact, you discover that many of the slight variations aren’t variations at all. They’re really the same things with different names, different structures, or different labels. So it’s desirable to describe those few hundred concepts and relationships in the form of a declarative model that small amounts of code refer to again and again.
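The idea of a few hundred canonical concepts standing behind millions of application-level distinctions can be sketched in a few lines of code. This is a hypothetical illustration, not McComb's actual method or tooling; all of the concept and table names below are invented for the example.

```python
# A minimal sketch of a declarative concept model: a handful of canonical
# concepts, with the many application-level "distinctions" recorded as
# aliases that map back to them. All names here are hypothetical.

CONCEPTS = {
    "Customer": {"relations": [("places", "Order")]},
    "Order":    {"relations": [("contains", "Product")]},
    "Product":  {"relations": []},
}

# Each legacy system's local name is just data, not new code or schema.
ALIASES = {
    "CUST_MSTR": "Customer",   # a legacy table name
    "client":    "Customer",   # a CRM's label for the same thing
    "PO_HDR":    "Order",
    "SKU":       "Product",
}

def canonical(name: str) -> str:
    """Resolve any application-level name to its shared concept."""
    return ALIASES.get(name, name)

print(canonical("CUST_MSTR"))  # prints "Customer"
```

The point of the sketch is that adding a fourth legacy system adds rows to the alias table, not new code paths.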
S+B: How do you make better use of the logic and data you need?
McComb: Software is just a means to an end. A business runs on data, and you make decisions based on data. You should be employing software to make better use of that data and create new data.
You’ll need to unearth and inventory the rules in your enterprise, then determine which rules are still valid. The rules you keep — the few hundred key concepts and relationships — need to be declared at the data layer so they can be updated, reused, and managed. If you leave them buried in the application code, they won’t be visible or replaceable.
In older systems, huge percentages of all of these buried rules are obsolete. They specified something that was true years ago. You don’t do things this way anymore, but you’re still supporting all that code and trying to manage the data associated with it. That’s just waste.
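The two points above (rules declared at the data layer, and obsolete rules still being carried along in code) can be sketched as rules-as-data. This is a hypothetical illustration of the principle, not McComb's recommended implementation; the rule names and structure are invented.

```python
# Sketch: business rules declared as data rather than buried in
# application code. Each rule is a record that can be inspected,
# updated, or retired without redeploying an application.

RULES = [
    {"name": "min_order_total", "applies_to": "Order",
     "check": lambda order: order["total"] >= 10.0, "active": True},
    # An obsolete rule is simply flagged inactive; in a code-buried
    # world it would keep executing until someone found and removed it.
    {"name": "legacy_region_code", "applies_to": "Order",
     "check": lambda order: True, "active": False},
]

def validate(entity_type, entity):
    """Run only the active, declared rules; return names of failures."""
    return [r["name"] for r in RULES
            if r["active"]
            and r["applies_to"] == entity_type
            and not r["check"](entity)]

print(validate("Order", {"total": 5.0}))   # prints "['min_order_total']"
print(validate("Order", {"total": 20.0}))  # prints "[]"
```

Because the rules live at the data layer, "which rules are still valid" becomes a query rather than a code archaeology project.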
Tools that interpret legacy software help you comb through this code and find these little rules. Once you’ve done that, you do the same kind of rationalization and then some model-driven integration on the data side. Sifting through that amount of data and organizing it is a chore in and of itself, but incredibly worth doing, because if you don’t do it, next year it’s going to be worse.
The model-driven integration that ties everything together takes the few hundred rules you’ve kept and maps them to the data you’ve rationalized.
S+B: What’s left at the application layer after you’re done?
McComb: If the model- and data-driven approach I’m advocating is well designed and managed, the enterprise can end up with 50 or 100 tiny applets that each do one thing. Kind of like an app store today, but the app store couldn’t actually work as an efficient enterprise system. It just isn’t robust enough. An app store isn’t integrated. It relies on the fact that each human is doing his or her own integration. Maybe it’s tied into a calendar or email, but that’s about it.
But if you take that same idea and say, “Our data model is self-policing and complete enough that these little applets can read and update the shared data in such a way that any insights are captured and returned to the data repository,” then 50 to 100 of them should be sufficient. When those applets are no longer needed, they’re just let go. There’s nothing about them that forces them to stay in the mix.
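The applet architecture described above can be sketched as small functions over one shared store: each applet reads and writes the shared data, so retiring an applet strands nothing. This is a schematic illustration under invented names, not a design from the book.

```python
# Sketch of "tiny applets over shared data": each applet is a small
# function against one shared store, rather than an application with
# its own private database. All names here are hypothetical.

shared_store = {"cases": {}}

def open_case(store, case_id, parent):
    """Applet 1: case management writes directly to the shared store."""
    store["cases"][case_id] = {"parent": parent, "payments": []}

def record_payment(store, case_id, amount):
    """Applet 2: accounting updates the same records, no private copy."""
    store["cases"][case_id]["payments"].append(amount)

open_case(shared_store, "C-1", "J. Doe")
record_payment(shared_store, "C-1", 250.0)
print(shared_store["cases"]["C-1"]["payments"])  # prints "[250.0]"
```

If `record_payment` were deleted tomorrow, the case data and every payment it captured would remain in the shared store, untouched.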
S+B: What do you hope readers will take away from Software Wasteland?
McComb: I’m trying to get people angry, to get them to realize they’re spending 10 to 100 times more than they ought to be. I’m hoping they’ll go do an experiment or at least check this approach out.
I’ve actually obligated myself to write a trilogy. This first book is aimed at executives, and it drives home what a mess we’ve gotten into and what the data-centric alternative should look like. The second book will be more for modelers and designers. It’s going to be a data-centric pattern language derived from the classic Christopher Alexander book, A Pattern Language. The third book will be for developers and architects. It’s literally going to be a blueprint: How would you build an architecture that did this? Because I don’t want to leave people angry. I want them to actually do something about the problem.
- Alan Morrison is a senior research fellow at PwC US, based in San Jose, Calif. He was named a Quora top writer in 2016, 2017, and 2018, and has written for ExtremeTech and Recode.