Pundits proclaim the miraculous power of the Internet. It ushered in a “New Economy” and created a “flat world.” We even refer to our progeny as members of the “Net generation.” More than 5 billion devices are now connected to the Internet, accessing or serving up 500 billion gigabytes of information and transmitting 2 trillion e-mails per day. The decentralized structure of the Internet has ushered in a new level of worldwide connectivity, enabling product development teams to collaborate across the globe, banks to reach people in the developing world, and middle-aged divorcees to find their high school sweethearts.
But this increasing connectivity has a dark side. Although spam recently dropped to its lowest levels in years, it still accounts for fully 75 percent of global e-mail traffic, some 1.5 trillion messages per day. Every minute produces 42 new strains of malware (short for malicious software), including viruses, worms, and Trojans. An average of 8,600 new websites carrying malicious code appear each day, and 50 percent of the results for the top 100 daily search terms lead to malicious sites. Until last year, a single botnet controlled 4.5 million computers, putting them to nefarious use while disguising its presence by minimizing its impact on each machine's performance and by eliminating other malware attempting to attack its network of computers. It was malware with its own antivirus software.
How then can we improve the safety and reliability of the Internet, an increasingly critical, shared global resource? As business leaders, managers, and individuals, we place our trust in the technical wizards in the back room who run the servers and write the code. We install antivirus software as directed and update other programs when told (at least when we have time to restart our computers). But the results suggest this isn’t enough.
The best way to drive the kind of improvement in information security that would really clean up the Internet, we believe, is for corporate leaders and computer security professionals to reflect on the lessons of the manufacturing quality movement of the late 20th century. The methods employed by quality professionals — Six Sigma is an example — raised the visibility of the “cost of quality” and triggered a fundamental change in the philosophy of error prevention. Similarly, information security needs to be raised to the boardroom level, and the computer experts need to come out of the back rooms to engage all users to address the challenge. By doing this, we could collectively reduce malware to a level that does not put Internet-enabled advances at risk.
A Short History of Malware
The origins of security concerns and computer malware are as old as the computer itself. In the earliest days, when computers were wired rather than programmed, companies generally secured these physical (albeit not virtual) behemoths in locked offices and buildings to prevent unauthorized access.
In 1949, even before computers evolved to clearly separate hardware and software, the leading theoretician of computing, John von Neumann, delivered a lecture on the “theory and organization of complicated automata,” which laid the foundation for both positive and negative impacts of software. His Theory of Self-Reproducing Automata (University of Illinois Press), published posthumously in 1966, explicitly addressed the idea of self-replicating code. In fact, in 1980, Arpanet, the U.S. Department of Defense–sponsored predecessor of the Internet, shut down thanks to an accidentally propagated status message. In 1983, Fred Cohen intentionally developed a program that could “‘infect’ other programs by modifying them to include a possibly evolved copy of itself,” as he put it in his thesis, on the then-popular VAX family of minicomputers, which preceded the advent of personal computers. Drawing upon a biological analogy, he called the new program a virus.
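The core idea von Neumann formalized and Cohen later weaponized, a program that contains and reproduces its own description, can be illustrated harmlessly. The sketch below (our own illustrative example, not code from von Neumann or Cohen) is a classic "quine": a program whose only output is its own source text, the benign kernel of self-replication that a virus extends by writing that copy into other programs.

```python
# A quine: a self-reproducing program in the spirit of von Neumann's
# self-reproducing automata. The string s is the program's "description";
# printing s formatted with itself reproduces the full source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running this program prints its own two lines of source verbatim; feeding that output back into the interpreter reproduces it again, indefinitely. A virus differs only in where the copy goes: into another program rather than to the screen.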