
Genuine Theories Are Improvable

David Sundahl, 10/26/2016


(This is part of the work that Clay and I did as part of our book project on what characterizes good theory building.)

Alan Hodgkin, a famous neurophysiologist responsible for describing how the voltage in neurons changes rapidly when they are stimulated (for which he won a Nobel Prize), would go around the laboratory each day visiting with each student or postdoctoral researcher working on one project or another. If you showed him data from yesterday’s experiments that were the expected result, he would nod approval and move on. The only way to get his attention was to have an anomalous result that stuck out. Then he would sit down, light his pipe, and go to work with you on what this could mean.

-Stuart Firestein, Ignorance: How It Drives Science

Essentially, all models are wrong, but some are useful.

-George E. P. Box, British Mathematician and Statistician

We assert that the most powerful engine of theory development is not positive results but anomalies. Put another way, the body of understanding created by a theory must be susceptible to anomalies. By anomaly, we do not mean a “death blow” to a theory. Rather, an anomaly occurs when the predictions that follow from a theory come out contrary to expectation. Because the true purpose of research should be to increase Bacon’s “discoveries and powers,” any genuine theory should enable researchers to improve it. What our model shows, and what we have seen, is that anomalies are the engine that drives the improvement of theories. A theory that is static cannot advance our understanding of the phenomena of interest. A theory that grapples with anomalies deepens and broadens our understanding. Consequently, the most important hallmark of genuine theory is improvability through the discovery of anomalies.

Some anomalies appear as though they definitively falsify the theory being tested. Surely this is the case in the early phase of theory creation. But once a theory has a robust set of categories, we can find no precedent for an anomaly laying a theory to rest. Even the most spectacular example in the history of science, often referred to as the most famous failed experiment of all time—the Michelson-Morley experiment—which seemed once and for all to disprove the existence of the aether, was not immediately seen as a death blow to the theory. The experiment’s results were considered to have fallen within the parameters one would expect given the constraints of the experimental apparatus.

How Anomalies Cause Revolutions

As a stable theory acquires adherents who invest in understanding it, the theory becomes a paradigm. The modern usage of the term “paradigm” was coined by Thomas Kuhn in his landmark work on the history and evolution of science, The Structure of Scientific Revolutions. Kuhn’s definition of a paradigm consists of the two elements just mentioned. First, its constructs and their relationships are relatively stable; anomalies no longer cause one to re-configure the system of constructs. Second, there is a fairly large body of people who accept and work on the theory—usually a set of theories that have arisen from repeated testing, anomaly discovery, and construct modification. Kuhn calls this latter work on the theory “normal science.”

Once a paradigm is established, if researchers discover an anomaly for which the paradigm simply cannot account, they often put it on a shelf somewhere, the academic equivalent of what a police investigator would call a “cold case.” When researchers discover another anomaly that the paradigm cannot account for, it too is put aside for the time being on the cold-case shelf. When enough cold cases have accumulated, an enterprising researcher will study them together and announce, “Hey guys! Look at all of these cold cases! Can you see the pattern across all of them? The paradigm simply cannot be true!”

Paradigms are likewise pervasive in everyday life. Recommendations for, say, child rearing have undergone massive paradigm replacement in the last several decades. For thousands of years, child rearing was guided by the biblical injunction in Proverbs that we paraphrase as “spare the rod, spoil the child.” Only recently has it been replaced by a more successful paradigm, based on the solid theory-building research of many people and institutions, that is entirely devoid of corporal punishment. Anomalies for the historical methods of child rearing accumulated until people trained not only as parents but as psychologists and child-development experts were able to show that the “cold cases” amounted to a pattern that demanded investigation and replacement with a superior paradigm.

Often, the pattern across these anomalies can only be observed from a theory used in another field or discipline, one in which the original and deepest believers of the paradigm have little background. Because of this inability of the devout to see the significance of accumulated anomalies, they defend the validity of the original paradigm, often to their graves. Indeed, the instinctive tool kit they used for learning in their branch of science renders many of them literally unable to see the anomalies that put the paradigm into question. For this reason, Kuhn observed that the toppling of a paradigm and the development of the new knowledge that takes its place are typically initiated by new researchers whose training and disciplines are different, and that the process requires not merely a toppling but a replacement of the old theory. Consequently, the toppling of a paradigm, though it proceeds bit by bit, can take decades or even millennia until a suitable alternative is well established.

Impostor Theories

Some explanations of phenomena look like theories but are not, because they cannot be improved. These impostors lack the core element that makes something a genuine theory: improvability through anomaly.

There are few more influential gurus than Jim Collins, whose books Built to Last and Good to Great have dominated best-seller lists for years. In his work, Collins and his team present seven characteristics that they found common among all firms that had particularly successful and durable performance. While Collins is open about the fact that his work merely presents correlations, he is not shy about recommending that firms follow his prescribed pathway in developing each of those characteristics. The implication, of course, is that if a firm builds itself according to Collins’ prescriptions it will become great—a causal statement. However, aside from the obvious problem that correlation is not causation, Collins’ views cannot be improved because whether something is an anomaly for his “theories” is not clear. For instance, Apple has had a number of years of tremendous success. Its market cap is the largest in the world. But by all accounts, it was not captained by Collins’ humble “level 5” leader. Nevertheless, it is unclear whether Apple counts as an anomaly. Consequently, improvement in Collins’ work is stymied.

Anomaly as Cumulative Progress

Genuine theories, however, progress by anomaly identification. In The Innovator’s Dilemma, Christensen addresses earlier theories of technological change and progress. In particular, he probes the question of why it can be so difficult for very successful companies to catch the wave of new technologies. In what follows, we see how a good theory-creation process builds on earlier work through the recognition of anomalies and the modification or creation of superior constructs—yielding a better causal theory.

For some time, the theory that guided thinking about why new technologies undid incumbent firms was based on the view that firms could handle incremental, but not radical, technological change. To a large extent, the prevailing conventional wisdom had been that some mixture of poor management and organizational inertia prevented firms from catching new waves of technology. Radical new technologies presented a kind of novelty that an incumbent firm was incapable of taking advantage of. But there were many anomalies—many instances where the predictions of the theory did not match reality—since the firms that faltered on new technologies had often been extremely innovative at some point in the past.

Given the simplistic account of these failures, researchers such as Rebecca M. Henderson and Kim B. Clark observed that there appeared to be structural impediments to incumbent firms adapting to radical technological advancements. As an organization grew itself around the needs of a product—and in particular the components that fit together to form the overall product—it created an organizational structure suited to those needs, but ill-suited to anything other than incremental technological progress. An organization’s functional structure, it could be said, was a reflection or even an optimization of its need to create the products and services its customers demanded. Consequently, an improved theory, accounting for the obvious anomalies, was proposed: established firms’ organizational structures and routines could handle incremental but not radical change.

Clark’s further development of the theory of why incumbent firms stumble in their attempts to develop and market new technologies gave way to the work of Philip Anderson and Michael Tushman, in what might be called a “competency” model. Anderson and Tushman found an anomaly: legacy structure was not a sufficient condition to cause a firm to miss a new wave of technology. In addition to the difficulty of re-organizing and re-orienting to take advantage of radically different technologies, they found in many cases that the demands of radically new technologies eroded the hard-won competencies a firm had developed to build and sell its core technologies. The core competencies that had made a firm successful became a hindrance in its efforts to adapt to emerging technologies.

Christensen’s study of the disk drive industry built on what his predecessors had learned about the challenges facing incumbent firms while also uncovering anomalies. Christensen found that, contrary to the previous theories of technological change, established firms had no trouble developing new technology, whether it was incremental or radical, so long as that technology supported the firm’s core business model. Incremental vs. radical, as categories, needed to be replaced. Christensen’s new constructs were what he termed “sustaining” technologies, which supported the value-creating logic of the firm, and “disruptive” technologies, which either did not fit or even eroded the firm’s core business. These “disruptive” technologies were hamstrung not by organizational structures or core competencies, but by the process of resource allocation. Specifically, untested products with unknown (or non-existent) markets were sure losers when compared with the known markets and demand for sustaining technologies—those that met the needs of a firm’s most profitable customers. The nearly perfect storm of lower margins for the new technology along with little to no interest from a company’s best customers translated into a situation where a smart executive would be foolish to launch into the unproven business.

One of the implications of Christensen’s theory of technological innovation was that a new technology needed to be separated from the mother ship and given adequate funding. Without that separation, the mainline business would assimilate the emerging business into existing financial, logistical and operational standards. But, as it turns out, separateness and adequate resource allocation, though important, were not enough. Clark Gilbert and Joseph L. Bower discovered that while the constructs “disruptive” and “sustaining” were still robust enough for reliable prediction, there were some anomalies. In their study of the transition from print to online media, they found that some firms succeeded in capitalizing on the disruptive wave of online media. The requirement of separateness hypothesized by Christensen did not entirely explain the differences among media outlets. Separateness and resource allocation were necessary, but not sufficient, conditions for a successful transition to a new technology. Gilbert and Bower discovered that the successful firms employed powerful framing effects that made the difference. The new technology needed to be framed as a challenge to the core business. But that was not enough: those firms that simply treated the Internet as a threat failed. Those that succeeded recognized the threat to their business while concomitantly engendering a sense of opportunity among those involved in the venture. The combination, then, of separateness, proper funding and wise framing improved the theory of how leading firms can take advantage of new technologies.

The progress of our understanding of how incumbent firms can capitalize on new technologies is also a story of researchers building on the work of their predecessors. The story of technological advancement started with blaming managers; it then evolved a categorization scheme, incremental vs. radical innovations, that deepened insight into the causal factors at work. That insight gave way to a deeper understanding of the role a firm’s structures and competencies played in whether a new technology could become a new growth opportunity. Later, in Christensen’s work, it became clear that it was less the structure of operations than the demands of customers—who didn’t want the disruptive technologies—and the need for operational and resource separation. Finally, Gilbert and Bower deepened our knowledge so that we now understand the conditions under which firms can capitalize on emerging technologies. The continual probing of anomalies, which drove improvement in both the causal explanation and the constructs that underpin it, is truly a story of the progress of theory in expanding humankind’s “discoveries” and “powers.”