IT’S IMPORTANT WE AGREE ON AN ANSWER TO THIS QUESTION
Accelerating technological progress is not just an abstract idea. If progress really is accelerating, that fact bears on all our biggest life choices: what to study, what job to get, whether to save money, and whether to have kids. Not to mention the bigger policy and governance questions that affect our society at large.
In futurism circles, accelerating progress seems to be slowly emerging as a consensus view. However, there is still plenty of dissent on this issue, and possibly for good reason. So this post is going to lay out what I believe are the three main arguments that progress really is accelerating.
OKAY BUT WHAT DO I MEAN BY “ACCELERATING PROGRESS”
I mean that our technology is advancing at a greater-than-linear rate. That’s it. I don’t want to get into arguments about the exact nature of the curve, and whether it is precisely exponential or not. Instead I simply mean to defend the proposition that the rate of progress is speeding up, rather than following a linear or decelerating trajectory.
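To make the distinction concrete, here is a toy numerical sketch. The numbers are arbitrary and purely illustrative; the only point is the difference in shape between a constant rate of progress and a growing one.

```python
# Two toy trajectories for "capability" over time (arbitrary units).
# Linear progress adds a fixed increment each step; accelerating
# progress adds an increment that itself keeps growing.
linear = [1 + t for t in range(11)]                     # constant rate
accelerating = [round(1.5 ** t, 1) for t in range(11)]  # growing rate

print(linear)        # [1, 2, 3, ..., 11]
print(accelerating)  # [1.0, 1.5, 2.2, 3.4, 5.1, ...]
```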
(1) THE SUBJECTIVE ARGUMENT
To many of us, it simply feels like things are moving faster. I’ve only been on this planet thirty years, but I’ve lived through the personal computer revolution, the rise of the internet, the adoption of cellphones, and the wide-scale deployment of smartphones. Very soon I will witness the release of autonomous cars and the dawn of augmented reality. Each major technological development seems to come faster than the previous one and to be increasingly disruptive of existing economic and cultural norms.
BUT NOT EVERYONE EXPERIENCES IT THAT WAY
There are many thinkers for whom it doesn’t feel like things are speeding up. Economist Tyler Cowen is a good example. In The Great Stagnation he writes:
“Today, in contrast, apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space age television cartoon from the 1960s, have not come to pass. You don’t have a jet pack. You won’t live forever or visit a Mars colony. Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.”
Cowen is strangely dismissive of this “seemingly magical internet.” As technologies go, the internet is not like a car or a refrigerator. It is a way of connecting people to one another, a fundamental, general purpose technology that affects all facets of the economy. That said, this quote is primarily a subjective statement. If Cowen feels like things haven’t changed very much since the 1950s, then I can’t really argue with that. I just happen to feel differently.
Another acceleration skeptic is prominent venture capitalist Peter Thiel. In a recent interview, he said:
“I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).
“There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had.”
Again, in order to make his case, Thiel must treat the internet as an exception, which I still find odd. But Thiel is absolutely right that in plenty of technological areas we have underperformed, at least relative to prior expectations. This notion of prior expectations is important. Cowen, Thiel, and other stagnationists are fond of invoking jet packs and other classic science fiction tropes as evidence of our lack of progress. For example, in this talk, Thiel mentions how we once envisioned “vacations to the moon.” And in his essay Innovation Starvation, stagnationist Neal Stephenson begins by asking “where’s my ticket to Mars?”
MAYBE OUR EXPECTATIONS WERE JUST INCORRECT
It should go without saying that our failure to build a world that resembles science fiction novels of the fifties and sixties should not necessarily have any bearing on how we evaluate our current technological position. In many ways the present day is far more advanced than our prior imaginings. After all, pocket-sized devices that give you instant access to all the world’s knowledge are certainly nothing to scoff at. It’s just that the technological progress we’ve ended up getting is not necessarily the same progress we once expected. I’d call that a failure of prediction, not a failure of technology.
REAL VS. VIRTUAL PROGRESS
Perhaps the focus of technology has simply shifted from growing “outward” to growing “inward.” Rather than expanding and colonizing the stars, we have been busy connecting to each other, exploring the frontiers of our own shared knowledge. And perhaps this is absolutely what we should be doing. Looking ahead, what if strong virtual reality turns out to be a lot easier (and more practical) than space travel? Why go on a moon vacation if you can simulate it? Thiel laments that “we haven’t been moving any faster,” but one could argue that our ears, eyes, and thoughts are moving faster than ever before. At what point does communication start to substitute for transportation?
At the heart of the stagnationists’ arguments I sense a bias in favor of “real things” and against “virtual things.” Perhaps this perspective is justified, since if we are talking about the economy, it is much easier to see how real things can drive growth. As for virtual things driving growth, the jury is still out. Recently we’ve seen a lot of value get created virtually and then digitally distributed to everyone at almost no cost to the consumer. And many of today’s most promising businesses are tech companies that employ very few people and generate much of their value in the form of virtual “bits.” Cowen himself makes this point clearly and succinctly in the third chapter of his book, where, writing about the internet, he states that “a lot of our innovation has a tenuous connection to revenue.”
(2) THE EMPIRICAL ARGUMENT
Until we can agree on a standardized way to measure technological progress, all of the above discussion amounts to semantics. What is the “value” of the internet when compared to moon vacations? How many “technological progress points” does an iPhone count for? One man’s progress is another man’s stagnation. Without a relevant metric, only opinions remain.
Although no definitive measure exists for the “amount of technology” a civilization has, it might be possible to measure various features of the technological and economic landscape, and from these features derive an opinion about the progress of technology as a whole.
USING ECONOMIC MEASURES
In making their case for stagnation, Cowen and Thiel commonly cite median wages, which have been stagnant since the 1970s. Cowen writes, “Median income is the single best measure of how much we are producing new ideas that benefit most of the American population.” While these median wage statistics are interesting and important, they are absolutely not a measure of our technological capability. Rather, they represent how well our economic system is compensating the median worker. This is a fairly obvious point, but I think it is an important one. It is easy to fall into the trap of conflating technological health with economic health, as if those two variables were always going to be in sync. It seems much more logical to blame stagnant median wages on a failure of our economic system than on a failure of our technology.
Certainly one can tell a story about how it is a technological slowdown that is causing our stagnant median wages. But one can also tell the opposite story, as Erik Brynjolfsson and Andrew McAfee do in Race Against the Machine:
“There has been no stagnation in technological progress or aggregate wealth creation as is sometimes claimed. Instead, the stagnation of median incomes primarily reflects a fundamental change in how the economy apportions income and wealth. The median worker is losing the race against the machine.”
Regardless of which story is right, if we start with the question “is technological progress accelerating?”, I don’t think the median wage statistic can ever provide us more than vague clues. It is doubtful whether a “median” measure is even the right tool: there is no law guaranteeing that technological gains will be shared equally or will necessarily trickle down to the median person. Cowen himself expresses this idea when he writes that “a lot of our recent innovations are private goods rather than public goods.”
There are of course other economic measures besides the median wage that might correlate more closely with technological progress. Productivity is a good example. However, the medium of money guarantees that such economic measures will always be at least one degree removed from the technology they are trying to describe. Moreover, it is difficult to calculate the monetary value of some of our more virtual innovations because of the “tenuous connection between innovation and revenue” mentioned above.
COUNTING TECHNOLOGICAL ACHIEVEMENTS
Another strategy for measuring technological progress is to count the frequency of new ideas or other important technological landmarks.
In The Great Stagnation, Cowen cites a study by Jonathan Huebner which claims we are approaching an innovation limit. In the study, Huebner employs two strategies for measuring innovation.
The first method involves counting the number of patents issued per year. Using patents to stand in for innovation strikes me as strange, and I’m sure many people who are familiar with the problems plaguing our patent system would agree. A good critique comes from John Smart, who writes:
“Huebner proposes that patents can be considered a “basic unit of technology,” but I find them to be mostly a measure of the kind of technology innovation that humans consider defensible in particular socioeconomic and legal contexts, which is a crude abstraction of what technology is.”
Huebner’s other method involves counting important technological events. These events are taken from a list published in The History of Science and Technology. Using this data, Huebner produces the following graph.
As you can see, the figure shows our rate of innovation peaking somewhere around the turn of the century, and then dropping off rapidly thereafter.
ANY LIST OF IMPORTANT EVENTS IS HIGHLY SUBJECTIVE
While counting technological events is an interesting exercise, it’s hard to view such undertakings as intellectually rigorous. After all, what criteria make an event significant? This is not a simple question to answer.
Things get more complicated when one considers that all innovations are built upon prior innovations. Where does one innovation end and another innovation start? These lines are not always easy to draw. In the digital domain, this problem only gets worse. The current debacle over software patents is symptomatic of the difficulty of drawing clear lines of demarcation.
By way of example, ask yourself if Facebook should count as an important innovation landmark. One can easily argue no, since almost all of Facebook’s original features existed previously on other social networking sites. And yet Facebook put these features together with a particular interface and adoption strategy that one could just as easily argue was extremely innovative. Certainly the impact of Facebook has not been small.
OTHER ATTEMPTS TO EMPLOY THE EVENT-COUNTING STRATEGY
In The Singularity is Near, Ray Kurzweil also attempts to plot the frequency of important technological landmarks throughout time. However, instead of using just one list of important events, he combines fifteen different lists in an attempt to be more rigorous. In doing so, he reaches the opposite conclusion of Huebner: namely that technological progress has been accelerating throughout all of Earth’s history, and will continue to do so.
This is not to say that Kurzweil is right and Huebner is wrong (in fact there are methodological problems with both graphs), but rather that this whole business of counting events is highly subjective, no matter how many lists you compile. I think if we want to find a useful empirical measure of our technological capabilities, we can do better.
MEASURING THE POWER OF THE TECHNOLOGY DIRECTLY
The following definition of technology comes from Wikipedia:
“Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems or methods of organization in order to solve a problem or perform a specific function.”
So if we want to measure the state of technology, it follows that we might want to ask questions such as “how many functions can our technology perform?” “how quickly?” and “how efficiently?” In short: “how powerful is our technology?”
Of course this quickly runs into some of the same problems as counting events. How do you define a “specific function?” Where does one function end and another begin? How can we draw clear lines between them?
THE SPECIALNESS OF COMPUTERS SHOULD NOT BE OVERLOOKED
Fortunately some of these problems evaporate with the arrival of the computer. Because if technology’s job is to perform specific functions, then computers are the ultimate example of technology. A computer is essentially a tool that does everything. A tool that absorbs all other technologies, and consequently all other functions.
In the early days of personal computing it was easy to see your computer as just another household appliance. But these days it might be more appropriate to look at your computer as a black hole that swallows up other objects in your house. Your computer is insatiable. It eats binders full of CDs, shelves full of books, and libraries full of DVDs. It devours game systems, televisions, telephones, newspapers, and radios. It gorges on calendars, photographs, filing cabinets, art supplies and musical instruments. And this is just the beginning.
Along the same lines, Cory Doctorow writes:
“General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”
In fact, computers and technology writ large seem to be merging so rapidly that using a measurement of one to stand in for the other seems like a fairly defensible option. For this reason I feel that computing power may actually be the best metric we have available for measuring our current rate of technological progress.
Using computing power as the primary measure of technological progress unfortunately prevents us from modeling very far back in history. However, if we accept the premise that computers eventually engulf all technologies, this metric should only get more appropriate with each passing year.
MOORE’S LAW
When it comes to analyzing the progress of computing power over time, the most famous example is Moore’s Law, which has predicted (correctly, for over forty years) that the number of transistors we can cram onto an integrated circuit will double roughly every 24 months.
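As a rough illustration of what that doubling implies, here is a minimal sketch of the idealized trend. The 1971 Intel 4004 baseline of roughly 2,300 transistors is a commonly cited figure, and the projection is just the doubling arithmetic, not a fit to real chips.

```python
# Idealized Moore's Law: transistor counts doubling every 24 months,
# projected from a commonly cited 1971 baseline (Intel 4004, ~2,300
# transistors). Purely illustrative, not a fit to actual chip data.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
# 1971 -> 2,300   1991 -> ~2.4 million   2011 -> ~2.4 billion
```

Twenty doublings over forty years turn a few thousand transistors into a few billion, which is roughly the territory high-end chips occupy today.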
How long Moore’s law will continue is of course up for debate, but based upon history the near-term outlook seems fairly positive. Of course, Moore’s Law charts a course for a relatively narrow domain. The number of transistors on a circuit is not an inclusive enough measure to represent “computing power” in the broader sense.
One of Ray Kurzweil’s more intriguing proposals is that we expand Moore’s law to describe the progress of computing power in general, regardless of substrate:
“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.”
“As the figure demonstrates there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price performance of computing long before integrated circuits were even invented.”
Measured in calculations per second per $1000, the power of computers appears to have been steadily accelerating throughout the last century, even before integrated circuits got involved.
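To make the metric concrete, here is a minimal sketch of how a price-performance figure like Kurzweil’s can be computed. The sample points are hypothetical placeholders, not his actual data; the idea is just that on a log scale, plain exponential growth rises in equal steps over equal time spans, while accelerating growth shows steps that keep getting larger.

```python
import math

# Kurzweil-style price-performance: calculations per second per $1,000
# (constant dollars).
def price_performance(calcs_per_second, cost_dollars):
    return calcs_per_second / (cost_dollars / 1000.0)

# (year, calcs/sec, cost) triples. These are made-up placeholders, not
# data for real machines.
samples = [(1940, 1e0, 50_000), (1970, 1e6, 100_000), (2000, 1e10, 2_000)]

log_pp = [(year, math.log10(price_performance(calcs, cost)))
          for year, calcs, cost in samples]
# Equal jumps in log10 over equal time spans would indicate exponential
# growth; jumps that widen over time would indicate accelerating growth.
print(log_pp)
```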
OTHER MEASURES OF COMPUTING POWER
While I like Kurzweil’s price-performance chart, the $1000 in the denominator ensures that this is still an economic variable. Including money in the calculation inevitably introduces some of the same concerns about economic measures mentioned earlier in this essay.
So to eliminate the medium of money entirely, we might prefer a performance chart that tracks the power of the absolute best computer (regardless of cost) in a given time period. Fortunately, Kurzweil provides something very close to such a chart with his graph of supercomputer power over time:
THE NETWORK AS A SUPERCOMPUTER
Just as all technology is converging toward the computer, there is a sense in which all computers are merging into a single global network via the internet. This network can itself be thought of as a giant supercomputer, albeit one composed of other, smaller computers. So by measuring the aggregate size of the network we might also get a strong indication of our current rate of computing progress.
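As a back-of-the-envelope sketch of what “measuring the aggregate size of the network” could look like, one might tally connected devices by class and multiply by an average per-device throughput. The device classes, counts, and throughputs below are hypothetical placeholders, not estimates; the point is only the form of the tally.

```python
# Hypothetical tally of the network-as-supercomputer. All counts and
# per-device throughputs are made-up placeholders, not real estimates.
device_classes = {
    "phones":  {"count": 2e9, "flops": 1e9},   # many modest devices
    "pcs":     {"count": 1e9, "flops": 1e10},
    "servers": {"count": 5e7, "flops": 1e12},  # fewer but far more powerful
}

aggregate_flops = sum(c["count"] * c["flops"] for c in device_classes.values())
print(f"aggregate capacity (hypothetical): {aggregate_flops:.2e} FLOPS")
```

Tracking how a tally like this grows year over year would be one way to put a number on the network’s expansion, though in practice the device counts and utilization rates are hard to pin down.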
Please note that I do not necessarily support many of Kurzweil’s more extreme claims. Rather I am simply borrowing his charts to make the narrow (and fairly uncontroversial) point that computing power is accelerating.
THE SOFTWARE PROBLEM
While increasing computer power makes more technological functions possible, a bottleneck might exist in our ability to program these functions. In other words, we can expect to have the requisite hardware, but can we expect to have the accompanying software? Measuring the strength of hardware is a straightforward process. By contrast, software efficacy is a lot harder to quantify.
I think there are reasons to be optimistic on the software front. After all, there will be an ever-growing number of people on the planet who are technologically enabled and capable of working on such problems. So the notion that software challenges are going to stall technological progress seems unlikely to me. That’s not a proof, of course. Software stagnation is possible, but anecdotally I don’t see evidence of it occurring. Instead I see Watson, Siri, and the Google autonomous car, and get distinctly the opposite feeling.
ULTIMATELY NO METRIC IS PERFECT
At this point, you still may not accept my premise of a growing equivalence between computers and technology in general. Admittedly, it’s not a perfect solution to the measurement problem. However, the idea that available computing power will play a key role in determining the pace of technological change should not seem far-fetched.
(3) THE LOGICAL ARGUMENT
Empirical analysis is useful, but as is clear by now, it can also be a thorny business. In terms of explaining why technological progress might be accelerating, a simple logical argument may actually be more convincing.
A key feature of technological progress is that it contributes to its own supply of inputs. What are the inputs to technological innovation? Here is a possible list:
- People
- Education
- Time
- Access to previous innovations
- Previous innovations themselves
As we advance technologically, the supply of all five of these inputs increases. Historically, technological progress has enabled larger global populations, improved access to education, increased people’s discretionary time by liberating them from immediate survival concerns, and provided greater access to recorded knowledge.
Moreover, all innovations by definition contribute to the growing supply of previous innovations that new innovations will draw upon. Many of these innovations are themselves “tools” that directly assist further innovation.
Taking all this into account, we can expect technological progress to accelerate, as with any positive feedback loop. The big variable that could defeat this argument is the possibility that useful new ideas become harder to find over time.
However, even if finding new ideas gets harder, our ability to search the possibility space will be growing so rapidly that anything less than an exponential increase in difficulty should be surmountable.
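Here is a toy model of that feedback loop. All parameters are arbitrary assumptions chosen only to show the shape of the dynamic: the yearly output of new ideas is proportional to the existing stock of innovations, so the stock compounds rather than growing linearly, and a slowly rising difficulty term is included to echo the caveat above. At these settings the difficulty term dampens the compounding without eliminating it.

```python
# Toy feedback-loop model (illustrative only; all parameters are
# arbitrary assumptions, not estimates of real innovation rates).
def simulate(years=50, stock=1.0, productivity=0.10, difficulty_growth=0.02):
    """Each year, new ideas arrive in proportion to the existing stock of
    innovations, divided by a slowly rising difficulty term."""
    history, difficulty = [], 1.0
    for _ in range(years):
        stock += productivity * stock / difficulty
        difficulty *= 1 + difficulty_growth  # ideas get slightly harder to find
        history.append(stock)
    return history

feedback = simulate()
linear = [1.0 + 0.10 * t for t in range(1, 51)]
print(f"after 50 years: linear -> {linear[-1]:.1f}, feedback loop -> {feedback[-1]:.1f}")
```

Pushing `difficulty_growth` high enough does flatten the curve, which is exactly the exponential-increase-in-difficulty scenario flagged above as the one real threat to the argument.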
CONCLUSION: THE PLAUSIBILITY OF RAPID CHANGE SHOULD BE CONSIDERED
Although some skepticism of these arguments is still warranted, their combined plausibility means we should consider outcomes in which change occurs much more rapidly than we might traditionally expect. Clinging to a linear perspective is not a good strategy, especially when so much is at stake. In short, we should question any long-term policy or plan that does not attempt to account for significantly different technology just ten or even five years from now.