Why Texting Defeated Videophony, Or The Ability to Multitask is Paramount



One prediction a lot of science fiction authors got wrong is the idea that all calls would someday become video calls. Today, the ability to make video calls is readily available, and yet a very small percentage of day-to-day conversations actually use video. Instead, consumers have gone the other way entirely: rather than increase the resolution of our casual phone calls by adding images, we have opted for an even lower-resolution form of communication—namely, texting.

As it turns out, there is an issue much more important than resolution when it comes to interface design. And that issue is the ability to multitask. Video calls demand your whole attention; not only do you have to appear as if you are listening, but you also have to worry about whether or not your physical appearance is up to par. One science fiction author, David Foster Wallace, got this pretty much exactly right in his classic novel Infinite Jest:

“[Video] callers now found they had to compose the same sort of earnest, slightly overintense listener’s expression they had to compose for in-person exchanges. Those callers who out of unconscious habit succumbed to fuguelike doodling or pants-crease-adjustment now came off looking rude, absentminded, or childishly self-absorbed. Callers who even more unconsciously blemish-scanned or nostril-explored looked up to find horrified expressions on the video-faces at the other end. All of which resulted in videophonic stress…

“And the videophonic stress was even worse if you were at all vain. I.e. if you worried at all about how you looked. As in to other people. Which all kidding aside who doesn’t. Good old aural telephone calls could be fielded without makeup, toupee, surgical prostheses, etc. Even without clothes, if that sort of thing rattled your saber. But for the image-conscious, there was of course no such answer-as-you-are informality about visual-video telephone calls, which consumers began to see were less like having the good old phone ring than having the doorbell ring and having to throw on clothes and attach prostheses and do hair-checks in the foyer mirror before answering the door.” (full excerpt)

Applying these same principles, it’s not hard to see why texting has become so popular. In contrast with phone calls, texting alleviates two additional causes of social stress—you no longer have to control your tone of voice, and you no longer have to answer in real time. This frees up valuable attention for other tasks. Put simply, when it comes to multitasking:

texting > voice calls > video calls

Thus, looking forward, we should expect the continued dominance of interfaces that minimize your need to pay attention while maximizing your ability to multitask. For this reason I am somewhat skeptical about whether voice activation, another science fiction favorite, will ever catch on as a dominant way of controlling our devices. In many scenarios, particularly when other people are present, voice activation is a liability that impedes rather than enables multitasking. For example, using a standard cellphone swiping interface, it is extremely easy to look up the definition of a word, skim an email, or check your calendar while simultaneously and seamlessly carrying on a conversation with the person across the table. No such multitasking is possible with voice activation.

There are of course situations where voice activation is a net benefit, such as while driving. But if cars start driving themselves, then this special case vanishes rather quickly.

I have even more doubts about virtual assistants. Many futurists have envisioned anthropomorphic digital secretaries, often with custom personalities, whom we are supposed to converse with as if they were real people. But if the goal is to maximize efficiency and minimize social stress, the last thing I would want to do is put an artificially intelligent middleman between me and my computer.

Three Types of Intelligence Augmentation: A Thought Experiment



Imagine watching a math competition. Three seemingly smart individuals compete on stage to answer a series of hard questions. The final result is a three-way tie.

Later you learn that these three individuals, who resemble each other externally, are actually very different on the inside.

The first individual is a math professor who’s spent his entire life studying the subject.

The second individual has only studied math up to the high school level. However, a revolutionary new smart drug has increased his brain functioning to the point that he can learn and master new math concepts as soon as he is exposed to them.

The third individual has no knowledge of math whatsoever. But a smart earpiece connected to the internet feeds him the right answers at lightning speed.

These three individuals are analogous to the three different types of intelligence augmentation. The first type, education, optimizes the existing brain for a particular task. The second, enhancement, upgrades the brain’s ability to master new tasks. And the third method, extension, offloads the task to an external module.

Interestingly, from an outsider perspective, the functional result of all three methods can appear to be the same. But the conscious experience of the individual in question is qualitatively different.

Is Technology Addiction the Real Problem?



In this thirty-minute talk, Robert Scoble discusses a wide array of fascinating new technologies that are just now coming to market. What a lot of these technologies have in common is their high degree of personalization. Technology is getting better at figuring out what we want and giving it to us exactly when we want it.

Near the end of the video, Scoble delivers his thesis: When it comes to technology, privacy is not the issue. People are going to get used to their lack of privacy. The bigger concern is addiction.

I agree that on the surface, addiction seems like a menacing issue. We are all familiar with modern stories of technology addiction, like the World of Warcraft player profiled in this short film.

But if we are going to talk about addiction we should agree on a basic definition. The one that I have always subscribed to is “continued use in the face of consequences.”

Let me illustrate with a few examples: Suppose you are so addicted to using your smartphone that you are constantly sending texts while driving. As a result you rear-end someone with your vehicle. You experience various financial costs, including higher insurance. But instead of learning a lesson, you get your car fixed up and go right back to your old behavior of texting while driving. Continued use in the face of consequences.

In case that doesn’t sound familiar enough, here’s another example. You have a bit of work you need to get done. You sit down to do it, but every ten minutes or so, you can’t resist checking Facebook. You do this even though on some level you kind of hate Facebook and wish it would go away. Inevitably when you check Facebook, at least one link or comment catches your eye, and what was supposed to be a momentary break turns into about half an hour of time wasted. Repeat ad nauseam. Continued use in the face of consequences.

Now these are ordinary, everyday examples, and as such there is a way in which they feel different from the obsessive World of Warcraft player who does nothing else but play a game for 400 days straight. And yet pinpointing the source of this perceived difference is not easy. When it comes to severity of consequences the texting-while-driving example is by far the worst, since in this case the addict is risking large amounts of money and possibly even his life. By contrast, the worst thing that could happen to the World of Warcraft player is a gradual deterioration in his health that probably follows from sitting around all day.

And yet the texting-while-driving addict may strike us as more normal, not because he is any less addicted, but because he still appears to be engaged with the outside world. He is leaving his house; he is driving somewhere; he is communicating with a friend via text. By contrast the World of Warcraft player (even though he plays what could be described as a social game) never leaves his house, makes excuses to his friends about why he can’t go out, and spends most of his time engaged in an alternate fantasy world.

To make the point even more clear, let’s compare World of Warcraft addiction to Facebook addiction. What is the difference really? They are both social networks populated by avatars of real people. The difference is that while World of Warcraft is a virtual world, Facebook is more of what you might call a mirror world. Facebook attempts to model and integrate with “real life” as we know it, whereas World of Warcraft has no such aspirations.

Now imagine that technology begins to systematically remove the consequences from these addictions. Self-driving cars make it so that texting while driving is no longer a concern. Miracle health drugs make it so that you can sit around all day and play World of Warcraft without becoming obese. Intelligent personal assistant software and attention-enhancing drugs make it so that you are able to stay on track while doing your work and avoid being sucked into the distraction of Facebook.

Using my original definition, no consequences means no more addiction. We have just “cured” our addicts.

For this reason I feel like technology addiction is going to be a transitional phenomenon—a moment in time when our technology is good enough to lure us into self-destructive habits, but not good enough to protect us from the consequences of those habits.

At the end of the day we are left with a new issue that I think will turn out to be more important. And it relates to our level of “engagement with the real world.”

If I give you a holodeck where you can fulfill your wildest fantasies, and you elect to never leave…the correct term for that is not addiction. At least insofar as you suffer no consequences from doing so, and the power bill that keeps the virtual reality machine going continues to get paid on time.

Rather what is interesting about the holodeck scenario is that you have just completely turned your back on the real world. You have withdrawn into your own mind, into a personalized solipsistic fantasy world where you are the one true god. Moreover, you have decided that this private heaven is preferable to the world we all share together, the real world where you don’t always get what you want, and things are often out of your control.

What’s interesting about such scenarios is that with consequences removed from the equation there is not necessarily anything wrong with such behavior, and yet on some level it is still viscerally disturbing.

In the future we are all going to be hopelessly dependent on our technology. That’s already true. In a way it’s a moot point. The big question will be, do you want to withdraw into a world of your own choosing? Or do you want to stay here in “the real world” with us?

Start Preparing Yourself Now For the End of Privacy



The intersection of privacy and technology gets a lot of press. It seems at least once a week an article comes out along the lines of this “Girls Around Me” story.

The dialogue about technology and privacy seems to place people into three camps:

  1. The “victims.” These are people who are unaware of how their technology works. An example would be the “poor” girls in the above story who apparently do not realize that their location and Facebook profiles are easily searchable by would-be pickup artists.
  2. The “educated.” These are your reasonably tech savvy folks who know how to fiddle with their Facebook privacy settings and delete their Google search histories. These people make full use of available technologies, but take precautions to configure their preferences so that certain aspects of their lives remain protected. Like the author of the above article, these people tend to advocate privacy education as being the best solution.
  3. The “relinquishers.” These people simply opt out of potentially privacy-eroding technologies. (They definitely aren’t on Facebook, for example.) Interestingly this category unites both tech-fearing Luddites and tech-loving nerds such as the “linux aficionado” mentioned in the above article.

If I had to place myself in one of the above categories, I’d choose number two. But if I’m true to my own beliefs, what I really think is this:

Privacy is going away. And no amount of fiddling with settings, educating yourself, or opting out is going to help.

Consider the following list of technology trends, then imagine these technologies matured and linked up with each other:

  • better integration of global positioning systems
  • improved and ubiquitous face recognition
  • smaller and more pervasive cameras
  • smaller and higher capacity hard drives for storing video and other recorded data
  • more widespread cloud and network access
  • better algorithms for search and data analysis
  • improved 3D scanning and modeling
  • phones embedded in glasses and contacts

I’m probably leaving some things out. But I think if you run the thought experiment and put all this together, here’s the world you get in very short order:

  • Everything you do in public will be recorded from multiple angles, put online, and made searchable by people armed only with a few fragments of data about you (first name and city, for example).
  • Anything you do in private with other people present will probably also be recorded in some form with a high chance of leakage out into the world. That is unless you take great pains to prevent this from happening.
  • Anything you do completely alone will potentially be spied upon unless you are extremely rigorous about protecting yourself. Moreover, your likely behavior during such “blackout periods” will often be inferable from the surrounding recorded periods in your life.

In this scenario, opting out of social networks and configuring privacy settings will not help you. Opting out will not prevent your face and location from being recorded by other people. And opting out will not prevent other people or impersonal algorithms from tagging this data with your name.

I anticipate a future where most crimes are impossible to get away with. A future where adulterers, liars, and gossipers get caught immediately. Where there is no longer a clear division between work, family, and social life. Where large numbers of people will have naked pictures, or at least body scans, available somewhere online. Where your entire personal history will be recallable at a moment’s notice.

Because I believe this, I have adopted the opposite strategy from what some people are recommending. I am not trying to protect my privacy by fiddling with settings. Instead I am readying myself for the end game and acclimating myself to a future with no privacy. I am actually trying to share more information, be more open, be less secretive, and be the same person, at all times, regardless of what company I am in. I am trying to construct a life for myself where I truly have nothing to hide. And I am doing this not because I necessarily want to, but because it seems like the wise transition to start making given the reality of these technologies.

Q: So is Technological Progress Accelerating or Not?



IT’S IMPORTANT WE AGREE ON AN ANSWER TO THIS QUESTION

An early self-driving car

Accelerating technological progress is not just an abstract idea. If true, it has implications regarding all our biggest life choices: what to study, what job to get, whether to save money, and whether to have kids. Not to mention bigger policy and governance issues that affect our society at large.

In futurism circles, accelerating progress seems to be slowly emerging as a consensus view. However, there is still plenty of dissent on this issue, and possibly for good reason. So this post is going to lay out what I believe to be the three main arguments for accelerating progress.

OKAY BUT WHAT DO I MEAN BY “ACCELERATING PROGRESS”

I mean that our technology is advancing at a greater than linear rate. That’s it. I don’t want to get into arguments about the exact nature of the curve, and whether it is precisely exponential or not. Instead I simply mean to defend the proposition that the rate of progress is speeding up, rather than following a linear or decelerating trajectory.
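
To put that minimally in symbols (a formalization of my own, not one used by any of the authors discussed below): let T(t) be any reasonable measure of technological capability at time t. Then the claim is simply:

```latex
% Accelerating progress: capability is growing, and the growth
% rate is itself increasing, so T(t) outruns any straight line.
\frac{dT}{dt} > 0 \qquad \text{and} \qquad \frac{d^{2}T}{dt^{2}} > 0
% The rival views: linear progress, T(t) = a + bt, where the rate
% dT/dt = b is constant; or deceleration, where d^2T/dt^2 < 0.
```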

(1) THE SUBJECTIVE ARGUMENT

To many of us, it simply feels like things are moving faster. I’ve only been on this planet thirty years, but I’ve lived through the personal computer revolution, the rise of the internet, the adoption of cellphones, and the wide-scale deployment of smartphones. Very soon I will witness the release of autonomous cars and the dawn of augmented reality. Each major technological development seems to come faster than the previous one and to be increasingly disruptive of existing economic and cultural norms.

BUT NOT EVERYONE EXPERIENCES IT THAT WAY


There are many thinkers for whom it doesn’t feel like things are speeding up. Economist Tyler Cowen is a good example. In The Great Stagnation he writes:

“Today, in contrast, apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space age television cartoon from the 1960s, have not come to pass. You don’t have a jet pack. You won’t live forever or visit a Mars colony. Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.”

Cowen is strangely dismissive of this “seemingly magical internet.” As far as technologies go, the internet is not like a car or a refrigerator. It is a way of connecting people to each other: a very fundamental thing, a general purpose technology that affects all facets of the economy. But that said, this quote is primarily a subjective statement. If Cowen feels like things haven’t changed very much in the last fifty years, then I can’t really argue with that. I just happen to feel differently.

Peter Thiel

Another acceleration skeptic is prominent venture capitalist Peter Thiel. In a recent interview, he said:

“I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).

“There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had.”

Again, in order to make his case, Thiel must treat the internet as an exception, which I still find odd. But Thiel is absolutely right that in plenty of technological areas we have underperformed, at least with regards to prior expectations. This notion of prior expectations is important. Cowen, Thiel, and other stagnationists are fond of invoking jet packs and other classic science fiction tropes as evidence of our lack of progress. For example, in this talk, Thiel mentions how we once envisioned “vacations to the moon.” And in his essay Innovation Starvation, stagnationist Neal Stephenson begins by asking “where’s my ticket to Mars?”

MAYBE OUR EXPECTATIONS WERE JUST INCORRECT

A jetpack prototype from 1968

It should go without saying that our failure to build a world that resembles science fiction novels of the fifties and sixties should not necessarily have any bearing on how we evaluate our current technological position. In many ways the present day is far more advanced than our prior imaginings. After all, pocket-sized devices that give you instant access to all the world’s knowledge are certainly nothing to scoff at. It’s just that the technological progress we’ve ended up getting is not necessarily the same progress we once expected. I’d call that a failure of prediction, not a failure of technology.

REAL VS. VIRTUAL PROGRESS

Perhaps the focus of technology has simply shifted from growing “outward” to growing “inward.” Rather than expanding and colonizing the stars, we have been busy connecting to each other, exploring the frontiers of our own shared knowledge. And perhaps this is absolutely what we should be doing. Looking ahead, what if strong virtual reality turns out to be a lot easier (and more practical) than space travel? Why go on a moon vacation if you can simulate it? Thiel laments that “we simply aren’t moving any faster,” but one could argue that our ears, eyes, and thoughts are moving faster than ever before. At what point does communication start to substitute for transportation?

At the heart of the stagnationists’ arguments I sense a bias in favor of “real things” and against “virtual things.” Perhaps this perspective is justified, since if we are talking about the economy, it is much easier to see how real things can drive growth. As for virtual things driving growth—the jury’s still out on that question. Recently we’ve seen a lot of value get created virtually and then digitally distributed to everyone at almost no cost to the consumer. And many of today’s most promising businesses are tech companies that employ very few people and generate a lot of their value in the form of virtual “bits.” Cowen himself nails this point clearly and succinctly in the third chapter of his book, where in writing about the internet, he states “a lot of our innovation has a tenuous connection to revenue.”

(2) THE EMPIRICAL ARGUMENT

Until we can agree on a standardized way to measure technological progress, all of the above discussion amounts to semantics. What is the “value” of the internet when compared to moon vacations? How many “technological progress points” does an iPhone count for? One man’s progress is another man’s stagnation. Without a relevant metric, only opinions remain.

Although no definitive measure exists for the “amount of technology” a civilization has, it might be possible to measure various features of the technological and economic landscape, and from these features derive an opinion about the progress of technology as a whole.

USING ECONOMIC MEASURES

Real median family income has stagnated

In making their case for stagnation, Cowen and Thiel commonly cite median wages, which have been stagnant since the 1970s. Cowen writes, “Median income is the single best measure of how much we are producing new ideas that benefit most of the American population.” While these median wage statistics are interesting and important, they are absolutely not a measure of our technological capability. Rather they represent how well our economic system is compensating the median worker. While this is a fairly obvious point, I think it is an important one. It’s easy to fall into the trap of conflating technological health with economic health, as if those two variables are always going to be synchronized to each other. It seems much more logical to blame stagnant median wages on a failure of our economic system rather than a failure of our technology.


Certainly one can tell a story about how it is a technological slowdown that is causing our stagnant median wages. But one can also tell the opposite story, as Erik Brynjolfsson and Andrew McAfee do in Race Against the Machine:

“There has been no stagnation in technological progress or aggregate wealth creation as is sometimes claimed. Instead, the stagnation of median incomes primarily reflects a fundamental change in how the economy apportions income and wealth. The median worker is losing the race against the machine.”

Regardless of which story is right, if we start with the question “is technological progress accelerating,” I don’t think the median wage statistic can ever provide us more than vague clues. It’s doubtful whether we can rely on a “median” measure. There is no law guaranteeing that technological gains will be shared equally and necessarily disseminate down to the median person. Cowen himself expresses this idea when he writes “a lot of our recent innovations are private goods rather than public goods.”

Productivity growth, unlike median income, has been growing.

There are of course other economic measures besides the median wage that might correlate more closely with technological progress. Productivity is a good example. However, the medium of money guarantees that such economic measures will always be at least one degree removed from the technology they are trying to describe. Moreover, it is difficult to calculate the monetary value of some of our more virtual innovations because of the “tenuous connection between innovation and revenue” mentioned above.

COUNTING TECHNOLOGICAL ACHIEVEMENTS

Another strategy for measuring technological progress is to count the frequency of new ideas or other important technological landmarks.

In The Great Stagnation, Cowen cites a study by Jonathan Huebner which claims we are approaching an innovation limit. In the study, Huebner employs two strategies for measuring innovation.

The first method involves counting the number of patents issued per year. Using patents to stand in for innovation strikes me as strange, and I’m sure many people who are familiar with the problems plaguing our patent system would agree. A good critique comes from John Smart, who writes:

“Huebner proposes that patents can be considered a “basic unit of technology,” but I find them to be mostly a measure of the kind of technology innovation that humans consider defensible in particular socioeconomic and legal contexts, which is a crude abstraction of what technology is.”

Huebner’s other method involves counting important technological events. These events are taken from a list published in The History of Science and Technology. Using this data, Huebner produces the following graph.

As you can see, the figure shows our rate of innovation peaking somewhere around the turn of the century, and then dropping off rapidly thereafter.

ANY LIST OF IMPORTANT EVENTS IS HIGHLY SUBJECTIVE

While counting technological events is an interesting exercise, it’s hard to view such undertakings as intellectually rigorous. After all, what criteria make an event significant? This is not a simple question to answer.

Things get more complicated when one considers that all innovations are built upon prior innovations. Where does one innovation end and another innovation start? These lines are not always easy to draw. In the digital domain, this problem only gets worse. The current debacle over software patents is symptomatic of the difficulty of drawing clear lines of demarcation.

By way of example, ask yourself if Facebook should count as an important innovation landmark. One can easily argue no, since almost all of Facebook’s original features existed previously on other social networking sites. And yet Facebook put these features together with a particular interface and adoption strategy that one could just as easily argue was extremely innovative. Certainly the impact of Facebook has not been small.

OTHER ATTEMPTS TO EMPLOY THE EVENT-COUNTING STRATEGY

In The Singularity is Near, Ray Kurzweil also attempts to plot the frequency of important technological landmarks throughout time. However, instead of using just one list of important events, he combines fifteen different lists in an attempt to be more rigorous. In doing so, he reaches the opposite conclusion of Huebner: namely that technological progress has been accelerating throughout all of Earth’s history, and will continue to do so.

Which is not to say Kurzweil is right and Huebner is wrong (in fact there are methodological problems with both graphs), but that this whole business of counting events is highly subjective, no matter how many lists you compile. I think if we want to find a useful empirical measure of our technological capabilities, we can do better.

MEASURING THE POWER OF THE TECHNOLOGY DIRECTLY

The following definition of technology comes from Wikipedia:

“Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems or methods of organization in order to solve a problem or perform a specific function.”

So if we want to measure the state of technology, it follows that we might want to ask questions such as “how many functions can our technology perform?” “how quickly?” and “how efficiently?” In short: “how powerful is our technology?”

Of course this quickly runs into some of the same problems as counting events. How do you define a “specific function?” Where does one function end and another begin? How can we draw clear lines between them?

THE SPECIALNESS OF COMPUTERS SHOULD NOT BE OVERLOOKED

Fortunately some of these problems evaporate with the arrival of the computer. Because if technology’s job is to perform specific functions, then computers are the ultimate example of technology. A computer is essentially a tool that does everything. A tool that absorbs all other technologies, and consequently all other functions.

In the early days of personal computing it was easy to see your computer as just another household appliance. But these days it might be more appropriate to look at your computer as a black hole that swallows up other objects in your house. Your computer is insatiable. It eats binders full of CDs, shelves full of books, and libraries full of DVDs. It devours game systems, televisions, telephones, newspapers, and radios. It gorges on calendars, photographs, filing cabinets, art supplies and musical instruments. And this is just the beginning.

Along the same lines, Cory Doctorow writes:

“General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”

In fact, computers and technology writ large seem to be merging together so rapidly that using a measurement of one to stand in for the other seems like a pretty defensible option. For this reason I feel that computing power may actually be the best metric we have available for measuring our current rate of technological progress.

Using computing power as the primary measure of technological progress unfortunately prevents us from modeling very far back in history. However, if we accept the premise that computers eventually engulf all technologies, this metric should only get more appropriate with each passing year.

MOORE’S LAW

When it comes to analyzing the progress of computing power over time, the most famous example is Moore’s Law, which predicts (correctly for over 40 years) that the number of transistors we can cram onto an integrated circuit will double every 24 months.
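
Since the law is just compound doubling, it can be stated in a few lines of code. Here is a minimal sketch (the starting chip and its transistor count are hypothetical numbers chosen purely for illustration):

```python
def transistor_count(years_elapsed, initial_count, doubling_period=2.0):
    """Project a transistor count forward under Moore's Law:
    the count doubles once every `doubling_period` years."""
    return initial_count * 2 ** (years_elapsed / doubling_period)

# Fixed-period doubling compounds dramatically: a hypothetical
# 10,000-transistor chip, doubling every 24 months for 40 years,
# grows by a factor of 2^20, i.e. roughly a million.
print(transistor_count(40, 10_000))  # 10485760000.0
```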

How long Moore’s law will continue is of course up for debate, but based upon history the near-term outlook seems fairly positive. Of course, Moore’s Law charts a course for a relatively narrow domain. The number of transistors on a circuit is not an inclusive enough measure to represent “computing power” in the broader sense.

One of Ray Kurzweil’s more intriguing proposals is that we expand Moore’s law to describe the progress of computing power in general, regardless of substrate:

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.”

“As the figure demonstrates there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price performance of computing long before integrated circuits were even invented.”

Measured in calculations per second per $1000, the power of computers appears to have been steadily accelerating throughout the last century, even before integrated circuits got involved.

OTHER MEASURES OF COMPUTING POWER

While I like Kurzweil’s price-performance chart, the $1000 in the denominator ensures that this is still an economic variable. Including money in the calculation inevitably introduces some of the same concerns about economic measures mentioned earlier in this essay.

So to eliminate the medium of money entirely, we might prefer a performance chart that tracks the power of the absolute best computer (regardless of cost) in a given time period. Fortunately, Kurzweil provides something very close to such a chart with his graph of supercomputer power over time.

THE NETWORK AS A SUPERCOMPUTER

Just as all technology is converging toward computers, there is a sense in which all computers are merging together into a single global network via the internet. This network can itself be thought of as a giant supercomputer, albeit one composed of other smaller computers. So by measuring the aggregate size of the network we might also get a strong indication of our current rate of computing progress.

Please note that I do not necessarily support many of Kurzweil’s more extreme claims. Rather I am simply borrowing his charts to make the narrow (and fairly uncontroversial) point that computing power is accelerating.

THE SOFTWARE PROBLEM

While increasing computer power makes more technological functions possible, a bottleneck might exist in our ability to program these functions. In other words, we can expect to have the requisite hardware, but can we expect to have the accompanying software? Measuring the strength of hardware is a straightforward process. By contrast, software efficacy is a lot harder to quantify.

I think there are reasons to be optimistic on the software front. After all, we will have an ever growing number of people on the planet who are technologically enabled and capable of working on such problems. So it seems unlikely that software challenges are going to stall technological progress. That’s not a proof, of course. Software stagnation is possible, but anecdotally I don’t see evidence of it occurring. Instead I see Watson, Siri, and the Google autonomous car, and get distinctly the opposite feeling.

ULTIMATELY NO METRIC IS PERFECT

At this point, you still may not accept my premise of a growing equivalence between computers and technology in general. Admittedly, it’s not a perfect solution to the measurement problem. However, the idea that available computing power will play a key role in determining the pace of technological change should not seem far-fetched.

(3) THE LOGICAL ARGUMENT

Empirical analysis is useful, but as is clear by now, it can also be a thorny business. In terms of explaining why technological progress might be accelerating, a simple logical argument may actually be more convincing.

A feedback loop

A key feature of technological progress is that it contributes to its own supply of inputs. What are the inputs to technological innovation? Here is a possible list:

  • People
  • Education
  • Time
  • Access to previous innovations
  • Previous innovations themselves

As we advance technologically, the supply of all five of these inputs increases. Historically, technological progress has enabled larger global populations, improved access to education, increased people’s discretionary time by liberating them from immediate survival concerns, and provided greater access to recorded knowledge.

Moreover, all innovations by definition contribute to the growing supply of previous innovations that new innovations will draw upon. Many of these innovations are themselves “tools” that directly assist further innovation.

Taking all this into account, we can expect technological progress to accelerate, as with any feedback loop. The big variable that could defeat this argument is the possibility that useful new ideas might become harder to find with time.

However, even if finding new ideas gets harder, our ability to search the possibility space will be growing so rapidly that anything less than an exponential increase in difficulty should be surmountable.
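
To see the shape of that claim in numbers, here is a toy calculation (the figures are invented for illustration; this is a sketch of the argument, not a forecast): let search capacity double every two years while the difficulty of finding new ideas grows polynomially. The effective rate of progress, capacity divided by difficulty, still takes off:

```python
# Toy model: exponentially growing search capacity versus
# polynomially growing idea-difficulty. The effective innovation
# rate (capacity / difficulty) still grows without bound.
for year in (10, 20, 40, 80):
    capacity = 2 ** (year / 2)   # search ability doubling every 2 years
    difficulty = year ** 3       # ideas getting polynomially harder to find
    print(year, capacity / difficulty)
# year 10: ~0.03, year 20: ~0.13, year 40: ~16, year 80: ~2.1 million
```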

CONCLUSION: THE PLAUSIBILITY OF RAPID CHANGE SHOULD BE CONSIDERED

Although some skepticism of these arguments is still warranted, their combined plausibility means we should consider outcomes in which change occurs much more rapidly than we might traditionally expect. Clinging to a linear perspective is not a good strategy, especially when so much is at stake. In short, we should question any long-term policy or plan that does not attempt to account for significantly different technology just ten or even five years from now.

A Detailed Critique of “Race Against the Machine”




FIRST OF ALL, THIS IS AN EXTREMELY IMPORTANT BOOK

Race Against the Machine deserves praise for jump-starting an important discussion about the effect of technology on our economy. As the authors point out, the impact of computers and information technology has been largely left out of most analyses of the causes of our current unemployment woes. This book, therefore, is an attempt to “put technology back in the discussion.”

BUT IT SUFFERS FROM SCHIZOPHRENIA

The book is divided into roughly two halves: one pessimistic and one optimistic. The first three chapters comprise the pessimistic portion and make a compelling case for how accelerating technological progress and rapid productivity gains are not only creating unemployment, but also contributing to greater inequality. The last two chapters take a more optimistic tone and attempt to lay out possible solutions to these problems.

Unfortunately, the pessimistic chapters are much more convincing. The net effect is rather dissonant, not unlike a general rounding up his troops and announcing, “Gentlemen, we’re outmatched in every possible way. Now go out there and win!”

An autonomous car at Stanford University

A QUICK SUMMARY OF THE “PESSIMISTIC” CHAPTERS

Race Against the Machine begins by building a strong argument for technological unemployment. The authors draw our attention to recent innovations like self-driving cars and Jeopardy-winning computers and show how such advances threaten to encroach further and further into realms once dominated by human labor. Such technologies ultimately affect all sectors of the economy since computers are a prime example of what economists call a “General Purpose Technology.” Like steam power and electricity, computers reside in a category of innovation so powerful that they “interrupt and accelerate the normal march of economic progress.” Moreover, thanks to Moore’s law, computers are improving at an exponential rate, so we can expect very disruptive changes to arrive very quickly indeed.

Making matters worse, a host of related trends result in the benefits of technological progress not being shared equally. Not only is the value of highly skilled workers diverging sharply from that of low skilled workers, but the value of capital is increasing relative to that of labor. Also important is the “superstar effect.” Technological advances extend the reach of superstars in various fields and allow them to win ever larger market shares. This comes at the expense of the many, since “good, but not great, local competitors are increasingly crowded out of their markets.” Putting all this together you get the central thesis of Race Against the Machine: namely, that digital technologies are outpacing the ability of our skills and organizations to adapt.

A humanoid robot

HOW THE AUTHORS DISTANCE THEMSELVES FROM THE “END-OF-WORK”

In some ways this is a familiar argument. The idea that technology might replace the need for human workers on a large scale has been floating around for a long time, at least since the industrial revolution. However, it should be noted that early on the authors distance themselves from what they call the “end-of-work” crowd, thinkers like Jeremy Rifkin, Martin Ford, and even John Maynard Keynes, who have argued that with time the role of human labor is bound to diminish. Certainly, one can understand why the authors would be cagey about being associated with this idea. Historically, predictions regarding the coming obsolescence of human labor have been wildly exaggerated, and economists generally view such arguments as fallacious.

So what differentiates Race Against the Machine from a more traditional end-of-work argument? According to the authors:

“So we agree with the end-of-work crowd that computerization is bringing deep changes, but we’re not as pessimistic as they are. We don’t believe in the coming obsolescence of all human workers. In fact, some human skills are more valuable than ever, even in an age of incredibly powerful and capable digital technologies. But other skills have become worthless, and people who hold the wrong ones now find that they have little to offer employers. They’re losing the race against the machine, a fact reflected in today’s employment statistics.”

In short, the authors are more optimistic because they believe there will still be plenty of jobs for humans in the future. We just need to update our skills and organizations to cope with new digital technologies, and then we will be able to create new avenues of employment and save our struggling economy.

BUT THE “OPTIMISTIC” CHAPTERS AREN’T SO ENCOURAGING

Which brings us to those last two chapters, the unconvincing ones I alluded to earlier. For I believe Race Against the Machine suffers from the same problem as a lot of nonfiction books: It does a great job of stating the problem, but a not-so-great job of laying out the solution.

Human-computer teams competing at “cyborg chess”

SO HOW ABOUT WE RACE WITH MACHINES?

The first suggestion the authors make can be summarized as “race with machines.” A human-machine combo has the potential to be much more powerful than either a human or a machine alone. So it’s not simply a question of machines replacing humans; it’s a question of how humans and machines can best work together.

I don’t disagree with this point on the surface. But I fail to see how it suggests a way out of our current predicament. The human-machine combo is a major cause of the superstar economics described earlier in the book. Strengthen the human-machine combo and the superstar effect will only get worse. In addition, if computers are encroaching further and further into the world of human skills, won’t the percentage of human in the human-machine partnership just keep shrinking? And at an exponential pace?

Moreover, as I’ve written about before on this site, the human-machine partnership can sometimes be less than the sum of its parts. Consider the example of airline pilots:

“In a draft report cited by the Associated Press in July, the agency stated that pilots sometimes “abdicate too much responsibility to automated systems.” Automation encumbers pilots with too much help, and at some point the babysitter becomes the baby, hindering the software rather than helping it. This is the problem of “de-skilling,” and it is an argument for either using humans alone, or machines alone, but not putting them together.” (link)

PERHAPS ORGANIZATIONAL INNOVATION WILL SAVE US

The authors go on to discuss the importance of “organizational innovation.” In particular, they discuss the creation of new business platforms that might empower humans to compete in new marketplaces.

Again, I agree in theory. Certainly some new platform may hold the key to productively mobilizing the unemployed. But current examples are far from encouraging. The authors cite websites like eBay, Apple’s App Store, and Threadless. An obvious point would be that the kind of people who are able to hustle and make a living on such websites are not exactly average workers in any sense of the word. Not everyone can run an online retail store, program an app, or design a t-shirt. But that’s beside the point. The question we should be asking is: will such online marketplaces grow in the future? Will they expand to the point that they can encompass more and more ordinary workers?

I am highly skeptical. Once again, superstar economics apply here, since effectively everyone in these markets is competing with everyone else. One potential solution is the growth of niche markets. If you focus on selling unique items to a niche audience perhaps you can carve out your own little market in which you are the lone superstar.

But this idea also has its problems. How many niches can there possibly be? Enough to provide employment for the legions of truck drivers and supermarket checkers who may soon be exiting the workforce?

When discussing technology and unemployment, I think it is important not to leave digital abundance out of the discussion. Digital abundance has the potential to be just as disruptive as automation. Traditional businesses are under attack from two sides. Services are being automated, while at the same time goods are being digitized.

Imagine the domestic entrepreneur who has started his own eBay store. He sells niche action figures to a few enthusiastic fans. Nonetheless, enough fans exist that he can make a decent living, all thanks to the wonders of eBay’s “organizational innovation.”

Objects made using a 3D printer

Enter affordable desktop 3D printing, a technology that is rapidly arriving on the scene. All of a sudden once eager customers can buy cheap raw materials and print all the action figures they want. Digital files containing the precise specs for figures get designed, released, and traded extensively on file sharing sites. An explosion of innovation for sure, but also a potential threat to a business model that focuses too much on the sale of unique tangible goods.

Thought experiments like this reveal why intellectual property and digital rights management are going to become increasingly hotly debated issues. As tangible goods become digitized they go from being tangible property to intellectual property. So the efficacy of a lot of future businesses depends on the efficacy of intellectual property, and a survey of recent history quickly reveals the troubles inherent in this area.

OKAY, BUT AREN’T THERE OTHER TYPES OF ORGANIZATIONAL INNOVATION?

So far I have focused on marketplaces for goods. I should note that there are also online labor marketplaces like TaskRabbit and Mechanical Turk. These websites provide a great service by efficiently matching demand for labor to humans willing to work. While increasing efficiency is beneficial, such websites will be of limited help if demand for average-skilled labor falls in the aggregate.

Now I don’t want to sound overly pessimistic. In general, I would agree that the unemployed represent a huge slack resource, and quite possibly somebody is going to come up with some previously unimagined way to harness this large pool of people. But at the moment, such organizational innovation is just a theory. I do not see the seeds of a workable solution in the current crop of platforms.

ON MICROMULTINATIONALS

As their final example of organizational innovation, the authors mention the promise of “micromultinationals.” They write:

“Technology enables more and more opportunities for what Google chief economist Hal Varian calls “micromultinationals”—businesses with less than a dozen employees that sell to customers worldwide and often draw on supplier and partner networks. While the archetypal 20th-century multinational was one of a small number of megafirms with huge fixed costs and thousands of employees, the coming century will give birth to thousands of small multinationals with low fixed costs and a small number of employees each.”

I don’t know if this quote is meant to be taken literally, but for fun let’s crunch some numbers. The coming century (100 years) will give birth to thousands (at most 9,999) of multinationals, each with a low number of employees (fewer than 12). Therefore:

9,999 × 11 / 100 ≈ 1,100 jobs/year. Not exactly encouraging.

Exponential growth seems mild at first and then suddenly manifests as extreme changes

LOOKING TO COMBINATORIAL EXPLOSION FOR HOPE

Early on in the book, the authors take time to explain the incredible power of exponential growth. They discuss Moore’s law and quote Ray Kurzweil’s book The Singularity is Near to illustrate how exponential growth can quickly shift from modest gains to “jaw-dropping” changes.

This puts us in a dire place. If we accept the authors’ premise of a losing race, in which technology (progressing exponentially) is outrunning our skills and our institutions, then how can society hope to catch up? Trying to win a race against exponential growth sounds like an impossible task.

In chapter four, the authors claim to come up with the answer: “combinatorial explosion.”

Combinatorial explosion is the idea that new ideas are combinations of two or more old ideas. Since digital technologies facilitate the easy exchange of information, and ideas—unlike physical resources—can’t be used up, we have virtually limitless possibilities for innovation.
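
A quick counting sketch (my own illustration, not the authors’) shows why the word “explosion” fits: with n existing ideas, the pool of candidate combinations grows far faster than the pool of ideas itself.

```python
from math import comb

# With n existing ideas, count the candidate new combinations:
# pairs grow quadratically, while subsets of two or more ideas
# grow as 2^n, dwarfing the idea count itself.
for n in (10, 20, 40):
    pairs = comb(n, 2)            # two-idea combinations
    subsets = 2 ** n - n - 1      # every combination of 2+ ideas
    print(n, pairs, subsets)
# n=10: 45 pairs, 1,013 subsets; n=40: 780 pairs, ~1.1 trillion subsets
```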

“Combinatorial explosion is one of the few mathematical functions that outgrows an exponential trend. And that means that combinatorial innovation is the best way for human ingenuity to stay in the race with Moore’s Law.”

This suggests a strange dichotomy. As if Moore’s Law is the exclusive tool of machines, while combinatorial explosion is the exclusive tool of humans. This is clearly false. Combinatorial explosion is a huge cause of our current situation. It is a primary reason why disruptive technologies are moving so fast in the first place.

Here’s an easy example: IBM is repurposing Watson—the Jeopardy-winning computer—to perform medical diagnosis. So here we see one idea colliding with another in true combinatorial fashion, and what’s the result? Yet another potential threat to jobs, this time in the medical field.

Technologies like Watson can readily be repurposed for a variety of uses.

HYPERSPECIALIZATION AND INFINITE NUMBERS OF MARKETS

Hyperspecialization is the authors’ answer to the problem of superstar economics:

“In principle, tens of millions of people could each be a leading performer—even the top expert—in tens of millions of distinct, value-creating fields.”

In principle maybe. In practice there are huge obstacles. Again, what’s to stop one superstar-machine combo from just dominating multiple fields? Or even just one machine? In the health care industry for example, computers like Watson are going to be able to mine the literature of all fields of medicine. After all, to a computer, what’s a few thousand more documents to read? Machines can rapidly scale up their expertise in ways that humans simply can’t.

More importantly we should examine the term “value-creating fields.” Value under our current system is closely tied to scarcity. Digital abundance directly undermines this source of value. So once again we are confronted with an intellectual property challenge. If we are going to have an economy where everyone is an expert in a different field and produces “bits,” we are going to need a mechanism by which these non-scarce bits translate into an income. The truth is we already have numerous such experts. The Internet is overflowing with amateurs who voluntarily immerse themselves in hyper-specialized tasks purely for enjoyment, not because this path is necessarily a viable strategy for making money.

BUT DON’T WE ALL HAVE SOMETHING UNIQUE TO CONTRIBUTE?

Yes, of course. Human creativity is astounding, and everyone has something to offer. But people’s output—however unique, interesting, and valuable—will not necessarily be monetizable, especially in an abundant digital environment.

At this point, I want to return to a stray quote from earlier in the book, because I think it presents the opportunity to make an important point.

“…digital technologies create enormous opportunities for individuals to use their unique and dispersed knowledge for the benefit of the whole economy.”

When I look at the Internet and communications technologies, I see a huge threat to this “dispersed knowledge.” The Internet has a way of destroying information asymmetry, which is another important factor to consider when looking at future employment. Any jobs that depend upon having exclusive access to knowledge that no one else has are potentially at risk in a world where increasingly everyone is connected and data is widely shared.

“Superstar” teacher: Salman Khan

INVESTMENT IN HUMAN CAPITAL

Education seems like the most straightforward solution to our problem. If our skills are falling behind, then we’d better acquire new skills, right?

I am highly dubious of education’s ability to solve this problem. For one, the most promising experiments in education right now, those that use technology aggressively, like Khan Academy or Stanford’s online courses, have the potential to create unemployment in the education field. In addition to the long-term promise of fully automated learning environments, superstar economics rears its head once again. After all, Khan Academy is built around a superstar teacher: Khan himself. And Stanford’s recent Artificial Intelligence course allowed one professor to effectively reach 58,000 students.

The authors do present an important counter:

“Local human teachers, tutors, and peer tutoring can easily be incorporated into the system to provide some of the kinds of value that the technology can’t do well, such as emotional support and less-structured instruction and assessment. For instance, creative writing, arts instruction, and other “soft skills” are not always as amenable to rule-based software or distance learning.”

Person-to-person interaction is indeed an important aspect in a lot of teaching, and won’t be vanishing any time soon. However, it arguably becomes less important the further you move up the educational ladder. A fifth grader needs lots of emotional support and hands-on instruction, but a self-directed higher education student may need almost none. College is so expensive right now that I can imagine people increasingly forgoing it in favor of cheaper, more automated learning options.

But even with person-to-person learning, technological advances mean that single tutors or teachers will increasingly be able to meet the needs of more and more students. If the learning software does a halfway decent job, then the necessity of human intervention should decrease with time.

In addition, going back to the idea of digital abundance, such human intervention may be increasingly available for free. Already, it’s stunningly easy to go on the Internet and find volunteers who will provide emotional support and helpful feedback, at zero cost. Online communities around “soft skills” like creative writing are particularly vibrant, and offer the opportunity to develop a craft with a huge support network that easily rivals what you would get from a traditional paid learning experience.

I think there is a cultural bias towards judging online interactions as somehow always less valuable than real space interactions. With every passing year, as the resolution of communications technologies increases, this point of view becomes increasingly absurd. Cultural norms may move slowly, but I suspect they will eventually come around on this issue.

But all of these considerations aside, there is a much bigger problem. One cannot escape the simple truth that humans learn slowly and technology advances quickly. If we take exponential growth seriously, how can education expect to keep up? Are we going to retrain unemployed truck drivers to become app programmers? Chances are, by the time such retraining is complete, technology will have moved on. And don’t forget that the machines will increasingly be educating themselves.

One solution might be augmenting human thinking capability. If we could upgrade humans the way we upgrade machines, then “the race” would be over, and racing with machines would make more sense. This may sound far-fetched, but in these futuristic times nothing should be ruled out. In Andrew McAfee’s own words, “Never say never about technology.” The question is when will such technologies arrive? And what societal upheaval might we be in for in the meantime?

Innovation is great but it doesn’t necessarily equal job creation.

POLICY RECOMMENDATIONS

The authors make a series of common sense policy recommendations that affect institutions like education, business, and law. Most of these suggestions are great, and might lead to a better society, but it is unclear how any of them will create jobs. Rather the goal of these suggestions seems to be to allow innovation and progress to flourish, which in my opinion may only accelerate the process of job loss, as per my arguments above.

The one exception might be suggestion number 13:

“Make it comparatively more attractive to hire a person than to buy more technology. This can be done by, among other things, decreasing employer payroll taxes and providing subsidies or tax breaks for employing people who have been out of work for a long time. Taxes on congestion and pollution can more than make up for reduced labor taxes.”

I suppose this might help keep some jobs around longer, but at the expense of investment in technology that presumably we would want to encourage. This seems directly at odds with the authors’ other more pro-innovation suggestions. Do we want technological progress or not?

RE-EVALUATING THE DESIRABILITY OF “WORK”

One solution the authors quickly dismiss is wealth redistribution. Their reasoning?

“While redistribution ameliorates the material costs of inequality, and that’s not a bad thing, it doesn’t address the root of the problems our economy is facing. By itself, redistribution does nothing to make unemployed workers productive again. Furthermore, the value of gainful work is far more than the money earned. There is also the psychological value that almost all people place on doing something useful. Forced idleness is not the same as voluntary leisure.”

As a culture we’re deeply attached to the idea of jobs, but I suspect many of us wouldn’t have too much trouble getting over our attachment.

I think a distinction ought to be made between wage labor and other perfectly meaningful ways of occupying one’s time. Looking ahead, perhaps our cultural reverence for wage labor is misplaced. After all, one way to look at wage labor is that it is a mechanism that forces us to spend time on what the short-term market thinks is valuable, rather than on what we as individuals think is valuable. Sure, lots of people love their jobs. But lots of people hate their jobs too. If you liberated all the people who hate their jobs from the constraints of wage labor, chances are a decent portion of them would find something more productive to spend their time on. And society as a whole might benefit tremendously.

I do not rule out the possibility that eventually we may find a way to gainfully employ everybody while keeping our current system intact. If some human innovator does not crack this problem, then certainly a sufficiently powerful computer might. But I shouldn’t have to point out the absurdity of asking in effect, “Hey Hal, can you figure out what jobs we should be doing?” It seems to me that long before then, we ought to be re-examining whether we even want traditional jobs in the first place. Perhaps we should be working with machines in order to win the race against labor.

The authors repeatedly state that our institutions are losing a race with technology. But they do not consider the possibility that our economy itself might be one of these trailing institutions.

NOT TO OVERSTATE THE LEVEL OF DISAGREEMENT

Near the end of the book, the authors do admit that there are “limits to organizational innovation and human capital investment.” So most likely they would not be too surprised by many of the criticisms I’ve leveled above.

And I would agree with the authors that, excepting the jobs issue, all of these new technologies are unequivocally a good thing. There is clearly a lot of cause for optimism in the general sense.

If technology significantly brings down the cost of goods like health care, unemployment won’t be so threatening.

In addition, if we can advance technology quickly enough there are two long term solutions to the unemployment problem. The first, direct intelligence augmentation, I have already mentioned above. The second solution involves cost of living. Specifically, if technology can significantly lower costs of living, then declining income prospects for average individuals will not sting so much. However, to accomplish this we would have to see dramatic drops in the price of essential goods like housing and healthcare, drops which may happen eventually, but may not arrive quickly enough to prevent social unrest.

CONCLUSION

In the final assessment, I think my biggest issue with this book is the way the authors fail to effectively distinguish themselves from the “end-of-work” crowd. After stating in the opening chapter that they are “more optimistic” about the future of human labor, they do not present any credible reasons for such optimism. The authors may claim they do not believe in the “end of work,” but their claims will not prevent me from filing them next to Martin Ford and Jeremy Rifkin on my bookshelf.