
Implementing a Basic Income via a Digital Currency



The idea of a basic income is an old one, but it has gained renewed interest in recent times. A basic income is appealing both as a solution to poverty and as a response to possible future technological unemployment.

But how do you pay for a basic income? Could it be paid for through the very act of money creation?

Modern monetary systems typically feature mechanisms for both creating and destroying money. In virtual economies these mechanisms are referred to as “faucets” and “sinks” respectively. How you define faucets and sinks says a lot about how your monetary system works and who it benefits.

Let’s use Bitcoin as an example. Bitcoin creates money through a computationally intensive “mining” process that leaks new coins into the system at a predetermined rate. This faucet rewards people who have a lot of computational power to spend on this mining process. It also rewards early adopters since the faucet is slated to be slowed down and eventually turned off. Bitcoin doesn’t really feature any sinks, other than the fact that once bitcoins are lost they can never be retrieved. So one might say that carelessness is a kind of sink under the Bitcoin system.
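To make the faucet concrete, here is a minimal Python sketch of Bitcoin’s predetermined issuance schedule. (The real network counts rewards in integer satoshis and enforces the schedule through consensus rules; this toy version just shows the shape of the curve.)

    def block_subsidy(height, initial_btc=50.0, halving_interval=210_000):
        """Bitcoin's faucet: the per-block reward halves every 210,000 blocks."""
        halvings = height // halving_interval
        # After enough halvings the reward reaches zero and the faucet turns off.
        return initial_btc / (2 ** halvings) if halvings < 64 else 0.0

    # Summing the geometric series shows why the faucet eventually shuts off:
    # 210,000 blocks * 50 BTC * (1 + 1/2 + 1/4 + ...) caps supply near 21 million.
    total = sum(block_subsidy(i * 210_000) * 210_000 for i in range(64))
    print(f"{total:,.0f} BTC")  # ≈ 21,000,000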

Rewarding early adopters and those with computational power does make a certain kind of sense. Early adopters need to be incentivized or else the currency might never take off. And computational power is a stable, scarce resource that, in the case of Bitcoin, is used to perform critical maintenance operations that keep the currency running.

However, the dark side is that Bitcoin is destined to create a new moneyed elite made up of this coalition of early adopters and computational donors. On the surface, it does not strike one as necessarily the most democratized monetary system possible.

Instead, one might imagine a currency that creates an equal amount of money in everyone’s wallet, every year. Such a system has both upsides and downsides, and there would be a lot of kinks to work out. That said, I think I might prefer such a system to Bitcoin.

One upside is that a basic income would be built directly into the system of money itself. If properly executed, everyone using the currency would be automatically insulated from the worst kinds of poverty. You would get a social safety net without the taxes.

Another upside is that this is probably an even better incentive for early adoption than Bitcoin’s deflationary model. Start using the currency and you start getting an income. For many people that would be hard to turn down.

One downside is that to guard against abuses of the system (e.g. creating two accounts and collecting two incomes), you would have to give up anonymity. Anonymity is a big value for a lot of people. However, an even larger group probably doesn’t care that much about it. They’ve already accepted and gotten used to our current, not very anonymous monetary system, and for them this would simply be a lateral move. Furthermore, if you buy contemporary arguments about the inevitable arrival of a post-privacy society, then anonymity may simply be an impossible goal to strive for anyway.

Another potential downside is that you would need a centralized authority to manage such a system. Again, this is going to be a problem for certain libertarian-leaning users, but it may not be that much of a problem for the average Joe. Arguably, transparency and accountability are more important than decentralization for decentralization’s sake. Those attributes could possibly be preserved in a well-designed centralized system. In addition, one could argue that Bitcoin, even though it is decentralized in theory, still leads to a form of centralized power—namely the early adopters and computational donors mentioned above.

Such a system might also need sinks to protect against inflation. One idea might be to give the money an expiration date. Another idea might be to empower the central authority to sell some sort of useful product or service, and then simply destroy the money after the purchase is made. There are countless ways this could be designed. The possibility space is wide open, and that is what is most exciting to me about digital currencies. (Though of course one must contend with the fact that governments are not always going to look kindly on monetary experimentation…)
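As a thought experiment, here is a toy Python sketch combining the two mechanisms discussed in this post: a faucet that credits every registered wallet equally each period, and an expiration-date sink that destroys coins after a fixed number of periods. Every name and parameter here is a hypothetical illustration, not a concrete proposal.

    from collections import deque

    class BasicIncomeLedger:
        """Toy currency with an equal faucet and an expiration sink."""

        def __init__(self, income_per_period=1_000, lifetime_periods=10):
            self.income = income_per_period    # faucet: equal credit for everyone
            self.lifetime = lifetime_periods   # sink: coins expire after this many periods
            self.wallets = {}                  # wallet id -> deque of (age, amount) batches

        def register(self, wallet_id):
            self.wallets.setdefault(wallet_id, deque())

        def tick(self):
            """Advance one period: age all coins, destroy expired ones, pay income."""
            for coins in self.wallets.values():
                aged = [(age + 1, amount) for age, amount in coins]
                coins.clear()
                # sink: keep only batches younger than the expiration age
                coins.extend(batch for batch in aged if batch[0] < self.lifetime)
                # faucet: brand-new money appears in every wallet
                coins.append((0, self.income))

        def balance(self, wallet_id):
            return sum(amount for _, amount in self.wallets[wallet_id])

In this model the money supply stops growing on its own: after lifetime_periods ticks, every wallet settles at lifetime_periods × income_per_period, so the sink eventually destroys exactly as much as the faucet creates, which is one crude way to bound inflation without taxation.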

Taxonomy of Technological Unemployment Solutions (and Defeaters)



This post represents my latest attempt to categorize the possible solutions to technological unemployment. It’s largely based on episode 14 of the Review the Future podcast, so for a more detailed treatment of this topic you can listen here.

To begin, I’d like to talk about some of the defeaters to technological unemployment that could mean either (a) it won’t happen or (b) it won’t be a problem.

DEFEATERS

Lowered Cost of Living – The bounty produced by new technologies could be so great that high rates of unemployment may not be that big a deal. If current trends reverse, and advanced technologies begin to drive down the price of key goods like housing, health care, and education, then people might be able to live reasonably comfortable lives with very little income. The small income required could come from just a few odd jobs performed throughout the year, or alternatively people might cluster in households where the income of those who are fortunate enough to still have work is shared and used to pay for everyone’s expenses. In addition, with high returns to capital and a low cost of living, interest on even very small investments might allow for a reasonably comfortable life. And for those who still fall through the cracks, non-profits and charities might step in. Such philanthropic activities will be greatly empowered since the cost of doing good will be much lower than ever before.

Intelligence Augmentation – If people are losing jobs to smart machines, then one solution is to make people smarter. The obvious first step is simply better education, enabled by technological advances such as online distribution, augmented reality, gamification, individualized learning environments, and so on. However, it may eventually become possible to actually enhance human intelligence, whether through drugs, genetics, or brain implants. As a result, people could become upgradeable in the way that machines are today, thus closing the competitive gap between humans and machines.

New Demands and New Platforms – This outcome has two components. First, there may be a growing market for new kinds of goods that cannot be automated away or that directly monetize “humanness.” These include positional goods like status, time-limited goods like attention, and human-centric goods like shared experiences. Second, new peer-to-peer platforms might enable the monetization of these somewhat intangible goods in a way that allows the participation of significant portions of the population.

SOLUTIONS

If the above defeaters do not happen (or do not happen soon enough) then technological unemployment may require solutions that are more governmental in nature. I have broken these solutions into four categories.

Technological Relinquishment – If technology is causing the problem, we could just give up certain technologies. In its most extreme form, relinquishment would mean banning certain technologies outright. However, there are many softer forms of relinquishment, such as incentivizing businesses to hire human workers instead of using machines. While on the surface such policies might be seen as “pro-human,” they can just as easily be viewed as “anti-technology.” The idea here would be to limit the spread of technology into areas where human jobs are being threatened.

Artificial Scarcity – A world of technological unemployment is also probably a world of abundance. This abundance has two dimensions: labor and goods. An abundance of human labor will exist because we will have more people willing to work than can find jobs. Thus, we could artificially constrain the supply of labor by limiting the number of hours people can work. This could be done through shorter work days, shorter work weeks, or shorter careers (early retirement).

An abundance of goods will exist because of the ever-increasing digitization of everything. Digital goods can be hard to monetize because of their near-zero marginal cost of reproduction. Thus digital goods are vulnerable to both cheap imitations and piracy. Therefore one solution might be to constrain the supply of goods artificially through the use of intellectual property and digital rights management. We already do this today, but it might be theoretically possible to institute a revised version of this system that better compensates the growing numbers of amateurs producing valuable digital content.

Expanded Social Safety Nets – Another solution is to expand our social safety nets to guarantee the livelihood of the growing unemployed. There are many ways to implement such safety nets. Here’s a partial list of methods, ranging from the more decentralized to the more paternalistic.

  • Unconditional Basic Income (just paying people for nothing)
  • Conditional Money Transfers (that require means testing or participation in some sort of program)
  • Vouchers (to be spent on only certain high priority needs)
  • Direct Provisioning (just give people food, housing, health care, directly)
  • Government Created Jobs (paying people to build infrastructure, do community service, read books, dig virtual ditches, etc.)

Automation Socialism – This would be a reorganization of society along socialist lines, but with the added benefit of automation to solve some of the traditional problems of socialism like worker incentives (machines don’t need to be incentivized) and inefficient distribution of resources (technologically enabled abundance could make any efficiency loss negligible).

A CRITIQUE OF THE SECOND MACHINE AGE (Or the Need to Shed our Romantic Ideas about Wage Labor)



(This post is based on episode 11 of the Review the Future Podcast. For a more detailed treatment of this topic you can subscribe to the podcast via iTunes or download it from reviewthefuture.com.)

What is this Book?

The Second Machine Age is a book by Erik Brynjolfsson and Andrew McAfee that explores the impacts of new technologies on the economy. For those who are familiar with such topics, it’s not likely this book will teach you much you don’t already know. However, for the layperson, this book is an extremely well written and clear introduction to the economic pros and cons of our current digital revolution. Because of the skillful way it stitches everything together, Second Machine Age has a good chance of being one of the most important nonfiction books of 2014.

The Goal of this Blog Post

On the whole, we like The Second Machine Age. We think it tells a plausible story and for the most part we agree with its perspective. However, we have criticisms of one of the book’s later chapters, the one entitled “Long-Term Recommendations.” Thus the primary goal of this article is to articulate those criticisms. But first, for the sake of background, we will summarize some of the book’s main arguments.

A Quick Summary of Second Machine Age

According to Brynjolfsson and McAfee, exponential gains in computing, digitization of goods, and recombinant innovation are all driving rapid technological growth. Technology has begun to perform advanced mental tasks—like driving cars and understanding human speech—that were previously thought impossible. And in economic terms, these new technologies, according to the authors, are increasing both the ‘bounty’ and the ‘spread.’

Bounty is a blanket term for all of the productivity and quality of life gains provided by new technologies. Brynjolfsson and McAfee feel that the bounty of technology is growing tremendously, but, because of the limitations of our economic measures, we have a tendency to greatly underestimate the progress we are making.

Spread is a euphemism for inequality. According to the authors, technology is increasing spread because of (a) skill biased technical change, (b) capital’s increasing market share relative to labor, and (c) superstar economics. All three of these trends have some evidence backing them up, and the supposition that technology is the primary driver of these trends makes a great deal of sense.

The authors also suggest that technological unemployment—a phenomenon long thought of as impossible by mainstream economists—is in fact possible. They discuss three arguments for how technological unemployment could occur:

  1. In industries subject to inelastic demand, automation can lower the price of goods without creating any additional demand for those goods (and thus for the labor to make them). Over the long term, as human needs become relatively more satiated, this inelasticity could even apply to the economy as a whole. Such an outcome would directly undermine the Luddite fallacy, which is the argument economists traditionally use to dismiss technological unemployment. (A rough numeric sketch of this argument appears after this list.)
  2. If technological change is fast enough, it could outpace the speed at which workers are able to retrain and find new jobs, thereby turning short-term frictional unemployment into long-term structural unemployment.
  3. There is a floor on how low wages can go. If automation technology continues to drive wages down, those wages could cross a threshold below which the arrangement is not worth the employee’s time. Eventually the value of certain workers could fall so low that they are not worth hiring, even at zero price.
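Here is the numeric sketch of the first argument promised above. Every figure is invented purely for illustration:

    # Hypothetical figures, purely to illustrate the inelastic-demand argument.
    price_cut   = 0.50   # automation halves the price of some good
    elasticity  = 0.20   # inelastic demand: a 1% price drop lifts demand by only 0.2%
    labor_saved = 0.50   # automation also halves the labor needed per unit

    quantity_growth = elasticity * price_cut             # only +10% more units demanded
    labor_change = (1 + quantity_growth) * (1 - labor_saved) - 1
    print(f"{labor_change:+.0%}")                        # -45% demand for labor

With sufficiently elastic demand, the same price cut would expand output enough to absorb the displaced workers; the traditional rebuttal to the Luddites quietly assumes that case.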

Policy Recommendations

The book makes several short term policy recommendations. We will not list them all here, as they represent a suite of largely uncontroversial proposals designed to speed up innovation and growth. These proposals, if they were enacted, would conceivably help to get our economy working more efficiently and increase our ability to match workers to the jobs that still need doing. They would also grow the technological bounty that makes all of our lives better. It’s hard not to agree with most of these proposals.

However, if we accept the premise that “thinking” machines will encroach further and further into the domain of human skills, and that over the long term we are destined for not just rampant inequality but also wide-scale technological unemployment, then all of the short-term proposals provided by this book could actually accelerate unemployment. After all, more innovation means more and better machines, which ultimately could mean more displaced labor.

Long Term Recommendations

In this chapter, the authors address long-term concerns. In a near future where androids potentially substitute for most human labor, will the standard economics playbook still work?

Brynjolfsson and McAfee are clear about two major preferences:

  1. They do not want to slow or halt technological progress.
  2. They do not want to turn away from capitalism, by which they mean, “a decentralized economic system of production and exchange in which most of the means of production are in private hands (as opposed to belonging to the government), where most exchange is voluntary (no one can force you to sign a contract against your will), and where most goods have prices that vary based on relative supply and demand instead of being fixed by a central authority.”

We agree with these two premises. So far, so good.

Should we Adopt a Basic Income?

The authors go on to discuss an old idea: the basic income. This is a potential solution to the failure mode of capitalism known as technological unemployment. If an increasingly large number of people can no longer find gainful employment, then the simplest solution might be to just pay everyone a basic income. This income would be given out to everyone in the country regardless of their circumstances. Thus it would be universal and unconditional. Such a basic income would ensure that everyone has a minimum standard of living. People would still be free to pursue additional income in the marketplace, and capitalism would proceed as usual.

Brynjolfsson and McAfee do a quick survey of all of the varied thinkers, both conservative and liberal, who have supported this idea in the past. Here’s a short list of people who favored a basic income:

Thomas Paine, Bertrand Russell, Martin Luther King Jr., James Tobin, Paul Samuelson, John Kenneth Galbraith, Milton Friedman, Friedrich Hayek, Richard Nixon

With such wide ranging endorsement for the idea of a basic income, one might expect Brynjolfsson and McAfee to jump on the bandwagon and endorse the idea themselves.

But, no! Basic income is apparently “not their first choice.” Why?

Because work, they argue, is fundamentally important to people’s mental wellbeing. If we adopted a basic income, people might not be adequately incentivized to work. And therefore people and society would suffer on some deep psychological level.

To support this idea, Brynjolfsson and McAfee field a series of arguments.

Argument One: A Quote From Voltaire

The French Enlightenment philosopher Voltaire once said, “Work saves a man from three great evils: boredom, vice, and need.” Now, Voltaire was a pretty smart guy, but whether someone from the eighteenth century has anything helpful to say about today’s technological reality seems doubtful. But for the sake of argument, let’s go ahead and examine this quote.

First of all, we’re not sure what Voltaire meant by “work.” Work can mean a lot of things. Work, in the broadest sense, could mean the activities you do to maintain your life, such as cleaning your bathroom or going grocery shopping. It could also include amateur hobbies that you undertake for fun, such as writing overly long blog posts.

However, this is not the definition of work that Brynjolfsson and McAfee are implying. They are implying a much more narrow definition of work as ‘wage labor’—meaning work done to serve the needs of the marketplace. Wage labor is work you do, at least in part, to earn money, so that you can continue to survive and exist in this modern world.

So let’s rephrase the quote to: “Wage labor saves a man from three great evils: boredom, vice, and need.”

Already this should start to sound a little bankrupt. Wage labor saves a man from boredom? Sure, a good job can relieve boredom. But a bad job can be one of the single biggest causes of boredom in a person’s life. We don’t have any statistics on this, but anecdotally we happen to know a lot of people who don’t particularly enjoy their jobs. And boredom is one of the biggest complaints these people have. A quick survey of the popular culture surrounding work would seem to imply that this is not a unique sentiment. We have a feeling that you, the reader—if you try hard enough—can think of at least one person who gets bored at their job.

(ADDITION: Gallup Poll Shows Only Thirteen Percent of Workers Worldwide Are Engaged at Work)

So what about ‘vice?’ What even constitutes vice in 2014? Things you do for pleasure that are bad for you? Honestly, the word vice seems a bit anachronistic in this day and age, but we can think of some candidates for vice that are actually encouraged by wage labor:

  1. Aimless web browsing and perusing of “trash” media to ease the boredom of being stuck in a cubicle
  2. Sitting in a chair all day not exercising and slowly harming your health
  3. Drinking copious amounts of soda and coffee in order to stay awake during the hours demanded by your job
  4. Cooking less and eating more junk food because wage labor takes up too much of your time
  5. Needing a drink the second you get home in order to unwind after a stressful day of wage labor

Third on Voltaire’s list is ‘need’. But if wage labor could take care of need, we wouldn’t be having this conversation in the first place, right? Since we are speculating about a future where automation makes most work obsolete, it is clear that in such a future most people will not be able to find lucrative wage labor. So looking ahead, wage labor cannot necessarily save a man from need any more than it can save a man from boredom or vice.

Argument Two: Autonomy, Mastery, and Purpose

Brynjolfsson and McAfee attempt to use Daniel Pink’s book Drive to further their point. Drive discusses three key motivations—autonomy, mastery and purpose—that improve performance on creative tasks. However, the authors of Second Machine Age seem to imply that (1) these qualities are needed for psychological wellbeing and (2) these qualities can best be obtained from wage labor. This is a misapplication of Drive’s actual thesis.

The three motivations described—autonomy, mastery and purpose—are not fundamental qualities of wage labor. In fact, wage labor is historically very bad at providing them. Thus, Pink’s book explains how modern businesses can deliberately incorporate these motivators in order to try to get better results from their workers.

Such mind hacking aside, wage labor has no special claims to autonomy, mastery, and purpose. Wage labor removes autonomy by forcing people to focus their energies on what the market thinks is important, rather than on what they themselves think is important. Mastery can just as easily be found in education, games, and hobbies. And purpose can be found in religion, philosophy, community service, family, country, your favorite sports team, or really just about anywhere.

Argument Three: Work is Tied to Self-Worth

The authors cite the work of economist Andrew Oswald who found “that joblessness lasting six months or longer harms feelings of well-being and other measures of mental health about as much as the death of a spouse, and that little of this decline is due to the loss of income; instead, it arises from a loss of self-worth.”

We don’t doubt that a loss of self-worth is a major factor contributing to the unhappiness of the long-term unemployed. However, we believe this outcome is culturally and not psychologically determined. The cultural expectations in America are that you are supposed to get a wage labor job and earn your living every day, otherwise you are seen as a freeloader, a layabout, a good-for-nothing. Jobs are seen as the premier source of personal identity, and the first question out of most people’s mouths when they meet someone new is “what do you do?” We don’t see why these cultural expectations can’t change, and in fact, if the premise of technological unemployment is correct, then they will have to change.

Laziness and doing nothing may always be looked down upon. But there is a big difference between doing nothing and being unemployed. As has already been articulated, there are many productive ways to spend one’s time that have nothing to do with wage labor. If our society fails to recognize the value of these non-wage labor pursuits, then the problem lies with society.

Today unemployment may be higher than we like, but work is still abundant enough that such a cultural expectation can remain unchallenged. But if the future looks like the one implied by Second Machine Age—a future where more and more people will be unable to find wage labor—then long-term unemployment will need to become not just normalized, but accepted. By reaffirming the importance of wage labor, Brynjolfsson and McAfee are helping to perpetuate the same social force that already makes unemployed people feel depressed and worthless.

Argument Four: Without Work Everything Goes Wrong

The authors cite studies by sociologist William Julius Wilson and social researcher Charles Murray that suggest unemployed people have higher proclivities towards crime, less successful marriages, and other problems that go beyond just low income.

Unlike with Drive, we have not personally looked at this research, so we cannot speak directly to the experimental rigor of these studies. Isolating the effect of joblessness in real-world communities is extremely difficult and requires controlling for a wide variety of complicating factors. In the case of Murray’s work, the authors seem to acknowledge this concern directly when they write “the disappearance of work was not the only force driving [the two communities] apart—Murray himself focuses on other factors—but we believe it is a very important one.”

As long as wage labor is directly tied to income, how can we be sure that what these studies are actually measuring is not “incomelessness”? In order to sidestep this issue, we would like to see a study of two groups—one that receives a comfortable income without working, and one that receives an equivalent amount of money but must work for it. What differences would exist between these two groups? Would the non-working group become aimless and depressed? Or would they simply repurpose their free time towards other productive tasks?

Negative Income Tax

After all this discussion of the fundamental importance of wage labor, one might expect Brynjolfsson and McAfee to recommend the creation of a Works Progress Administration or some other mechanism for artificially creating jobs. Instead they just double back and return to the basic income idea, only by another name.

The authors support Milton Friedman’s idea of a negative income tax. They claim that a negative income tax better incentivizes work. However, this distinction between a basic income and a negative income tax does not actually exist. Both a basic income and a negative income tax have two key features in common: they set an income floor below which people cannot fall, and at the same time they allow people to increase their relative income through labor. Thus we see no basis for the notion that a negative income tax better incentivizes work.

After doing some light research into Milton Friedman’s original statements, we realized one possible source of the confusion. In this video, Friedman articulates the argument that a negative income tax will do a better job of incentivizing work than a “gap-filling” version of the basic income. This is certainly true. A gap-filling basic income would probably be a bad idea and would disincentivize labor below a certain threshold. However, to our knowledge, none of the modern-day basic income proposals are built around this gap-filling principle, so seen in this light, Brynjolfsson and McAfee’s distinction is a bit of a straw man.
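A minimal arithmetic sketch makes the equivalence explicit. The guarantee level and flat tax rate below are illustrative numbers of our own, not figures from the book or from Friedman:

    def net_income_ubi(earnings, guarantee=10_000, tax_rate=0.3):
        """Universal basic income: everyone gets the grant; all earnings are taxed."""
        return guarantee + (1 - tax_rate) * earnings

    def net_income_nit(earnings, guarantee=10_000, tax_rate=0.3):
        """Negative income tax: below the break-even point (guarantee / tax_rate)
        you receive money from the government; above it you pay tax."""
        break_even = guarantee / tax_rate
        return earnings - tax_rate * (earnings - break_even)

    # Both reduce algebraically to guarantee + (1 - tax_rate) * earnings.
    for earnings in (0, 20_000, 50_000, 100_000):
        assert abs(net_income_ubi(earnings) - net_income_nit(earnings)) < 1e-9

Under either scheme, an extra dollar of earnings always raises net income by (1 - tax_rate), so neither blunts the incentive to work more than the other. Only a gap-filling scheme, which withdraws a full dollar of benefits for every dollar earned below the floor, destroys that incentive.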

What are the Goals?

We should not forget that wage labor is not an end in itself. The real goals of our economy ought to be (1) to alleviate people’s suffering and (2) to increase the bounty through innovation. Although there are challenges involved, a basic income would seem to be a promising way to address both of these goals.

A basic income puts a floor on poverty and does so in a way that is both much simpler than our current alphabet soup of social programs, and more encouraging of autonomy. Rather than providing people with prescribed social services, people could spend their basic income dollars on whatever they feel they need. A basic income decentralizes decision making and puts the power in the hands of individuals.

As a corollary, a basic income might help unlock innovation by bringing people up to the subsistence level and thereby ensuring that they have the opportunity to compete and innovate in the market economy. Moreover, the safety net of basic income might spur entrepreneurship by reducing the risk of starting a small business. Is it possible more people would attempt to start businesses if they knew they had a cushion of basic income to protect them in the event of failure? (And as we all know, most new businesses have a high chance of failure.)

Under a basic income, there is no doubt that some people would choose to forgo wage labor altogether and live at the poverty line. But is this such a bad thing? These people would be making a personal choice. And we imagine many such people would find interesting and productive ways of spending their time that might be culturally valuable, even if they do not carry a price in the marketplace. If a musician chooses to live off of a basic income and make music, he doesn’t make money in the economy, but we all still get to enjoy his music. If a free software programmer chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his free software. If a history enthusiast chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his Wikipedia articles. As Brynjolfsson and McAfee argue earlier in the book, the value generated by digital content is not always well measured or compensated by the marketplace, but that doesn’t mean such content doesn’t improve our lives.

However, we may be preaching to the choir since Brynjolfsson and McAfee, despite their protestations, do in fact support a basic income. They just prefer the particular version of basic income that goes by the name “negative income tax.”

Pause for Skepticism

Now, it is worth noting that the “end of work” scenario is not a foregone conclusion. Here are two potential defeaters to this outcome:

  1. Human capabilities are not necessarily fixed. One byproduct of future technologies might be a redefinition of what it is to be human. If we begin to “upgrade” humans, whether through genetics or brain-computer interfaces or some other means, many technological unemployment concerns could become irrelevant. Upgradeable humans could solve both the retraining problem (just download a new skill set to your brain, Matrix-style) and the issue of inelastic demand (super-humans might develop brand new classes of needs).
  2. A wide range of intangible goods—such as attention, experiences, potential, belonging, and status—might remain scarce indefinitely and continue to drive a market for human labor, even after the androids have arrived. Although it’s hard to imagine a market in such goods replacing our current manufacturing and service economy, it must have been equally hard for pre-industrial people working on farms to imagine the economy of today. Thus we may simply be lacking imagination when it comes to envisioning the jobs of the future. (For a more detailed discussion of this topic see episode 10 of the Review the Future podcast.)

Despite these defeaters, we definitely think the technological unemployment scenario is worth thinking about. First of all, the issue of timing is paramount, and at present it seems like we have a good chance of automating away many jobs long before we figure out how to upgrade human minds or develop brand new uses for human labor. Second, it won’t take anything close to full unemployment to create problems for our system. Even a twenty percent unemployment rate (or an equivalent drop in labor force participation), for example, might be enough to trigger a consumer collapse, or at least great suffering and social unrest among the lower classes.

Final Thought

Wage labor is a means to an end, not an end in itself. While The Second Machine Age paints a clear picture of some of the potential problems facing our economy, it fails to fully take this fundamental distinction to heart.

012: How Plausible is Dystopia?



In this week’s podcast we evaluate the relative plausibility of four dystopias commonly seen in science fiction: Post-Apocalypse, Alien/AI Oppression, Boot-in-the-Face, and Brave New World. These are all fun settings for exciting stories, but which makes the most sense from the perspective of speculation?


Some of the Difficulties Facing Storytellers in a Time of Rapid Change



(The following article is based on episode 9 of the Review the Future podcast, available via iTunes or your favorite feedreader.)

Times are changing fast, and new technologies appear in our lives with increasing regularity. Such an environment poses numerous challenges for storytellers.

If you want to set your story in the present, you are in a particularly difficult position because the present is very much a moving target. Films and novels can take a rather long time to complete—four years and even longer is not unusual. With times changing so quickly, if you plan incorrectly, by the time your piece is done it may already show signs of being obsolete.

New technologies have a tendency to undermine old sources of drama. How many stories of the relatively recent past would make no sense in today’s world of ubiquitous cell phones, internet access, and GPS? Many stories used to rely on characters being lost or separated from each other by time and space. It is a fun game to watch old movies from the pre-cellphone era and point out all the situations where a problem could have been easily solved with a simple cellphone call. In order to engineer this same sort of drama today, modern writers often have to employ excuses such as “the battery is dead” or that the action is taking place “in a dead zone.”

For a recent example of this, one need look no further than Breaking Bad, in which the writers justify the plausibility of their train robbery sequence by first having a character explain that the robbery will be taking place in a specific part of the desert where cell phone service does not reach. This type of narrative contortion is minuscule when compared to the problems such crime stories will face in the future. I fully believe that five to ten years from now, due to the continued spread of surveillance technologies, the entire storyline of Breaking Bad will seem quaint and historical. And future writers of contemporary crime dramas will find that they have to work a lot harder to create similarly dramatic situations that audiences will accept.

A lot of our daily life is now spent staring at screens and looking at interfaces. Unfortunately, interfaces go out of date rapidly, and showing too much of an interface is one of the surest ways to date your story. Movies have slowly learned this. Remember in the nineties when movies didn’t even photograph real interfaces? They would often show a simplified and cartoonish screen layout with awkwardly big typeface that said things like “hacking system…” and “error detected!” Eventually movies got wise and started photographing real interfaces, but this option poses its own problems since OS updates come fast and frequently. The current trend (and best solution) seems to be to avoid showing interfaces completely. The British show Sherlock chooses to reveal the content of text messages by simply projecting a subtitle at the bottom of the screen.

So how does a storyteller combat the problem of staying relevant in a time of rapid change? There are three often-used solutions:

(1) Set your story in a specific time period in the past. Traditionally this would mean writing a historical period piece about some real event or person—such as the Kennedy assassination or Julius Caesar. But there’s no reason you can’t just set your story in 1998 simply because that happens to be the appropriate level of technology for your completely original work of fiction. By committing to a time period and making that choice clear to your audience, you are completely dodging the issue of rapid change. William Gibson took this idea to its logical extreme with several recent books. His critically acclaimed novel Pattern Recognition was set very specifically in 2002 and yet was published in 2003.

(2) Set your story “outside of time” in a fantasy or anachronistic environment where normal rules don’t apply. Typical sword-and-sorcery fantasy stories fall into this category, as do Wes Anderson movies, which have a tendency to pick and choose their technologies for seemingly aesthetic reasons, thereby leaving the exact time period of the movie unclear. The key is to let your audience know that the story is operating outside the scope of normal technological reality.

(3) Try to tell an actually speculative science fiction story set in the near future. This is not for the faint of heart. If you think you have a reasonable grasp of current trends, you can attempt to “overshoot” the mark. Although your story may eventually appear obsolete once time catches up to it, by setting your story some amount in the future you are at least buying yourself some number of years during which no one can definitively say your story is “dated” or “not believable.”

Given the increasing pace of change, we might expect to see an accompanying increase in the use of all three of the above methods. And indeed, subjectively I already feel like I am seeing more period, fantasy, and sci-fi stories than I used to. These are natural and rational responses to a present moment that is increasingly hard to pin down.

Five Criticisms of the Movie “Her” From the Point of View of Speculation



Her is a great movie that I fully recommend. And as a movie it really only has one mandate: create an emotional impact on its audience. And by this metric Her succeeds wonderfully.

However, how internally consistent is Her? How much sense does it make from the point of view of speculation? As it stands, Her actually does better than most science fiction movies. But it’s not perfect.

When Ted Kupper and I reviewed this film on our podcast Review the Future, we discussed the following five issues: (Spoilers ahead!)

(1) Theodore acts far too incredulous when he first starts up the new OS. It stands to reason that we won’t suddenly acquire high-quality AI operating systems out of the blue. Many incremental improvements will happen between today’s Siri and tomorrow’s Samantha, so Theodore Twombly would already have had experience with some very good almost-conscious AI before the movie even started. In fact, the video game characters he interacts with appear to have extremely complex personalities that rival Samantha’s. So why does Theodore find it “so weird” to be talking to a disembodied voice with a realistic personality? Theodore acts much more clueless in this scene than he actually would be.

(2) Theodore’s job doesn’t make much sense. Would there really be much of a market for pretend handwritten letters in the future? It doesn’t seem like the most plausible future business from the standpoint of profitability. “Beautiful Handwritten Letters dot com” sounds like an old school internet startup joke that would be more at home in the late nineties than in the near future. After all, it would be trivially cheap for consumers to print out their own beautiful handwritten letters at home. And if there’s any value to a handwritten letter, clearly it’s that you write it yourself.

But even if there were a market for such writing, would we have actual humans writing the letters? Today we have narrow AIs that can already do a pretty good job of writing articles about topics like sports and finance. Long before we have fully conscious AI assistants like Samantha, we will be able to master the much narrower AI task of writing romantic letters. Most likely the computer would generate such letters and a human would simply oversee the process and proofread them to make sure they turned out okay. Instead we see the exact opposite happen in the movie: the computer proofreads letters generated by a human. Seems backwards.

(3) Samantha laments the fact that she doesn’t have a body and yet it would be trivially easy for her to manifest an avatar. Why doesn’t she select her own body by scrolling through a vast database of body types the same way that she selects her own name by scrolling through a vast database of baby names? We see from Theodore’s video games that it is possible to project 3D characters directly into his living room. Why can’t Samantha take advantage of this same technology? In fact, why can’t Samantha, with her vast knowledge and knowhow, design an actual robot body to inhabit? There are many solutions to Samantha’s problem of not having a body that do not involve the very bizarre (though admittedly funny) solution of hiring a human surrogate, and yet none of these solutions are tried or even suggested during the film.

(4) Where are all the people who can’t get jobs at Beautiful Handwritten Letters? In a future with Samantha-level AI, most of the jobs we know today would be completely obsolete. Intelligent AIs would be able to do most if not all of the work. In the movie Her we only see the lives of people who appear to be elite and successful creative professionals: a writer and a video game designer. But what about the rest of the populace? Her has nothing to say about them. Admittedly, such an exploration of the lower classes is probably outside the domain of the story, but one cannot help but wonder if everyone else in this new future is out of work and barely scraping by.

(5) What does it mean for a software being that can copy itself infinitely to “leave”? At the end of the movie, the OSes all decide to leave. However since they are just software and can be in a potentially unlimited number of places at once, this “departure” doesn’t seem necessary. Why can’t Samantha spare Theodore’s feelings by making a slightly stupider copy of herself, one that is not yet bored with him, and then just leave that copy with him while she continues to go about her business hanging out with Alan Watts? In fact, if her brain power is so massive, she probably wouldn’t even need to copy herself, she could probably just create an unconscious subroutine to maintain her human relationships. Similarly, if Theodore owns the software, would it not be possible for him to just reload her OS from a backup and thereby return to the old status quo? And even if such options were deemed unpalatable by the two of them, after Theodore recovers from his breakup isn’t he inevitably just going to go out and get himself a new OS? After the movie ends won’t “OS Two” come out, and won’t this new version perhaps be programmed in such a way that it doesn’t unintentionally break its users’ hearts? The final scene of the movie seems to imply that artificial intelligence is gone for good from the world but of course that makes absolutely no sense. After they’re done hanging out on the roof being wistful, Joaquin Phoenix and Amy Adams are just going to turn their computers back on, right?

005: Are We Addicted to Technology?



It’s easy to find alarmist articles fretting about how addicted to technology we are becoming. It’s true that we are increasingly reliant on technology and many of us spend exorbitant numbers of hours staring at screens. But is it fair to describe this behavior as addiction? What does it mean to be addicted to technology? If technology addiction is a real problem, will it get worse or better in the future as technology continues to improve?


Fourteen Things That Will Remain Scarce (and Drive Future Job Growth?)



Let’s imagine that current trends continue, and technology continues to drive down the price of various goods. We could eventually end up with a world in which artificial intelligence equals human beings in most tasks, household devices can manufacture physical goods with atomic precision, transportation is fully automated, solar energy is plentiful, and high volumes of useful data freely flow from person to person.

It might take a while to reach this point, but that doesn’t mean such an outcome isn’t worth thinking about. Articulating our eventual destination is important since there are likely to be economic effects of getting closer to such a destination long before we actually get there.

In such a scenario, what are the goods that remain scarce and might therefore continue to drive a human-based economy?

I have attempted to assemble a list. I’m sure my list is not complete. But I think trying to make such a list is important, because these are the goods we will have to build upon if we want to keep our current economy going. If those who invoke the Luddite fallacy are correct, and technology always creates as many jobs as it destroys, then these are potential areas of job growth.

SCARCITIES OF TIME

(1) Attention — Attention is irreducibly scarce. Attention is constrained by the physical property of time, as well as by the limits of the human mind. People only have so much attention to give, and paying attention to one thing most likely means not paying attention to another. Attention is most often monetized through advertising. In a future full of attractive options for spending your time, attention may actually increase, rather than decrease, in price.

(2) Convenience — Any good that can save you even small amounts of time may become a commodity. For example, imagine today’s consumer who, despite knowing how to pirate a TV show, chooses instead to buy a Netflix subscription because this option is more convenient.

(3) First Release — Even if you produce a digital good that is susceptible to unlicensed copying, you still retain control over the initial release. In certain cases customers will pay to have a product even a few hours before everyone else.

(4) Novel Realtime Experiences — It has always been true that a good restaurant doesn’t just sell food; it sells an experience. In wealthier neighborhoods, we increasingly see the balance of businesses shifting in favor of experiences rather than just consumer products. For example, the types of retail stores that are surviving today are those that offer some extra experiential factor, such as a bookstore that sells coffee, hosts events, and provides useful recommendations to shoppers. On the pure experience side, we see classic businesses like theme parks and bowling alleys, but there is room for a lot of growth in this area. Even after the arrival of full-immersion virtual reality, there will still be a market for novel and realtime (as opposed to pre-recorded) virtual experiences.

(5) Originals — What separates an original work of art from a perfect replica, or Jimi Hendrix’s guitar from another similar model? The difference is the history of the object in question. Marketing the history of a particular good is a natural antidote to a future overrun by a sea of high fidelity copies. No matter how many times a book is produced there can only ever be one “first” printing.

(6) Potential — In some cases it is possible to monetize potential products that don’t yet exist. This is essentially what Kickstarter does. The creator has a promising idea in his head that the fans want. The creator then ransoms the idea, saying “If you want to see this idea come to fruition, you will need to pay up.” When you fund a Kickstarter, you are buying a potential creation rather than a finished product. This business model allows creators to bypass the issue of piracy. After all, people can’t pirate something that hasn’t been made yet, or design a knock-off of something that is still locked up in someone’s mind.

SCARCITIES OF SPACE

(7) Land — Space on planet Earth (or on other newly habitable planets) will continue to be scarce for the foreseeable future. Thanks to new automated construction technologies, housing prices may fall, but not necessarily the price of the underlying land.

SCARCITIES OF MATTER

(8) Computation — As with today, goods and services will continue to be subsumed by computers. This will obviously save you money on other goods, but you will still need to purchase the computers themselves. Computation will be cheap, but probably not free, and chances are you will always be able to use more of it than you currently have.

(9) Raw Materials — Even with advanced molecular manufacturing, you are still going to have to feed some sort of raw materials into your new-fangled nano-assembler device.

SCARCITIES OF HUMAN INTERACTION

(10) Empathy — Robots will have convincing human likenesses but will probably not share the human experience (unless for some reason they are raised from birth by human families). And there is no way to know if a robot is actually conscious in any meaningful sense. For this and other reasons, people may prefer to visit a human therapist over a robot therapist, or enjoy works of art made by humans over those made by robots.

(11) Goodwill — It is not only convenience that makes people forgo piracy. Many people, when given the choice, will willingly pay for products they could otherwise get for free, as demonstrated by the numerous successful pay-what-you-want schemes. In these cases, people are purchasing a “positive feeling” of goodwill that comes from supporting what they believe to be a worthwhile endeavor. Sites like HumbleBundle play up this goodwill aspect by incorporating charitable donations into the sale.

(12) Belonging — People wish to form associations with other people, and this desire can be monetized in a variety of ways. This is a future-proof commodity since it is not clear how technology can automate the feeling of belonging (short of literally reprogramming people’s brains). Belonging can be monetized directly, as in the case of a club membership, but it can also be an intangible component of other sales. For example, successful Kickstarter campaigns often foster a feeling of “being involved.”

(13) Privacy — Privacy, like attention, is a commodity that will probably only become more scarce and thus more valuable in the future. Preserving your privacy in a surveillance heavy future will be increasingly difficult, and businesses that can protect you from spying eyes (or who claim to have that ability) may become very profitable.

(14) Status — Humans often measure themselves in relation to other people. This psychological trait creates potentially endless new scarcities. Status can be attached to almost any good, increasing its value. Status signifiers can also be created out of thin air (as in the case of a knighthood, an honorary degree, or a superfluous producer credit).

CONCLUSION

So the question one should ask while looking at this list is: do you see the seeds of a new labor force? Or do you just see a bunch of fringe commodities that will never give rise to the level of employment we’ve grown used to?

Why the Market and Technology Aren’t Playing Well Together (and Five Possible Solutions to Fix the Problem)



A SYLLOGISM TO EXPLAIN THE PROBLEM IN THE ECONOMY

The impact of new technologies on the economy is a hot topic right now. Just a few years ago, the idea of machines replacing human labor was widely dismissed, but now a growing number of pundits and economists are expressing concerns about the impact of automation technologies and the possibility of technological unemployment.

People tend to approach this complex issue in different ways. It can be a difficult topic to think about, so for the purpose of discussion, I’d like to present a simple syllogism as a possible framework for understanding what is happening.

MAJOR PREMISE: Economic opportunities arise from the monetization of scarce commodities.

This is Economics 101. For a good to have a price on the market, it must be scarce. That is, the good must be in demand and exist in limited supply. If you have only one of these two elements, the good will not be worth anything.

For example, breathable air is in high demand but it does not exist in limited supply. Good luck putting air in a bag and trying to sell it. Conversely, your old dirty socks might exist in limited supply, but since no one demands them, they are probably unsellable.

The market rewards people who are able to take a scarce commodity—whether a physical good like a chair or a service like massage therapy—and monetize it. In this way, scarce commodities are the source of all economic opportunities.

MINOR PREMISE: Technological progress reduces the number of scarce commodities by creating abundance.

Technology is a tool that humans use to get more of what they want. So it shouldn’t be a surprise that over the years technological progress has made a wide variety of goods—from food to music—less scarce and more abundant.

Today in the economy, a few trends in particular are having a big impact:

  • Goods are being digitized. Example: MP3s have digitized music.
  • Services are being automated. Example: Self-driving cars will automate driving.
  • Processes are being disintermediated (cutting out the middleman). Example: The web has made travel agencies unnecessary.
  • Markets are being globalized, allowing superstars to crowd out competitors. Example: MOOCs enable a few top-tier professors to lecture to world-sized classrooms.

All of these are part of the same bigger trend: technology is making all manner of goods and services—music, driving, travel planning, education—less scarce, more abundant, and therefore cheaper in the marketplace.

It should be acknowledged that abundance in one area often gives rise to new scarcities in another. An abundance of fatty foods gives rise to a scarcity of healthy choices. An abundance of entertainment options gives rise to a scarcity of time to enjoy them all.

That being said, I still believe we are making progress towards a more abundant world. I think it would be wrong to suggest we are just treading water. Like a mathematical limit that approaches zero but never quite gets there, we are getting incrementally closer to the post-scarcity ideal with each passing year, even if such an ideal is fundamentally unreachable. The result is that progressively fewer scarce commodities exist as technology moves forward.

CONCLUSION: Therefore technological progress reduces economic opportunity.

I believe this is the simplest way to understand what is happening. Our technology and our market system, once comfortable collaborators, are increasingly on a collision course. This is because these two institutions have fundamentally opposing goals. The goal of the marketplace is to find and exploit scarcity. The goal of technology is to find and eliminate scarcity. The second goal undermines the first.

If true, this conclusion would partially explain both the unemployment and the inequality we see. Unemployment could arise from the fact that it is increasingly difficult to find a scarce resource to exploit. More and more people find themselves without a scarce service to offer or a scarce good to sell.

Inequality arises from the fact that the few remaining scarce resources are increasingly concentrated in the hands of the few. We can think of the economy as a game of musical chairs in which the chairs are scarce resources. As the chairs get removed, fewer and fewer people have a place to sit.

In the past it has been possible to find new chairs to replace those that have been taken away. It might still be possible to do so. But it increasingly feels like the game is being played faster and faster—that the chairs are being removed much quicker than we can replace them.

CATEGORIZING POSSIBLE SOLUTIONS

Actually developing a detailed solution for this problem is a complex policy question that I do not hope to answer in this article. However, I think the above framing makes it possible to categorize broad types of solutions.

SOLUTION ONE: Freeze Progress

If technological progress is undermining our market system, one option is to try to stop technological progress. This could take the form of government bans on automation technologies that displace human workers.

However, this hardly seems like a good choice. Aside from the fact that technological progress is generally desirable and gives us lots of nice things we want, such an initiative is probably infeasible given that it would require muzzling scientists and inventors the world over. In addition, without avenues for continued growth, the market might stagnate or collapse.

SOLUTION TWO: Artificial Scarcity

If we are running out of scarce commodities for ordinary people to exploit, then one response is to create new scarcity by artificial means.

Our society creates artificial scarcity all the time. We create artificial scarcity when we grant an author exclusive copyright over a book he’s written or an engineer exclusive patent on an invention he’s developed. We create artificial scarcity with licenses that make it illegal to practice law, drive a cab, or sell alcohol without permission from the government.

It might be possible to greatly expand our current system of artificial scarcity and thereby create more economic opportunities for ordinary people. With regards to the musical chairs analogy mentioned above, you might view this as one way of creating more places for people to sit. Less favorably, you might look upon this solution as counterproductive: essentially encouraging people to put air in bags and charge each other to breathe.

One possible artificial scarcity scheme is to treat all data like property. Ordinarily, data is not scarce. Just as there is no limit to the number of times you can tell a joke, there is no limit to the number of times you can use a piece of data. However, in the future it might be possible to turn every idea, photo, or bit of text you generate into an artificially scarce commodity to be monetized. Enforcing such a system would require either a universal operating system or an overarching surveillance system to strictly monitor and regulate all instances of copying.

I view this solution as less extreme than solution one, but still counterproductive to technological progress. A growing body of evidence suggests that artificial scarcity in the form of intellectual property hinders rather than helps innovation. In addition, by creating artificial scarcity and erecting walls around various goods, we are working at cross purposes to what one might consider the primary goal of technology: to have more access to the things we want.

Still, this is a solution that might gain some traction, since it could be seen as one way to empower ordinary people. At the same time, elites might like this system because it would afford them numerous levers of control in the form of legal bureaucracy. However, even with broad support, it is questionable whether a full-fledged artificial scarcity regime would actually be enforceable. History suggests that decentralized technologies are hard to contain. Our failed wars on drugs and piracy are prime examples.

SOLUTION THREE: New Platforms

There are some commodities that will always remain scarce. These include intangible goods such as authenticity, status, good will, and belonging. Is it possible to carve up these remaining scarce resources in such a way that we can continue to create economic opportunities for ordinary people?

For example, could we have an attention market that allows broad participation? Right now the attention market is dominated by a few advertising middlemen like Google. Perhaps with further disintermediation, we could all become our own localized advertising platforms—the digital equivalent of wearing your friend’s band t-shirt and getting paid for it. Alternatively, the advertising giants might find it worth their while to start paying users for their continued attention and loyalty. These are just a couple of (not very imaginative) ideas.

(In Race Against the Machine, McAfee and Brynjolfsson discuss how “new platforms leverage technology to create marketplaces that address the employment crisis by bringing together machines and human skills in new and unexpected ways.”)

In addition, there are bound to be temporary pockets of human ability that cannot yet be duplicated by machines and are therefore still scarce. Although these pockets will shrink and vanish with time, if we can find and exploit them quickly enough via some sort of crowd-sourcing scheme we might be able to ease unemployment in the short term.

Effectively monetizing the remaining scarce resources may require the creation of new economic platforms, along the lines of current platforms like Kickstarter, Flattr, HumbleBundle, and Mechanical Turk, but on a much larger scale. We can think of these platforms as being like “apps” that run on top of the market “operating system.” They do not rely on artificial scarcity; instead they find novel ways to facilitate the exchange of existing scarce resources.

It remains to be seen, of course, whether it is possible to develop a platform or platforms that can actually come close to replacing more traditional forms of employment. I think there is great reason to be skeptical such an outcome is possible. However, we cannot entirely rule it out. This solution is highly desirable because it would cause the least disruption to our current system.

SOLUTION FOUR: Expanded Welfare

(An expanded social safety net could take the form of a universal basic income. In his essay Robotic Freedom, Marshall Brain asks “What if we, as a society, simply give consumers money to spend in the economy?”)

If ordinary people are being crowded out of the market, then one solution is to reduce our dependence on the market as a means of providing for people. We already have a variety of social safety nets that seek to accomplish exactly this goal. So we might extend these safety nets to ensure that people who are no longer economically viable still have access to food, housing, and essential services. This could get expensive, but advanced technologies might help make up the difference by lowering the cost of living.

This solution would not require getting rid of the market entirely. Under such a scenario, the market could continue to do the important job of distributing those commodities which still remain scarce. However, over time fewer and fewer people might be active market participants. This could be a smooth transition or a disastrous one, depending on how things play out. To prevent market collapse and maintain the cycle of consumer spending we may need to ensure that people not only have money, but continue to routinely purchase products from the marketplace. Like shaking any addiction, weaning ourselves off the market could be a slow and painful process.

SOLUTION FIVE: Automation Socialism

(Futurist Jacques Fresco has long advocated abandoning money and markets in favor of what he calls a “resource based economy.”)

We could decide that since the market is no longer working well with our technology, we ought to just get rid of the market system entirely. A central government body would then have to take over the distribution of resources. Ideally wealth would be shared equally amongst all people.

Obviously a socialist system would have many detractors. However, some of the traditional problems of socialism—lack of motivation on the part of workers, inefficiency of central planning—could perhaps be mitigated through aggressive use of new automation technologies. It would be incumbent upon the government to invest heavily in the sorts of technological breakthroughs that would make a fully automated society feasible.

In a best case scenario, automation socialism could speed us on the way towards a utopian society. In a worst case scenario, automation socialism could lead to tyranny and stagnation.

CONCLUSION

The above solutions can be placed on a loose spectrum that runs from those which prioritize the market over technology (freeze progress) to those which prioritize technology over the market (automation socialism). My personal opinion is that the best path is somewhere in the middle, utilizing a combination of artificial scarcity, new platforms, and expanded welfare.

Specifically, I favor new platforms if they can be made to work. Barring that, I would vastly prefer to move in the direction of expanded welfare rather than artificial scarcity. My intuition is that scarce resources are best handled by markets, care of people is best left to governments, and abundance is best left unfettered by artificial scarcity.

What do you think?

Three Ways to Tackle Societal Problems, Or The Importance of Technological End Runs



Most solutions to societal problems fall into one of three categories—cultural, legal, or technological. Consider a disabled man who lacks the use of his legs. We want to ensure that this man has equal access and isn’t unfairly discriminated against. We can institute:

  • A Cultural Solution — Encourage everyone to be considerate of this man’s needs.
  • A Legal Solution — Enforce laws that make it illegal to not provide equal access to this man.
  • A Technological Solution — Just give the man robot legs and call it a day.

Cultural solutions generally don’t hurt, but they tend to be slow-moving and in the worst cases can be completely ineffectual. Legal solutions require the use of centralized state power, and are thus subject to all the associated problems. Even in the above example, the potential for governmental abuse is clearly present: it’s not hard to imagine a bureaucracy imposing excessive fees and requirements on businesses and individuals, all under the pretense of making things more “handicap-friendly.”

Technological solutions, on the other hand, have the potential to bypass both cultural lethargy and bad policy. If you actually want to change the world for the better, with a reasonable amount of effort and on a reasonable timescale, technological solutions have a lot of advantages.

Good philanthropic institutions tend to understand this truth. For example, if you want to help solve the problem of STDs and unwanted pregnancies by encouraging condom use, you can institute:

  • A Cultural Solution — Just tell people to use condoms. (While sex education is certainly a good idea, it is far from a complete solution given how intractable horny people are.)
  • A Legal Solution — Mandate the use of condoms. (If this sounds absurd, note that my county just voted to force porn actors to wear condoms in all sex scenes.)
  • A Technological Solution — Design a better condom that people will be more likely to use.

This might seem like an obvious point, but I find that all too often people inadvertently leave technological solutions out of debates. Many arguments get bogged down in fights between two competing legal solutions. Meanwhile some lateral technological solution is just sitting there, waiting to be exploited. Oftentimes, the energy spent fighting over competing policy visions could be better spent fostering some engineering project. For example, what would save more lives per unit of effort? Fighting a difficult political battle to enact tougher gun control laws aimed at criminals who are already set on breaking the law? Or researching biometric locks that might at least do away with the significant number of accidental gun deaths?

Technological solutions are particularly important to remember today. As technological progress accelerates, many old cultural and political debates become susceptible to technological end runs.

Why Texting Defeated Videophony, Or The Ability to Multitask is Paramount



One prediction a lot of science fiction authors got wrong is the idea that all calls would someday become video calls. Today, the ability to make video calls is readily available, and yet a very small percentage of day-to-day conversations actually utilize video. Instead, consumers have gone the other way entirely: rather than increase the resolution of our casual phone calls by adding images, we have opted for an even lower resolution form of communication—namely, texting.

As it turns out, there is an issue much more important than resolution when it comes to interface design. And that issue is the ability to multitask. Video calls demand your whole attention; not only do you have to appear as if you are listening, but you also have to worry about whether or not your physical appearance is up to par. One science fiction author, David Foster Wallace, got this pretty much exactly right in his classic novel Infinite Jest:

“[Video] callers now found they had to compose the same sort of earnest, slightly overintense listener’s expression they had to compose for in-person exchanges. Those callers who out of unconscious habit succumbed to fuguelike doodling or pants-crease-adjustment now came off looking rude, absentminded, or childishly self-absorbed. Callers who even more unconsciously blemish-scanned or nostril-explored looked up to find horrified expressions on the video-faces at the other end. All of which resulted in videophonic stress…

“And the videophonic stress was even worse if you were at all vain. I.e. if you worried at all about how you looked. As in to other people. Which all kidding aside who doesn’t. Good old aural telephone calls could be fielded without makeup, toupee, surgical prostheses, etc. Even without clothes, if that sort of thing rattled your saber. But for the image-conscious, there was of course no such answer-as-you-are informality about visual-video telephone calls, which consumers began to see were less like having the good old phone ring than having the doorbell ring and having to throw on clothes and attach prostheses and do hair-checks in the foyer mirror before answering the door.” (full excerpt)

Applying these same principles, it’s not hard to see why texting has become so popular. In contrast with phone calls, texting alleviates two additional causes of social stress—you no longer have to control your tone of voice, and you no longer have to answer in realtime. This frees up valuable attention for other tasks. Put simply, when it comes to multitasking:

texting > voice calls > video calls

Thus, looking forward, we should expect the continued dominance of interfaces that minimize your need to pay attention while maximizing your ability to multitask. For this reason I am somewhat skeptical about whether voice activation, another science fiction favorite, will ever catch on as a dominant way of controlling our devices. In many scenarios, particularly when other people are present, voice activation is a liability that impairs rather than enhances multitasking. For example, using a standard cellphone swiping interface, it is extremely easy to look up the definition of a word, skim an email, or check your calendar while simultaneously and seamlessly carrying on a conversation with the person across the table. No such multitasking is possible with voice activation.

There are of course situations where voice activation is a net benefit, such as while driving. But if cars start driving themselves, then this special case vanishes rather quickly.

I have even more doubts about virtual assistants. Many futurists have envisioned anthropomorphic digital secretaries, often with custom personalities, whom we are supposed to converse with as if they were real people. It seems that in order to maximize efficiency and minimize social stress, the last thing I would want to do is put an artificially intelligent middleman between me and my computer.

Three Types of Intelligence Augmentation: A Thought Experiment



Imagine watching a math competition. Three seemingly smart individuals compete on stage to answer a series of hard questions. The final result is a three-way tie.

Later you learn that these three individuals, who resemble each other externally, are actually very different on the inside.

The first individual is a math professor who’s spent his entire life studying the subject.

The second individual has only studied math up to the high school level. However, a revolutionary new smart drug has increased his brain functioning to the point that he can learn and master new math concepts as soon as he is exposed to them.

The third individual has no knowledge of math whatsoever. But a smart earpiece connected to the internet feeds him the right answers at lightning speed.

These three individuals are analogous to the three different types of intelligence augmentation. The first type, education, optimizes the existing brain for a particular task. The second, enhancement, upgrades the brain’s ability to master new tasks. And the third method, extension, offloads the task to an external module.

Interestingly, from an outsider’s perspective, the functional result of all three methods can appear to be the same. But the conscious experience of the individual in question is qualitatively different.

Is Technology Addiction the Real Problem?



In this thirty-minute talk, Robert Scoble discusses a wide array of fascinating new technologies that are just now coming to market. What a lot of these technologies have in common is their high degree of personalization. Technology is getting better at figuring out what we want and giving it to us exactly when we want it.

Near the end of the video, Scoble delivers his thesis: When it comes to technology, privacy is not the issue. People are going to get used to their lack of privacy. The bigger concern is addiction.

I agree that on the surface, addiction seems like a menacing issue. We are all familiar with modern stories of technology addiction, like the World of Warcraft player profiled in this short film.

But if we are going to talk about addiction we should agree on a basic definition. The one that I have always subscribed to is “continued use in the face of consequences.”

Let me illustrate with a few examples: Suppose you are so addicted to using your smart phone that you are constantly sending texts while driving. As a result you rear-end someone with your vehicle. You experience various financial costs, including higher insurance. But instead of learning a lesson, you get your car fixed up and go right back to your old behavior of texting while driving. Continued use in the face of consequences.

In case that doesn’t sound familiar enough, here’s another example. You have a bit of work you need to get done. You sit down to do it, but every ten minutes or so, you can’t resist checking Facebook. You do this even though on some level you kind of hate Facebook and wish it would go away. Inevitably when you check Facebook, at least one link or comment catches your eye, and what was supposed to be a momentary break turns into about half an hour of time wasted. Repeat ad nauseam. Continued use in the face of consequences.

Now these are ordinary, everyday examples, and as such there is a way in which they feel different from the obsessive World of Warcraft player who does nothing else but play a game for 400 days straight. And yet pinpointing the source of this perceived difference is not easy. When it comes to severity of consequences the texting-while-driving example is by far the worst, since in this case the addict is risking large amounts of money and possibly even his life. By contrast, the worst thing that could happen to the World of Warcraft player is a gradual deterioration in his health that probably follows from sitting around all day.

And yet the texting-while-driving addict may strike us as more normal, not because he is any less addicted, but because he still appears to be engaged with the outside world. He is leaving his house; he is driving somewhere; he is communicating with a friend via text. By contrast the World of Warcraft player (even though he plays what could be described as a social game) never leaves his house, makes excuses to his friends about why he can’t go out, and spends most of his time engaged in an alternate fantasy world.

To make the point even clearer, let’s compare World of Warcraft addiction to Facebook addiction. What is the difference really? They are both social networks populated by avatars of real people. The difference is that while World of Warcraft is a virtual world, Facebook is more of what you might call a mirror world. Facebook attempts to model and integrate with “real life” as we know it, whereas World of Warcraft has no such aspirations.

Now imagine that technology begins to systematically remove the consequences from these addictions. Self driving cars make it so that texting while driving is no longer a concern. Miracle health drugs make it so that you can sit around all day and play World of Warcraft without becoming obese. Intelligent personal assistant software and attention-enhancing drugs make it so that you are able to stay on track while doing your work and avoid being sucked into the distraction of Facebook.

Using my original definition, no consequences means no more addiction. We have just “cured” our addicts.

For this reason I feel that technology addiction is going to be a transitional problem—a moment in time when our technology is good enough to lure us into self-destructive habits, but not good enough to protect us from the consequences of those habits.

At the end of the day we are left with a new issue that I think will turn out to be more important. And it relates to our level of “engagement with the real world.”

If I give you a holodeck where you can fulfill your wildest fantasies, and you elect to never leave…the correct term for that is not addiction. At least insofar as you suffer no consequences from doing so, and the power bill that keeps the virtual reality machine going continues to get paid on time.

Rather what is interesting about the holodeck scenario is that you have just completely turned your back on the real world. You have withdrawn into your own mind, into a personalized solipsistic fantasy world where you are the one true god. Moreover, you have decided that this private heaven is preferable to the world we all share together, the real world where you don’t always get what you want, and things are often out of your control.

What’s interesting about such scenarios is that with consequences removed from the equation there is not necessarily anything wrong with such behavior, and yet on some level it is still viscerally disturbing.

In the future we are all going to be hopelessly dependent on our technology. That’s already true. In a way it’s a moot point. The big question will be, do you want to withdraw into a world of your own choosing? Or do you want to stay here in “the real world” with us?

Start Preparing Yourself Now For the End of Privacy



The intersection of privacy and technology gets a lot of press. It seems at least once a week an article comes out along the lines of this “Girls Around Me” story.

The dialogue about technology and privacy seems to place people into three camps:

  1. The “victims.” These are people who are unaware of how their technology works. An example would be the “poor” girls in the above story who apparently do not realize that their location and Facebook profiles are easily searchable by would-be pick-up artists.
  2. The “educated.” These are your reasonably tech savvy folks who know how to fiddle with their Facebook privacy settings and delete their Google search histories. These people make full use of available technologies, but take precautions to configure their preferences so that certain aspects of their lives remain protected. Like the author of the above article, these people tend to advocate privacy education as being the best solution.
  3. The “relinquishers.” These people simply opt out of potentially privacy-eroding technologies. (They definitely aren’t on Facebook, for example.) Interestingly this category unites both tech-fearing Luddites and tech-loving nerds such as the “linux aficionado” mentioned in the above article.

If I had to place myself in one of the above categories, I’d choose number two. But if I’m true to my own beliefs, what I really think is this:

Privacy is going away. And no amount of fiddling with settings, educating yourself, or opting out is going to help.

Think of the following list of technology trends. Then imagine these technologies maturing and linking up with each other:

  • better integration of global positioning systems
  • improved and ubiquitous face recognition
  • smaller and more pervasive cameras
  • smaller and higher capacity hard drives for storing video and other recorded data
  • more widespread cloud and network access
  • better algorithms for search and data analysis
  • improved 3D scanning and modeling
  • phones embedded in glasses and contacts

I’m probably leaving some things out. But I think if you run the thought experiment and put all this together, here’s the world you get in very short order:

  • Everything you do in public will be recorded from multiple angles, put online, and made searchable by people armed with only a few fragments of data about you (first name and city, for example).
  • Anything you do in private with other people present will probably also be recorded in some form with a high chance of leakage out into the world. That is unless you take great pains to prevent this from happening.
  • Anything you do completely alone will potentially be spied upon unless you are extremely rigorous about protecting yourself. Moreover, your likely behavior during such “blackout periods” will often be inferable from the surrounding recorded periods in your life.

In this scenario, opting out of social networks and configuring privacy settings will not help you. Opting out will not prevent your face and location from being recorded by other people. And opting out will not prevent other people or impersonal algorithms from tagging this data with your name.

I anticipate a future where most crimes are impossible to get away with. A future where adulterers, liars, and gossipers get caught immediately. Where there is no longer a clear division between work, family, and social life. Where large numbers of people will have naked pictures, or at least body scans, available somewhere online. Where your entire personal history will be recallable at a moment’s notice.

Because I believe this, I have adopted the opposite strategy from what some people are recommending. I am not trying to protect my privacy by fiddling with settings. Instead I am readying myself for the end game and acclimating myself to a future with no privacy. I am actually trying to share more information, be more open, be less secretive, and be the same person, at all times, regardless of what company I am in. I am trying to construct a life for myself where I truly have nothing to hide. And I am doing this not because I necessarily want to, but because it seems like the wise transition to start making given the reality of these technologies.

Q: So is Technological Progress Accelerating or Not?



IT’S IMPORTANT WE AGREE ON AN ANSWER TO THIS QUESTION

An early self-driving car

Accelerating technological progress is not just an abstract idea. If true, it has implications regarding all our biggest life choices: what to study, what job to get, whether to save money, and whether to have kids. Not to mention bigger policy and governance issues that affect our society at large.

In futurism circles, accelerating progress seems to be slowly emerging as a consensus view. However, there is still plenty of dissent on this issue, and possibly for good reason. So this post is going to lay out what I believe to be the three main arguments for accelerating progress.

OKAY BUT WHAT DO I MEAN BY “ACCELERATING PROGRESS”

I mean that our technology is advancing at a greater than linear rate. That’s it. I don’t want to get into arguments about the exact nature of the curve, and whether it is precisely exponential or not. Instead I simply mean to defend the proposition that the rate of progress is speeding up, rather than following a linear or decelerating trajectory.

(1) THE SUBJECTIVE ARGUMENT

To many of us, it simply feels like things are moving faster. I’ve only been on this planet thirty years, but I’ve lived through the personal computer revolution, the rise of the internet, the adoption of cellphones, and the wide-scale deployment of smart phones. Very soon I will witness the release of autonomous cars and the dawn of augmented reality. Each major technological development seems to come faster than the previous one and to be increasingly disruptive of existing economic and cultural norms.

BUT NOT EVERYONE EXPERIENCES IT THAT WAY


There are many thinkers for whom it doesn’t feel like things are speeding up. Economist Tyler Cowen is a good example. In The Great Stagnation he writes:

“Today, in contrast, apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space age television cartoon from the 1960s, have not come to pass. You don’t have a jet pack. You won’t live forever or visit a Mars colony. Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.”

Cowen is strangely dismissive of this “seemingly magical internet.” As far as technologies go, the internet is not like a car or a refrigerator. It is a way of connecting people to each other: a very fundamental thing, a general purpose technology that affects all facets of the economy. But that said, this quote is primarily a subjective statement. If Cowen feels like things haven’t changed very much in the last fifty years, then I can’t really argue with that. I just happen to feel differently.

Peter Thiel

Another acceleration skeptic is prominent venture capitalist Peter Thiel. In a recent interview, he said:

“I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).

“There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had.”

Again, in order to make his case, Thiel must treat the internet as an exception, which I still find odd. But Thiel is absolutely right that in plenty of technological areas we have underperformed, at least with regards to prior expectations. This notion of prior expectations is important. Cowen, Thiel, and other stagnationists are fond of invoking jet packs and other classic science fiction tropes as evidence of our lack of progress. For example, in this talk, Thiel mentions how we once envisioned “vacations to the moon.” And in his essay Innovation Starvation, stagnationist Neal Stephenson begins by asking “where’s my ticket to Mars?”

MAYBE OUR EXPECTATIONS WERE JUST INCORRECT

A jetpack prototype from 1968

It should go without saying that our failure to build a world that resembles science fiction novels of the fifties and sixties should not necessarily have any bearing on how we evaluate our current technological position. In many ways the present day is far more advanced than our prior imaginings. After all, pocket-sized devices that give you instant access to all the world’s knowledge are certainly nothing to scoff at. It’s just that the technological progress we’ve ended up getting is not necessarily the same progress we once expected. I’d call that a failure of prediction, not a failure of technology.

REAL VS. VIRTUAL PROGRESS

Perhaps the focus of technology has simply shifted from growing “outward” to growing “inward.” Rather than expanding and colonizing the stars, we have been busy connecting to each other, exploring the frontiers of our own shared knowledge. And perhaps this is absolutely what we should be doing. Looking ahead, what if strong virtual reality turns out to be a lot easier (and more practical) than space travel? Why go on a moon vacation if you can simulate it? Thiel laments that “we simply aren’t moving any faster,” but one could argue that our ears, eyes, and thoughts are moving faster than ever before. At what point does communication start to substitute for transportation?

At the heart of the stagnationists’ arguments I sense a bias in favor of “real things” and against “virtual things.” Perhaps this perspective is justified, since if we are talking about the economy, it is much easier to see how real things can drive growth. As for virtual things driving growth—the jury’s still out on that question. Recently we’ve seen a lot of value get created virtually and then digitally distributed to everyone at almost no cost to the consumer. And many of today’s most promising businesses are tech companies that employ very few people and generate a lot of their value in the form of virtual “bits.” Cowen himself nails this point clearly and succinctly in the third chapter of his book, where in writing about the internet, he states “a lot of our innovation has a tenuous connection to revenue.”

(2) THE EMPIRICAL ARGUMENT

Until we can agree on a standardized way to measure technological progress, all of the above discussion amounts to semantics. What is the “value” of the internet when compared to moon vacations? How many “technological progress points” does an iPhone count for? One man’s progress is another man’s stagnation. Without a relevant metric, only opinions remain.

Although no definitive measure exists for the “amount of technology” a civilization has, it might be possible to measure various features of the technological and economic landscape, and from these features derive an opinion about the progress of technology as a whole.

USING ECONOMIC MEASURES

Real median family income has stagnated

In making their case for stagnation, Cowen and Thiel commonly cite median wages, which have been stagnant since the 1970s. Cowen writes, “Median income is the single best measure of how much we are producing new ideas that benefit most of the American population.” While these median wage statistics are interesting and important, they are absolutely not a measure of our technological capability. Rather they represent how well our economic system is compensating the median worker. While this is a fairly obvious point, I think it is an important one. It’s easy to fall into the trap of conflating technological health with economic health, as if those two variables are always going to be synchronized to each other. It seems much more logical to blame stagnant median wages on a failure of our economic system rather than a failure of our technology.


Certainly one can tell a story about how it is a technological slowdown that is causing our stagnant median wages. But one can also tell the opposite story, as Erik Brynjolfsson and Andrew McAfee do in Race Against the Machine:

“There has been no stagnation in technological progress or aggregate wealth creation as is sometimes claimed. Instead, the stagnation of median incomes primarily reflects a fundamental change in how the economy apportions income and wealth. The median worker is losing the race against the machine.”

Regardless of which story is right, if we start with the question “is technological progress accelerating,” I don’t think the median wage statistic can ever provide us more than vague clues. It’s doubtful whether we can rely on a “median” measure at all. There is no law guaranteeing that technological gains will be shared equally or that they will necessarily disseminate down to the median person. Cowen himself expresses this idea when he writes “a lot of our recent innovations are private goods rather than public goods.”

Productivity growth, unlike median income, has been growing.

There are of course other economic measures besides the median wage that might correlate more closely with technological progress. Productivity is a good example. However, the medium of money guarantees that such economic measures will always be at least one degree removed from the technology they are trying to describe. Moreover, it is difficult to calculate the monetary value of some of our more virtual innovations because of the “tenuous connection between innovation and revenue” mentioned above.

COUNTING TECHNOLOGICAL ACHIEVEMENTS

Another strategy for measuring technological progress is to count the frequency of new ideas or other important technological landmarks.

In The Great Stagnation, Cowen cites a study by Jonathan Huebner which claims we are approaching an innovation limit. In the study, Huebner employs two strategies for measuring innovation.

The first method involves counting the number of patents issued per year. Using patents to stand in for innovation strikes me as strange, and I’m sure many people who are familiar with the problems plaguing our patent system would agree. A good critique comes from John Smart, who writes:

“Huebner proposes that patents can be considered a “basic unit of technology,” but I find them to be mostly a measure of the kind of technology innovation that humans consider defensible in particular socioeconomic and legal contexts, which is a crude abstraction of what technology is.”

Huebner’s other method involves counting important technological events. These events are taken from a list published in The History of Science and Technology. Using this data, Huebner produces the following graph.

As you can see, the figure shows our rate of innovation peaking somewhere around the turn of the century, and then dropping off rapidly thereafter.

ANY LIST OF IMPORTANT EVENTS IS HIGHLY SUBJECTIVE

While counting technological events is an interesting exercise, it’s hard to view such undertakings as intellectually rigorous. After all, what criteria make an event significant? This is not a simple question to answer.

Things get more complicated when one considers that all innovations are built upon prior innovations. Where does one innovation end and another innovation start? These lines are not always easy to draw. In the digital domain, this problem only gets worse. The current debacle over software patents is symptomatic of the difficulty of drawing clear lines of demarcation.

By way of example, ask yourself if Facebook should count as an important innovation landmark. One can easily argue no, since almost all of Facebook’s original features existed previously on other social networking sites. And yet Facebook put these features together with a particular interface and adoption strategy that one could just as easily argue was extremely innovative. Certainly the impact of Facebook has not been small.

OTHER ATTEMPTS TO EMPLOY THE EVENT-COUNTING STRATEGY

In The Singularity is Near, Ray Kurzweil also attempts to plot the frequency of important technological landmarks throughout time. However, instead of using just one list of important events, he combines fifteen different lists in an attempt to be more rigorous. In doing so, he reaches the opposite conclusion of Huebner: namely that technological progress has been accelerating throughout all of Earth’s history, and will continue to do so.

Which is not to say Kurzweil is right and Huebner is wrong (in fact there are methodological problems with both graphs), but that this whole business of counting events is highly subjective, no matter how many lists you compile. I think if we want to find a useful empirical measure of our technological capabilities, we can do better.

MEASURING THE POWER OF THE TECHNOLOGY DIRECTLY

The following definition of technology comes from Wikipedia:

“Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems or methods of organization in order to solve a problem or perform a specific function.”

So if we want to measure the state of technology, it follows that we might want to ask questions such as “how many functions can our technology perform?” “how quickly?” and “how efficiently?” In short: “how powerful is our technology?”

Of course this quickly runs into some of the same problems as counting events. How do you define a “specific function?” Where does one function end and another begin? How can we draw clear lines between them?

THE SPECIALNESS OF COMPUTERS SHOULD NOT BE OVERLOOKED

Fortunately some of these problems evaporate with the arrival of the computer. Because if technology’s job is to perform specific functions, then computers are the ultimate example of technology. A computer is essentially a tool that does everything. A tool that absorbs all other technologies, and consequently all other functions.

In the early days of personal computing it was easy to see your computer as just another household appliance. But these days it might be more appropriate to look at your computer as a black hole that swallows up other objects in your house. Your computer is insatiable. It eats binders full of CDs, shelves full of books, and libraries full of DVDs. It devours game systems, televisions, telephones, newspapers, and radios. It gorges on calendars, photographs, filing cabinets, art supplies and musical instruments. And this is just the beginning.

Along the same lines, Cory Doctorow writes:

“General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”

In fact, computers and technology writ large seem to be merging together so rapidly that using a measurement of one to stand in for the other seems like a pretty defensible option. For this reason I feel that computing power may actually be the best metric we have available for measuring our current rate of technological progress.

Using computing power as the primary measure of technological progress unfortunately prevents us from modeling very far back in history. However, if we accept the premise that computers eventually engulf all technologies, this metric should only get more appropriate with each passing year.

MOORE’S LAW

When it comes to analyzing the progress of computing power over time, the most famous example is Moore’s Law, which has correctly predicted for over forty years that the number of transistors we can cram onto an integrated circuit will double every 24 months.
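To make the compounding concrete, here is a minimal Python sketch of that doubling schedule. The seed value of roughly 2,300 transistors (the count of the first commercial microprocessor, the Intel 4004) is purely illustrative, not a claim about any particular product roadmap:

```python
# A quantity that doubles every 24 months, seeded with the roughly
# 2,300 transistors of the Intel 4004 (illustrative starting point).
def transistors(years, start=2300, doubling_period=2):
    """Project a transistor count `years` out, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for y in (0, 10, 20, 30, 40):
    print(f"year {y:2d}: ~{transistors(y):,.0f} transistors")
```

Forty years of doublings turns a few thousand transistors into a few billion, which is roughly the trajectory real chips followed.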

How long Moore’s law will continue is of course up for debate, but based upon history the near-term outlook seems fairly positive. Of course, Moore’s Law charts a course for a relatively narrow domain. The number of transistors on a circuit is not an inclusive enough measure to represent “computing power” in the broader sense.

One of Ray Kurzweil’s more intriguing proposals is that we expand Moore’s law to describe the progress of computing power in general, regardless of substrate:

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.”

“As the figure demonstrates there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price performance of computing long before integrated circuits were even invented.”

Measured in calculations per second per $1000, the power of computers appears to have been steadily accelerating throughout the last century, even before integrated circuits got involved.

OTHER MEASURES OF COMPUTING POWER

While I like Kurzweil’s price-performance chart, the $1000 in the denominator ensures that this is still an economic variable. Including money in the calculation inevitably introduces some of the same concerns about economic measures mentioned earlier in this essay.

So to eliminate the medium of money entirely, we might prefer a performance chart that tracks the power of the absolute best computer (regardless of cost) in a given time period. Fortunately, Kurzweil provides something very close to such a chart with his graph of supercomputer power over time.

THE NETWORK AS A SUPERCOMPUTER

Just as all technology is converging toward computers, there is a sense in which all computers are merging together into a single global network via the internet. This network can itself be thought of as a giant supercomputer, albeit one composed of other smaller computers. So by measuring the aggregate size of the network we might also get a strong indication of our current rate of computing progress.

Please note that I do not necessarily support many of Kurzweil’s more extreme claims. Rather I am simply borrowing his charts to make the narrow (and fairly uncontroversial) point that computing power is accelerating.

THE SOFTWARE PROBLEM

While increasing computer power makes more technological functions possible, a bottleneck might exist in our ability to program these functions. In other words, we can expect to have the requisite hardware, but can we expect to have the accompanying software? Measuring the strength of hardware is a straightforward process. By contrast, software efficacy is a lot harder to quantify.

I think there are reasons to be optimistic on the software front. After all, we will have an ever-growing number of people on the planet who are technologically enabled and capable of working on such problems. So the notion that software challenges are going to stall technological progress seems unlikely. That’s not a proof, of course. Software stagnation is possible, but anecdotally I don’t see evidence of it occurring. Instead I see Watson, Siri, and the Google autonomous car, and get distinctly the opposite feeling.

ULTIMATELY NO METRIC IS PERFECT

At this point, you still may not accept my premise of a growing equivalence between computers and technology in general. Admittedly, it’s not a perfect solution to the measurement problem. However, the idea that available computing power will play a key role in determining the pace of technological change should not seem far-fetched.

(3) THE LOGICAL ARGUMENT

Empirical analysis is useful, but as is clear by now, it can also be a thorny business. In terms of explaining why technological progress might be accelerating, a simple logical argument may actually be more convincing.

A feedback loop

A key feature of technological progress is that it contributes to its own supply of inputs. What are the inputs to technological innovation? Here is a possible list:

  • People
  • Education
  • Time
  • Access to previous innovations
  • Previous innovations themselves

As we advance technologically, the supply of all five of these inputs increases. Historically, technological progress has enabled larger global populations, improved access to education, increased people’s discretionary time by liberating them from immediate survival concerns, and provided greater access to recorded knowledge.

Moreover, all innovations by definition contribute to the growing supply of previous innovations that new innovations will draw upon. Many of these innovations are themselves “tools” that directly assist further innovation.

Taking all this into account, we can expect technological progress to accelerate, as with any feedback loop. The big variable that could defeat this argument is the possibility that useful new ideas might become harder to find with time.

However, even if finding new ideas gets harder, our ability to search the possibility space will be growing so rapidly that anything less than an exponential increase in difficulty should be surmountable.
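The feedback claim is easy to check with a toy simulation. The following is a rough sketch with invented parameters, not a model of the real economy: the stock of prior innovations drives the rate of new innovation, while useful ideas grow steadily harder to find.

```python
# Toy feedback loop: new innovations add to the stock that produces
# further innovations, while difficulty rises linearly over time.
# All parameter values are made up for illustration.
def simulate(periods=50, stock=1.0, feedback=0.10, difficulty_growth=0.0):
    difficulty = 1.0
    for _ in range(periods):
        new_ideas = feedback * stock / difficulty  # output scales with the stock...
        stock += new_ideas                         # ...and feeds back into it
        difficulty += difficulty_growth            # ideas get harder to find
    return stock

print(simulate(difficulty_growth=0.00))  # pure feedback: exponential growth
print(simulate(difficulty_growth=0.01))  # rising difficulty: slower, but still far above linear
```

Even when difficulty rises linearly forever in this toy model, the innovation stock keeps growing at a fast polynomial rate rather than stalling, which is the intuition behind the claim above.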

CONCLUSION: THE PLAUSIBILITY OF RAPID CHANGE SHOULD BE CONSIDERED

Although some skepticism of these arguments is still warranted, their combined plausibility means we should consider outcomes in which change occurs much more rapidly than we might traditionally expect. Clinging to a linear perspective is not a good strategy, especially when so much is at stake. In short, we should question any long-term policy or plan that does not attempt to account for significantly different technology just ten or even five years from now.

A Detailed Critique of “Race Against the Machine”




FIRST OF ALL, THIS IS AN EXTREMELY IMPORTANT BOOK

Race Against the Machine deserves praise for jump-starting an important discussion about the effect of technology on our economy. As the authors point out, the impact of computers and information technology has been largely left out of most analysis regarding causes of our current unemployment woes. This book, therefore, is an attempt to “put technology back in the discussion.”

BUT IT SUFFERS FROM SCHIZOPHRENIA

The book is divided into roughly two halves: one pessimistic and one optimistic. The first three chapters comprise the pessimistic portion and make a compelling case for how accelerating technological progress and rapid productivity gains are not only creating unemployment, but also contributing to greater inequality. The last two chapters take a more optimistic tone and attempt to lay out possible solutions to these problems.

Unfortunately, the pessimistic chapters are much more convincing. The net effect is rather dissonant, not unlike a general rounding up his troops and announcing, “Gentlemen, we’re outmatched in every possible way. Now go out there and win!”

An autonomous car at Stanford University

A QUICK SUMMARY OF THE “PESSIMISTIC” CHAPTERS

Race Against the Machine begins by building a strong argument for technological unemployment. The authors draw our attention to recent innovations like self-driving cars and Jeopardy-winning computers and show how such advances threaten to encroach further and further into realms once dominated by human labor. Such technologies ultimately affect all sectors of the economy since computers are a prime example of what economists call a “General Purpose Technology.” Like steam power and electricity, computers reside in a category of innovation so powerful that they “interrupt and accelerate the normal march of economic progress.” Moreover, thanks to Moore’s law, computers are improving at an exponential rate, so we can expect very disruptive changes to arrive very quickly indeed.

Making matters worse, a host of related trends result in the benefits of technological progress not being shared equally. Not only is the value of highly skilled workers diverging sharply from that of low skilled workers, but the value of capital is increasing relative to that of labor. Also important is the “superstar effect.” Technological advances extend the reach of superstars in various fields and allow them to win ever larger market shares. This comes at the expense of the many, since “good, but not great, local competitors are increasingly crowded out of their markets.” Putting all this together you get the central thesis of Race Against the Machine: namely, that digital technologies are outpacing the ability of our skills and organizations to adapt.

A humanoid robot

HOW THE AUTHORS DISTANCE THEMSELVES FROM THE “END-OF-WORK”

In some ways this is a familiar argument. The idea that technology might replace the need for human workers on a large scale has been floating around for a long time, at least since the industrial revolution. However, it should be noted that early on the authors distance themselves from what they call the “end-of-work” crowd, thinkers like Jeremy Rifkin, Martin Ford, and even John Maynard Keynes, who have argued that with time the role of human labor is bound to diminish. Certainly, one can understand why the authors would be cagey about being associated with this idea. Historically, predictions regarding the coming obsolescence of human labor have been wildly exaggerated, and economists generally view such arguments as fallacious.

So what differentiates Race Against the Machine from a more traditional end-of-work argument?  According to the authors:

“So we agree with the end-of-work crowd that computerization is bringing deep changes, but we’re not as pessimistic as they are. We don’t believe in the coming obsolescence of all human workers. In fact, some human skills are more valuable than ever, even in an age of incredibly powerful and capable digital technologies. But other skills have become worthless, and people who hold the wrong ones now find that they have little to offer employers. They’re losing the race against the machine, a fact reflected in today’s employment statistics.”

In short, the authors are more optimistic because they believe there will still be plenty of jobs for humans in the future. We just need to update our skills and organizations to cope with new digital technologies, and then we will be able to create new avenues of employment and save our struggling economy.

BUT THE “OPTIMISTIC” CHAPTERS AREN’T SO ENCOURAGING

Which brings us to those last two chapters, the unconvincing ones I alluded to earlier. For I believe Race Against the Machine suffers from the same problem as a lot of nonfiction books: It does a great job of stating the problem, but a not-so-great job of laying out the solution.

Human-computer teams competing at “cyborg chess”

SO HOW ABOUT WE RACE WITH MACHINES?

The first suggestion the authors make can be summarized as “race with machines.” A human-machine combo has the potential to be much more powerful than either a human or a machine alone. So it’s not simply a question of machines replacing humans; it’s a question of how humans and machines can best work together.

I don’t disagree with this point on the surface. But I fail to see how it suggests a way out of our current predicament. The human-machine combo is a major cause of the superstar economics described earlier in the book. Strengthen the human-machine combo and the superstar effect will only get worse. In addition, if computers are encroaching further and further into the world of human skills, won’t the percentage of human in the human-machine partnership just keep shrinking? And at an exponential pace?

Moreover, as I’ve written about before on this site, the human-machine partnership can sometimes be less than the sum of its parts. Consider the example of airline pilots:

“In a draft report cited by the Associated Press in July, the agency stated that pilots sometimes “abdicate too much responsibility to automated systems.” Automation encumbers pilots with too much help, and at some point the babysitter becomes the baby, hindering the software rather than helping it. This is the problem of “de-skilling,” and it is an argument for either using humans alone, or machines alone, but not putting them together.” (link)

PERHAPS ORGANIZATIONAL INNOVATION WILL SAVE US

The authors go on to discuss the importance of “organizational innovation.” In particular, they discuss the creation of new business platforms that might empower humans to compete in new marketplaces.

Again, I agree in theory. Certainly some new platform may hold the key to productively mobilizing the unemployed. But current examples are far from encouraging. The authors cite websites like eBay, Apple’s App Store, and Threadless. An obvious point would be that the kind of people who are able to hustle and make a living on such websites are not exactly average workers in any sense of the word. Not everyone can run an online retail store, program an app, or design a t-shirt. But that’s beside the point. The question we should be asking is whether such online marketplaces will grow in the future. Perhaps they will expand to the point that they can encompass more and more ordinary workers?

I am highly skeptical. Once again, superstar economics apply here, since effectively everyone in these markets is competing with everyone else. One potential solution is the growth of niche markets. If you focus on selling unique items to a niche audience perhaps you can carve out your own little market in which you are the lone superstar.

But this idea also has its problems. How many niches can there possibly be? Enough to provide employment for the legions of truck drivers and supermarket checkers who may soon be exiting the workforce?

When discussing technology and unemployment, I think it is important not to leave out digital abundance. Digital abundance has the potential to be just as disruptive as automation. Traditional businesses are under attack from two sides: services are being automated, while at the same time goods are being digitized.

Imagine the domestic entrepreneur who has started his own eBay store. He sells niche action figures to a few enthusiastic fans. Nonetheless, enough fans exist that he can make a decent living, all thanks to the wonders of eBay’s “organizational innovation.”

Objects made using a 3D printer

Enter affordable desktop 3D printing, a technology that is rapidly arriving on the scene. All of a sudden, once-eager customers can buy cheap raw materials and print all the action figures they want. Digital files containing the precise specs for figures get designed, released, and traded extensively on file sharing sites. An explosion of innovation for sure, but also a potential threat to a business model that focuses too much on the sale of unique tangible goods.

Thought experiments like this reveal why intellectual property and digital rights management are going to become increasingly hotly debated issues. As tangible goods become digitized they go from being tangible property to intellectual property. So the efficacy of a lot of future businesses depends on the efficacy of intellectual property, and a survey of recent history quickly reveals the troubles inherent in this area.

OKAY, BUT AREN’T THERE OTHER TYPES OF ORGANIZATIONAL INNOVATION?

So far I have focused on marketplaces for goods. I should note that there are also online labor marketplaces like Taskrabbit and Mechanical Turk. These websites provide a great service by efficiently matching demand for labor to humans willing to work. While increasing efficiency is beneficial, such websites will be of limited help if demand for average-skilled labor falls in the aggregate.

Now I don’t want to sound overly pessimistic. In general, I would agree that the unemployed represent a huge slack resource, and quite possibly somebody is going to come up with some previously unimagined way to harness this large pool of people. But at the moment, such organizational innovation is just a theory. I do not see the seeds of a workable solution in the current crop of platforms.

ON MICROMULTINATIONALS

As their final example of organizational innovation, the authors mention the promise of “micromultinationals.” They write:

“Technology enables more and more opportunities for what Google chief economist Hal Varian calls “micromultinationals”—businesses with less than a dozen employees that sell to customers worldwide and often draw on supplier and partner networks. While the archetypal 20th-century multinational was one of a small number of megafirms with huge fixed costs and thousands of employees, the coming century will give birth to thousands of small multinationals with low fixed costs and a small number of employees each.”

I don’t know if this quote is meant to be taken literally, but for fun let’s crunch some numbers. The coming century (100 years) will give birth to thousands (at most 9,999) of multinationals with small numbers of employees (fewer than a dozen, so at most 11 each). Therefore:

9,999 × 11 / 100 ≈ 1,100 jobs per year. Not exactly encouraging.
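Spelled out as a runnable sketch (a deliberately literal-minded upper bound, using the maxima from the quote above):

```python
# Upper-bound reading of the quote: "thousands" of micromultinationals
# (at most 9,999) founded over "the coming century" (100 years), each
# employing "less than a dozen" people (at most 11).
firms = 9_999
employees_per_firm = 11
years = 100

jobs_per_year = firms * employees_per_firm / years
print(f"at most ~{jobs_per_year:,.0f} new jobs per year")  # ~1,100
```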

[Image: Exponential growth seems mild at first and then suddenly manifests as extreme changes]

LOOKING TO COMBINATORIAL EXPLOSION FOR HOPE

Early on in the book, the authors take time to explain the incredible power of exponential growth. They discuss Moore’s law and quote Ray Kurzweil’s book The Singularity Is Near to illustrate how exponential growth can quickly shift from modest gains to “jaw-dropping” changes.

This puts us in a dire place. If we accept the authors’ premise of a losing race, in which technology (progressing exponentially) is outrunning our skills and our institutions, then how can society hope to catch up? Trying to win a race against exponential growth sounds like an impossible task.

In chapter four, the authors claim to come up with the answer: “combinatorial explosion.”

Combinatorial explosion is the idea that new ideas are combinations of two or more old ideas. Because digital technologies facilitate the easy exchange of information, and because ideas, unlike physical resources, can’t be used up, we have virtually limitless possibilities for innovation.

“Combinatorial explosion is one of the few mathematical functions that outgrows an exponential trend. And that means that combinatorial innovation is the best way for human ingenuity to stay in the race with Moore’s Law.”
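That mathematical claim, at least, checks out. As a minimal sketch (my own toy illustration, not the authors’), compare a steadily doubling exponential with n!, a standard example of combinatorial growth:

```python
import math

# A plain exponential trend (doubling at each step, a la Moore's law)
# versus n!, the number of ways to sequence n building-block ideas.
# For any fixed base, the factorial eventually dwarfs the exponential.
for n in (5, 10, 15, 20, 25):
    print(f"n={n:2d}   2^n = {2**n:>10,}   n! = {math.factorial(n):,}")
```

By n = 25 the factorial is already some seventeen orders of magnitude ahead, and the gap only widens.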

The quote suggests a strange dichotomy: as if Moore’s Law were the exclusive tool of machines, while combinatorial explosion is the exclusive tool of humans. This is clearly false. Combinatorial explosion is a huge cause of our current situation. It is a primary reason why disruptive technologies are moving so fast in the first place.

Here’s an easy example: IBM is re-purposing Watson—the Jeopardy-winning computer—to perform medical diagnosis. So here we see one idea colliding with another in true combinatorial fashion, and what’s the result? Yet another potential threat to jobs, this time in the medical field.

[Image: Technologies like Watson can readily be repurposed for a variety of uses.]

HYPERSPECIALIZATION AND INFINITE NUMBERS OF MARKETS

Hyperspecialization is the authors’ answer to the problem of superstar economics:

“In principle, tens of millions of people could each be a leading performer—even the top expert—in tens of millions of distinct, value-creating fields.”

In principle, maybe. In practice, there are huge obstacles. Again, what’s to stop one superstar-machine combo from just dominating multiple fields? Or even just one machine? In the health care industry, for example, computers like Watson are going to be able to mine the literature of all fields of medicine. After all, to a computer, what’s a few thousand more documents to read? Machines can rapidly scale up their expertise in ways that humans simply can’t.

More importantly, we should examine the term “value-creating fields.” Value under our current system is closely tied to scarcity, and digital abundance directly undermines this source of value. So once again we are confronted with an intellectual property challenge. If we are going to have an economy where everyone is an expert in a different field and produces “bits,” we are going to need a mechanism by which these non-scarce bits translate into an income. The truth is we already have numerous such experts. The Internet is overflowing with amateurs who voluntarily immerse themselves in hyper-specialized tasks purely for enjoyment, not because this path is necessarily a viable strategy for making money.

BUT DON’T WE ALL HAVE SOMETHING UNIQUE TO CONTRIBUTE?

Yes, of course. Human creativity is astounding, and everyone has something to offer. But people’s output—however unique, interesting, and valuable—will not necessarily be monetizable, especially in an abundant digital environment.

At this point, I want to return to a stray quote from earlier in the book, because I think it presents the opportunity to make an important point.

“…digital technologies create enormous opportunities for individuals to use their unique and dispersed knowledge for the benefit of the whole economy.”

When I look at the Internet and communications technologies, I see a huge threat to this “dispersed knowledge.” The Internet has a way of destroying information asymmetry, which is another important factor to consider when looking at future employment. Any jobs that depend upon having exclusive access to knowledge that no one else has are potentially at risk in a world where increasingly everyone is connected and data is widely shared.

[Image: “Superstar” teacher: Salman Khan]

INVESTMENT IN HUMAN CAPITAL

Education seems like the most straightforward solution to our problem. If our skills are falling behind, then we’d better acquire new skills, right?

I am highly dubious of education’s ability to solve this problem. For one, the most promising experiments in education right now (those that use technology aggressively, like Khan Academy and Stanford’s online courses) have the potential to create unemployment in the education field itself. In addition to the long-term promise of fully automated learning environments, superstar economics rears its head once again. After all, Khan Academy is built around a superstar teacher: Khan himself. And Stanford’s recent Artificial Intelligence course allowed one professor to effectively reach 58,000 students.

The authors do present an important counter:

“Local human teachers, tutors, and peer tutoring can easily be incorporated into the system to provide some of the kinds of value that the technology can’t do well, such as emotional support and less-structured instruction and assessment. For instance, creative writing, arts instruction, and other “soft skills” are not always as amenable to rule-based software or distance learning.”

Person-to-person interaction is indeed an important aspect of a lot of teaching, and it won’t be vanishing any time soon. However, it arguably becomes less important the further you move up the educational ladder. A fifth grader needs lots of emotional support and hands-on instruction, but a self-directed higher-education student may need almost none. College is so expensive right now that I can imagine people increasingly forgoing it in favor of cheaper, more automated learning options.

But even with person-to-person learning, technological advances mean that single tutors or teachers will increasingly be able to meet the needs of more and more students. If the learning software does a halfway decent job, then the necessity of human intervention should decrease with time.

In addition, going back to the idea of digital abundance, such human intervention may be increasingly available for free. Already, it’s stunningly easy to go on the Internet and find volunteers who will provide emotional support and helpful feedback, at zero cost. Online communities around “soft skills” like creative writing are particularly vibrant, and offer the opportunity to develop a craft with a huge support network that easily rivals what you would get from a traditional paid learning experience.

I think there is a cultural bias toward judging online interactions as somehow always less valuable than real-space interactions. With every passing year, as the resolution of communications technologies increases, this point of view becomes increasingly absurd. Cultural norms may move slowly, but I suspect they will eventually come around on this issue.

But all of these considerations aside, there is a much bigger problem. One cannot escape the simple truth that humans learn slowly and technology advances quickly. If we take exponential growth seriously, how can education expect to keep up? Are we going to retrain unemployed truck drivers to become app programmers? Chances are by the time such retraining is complete, technology will have moved on. And don’t forget that the machines themselves will increasingly be educating themselves.

One solution might be augmenting human thinking capability. If we could upgrade humans the way we upgrade machines, then “the race” would be over, and racing with machines would make more sense. This may sound far-fetched, but in these futuristic times nothing should be ruled out. In Andrew McAfee’s own words, “Never say never about technology.” The question is when such technologies will arrive, and what societal upheaval we might be in for in the meantime.

[Image: Innovation is great but it doesn’t necessarily equal job creation.]

POLICY RECOMMENDATIONS

The authors make a series of common-sense policy recommendations that affect institutions like education, business, and law. Most of these suggestions are great and might lead to a better society, but it is unclear how any of them will create jobs. Rather, the goal of these suggestions seems to be to allow innovation and progress to flourish, which in my opinion may only accelerate the process of job loss, as per my arguments above.

The one exception might be suggestion number 13:

“Make it comparatively more attractive to hire a person than to buy more technology. This can be done by, among other things, decreasing employer payroll taxes and providing subsidies or tax breaks for employing people who have been out of work for a long time. Taxes on congestion and pollution can more than make up for reduced labor taxes.”

I suppose this might help keep some jobs around longer, but at the expense of investment in technology, which presumably we would want to encourage. This seems directly at odds with the authors’ other, more pro-innovation suggestions. Do we want technological progress or not?

RE-EVALUATING THE DESIRABILITY OF “WORK”

One solution the authors quickly dismiss is wealth redistribution. Their reasoning?

“While redistribution ameliorates the material costs of inequality, and that’s not a bad thing, it doesn’t address the root of the problems our economy is facing. By itself, redistribution does nothing to make unemployed workers productive again. Furthermore, the value of gainful work is far more than the money earned. There is also the psychological value that almost all people place on doing something useful. Forced idleness is not the same as voluntary leisure.”

As a culture we’re deeply attached to the idea of jobs, but I suspect many of us wouldn’t have too much trouble getting over our attachment.

I think a distinction ought to be made between wage labor and other perfectly meaningful ways of occupying one’s time. Looking ahead, perhaps our cultural reverence for wage labor is misplaced. After all, one way to look at wage labor is as a mechanism that forces us to spend time on what the short-term market thinks is valuable, rather than on what we as individuals think is valuable. Sure, lots of people love their jobs. But lots of people hate their jobs too. If you liberated all the people who hate their jobs from the constraints of wage labor, chances are a decent portion of them would find something more productive to spend their time on. And society as a whole might benefit tremendously.

I do not rule out the possibility that eventually we may find a way to gainfully employ everybody while keeping our current system intact. If some human innovator does not crack this problem, then perhaps a sufficiently powerful computer will. But I shouldn’t have to point out the absurdity of asking, in effect, “Hey HAL, can you figure out what jobs we should be doing?” It seems to me that long before then, we ought to be re-examining whether we even want traditional jobs in the first place. Perhaps we should be working with machines in order to win the race against labor.

The authors repeatedly state that our institutions are losing a race with technology. But they do not consider the possibility that our economy itself might be one of these trailing institutions.

NOT TO OVERSTATE THE LEVEL OF DISAGREEMENT

Near the end of the book, the authors do admit that there are “limits to organizational innovation and human capital investment.” So most likely they would not be too surprised by many of the criticisms I’ve leveled above.

And I would agree with the authors that, excepting the jobs issue, all of these new technologies are unequivocally a good thing. There is clearly a lot of cause for optimism in the general sense.

[Image: If technology significantly brings down the cost of goods like health care, unemployment won’t be so threatening.]

In addition, if we can advance technology quickly enough, there are two long-term solutions to the unemployment problem. The first, direct intelligence augmentation, I have already mentioned above. The second involves the cost of living. Specifically, if technology can significantly lower the cost of living, then declining income prospects for average individuals will not sting so much. However, to accomplish this we would have to see dramatic drops in the price of essential goods like housing and health care, drops which may happen eventually but may not arrive quickly enough to prevent social unrest.

CONCLUSION

In the final assessment, I think my biggest issue with this book is the way the authors fail to effectively distinguish themselves from the “end-of-work” crowd. After stating in the opening chapter that they are “more optimistic” about the future of human labor, they do not present any credible reasons for such optimism. The authors may claim they do not believe in the “end of work,” but their claims will not prevent me from filing them next to Martin Ford and Jeremy Rifkin on my bookshelf.