Transcript: Dr. James Hughes on What is Technoprogressivism?



This transcription was graciously provided by Gerd Leonhard of the Futures Agency.

The original audio version is available here.

In this episode, we talk with Trinity College professor and Institute for Ethics and Emerging Technologies (IEET) founder Dr. James Hughes about the political term Technoprogressive and the recent Technoprogressive Declaration he helped develop (and we here at RTF have signed). Hughes contextualizes the movement as a new, techno-optimistic wing of the traditional Enlightenment liberal project, and portrays Technoprogressivism as the left-wing counterpart to the noisy Libertarian wing of the futurist movement. We talk about the position of the technoprogressive movement on a host of issues, including universal basic income, longevity enhancement, and how to promote a techno-optimistic viewpoint specifically within the American Left, which has developed a sometimes-justified suspicion of technological solutions to problems.

[0:00:00]

Announcer: Welcome to Review the Future, the podcast that takes an in-depth look at the impact of technology on culture.

Ted: I’m Ted Kupper.

Jon: I’m Jon Perry.

Ted: And today, we’re asking the question, what is technoprogressivism?

Jon: So, today, I’m very excited to be here with our guest, Dr. James Hughes, who both co-founded and now serves as executive director of the Institute for Ethics and Emerging Technologies. He holds a doctorate in sociology and lectures on health policy at Trinity College, and he’s the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future.

Dr. James Hughes, welcome to the podcast.

James: Delighted to be here.

Jon: So, on this podcast, we talk a lot about radical future technologies; and in discussing these topics, we can’t help but again and again run into issues of policy and governance. One of the things that excites me about your work is that you really seem to be focused on this overlap between far futurism and policy. But specifically, where I want to begin today is that recently in Paris, I understand you’re involved in the drafting of a Technoprogressive Declaration that states the core principles of a political stance called technoprogressivism. So perhaps you could start by just helping us to understand what this term means and its origins.

James: Well, to do that, I need to rewind a little bit of my own history. I came out of the left, and I still consider myself a man of the left. And when I realized that I was also a futurist and a techno-optimist and what we now call a transhumanist, which was about 25 years ago, I hooked up with the transhumanists as they were at the time, the Extropian mailing list, and that was not a pleasant meeting, because the Extropians back in the early ’90s on the internet were dominated by libertarians and anarcho-capitalists, so we didn’t have a very pleasant conversation.

And I realized that I needed to find my intellectual ancestry and kin, and I knew that the left had not always been as Luddite as it is today. And so I started a number of projects, one of which was eventually the Changesurfer radio program, and then hooking up with the World Transhumanist Association, which was much more politically diverse than the Extropians.

And through those different investigations, I began to discover an intellectual lineage that goes back to at least the Enlightenment, to people like Condorcet, who was one of the French revolutionaries who contributed to the Enlightenment thinking of the French Revolution, and who was also what we would now call a transhumanist. He believed that technology would eventually eliminate death, eliminate work, eliminate slavery, and eliminate ignorance.

And so that lineage has been there. So what I consider technoprogressivism to be is just the contemporary application of that combination of Enlightenment ideas of techno-optimism and faith in human reason with ideas about democracy, human emancipation, egalitarianism, and so forth; that nexus of ideas that occurred during the French Revolution.

It needs a new name today because the left has lost a lot of that faith in reason and that techno-optimism, sometimes for good reasons, because the 20th century gave us plenty of reasons to be suspicious of unbridled techno-optimism or unbridled faith in human reason. And so, there are a lot of legitimate critiques to be made, and I think that technoprogressives themselves, in general, the ones I know, are able to incorporate that understanding of history.

It’s also a term that grew up out of the last decade of attempting to build a transhumanist movement and realizing that transhumanism and the various allied movements around it, futurism and so forth, are just too shallow a set of ideas to build a political movement around.

If you simply believe that it’s good for people to have access to cognitive enhancement drugs, but in the room, one group wants to eliminate democratic states altogether and have no drug and device regulation, and the other side of the room wants to have drug and device regulation and a national healthcare regime that would make every cognitive enhancement drug accessible, well, you’re not going to get very far with your political program. I mean you agree about one thing, but you don’t agree about a lot of other important things.

And that’s been the nature of our experience with transhumanism. So, when we started the Institute for Ethics and Emerging Technologies ten years ago, it was in the context of the growing realization that to have a public policy framework to work within transhumanism, we needed to have at least some basic agreements about the legitimacy of democratic states and the need for drug and device regulation, universal access, and so forth.

That was also the time that I had written my book, Citizen Cyborg, in which I argued for democratic transhumanism. That term never caught on with anybody other than me, but then quickly the term “technoprogressivism” became descriptive for some people within the IEET of the position that we were carving out, and that’s what we’ve run with so far.

Now, there are a lot of other terms out there. Social futurism has been promoted by Amon Twyman. As I say, the ideas aren’t particularly novel, so you could just say traditional left-wing thinking or something like that. But it does express a certain nexus of concerns about human enhancement, life extension, existential risks, and so forth, things that are of contemporary concern to the transhumanist or futurist community and that are often points of contention in our participation in left and socially progressive social movements.

Jon: We want to kind of get maybe specific about some of the platform of technoprogressivism as defined by that declaration. One issue that we talk about a lot on the podcast is technological unemployment, and that often leads to discussions of unconditional basic income. My understanding is that the technoprogressive platform is in support of that idea, and that’s a very old idea. But, of course, in America, it doesn’t get discussed very often these days.

Ted: Or not very seriously.

Jon: Yeah, although it seems to be resurfacing a little bit. I wonder how you feel about the current debate or lack of debate on that issue and how to go about selling an idea like unconditional basic income to people in this country.

James: I think we’re really at a tipping point for this particular idea, for both the realization that technological unemployment is an inevitability and the realization that a basic income guarantee is a desirable social policy. We’re probably at the same point with this debate that we were with gay marriage maybe 15 years ago, where the tipping was beginning and we just couldn’t foresee how quickly it was all going to fall into place.

[0:09:59]

We’ve had a decline in the proportion of Americans who are in paid employment since 2000. We had a steady increase up to the year 2000, and it’s been steadily decreasing since then. Part of that is the demographic shift of Baby Boomers beginning to retire, but that’s an important part of our technoprogressive analysis of the situation. As good social futurists, we should be looking at all the dimensions of the situation, not just the technological.

The demographic dimension is that we’re going to have this big bulge of older people who, in the current way that we do pensions and retirement and old age, are going to become dependent on the social welfare state and stop working, and that’s a part of this picture. The other part is that technology is going to, increasingly and probably from our perspective exponentially, begin to erode human employability.

So there aren’t very many public policies available. We’ve done a recent special issue of the Journal of Evolution and Technology where we published one very comprehensive review of all the possible public policies that could address technological unemployment if it begins to emerge more clearly, as we think it will.

So, the first thing people may say is, well, we could ban some technologies. In New Jersey, you’re not allowed to pump your own gas. You have to have a gas attendant pump your gas. Well, that’s going to be pretty annoying to people, especially if the cost and efficiency and quality advantages of the new technologies completely outstrip the ability of humans to provide them. So, if we say you have to go to a human travel agent in order to book your Expedia tickets when you could have just done it yourself, well, people are not going to put up with that very long. Or you could say that we can start shrinking the work week, which I think is a good policy and certainly one that has been proposed for a long time.

Shrink the work week, shrink the work year, and shrink the work life. Shrink the work life by extending the period of education, subsidizing education longer, not extending the retirement age but maybe actually moving it forward a little bit. That would be a way to shrink the work life.

Shrinking the work year would be to have more days of vacation, more paid family leave and so forth, which Americans are, of course, in need of. And then shrinking the work week would be having like a 35-hour work week and then a 30-hour work week and so forth.

Those would be good policies, and they would help ease us into the situation that we’re going to be entering. But eventually we have to say, if we have fewer and fewer people who we can tax to support more and more people who are dependent on public services, then we need to have a whole renegotiation of the social contract around work, leisure, retirement, disability and so forth, and basic income is the most obvious solution to that.

Ted: Yeah, that’s how it seems to us, although obviously as I’ve spoken about this idea to various people, I’ve discovered that it’s not an obvious solution to a lot of people, and the objections you get to it are pretty standard. You hear people say that no one will work, that’s a common one, or —

Jon: Too expensive.

Ted: — That it’s too expensive to provide. I’ve seen some back-of-the-envelope math that seems to bear that out, at least at today’s cost of living, and I wonder what you would suggest as a good rebuttal or argument to those kinds of criticisms.

James: Well, we start with the idea of redistributing the existing social welfare state. And certainly, in the first place, getting hold of Social Security is going to be a huge political fight. So we’re probably going to have to face a serious political crisis in most countries before the basic idea of a basic income gets established as public policy.

But at any rate, if we just took Social Security and disability payments and all the other forms of social welfare that we do and redistributed those to everybody, yes, it would be very meager, like $10,000 a year or $5,000 a year, whatever it is. It wouldn’t be a living wage for anybody, not to mention that it would be inegalitarian.

So, one of the ideas that ends up being pretty much the same thing: if we’re going to have progressive taxation on the other end, on what people make, a negative income tax is basically the same thing as a basic income guarantee. That has been supported by people like Milton Friedman and all kinds of people on the right, and it’s probably much easier, in the United States at any rate, to implement.

The basic income guarantee would basically be that below a certain level of income you get that amount of money from the government, and above a certain level of income you get progressively taxed. And by implementing that kind of pivot point in our taxation policy, we could begin to then move that pivot point up and up until it becomes more and more of a living wage.
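The pivot-point mechanism Hughes describes can be sketched numerically. The pivot level and the flat rate below are illustrative assumptions for the sake of the example, not figures from the conversation (a real scheme would use progressive brackets above the pivot):

```python
def net_income(earned, pivot=15_000, rate=0.5):
    """Negative income tax sketch: below the pivot you receive a
    subsidy equal to `rate` times the shortfall; above it you pay
    `rate` times the excess. Pivot and rate are hypothetical."""
    return earned - rate * (earned - pivot)

# Someone earning nothing gets a guaranteed floor of rate * pivot:
print(net_income(0))        # 7500.0
print(net_income(15_000))   # 15000.0 (break-even at the pivot)
print(net_income(30_000))   # 22500.0 (net taxpayer above it)
```

Raising the pivot, as Hughes suggests, moves the guaranteed floor (`rate * pivot`) closer to a living wage without changing the structure of the scheme.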

In terms of the fiscal side, people are absolutely right that we need to change how we support the democratic state. We are undertaxed in the United States in terms of all the public needs that we have. Our bridges are falling down. Our highways are underserviced, et cetera, et cetera. And then we have all these human needs that will increasingly need to be met.

So we need to increase progressive taxation on income. We also need to increase our exploitation of the licensing and sale of public goods such as the airwaves. One of the existing models of a basic income guarantee is the Alaska Permanent Fund, from the petroleum resources that they have, and every Alaskan then gets three, four or five thousand dollars a year as a check on the basis of their co-ownership of the Alaskan oil revenue. I think that model could be extrapolated to all kinds of things in the United States and other countries.

So, yeah, it’s going to take some pretty serious reforms of the fiscal climate for us to get there. But again, as Mitt Romney pointed out, what did he say, 47% of Americans were moochers, and now it’s even more, because more and more people are being kicked out of employability or aging out of employability. And when the tipping point comes and the majority of us are moochers, then we have to have that conversation of how we make this egalitarian, so that if 60% of us are dependent on the 30% who are working, that’s an equitable arrangement.

Jon: Yeah, it would require a lot of cognitive dissonance to keep demonizing the moochers when we’re rapidly growing as a force.

[0:15:05]

James: Exactly.

Ted: Well, despite that requirement, I actually think it’s relatively likely to persist. I think right now people have a strong tendency to think of themselves as being the makers, not the takers, regardless of their actual position in the world. I think it’s actually going to be very easy to continue demonizing those people, especially for people with a right wing political orientation who think of themselves as valuing hard work and sort of subscribe to that Calvinist worldview, like you get what you deserve.

I worry about that, honestly, because I think that that framing is so hard to get around. I worry that people who consider themselves moochers will vote against their own interest and vote to keep the 30% who are still working above them because they think it’s somehow fair or something.

James: Well, you raise two interesting points. In the first place, it’s very easy to find people today who are on Medicare arguing that Obamacare should be abolished because it’s socialism. Obamacare is basically a Republican idea, that everybody should be obliged to buy private health insurance and that we can’t have public health insurance at all. That’s Obamacare. People who actually get public health insurance are saying that Obamacare is socialism. So yes, you’re absolutely right, that kind of irrationality will persist.

In the second place, you raised an interesting question which is central to the technoprogressive framing of the political world, which is: are people on the left more rational than people on the right? If you look at the work of people like Chris Mooney, who is a journalist but has written several books about this now, and the many political sociologists and social psychologists who are working on this question, I think that there is a basic personality and even neurological difference in people who have a more liberal or left wing orientation.

I’m not talking about Marxist-Leninists or some of the hardcore forms of leftism, which end up looking a lot like right wing thinking, but generally the kind of liberal-to-conservative spectrum in democratic societies. People who are more liberal tend to think more rationally, use more of their neocortex and less of their amygdala to make political decisions. This is a big problem. One of the reasons we talk past each other is that people on the left tend to be trying to use reason, and people on the right are saying, as Stephen Colbert famously said, “I don’t care what the facts are. I feel it in my gut.”

Ted: Right, right. I think to some extent you can even defend that point of view, but it does make it very hard to make political progress. Of course, many of us in the futurist community are also interested in rationalism and trying to trick ourselves into being more rational or at least researching ways in which we’re irrational so we know more about that. Interestingly, as I’m thinking about that community, that community is like heavily libertarian, is it not, that rationalist community?

James: Yes.

Ted: So I don’t know. That may be —

Jon: They’re a minority group overall, but there’s definitely a strong contingent of people that are clearly focused on rationalism and reason but are also extremely right wing, at least on economic issues.

James: Well, if you look at Jonathan Haidt’s work on political psychology, it’s fascinating. His original model was that there were five basic moral intuitions. Fairness and non-harming were the two that were common among people on the left. Then among people on the right, the more common moral intuitions were the importance of hierarchy, the importance of in-group solidarity and the importance of sacred values. I think that model works great, and there’s a lot of neuroscientific research that kind of underwrites why we inherited those primate mammalian moral intuitions.

So when he came up with that model, it was basically on the left-right spectrum. And then the libertarians started to complain, saying, “Well, what about us? Where are we on the spectrum? Because that doesn’t fit us.” So he started to test them, and he said, “It turns out that you guys don’t respond to any of the moral intuitions. You’re basically morally tone-deaf.”

And as a kind of bone to them, in his most recent book he came up with the sixth moral intuition which is the freedom moral intuition. So libertarians only respond to one moral intuition. That’s “Get off my back, Jack.”

So yes, I mean libertarians are a very specific psychological and probably neurological phenomenon in our society. It’s a politics for 13- to 21-year-old boys. It’s not a politics for adults.

Jon: Well, that’s probably going to anger some people.

I mean obviously there’s a lot of overlap though too, I think, between libertarians and progressives on social issues, right? I mean there’s some common ground to forge. It’s just these issues of basic income and taxation and stuff become super contentious and hard to deal with.

Ted: Right. Although you will find libertarians who are in favor of basic income as well because they find it to be preferable to an alphabet soup of government bureaucracies, right?

James: [0:20:28] [Indiscernible] anti-state if we just give people money.

Ted: Right. Yeah, a less paternalistic way of providing aid. So if you buy the basic premise that aid is necessary, which not all the libertarians do, you might prefer that in the same way that you might expect traditional conservatives to prefer Obamacare over Social Security or Medicare as we were just talking about.

But then, again, tribalism is important to everyone. If it’s not coming from your camp, sometimes it’s hard to support things, even things you ought to support.

So that’s interesting. Let’s talk about the technoprogressive agenda for life extension. This is something that’s the big topic for us and it’s a major motivator for me personally. What is the technoprogressive stance on that?

James: Well, again, if you go back to Condorcet, he imagined that we would eventually eliminate death. William Godwin, the anarchist philosopher in the early part of the 19th Century, imagined we would eliminate death. This has been an old enlightenment vision that the progress of science and medicine would eventually bring us radical longevity.

I think the contemporary political context that we face is that we need to have these Apollo Projects or Manhattan Projects committed to the project of understanding the biological processes of aging and how to reverse them.

We need to get that public financing. Private financing is not attracted to this prospect yet. They don’t see the payoff as being certain enough. So we’ve called this project the Longevity Dividend Project because most people in public policy are terrified at the prospect of more old people. When Social Security was implemented, fewer than half of all Americans would see 65.

Ted: Would live to see it is that what you’re saying?

James: Yeah, would survive that long. Now people are surviving to 80 and 90 years old. So we have a lot more folks who are going to be dependent on the welfare state. That’s called the old age dependency ratio. The old age dependency ratio in Europe is even more extreme, because they not only have good longevity, because they have good public health, but they also have even lower fertility, for mostly religious and cultural reasons, but also because the longer you live, the fewer children you have. So there are a lot of complicated things going on.

But Europe is having declining population and a growing old age dependency ratio. Japan is going to face this very severely. China is going to hit it in about 30 years. We’re going to hit it too, but not quite as badly, because we’re more religious. Religious folks tend to have more kids, and we have more Hispanic immigrants, and they tend to have more kids and so forth. But even those folks are having fewer kids. So, all around the world we expect to see this growing old age dependency ratio problem.

So when you talk to public policy folks and you say, “Hey, why don’t we invest a lot of public resources in allowing people to live radically longer?” that terrifies them. So what you need to do is frame it in a way that shows that it’s not only good for us each individually to live longer, but that it can be good for society. One of the ways it’s good for society is that the old age dependency problem is principally a problem of the cost of nursing and medical care. An old person who needs 24/7 nursing can cost $50,000 to $100,000 a year. A person who doesn’t need that kind of nursing can cost maybe $5,000 a year in medical costs, with various kinds of chronic ailments of old age.

If you can get rid of these chronic ailments of old age, people can not only not cost the state money, but they could actually participate in some form, maybe paid or unpaid, in a way that contributes to society instead of being dependent and potentially pulling their wife or their daughter or their sister or somebody else out of the job market to take care of them.

So there are a lot of derivative benefits of healthy longevity, and I think the way to frame the goal here is that we don’t want more old sick people. We want more youngish, healthy older people. Those older people would cost the state a lot less and potentially contribute to society in ways that would be self-sustaining.

Ted: Yeah, I really like that framing of healthy longevity. I think most people’s minds, when you talk to them about life extension, go immediately to giant old age homes full of comatose grandmas. If you can get the image in their mind that this is going to be a healthful, productive part of life, then all of a sudden their attitude changes. But that’s always hard to get across, because what we have now are technologies that will help you live a few years longer but don’t really improve your quality of life all that much. What we’re obviously all hoping for as a result of these technologies is that life will actually get healthier and not just longer.

Jon: Or at least that’s the goal we should be striving for and at least putting more resources into.

Ted: Yes, yes, yes.

James: It always astonishes me when I talk to people about life extension. They say, “What would you do with more than 80 years?” It’s like, “My God, how limited must your vision of your life be?” I’ve tried to learn several languages, and I’ve failed at all of them. If I had another decade or two, maybe I could make another start and try to actually learn Chinese. How many novels or books, hardcover books, are sitting in my library that I’ve never even touched after buying them? How many movies might I like to see? How many countries have I not visited? How many people have I not met on this planet? To think that 80 years is enough to live any life is absurd.

Ted: Agree completely. I could definitely do with another 80 right after — I mean, just even if I was just going to do the whole thing over again and do everything better.

James: By the way, I just saw Groundhog Day. It’s a fantastic movie. It’s both a transhumanist meditation and a Buddhist meditation, in my book. It’s Buddhist for a variety of reasons, but the transhumanist part is the notion that you would have ennui if you had to continue living your life. In the first place, we’re not going to force anybody to continue living. No transhumanist is imagining any technology that would make dying impossible.

Ted: Right, or illegal.

James: But as to whether, if you just continue living, people start to experience ennui, well, what he discovers in Groundhog Day is, I think, what we would all discover: even living the same day over and over again, you would be able to find more and more interesting things to do. That was the greatest part of that movie.

Jon: Yes. When you’re healthy, the ceiling is super high in terms of things to spend your time on.

But why don’t we move on? There’s a bunch of other parts of the technoprogressive platform — reproductive rights, reforming drug laws, things aimed towards helping disabled groups or gender minorities and so on. One of the things that’s mentioned on the list is digital rights, which is a topic that we get into a lot on this podcast that we’re super interested in. It’s mentioned in the declaration but it’s mentioned kind of in passing.

So I wanted to see how much that’s been discussed as part of the platform. Does the platform include a position on net neutrality, for example, or copyright and patent reform? Is there a possibility of forging a link within technoprogressivism with the work of people like Lawrence Lessig, or at least his earlier work, or Cory Doctorow and some of the people that are fighting these digital rights battles?

James: Well, certainly that was the implication there. I mean I’m a very close follower of Cory’s work, and I’m proud to say that he’s happy about my work too. So we’re mutually admiring, although he’s far more productive than I am.

[0:30:06]

But in terms of the digital rights movement, the first important thing to say is that part of the technoprogressive initiative is to understand that we are not primarily a pressure group within transhumanism or futurism. Many of us have roots in that milieu, but our primary audience should be the broader social movements that are actually working to change the world in very tangible ways and that have unfortunately come under a kind of Luddite influence, for a variety of reasons, since World War II. So we have a dialogue to have with those movements about what it means to be truly free and emancipated human beings.

So with the reproductive rights movement, they focus a lot on contraception and abortion, but they haven’t been very comfortable talking about genomic choice or artificial reproductive technologies and so forth. With the disability rights movement, they see anything about human enhancement as being another step towards the gas chamber. We have to say, no, no, we’re actually on the same page with you about morphological freedom. Yes, we think people should have a right to make their kids as able-bodied as possible, but that doesn’t mean that we want to put any person with a disability in the gas chamber.

With digital rights, it’s a lot easier because there’s very little Luddism in that milieu. We’ve actually had some people involved in the Pirate Parties who were prominent transhumanists as well. One of the prominent Swedish Pirate Party founders was a transhumanist.

So there’s been some discussion about the Pirate Parties being a political vehicle for transhumanism, but the problem is that the digital rights movement itself, just like transhumanism in my opinion, is not a sufficiently broad set of issues to actually be the basis of a political party. It has worked a little bit as a kind of political leverage in Europe, but in terms of actually addressing any broader set of social concerns, I don’t think it does that.

Yes. So what’s the relationship? Basically, there’s a fundamental set of freedom claims or emancipatory claims in the Enlightenment: rights to control your body, to control your brain, to control the way you think, freedom of expression, rights to have the kinds of children that you want to have. We think that clearly the digital rights movement is about expanding the rights to expression and personal control over your own intellectual property, as opposed to the overreach of intellectual property that seems to be screwing up the commons.

Jon: What do you think are the most pressing specific issues that we need to work the hardest on today? A lot of the stuff is speculative around technologies that don’t actually exist yet and clearly we should be planning for those. But what, right now, do you think is the most urgent? We’ve mentioned some things like technological unemployment. I don’t know if that’s the candidate that would fit that mold. What do you think is most important?

James: Today, technological unemployment is not that easy to talk about in the United States because we’re adding jobs. I think you and I and most futurists see that as a short-term thing. That’s like saying that because it’s cold outside today that global warming isn’t happening.

Ted: I do want to mention that it’s very warm outside today.

James: Say what?

Ted: I just wanted to mention that it’s very, very warm outside today here in Los Angeles in the winter. It’s a heat wave right now.

James: Well, here in Connecticut it’s quite cold, so it may not be the thing to talk about today. But in the short term, I think the priority is establishing this conversation around technological unemployment.

I was very encouraged, by the way. About six months ago we participated in a confab at Singularity University which has been a libertarian hotbed, with a bunch of different kinds of folks, big tech heavies, to talk about technological unemployment. It was chaired by Diamandis who’s, of course, a libertarian. He asked the 30 or 40 of us gathered there whether we all thought technological unemployment was inevitable, and most of us did, like 95% of us. And then he asked what public policy we thought was the most logical to promote, and 60% of us were for basic income guarantee. He was astonished. So that led to a great deal of optimism that when we engage futurists seriously with public policy that we’re going to have a very fruitful conversation.

But the problem is that nobody in D.C. currently wants to talk about the social policy implications of life extension or technological unemployment or any of these kinds of technologies. There are some futurists who work for the CIA or for the DoD who write about this stuff. What's the world going to be like in 2035? They say, "Well, when we all have cognitive enhancement and life extension and nanobots in our back pockets, then the world is going to look like this." But nobody in Health and Human Services is talking about that. So we have to have that conversation. We have to get that promoted.

Part of the issue is that it's very difficult to predict what issues are going to crystallize the next big thing. I'll give you an example: with bioethics, we thought that the issues of who gets to decide to pull the plug for a spouse, and what brain death means, were decided in the '80s or '90s. And then Terri Schiavo happened in the mid-2000s, and all of a sudden the governor and the legislature of Florida are intervening, right-wing and left-wing folks are demonstrating against pulling Terri Schiavo's plug, and then Congress gets involved. The whole consensus seemed to fall apart.

It’s very difficult to know when it might be that we get like a genetically engineered animal or we get a nano-plague or we get a semiautonomous car that runs over some kid. Who knows what the issues are going to be?

So we kind of have to keep a broad perspective and maintain a strategic focus so that when these issues arise we can put them in the right context, mobilize whatever we can around them, make the right kinds of arguments, intervene in the media and politically in the most opportune way.

Ted: Well, yeah. I mean I was about to say I completely understand the point about political expediency, and you’re absolutely right that external events often create what’s possible in pragmatic everyday politics. But I want to kind of throw the question right back at you again and just say like assuming for a moment that it’s not caused by some external crisis, which of all these issues do you think would be the best for us to be focusing on. I mean, at the end of the day, what do you think is most important today, not five years or ten years from now, of these things that we’re talking about? If you could pick one thing to have an opportune moment for, what would it be?

James: Well, I think most transhumanists would agree that if we could live long enough, we could experience most of the other benefits that we look forward to. So life extension seems to be at the top of the list, not only in terms of a political agenda and a personal agenda, but also in terms of its radical effects on society and its popularity. Of all the things that we talk about and advocate for, life extension is probably the most popular. When Americans were polled about this last year by Pew, 80% of Americans wanted the benefits of radical longevity therapies when they became available. Sixty percent of Americans, however, were certain that they would only be available to the rich.

So right now, if you don't have access to healthcare in the United States (well, with Obamacare, fewer and fewer people don't), if you haven't had health insurance in the past, your life expectancy is maybe five to ten years shorter. But in the future, with longevity therapies, that gap between the rich and the poor could be decades or more, and increasing.

So I think we need to work both on fundamental research on longevity, which is one of the issues that, as a kind of platform basis, will allow us to establish majoritarian support for a lot of things, and on universal access in combination with that. So that's the beginning point.

But of course, there are some other assumptions about the world that go along with saying, well, we have to live long enough to get there. There are a lot of folks worried about catastrophic risks, and I don't count myself among the singularitarians. I think there's a lot of millennialist and apocalyptic cognitive bias in the way that people think about artificial intelligence and the singularity. But in general, I think catastrophic risks have to be at least close to the top of the agenda. And again, very few people argue that we shouldn't be worried about the remaining nukes in the world or the possibilities of catastrophic pandemic diseases, et cetera, et cetera.

But we futurists have a set of things to add to that list that I do think we should be adding, particularly on artificial intelligence. The way I would frame it is that we need to build a global electronic infrastructure that's resilient to all the different forms of electronic interference it could face: cyber theft and cyber warfare, as well as forms of artificial life and artificial intelligence that might go rogue. All these things need to be prepared for by having more resilient electronic infrastructure.

So I think those things have to be closer to the top of the list. And depending on the day, when I wake up in the morning I sometimes put that above life extension.

[0:40:18]

And then I think after that come the issues around genomic choice and cognitive enhancement. Those tend to be the most controversial, and their immediate social benefits right now are the hardest to argue for. Very few people can afford in vitro fertilization, so until we have widespread access to some cheap gene therapy, or some way of doing sperm sorting or something like that that would be effective for dramatically changing your progeny's prospects, I don't think there is going to be much reason to focus a lot of energy on genomic choice, although we have to defend it.

Similarly with cognitive enhancement: the purported benefits of modafinil are modest. I mean, I believe we should deregulate modafinil. We should deregulate methylphenidate and Adderall and so forth. The benefits of their widespread access far outweigh the risks. But the benefits aren't so huge that we're going to double economic productivity if we give everybody Adderall. I think life extension and catastrophic risks are probably at the top of the list, and then everything else is someplace below.

Jon: There is definitely a strain of the futurist community that seems to focus so much on the existential risk, almost to the exclusion of other things. Why should we really even talk about technological unemployment if ten years…

Ted: We’re all going to be killed by giant computer robots.

Jon: Ten years out we’re just all going to die anyways.

James: Like I said, I think there's a lot of cognitive bias in that community. In the first place, a lot of them act like they're part of a religious sect, and they think like they're part of a religious sect. If you look back at the year 999, there were a lot of people who thought the end of the world was nigh, and they stopped plowing their fields. Sometimes they burned down the local church and prepared for the end times.

But you see that same kind of apocalyptic strain. I think it leads to cognitive constraints in terms of how they imagine the disruptive potentials of artificial intelligence occurring. So, as you say, they dismiss the importance of something like technological unemployment, because what they imagine is some demon that's going to jump out of their laptop and take over the world, instead of a slow accretion of technological changes that gradually changes the world.

And even if it is something that jumps out of your laptop, it might not be a super god. I mean, this is again the 13- to 20-year-old male framing: oh, if I could be super powerful, what I'd do is take over the world. Instead, most of the things on this planet that plague us range from the microscopic up to the size of rats or feral dogs. So artificial life that jumps out of your laptop and acts like a feral dog could be incredibly disruptive if it starts to breed in the intertubes. And it doesn't have to want to take over the world and turn everything into computronium. It might just want to eat and gnaw on things.

So I think that, yeah, there are at least a variety of biases there, and it fits in as a kind of elective affinity with libertarianism, because in the first place, they all think they are smarter than anybody involved in public policy or governance. No one in governance or public policy could ever really understand what computer people do, and therefore they couldn't possibly come up with a regulation that would work.

In the second place, they think that since it's going to happen in about ten seconds flat, there is nothing that anybody can actually do about it except be the first one to build the super robots that protect us from the bad super robots. I think it's just that silly. It's just silly thinking.

Jon: Although I think you would agree that these fears are a non-zero probability, right? So it's just that we shouldn't necessarily focus on one existential risk scenario to the exclusion of all these other issues.

James: There are lots of non-zero probabilities out there. One of the non-zero probabilities is that we're all living in a simulation that's about to be turned off. So let's work on that one. Or there's a non-zero probability that there's going to be a gamma ray burst from a nearby supernova that's going to wipe out all life in this galaxy. There is nothing to do about some of these things.

[0:44:57]

So, in terms of the particular strategy that I would argue for around the catastrophic risk of artificial intelligence: as I said, building a resilient internet, and discussing things like internet off switches, which some countries have. North Korea has one tube that goes into North Korea, and they have an internet off switch. It's not a particularly attractive possibility. It gives powers to government that I'm not particularly comfortable with. But if we take seriously the possibility that there might be some kind of rogue artificial life that would threaten human civilization, then we need to talk about having those kinds of switches.

We need to talk about the regulation of dangerous technological research. We already have a global interdiction regime set up to try to figure out who's doing bad nuclear weapons research (as opposed to the good nuclear weapons research), like Iran and North Korea, and we have had zero success so far preventing it because we don't have sufficiently powerful transnational institutions.

Another part of the picture is that we need stronger transnational institutions and stronger transnational technological agreements, so that we can begin not only to identify but actually to intervene to prevent the development of globally catastrophic risk-making technologies, which include not only chemical, biological, and nuclear weapons but also potentially nanotechnology and artificial intelligence.

Try talking to anybody in computer science about the prospect of global regulation of artificial intelligence risk, and they just laugh at you. And granted, it's a very difficult prospect to wrap your mind around how it would actually occur, since artificial intelligence research happens in so many different places. But, as I've said to Ben Goertzel, the first day somebody takes seriously what he is actually trying to do, he is going to be locked up in the basement of the DoD.

Ted: Yeah, maybe that surfer-like persona that he projects is all just a ruse to keep him —

James: He doesn’t want them to take him seriously.

Ted: Yeah, exactly. He wants to keep doing his research, so they think he’s harmless.

Jon: Yeah, he has a casual way of delivering the most disturbingly bold predictions. But, yeah, there are a couple more issues I want to ask you about before we wrap up. Let's talk about building consensus with the other groups of the left, and specifically how to do that. As you point out, some of them, like parts of the environmental movement, are actually opposed to the techno-optimistic perspective, and something needs to be done to bring those people in. So can you talk about the variety of people on the left, some of whom are struggling to get comfortable with technology, and how to go about stitching that coalition together?

Ted: Yeah, basically, how can we combat Luddism on the left? You said, and I think this is really interesting, that this movement is not so much about changing people's minds within futurism. It's more about changing people's minds within the existing traditional left, particularly here in America, where we have a kind of strange coalition of social justice advocates, poverty elimination advocates, environmentalists, people interested in minority rights, people interested in immigrant rights, and of course labor, which is a shrinking source of power.

And how do you suppose we — yeah, how do we reach out to them and get them to accept technological intervention as a means of achieving their goals?

James: Well, in the first place, you recognize that within each of those different social movements there are techno-optimistic predecessors and strains of thought that can be mobilized and referenced and built upon. So, within labor, there are people going back, as I said, to the Enlightenment who had labor-ish concerns, or who were actual labor union leaders or builders or socialist leaders, who looked forward to the project of eliminating work altogether.

[0:49:55]

It becomes a lot harder, when you're actually representing a group of existing workers and your trade union is funded by money taken out of their wages, to say, "We want to eliminate your jobs." That becomes a very hard thing to say. And this gets back to an old debate within the left: the difference between sectoral interests versus vanguard intellectuals. I see the technoprogressives as playing a vanguard intellectual role. We're kind of like the Fabians of the late 19th and early 20th century, and in fact I think we're very similar to the Fabians in certain ways. They weren't labor organizers themselves, but they were the thinkers who established the ideas that then influenced the founding of the labor movement, in a similar way. That interplay between vanguard intellectuals and the political party helps overcome the sectoral interests and the kind of anti-general interests of some of these movements.

So in the case of the labor movement, you need someone to say, "No, you really should come to the negotiating table, because even if you do win more money in your package this month, it's bad for our society in these particular ways." That's the role of a labor party or a social democratic party: to influence how labor negotiates for its position.

In this case, the vanguard intellectual role of technoprogressives is to try to build this conversation about what the future of the economy looks like and how people who are interested in the future of labor rights can roll with these waves of change and continue to protect people who are being exploited in different ways, build new technological means of reaching out to them and so forth. I think that that conversation can be had, and there are people who have done it in the past.

With reproductive rights, as I said, there are many women who are staunch about women controlling their own bodies. But when it comes to something like sex selection, they suddenly say, "Oh, no, no, we can't do that." We have to say, well, isn't it a little odd to claim that the way to protect women's rights is to deny actual existing women the right to know the contents of their own womb, and to make a choice about whether to continue a pregnancy, in order to ensure that every boy 20 years from now will have a date to the prom? Is that really the way you want to protect women's rights today, or isn't making sure that every woman can control the contents of her own womb the real primary right?

That's the same argument with the disability rights community. One of the principal flashpoints with them is prenatal choice, which they see as threatening disability rights, or cochlear implants or other kinds of therapies which might actually eliminate certain disabilities in society. They see that as a threat to their communities. You have to say, well, are you more interested in making sure that X% of the population is deaf or has Down syndrome in the future, or are you more interested in making sure that every disabled person has the fullest capabilities possible available to them in their life? Because we're on the same page if that's your concern. But if what you want to do is make sure that 5% of the population has Down syndrome in perpetuity, well, we can't go there.

Ted: Right. We’re not on that. That’s so interesting because that seems like it’s a kind of perverse incentive for existing interest groups that —

Jon: Well, that’s analogous to the labor one, right?

Ted: It's analogous to the labor thing, right. Exactly. They have some incentive to keep people in, say, poorly paid labor jobs, or to continue having a certain percentage of X disability in the population, because that's their interest group, that's who's funding them, that's who's sending them to Washington.

Jon: That’s their entire existence.

Ted: Yes, exactly. As far as assistive technology goes, think about eyeglasses. I can see on the video that you're wearing eyeglasses. Both Jon and I are wearing eyeglasses right now. None of us considers himself disabled. But, of course, all three of us are, and we're using a very old assistive technology that's been so completely accepted by society that the disability has essentially disappeared. It's not even considered a disability anymore.

If I were somebody representing, say, paraplegics or deaf people (deaf people is maybe a good analogy because cochlear implants have been improving recently), I'd maybe be worried about that. I could see the position of being worried: "Oh, no, we're going to go down in numbers, and then we're going to stop being a thing that anyone cares about."

So yeah, I think we have to reassure those people that we are in fact on their side and that we want to give them more choice, not less. I think the language of libertarianism, if not the policies, actually works well here, where you basically say, "Well, it's about the person. You have access to the technology. You can make the choice, whether that's prenatal choice or the choice to accept any particular —

James: It’s not just the libertarian emphasis though because it’s also the positive rights aspect that comes from traditional social democratic thinking which was not just that you have the right to make that choice to use a wheelchair or not; it’s that we want to make sure that you have access to a wheelchair.

Ted: You have the actual choice to really choose it. Right, right.

James: Exactly.

Jon: Yes. So, something else that I wanted to ask you about is as far as trying to actually influence politics, one way you can do that is to start a political party, and you mentioned before the Pirate Party which seems to be mostly a European phenomenon as far as I can tell. Then very recently Zoltan Istvan started this transhumanist party. I don’t know that that’s going to have much traction but it’s certainly of interest to me. What do you make of that strategy and what specifically do you make of those two parties and their influence?

James: Well, the first thing to say is that different political structures around the world lend themselves to party building as an exercise to different degrees. This is an old debate within the left. First-past-the-post political systems like the United States really are structurally biased against third parties in ways that make it not very attractive to spend your energies that way.

[0:55:02]

The most effective way in my opinion in first-past-the-post systems to influence politics is to have political action committees, think tanks, newspapers, interest groups, nonprofit organizations, and so forth that then can work with multiple political parties or work with caucuses within political parties. So I would love to see a transhumanist caucus or a technoprogressive caucus within the Democrats and maybe one within the Republicans, and maybe there would be certain issues that they could ally on just as there are certain odd alliances that occur already around different issues.

In certain European countries, or places like Israel: in Israel you only have to get 1% of the vote under proportional representation to get a representative, and that's it. So they have a lot of small parties that represent odd things. Russia (I don't know, maybe they have changed their laws now) has had that kind of system. Italy has a 5% threshold, I think.

So there are countries like that where you could imagine, and where it has occurred, that there is a feminist party, a Pirate Party, things like that. Parties that represent fairly narrow concerns get in, represent that point of view, and work with other sympathetic folks within that political framework. So there may be countries where a transhumanist or technoprogressive political party is a good project.

Now to Zoltan Istvan. I can't imagine a worse representative of transhumanism, and certainly he has nothing to do with technoprogressivism, based on his own personal politics but even more so on his novel, which is even worse as politics than it is as fiction. So, Zoltan: I have published some of the essays that he has written. He writes widely. He is very productive, and sometimes he produces an opinion that I can put on the IEET website. But in terms of him being a representative of any of our politics, I think that's disastrous. He has already poisoned the well in certain ways, because the bioconservative right has been making great hay out of the things he has been saying.

He is a good example of why we need technoprogressives to be more forcefully organized, because there is a great hunger now for transhumanists to actually get out of the salon and into the streets, and we need to be serious about what it means to have a transhumanist politics. Not everybody who raises a transhumanist flag is going to be worth supporting. There have been people from the far right who have tried to become transhumanist activists, and there was a group 15 years ago trying to organize a transhumanist green socialist organization that was actually a neo-Nazi group.

So we have to be pretty explicit about the values and principles that we are defending. I think a much more positive thing has just happened in the UK. Amon Twyman, who had been dabbling in creating the Zero State thing, and David Wood, who is the leader of the London Futurists, along with a group of other established transhumanists in the UK, have come together around a fairly technoprogressive platform as the basis of the UK Transhumanist Party.

Now, the UK is equally resistant to minor parties. They have three major parties, but they do have a number of minor parties. They have the Monster Raving Loony Party that runs every once in a while. They have a Green Party and so forth. So it might be that you could have a transhumanist elected in some council election or city borough election or something like that. That would be an interesting experiment in the UK.

I think what's more interesting at this point is just to get a bunch of us futurists and transhumanists in a room together and try to hammer out these political ideas. Of course, David and Amon were both involved in writing the Technoprogressive Declaration, so it's no surprise that the UK Transhumanist Party reflects a lot of those ideas. But I would like to see that effort occur in other countries as well.

The Italians, of course, are very political. The French group is very political. The French group helped us write the Technoprogressive Declaration in the first place, and they call themselves Technoprog because they want to make clear that they are technoprogressives.

More troubling, in Russia, for instance, you have three factions of Russian transhumanists: the pro-Putin transhumanists, the liberal or anti-Putin transhumanists, and then the anarchist transhumanists. So in some places, like Russia and Italy, you already have transhumanists at each other's throats over politics. Rather than having us be at each other's throats, I'd just like us to at least be explicit about what the different flavors are and say, "If we're going to organize this transhumanist party, is it going to have more of a technoprogressive flavor or more of a libertarian flavor? Are we going to try to combine the two somehow and get both to work together? Or are we not going to do that?" We haven't had that conversation explicitly in the past. We just try to beat each other's brains out.

Ted: Right. Well, it's been such an apolitical movement for a lot of its history, I think. That makes a lot of sense.

Jon: I mean it’s a sign of a mature movement though to actually be having these internal fractures and discussions.

Ted: Sure. It shows that it’s getting more important because it’s all of a sudden worth it to hash out these differences because there is a chance that we might actually influence real policy. There is a chance that we might be asked to weigh in on actual political questions, which I think maybe was not the case even 20 years ago or something.

James: This gets me back to the issue of what our focus should be. Should our primary focus be winning over other futurists? I mean, just to get people in the room to have this conversation, we have to get some futurists interested. But that's not the primary audience. The primary audience is the larger policy intelligentsia out there who might be interested in these ideas we put together. There are people in academe and in public policy who have been paying attention to the futurist community. Kurzweil has had enormous influence in the futurist wing of the public policy intelligentsia, but there are a lot of openings.

I know I’ve been invited to DARPA, DoD events. I’ve been invited to French parliamentary events. I’ve been invited to EU events as have many of the people in the futurist community. There is an appetite for the things that we’re talking about in the ways that we frame them. Even if they’re often skeptical, raise their eyebrows, think that we’re just one odd flavor that they’ve added to the mix, we often can be quite catalytic.

Jon: Yes. I agree. I mean, the thing is that everybody can observe how dramatically technology is affecting us and has been affecting us.

Ted: Which is getting harder and harder to deny, I think.

Jon: Yeah. I think people are just getting more receptive to these more radical ideas. You mentioned Kurzweil. He has definitely done a lot to bring those ideas to people. Of course, he's very weirdly apolitical in his writing. He always acknowledges there are problems to be worked out, but somehow they'll be worked out.

Ted: Yes. He's very hopeful that they'll be solved, but he doesn't really give you much indication of how he thinks that's going to happen. I guess that's outside his area of expertise.

James: He's an engineer, so his default is that engineering libertarianism where everything is an engineering problem rather than a public policy problem. I think a catalytic moment for him was six or seven years ago, when folks published the genetic sequence of the 1918 flu online. He and Bill Joy got together and published an op-ed saying that that should be banned, that no one should be able to publish the genetic code of pathogens online. It's like: oh, suddenly you discover a role for the state in preventing and mitigating catastrophic risks. Maybe you could give this more careful thought, instead of deciding all of a sudden, out of the blue, that the government should ban certain lines of scientific research. Even people in the scientific community who were worried about it said, "Well, we need to have a convention about this and discuss the different ways of delaying the publication of this." But no, he just wanted to ban it outright.

That's the consequence of not taking public policy seriously. You don't look at all the different avenues available to you to encourage or discourage the kinds of things that you're worried about.

Jon: Yes. Is there anything else that you want to say that you feel like we should have asked you about but didn’t?

James: I could talk about this stuff all day. I'll just plug the other things that I'm working on. The project I've been working on for a long time, since my last book, is Cyborg Buddha, about moral enhancement. For me, this probably addresses the issues that come at the end. I sometimes call the current set of technoprogressive issues the telos of liberal democracy, because we're trying to establish an egalitarian and solidaristic world, the best possible world, for what comes next, the things that we can't yet imagine.

One of the things that will fundamentally change our understanding of politics is when we truly have control over our brains. When we do, we're going to be able to edit our memories, share our memories, edit our most basic drives and feelings, and apply those capabilities to the treatment of mental illness, social deviance, criminal deviance, and political deviance as well. So we're going to face a world in which politics is fundamentally transformed.

You can see a kind of stab at that in Joss Whedon's Dollhouse, where personalities are copied back and forth between people, and it leads to a singularity chaos of a certain sort in the final chapters of that TV drama. That's, I think, the big question we face: what does it mean? One simple shorthand way of framing it is: what does it mean if we eventually have the Borg in politics? Will the Borg get one vote or a million votes? Will we be able to coexist with the Borg? Will we truly be able to coexist when some people have a thousand times more cognitive capacity than others? Even if that's the result of individual free choices, which it may be, will we be able to coexist in one society? How would we have accountability between groups within that society?

So those are questions that I don't think the technoprogressive agenda can address yet. We're just trying to make sure everyone has egalitarian and safe access to the good technology we see coming down the pike, and that it doesn't destroy the planet. That will ensure that we can then begin to answer the next set of questions that neurotechnology really poses.

Jon: Yeah. It’s hard enough to get consensus now, right? So when we imagine a future with this wild diversity of possibly artificial beings or upgraded animals, and you’re talking about literally just different strains of personality and hacking your personality —

Ted: With individuals and hive minds coexisting at the same time–

Jon: Different brain types, augmented beings and non-augmented beings, it just becomes head-spinning. How do you get consensus with that group?

Ted: Right. I'm reminded of the parliament scene in the newer Star Wars movies, where Palpatine walks out to give a speech and the camera pans over endless rows of little pods, each of which contains a senator. You just think about the utter cacophony, and think about how much gridlock we have now. We may need an entirely new kind of political organization to deal with a world of that much complexity. It's a really interesting thought to end on.

Thanks very much for joining us today. We’ll obviously link to the IEET. If there’s anything else you want us to link to on the post, we’ll do that.

Jon: Yes. Probably a lot of our listeners know about the IEET, but I really want to encourage people to check out their website, because you guys really do a fantastic job of keeping that thing updated, and there’s a very active blog there with a lot of articles expressing a lot of different points of view.

James: Great. Thank you.

Jon: So, thanks for listening to our episode with Dr. James Hughes. I just want to remind everybody that we could use your help with iTunes ratings or Stitcher ratings or —

Ted: Yes. However you’re listening to the podcast, if you could just take five seconds and give us a rating and a review, it means a lot to us. We’ve gotten, I think, about 20 so far. We’d love to have 20 more. If you’re out there and you’re listening, it really does help us a lot. And if you don’t mind sharing this episode on your Facebook or Twitter feed or however you like to share things, that would be great too.

Jon: You can always contact us via Twitter, where we’re @RTF_podcast, or via email at feedback@reviewthefuture.com. We’d really love to get mail from people. We’d love to start maybe addressing some of that on the show, especially if you’re finding things you disagree with in this episode or even previous episodes.

Ted: If you can think of a topic you’d like us to cover, we’re always looking for new ideas. So by all means, send us a message, a tweet, an email. We also have a Facebook page now, which is facebook.com/reviewthefuture. So if you’d like to get updates via Facebook instead of all those other ways, you can do that as well.

Jon: We’re just going to keep turning out episodes. So join us in two weeks.

Ted: In two weeks we’ll have something new for you. Thanks for listening.

To subscribe or leave a comment on this episode, please visit reviewthefuture.com. You can also send emails to feedback@reviewthefuture.com. Thanks for listening.

[1:10:33] End of Audio