J P O'Malley

Interview with a writer: Jaron Lanier

In his new book, Who Owns The Future?, computer scientist Jaron Lanier argues that as technology has become more advanced, so too has our dependency on information tools. Lanier believes that if we continue on our present path, thinking of computers as passive tools instead of machines that real people create, our myopia will leave us with less understanding of both computers and human beings, and could bring about the demise of democracy, mass unemployment, the erosion of the middle class, and social chaos.

Lanier encourages human beings to take back control of their own economic destiny by creating a society that values the work of all industries, not just those with the fastest networks. He foresees that monetizing information would produce a more egalitarian society, one that still adheres to the principles of free-market capitalism.

What will the future economy look like if technology keeps advancing the way it does and we do nothing?

Well, identify almost any human role in our current society, and imagine it being aggregated into a software scheme in the future where people no longer get paid directly. We can already say that there are virtual editors of newspapers. In the future nearly every existing job will be gradually weakened by cloud software. The only one left standing at some future date is the owner of the largest computer on the network. Whoever has the biggest computer wins in our current system.

Is this true for politics as well?

Yes. If you have the biggest computer and the biggest data, you can calculate how to target people with a political message and achieve an almost guaranteed, deterministic level of success. Politics then becomes about who has the biggest computer instead of what the agenda is. The way Obama won the last US election was by having the best computer strategy. That method of winning an election works, but if that is to be the future of politics, it will no longer have meaning. The path we are on is not compatible with democracy.

You say ‘It is entirely legitimate to understand that people are still needed and valuable even when the loom can run without human muscle power. [But] it is still running on human thought.’ What do you mean by this?

The reason I brought up a loom in the book is because it has already appeared twice in the history of people and machines: once in very ancient times with Aristotle — the idea of the self-operating loom — and then [in England in the 19th century] with the Luddites.

Also, one of the earliest precursors of the computer was an automated loom. Let us suppose in the future there is some sort of automatic loom that can just turn out clothing for you. Where does the design for the clothing come from? Somebody might say: from an artificial intelligence algorithm, running on cloud software, using big data. But this data actually comes from a large number of people who have been anonymized and disenfranchised. If there were proper accounting of where the data came from, we would see that even in this highly advanced hypothetical automated loom, there would be real people whose data makes the design possible.

And you are suggesting that they get paid, right?

Yes. If there were micropayments made to the people who fed the big data — which allowed that automated loom to make the clothing for you — then there would still be an economy. It's not as if the people have disappeared from the economy; it's just that we pretend they don't exist.
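To make the arithmetic of this concrete, here is a purely illustrative sketch of how such micropayments could be computed once contributions are tracked rather than anonymized. Everything in it — the function, the contributor weights, the five per cent royalty rate — is hypothetical; the book argues for the principle, not any particular implementation.

```python
# Hypothetical sketch of provenance-based micropayments, in the spirit of
# Lanier's proposal. Names, weights, and the royalty rate are invented
# for illustration; the book does not specify an implementation.

def micropayments(sale_price, royalty_rate, provenance):
    """Split a royalty pool among the people whose data shaped a design.

    provenance maps each contributor to a weight measuring how much
    their data influenced the final output (for example, the clothing
    design the automated loom produced).
    """
    pool = sale_price * royalty_rate
    total = sum(provenance.values())
    return {person: pool * weight / total
            for person, weight in provenance.items()}

# Example: a garment sells for $40; 5% of the price is routed back
# to the (no longer anonymous) people whose data shaped the design.
payouts = micropayments(
    sale_price=40.00,
    royalty_rate=0.05,
    provenance={"alice": 3, "bob": 1, "carol": 1},
)
for person, amount in sorted(payouts.items()):
    print(f"{person}: ${amount:.2f}")
# alice: $1.20, bob: $0.40, carol: $0.40
```

The point of the sketch is only that once data carries a record of who produced it, paying those people becomes ordinary accounting rather than an impossibility.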

You describe how the barrier between ego and algorithm is unavoidable in the age of cloud software, and that ‘drawing the line between what we forfeit to calculation and what we reserve for the heroics of free will is the story of our time.’ Can you explain this in more detail?

I think a lot of the critical elements of what we call human society — particularly our economy — have to do with what we define as the proper realm of free will. So for instance, in capitalism, we make a deliberate decision not to intervene in the functioning of the marketplace, which is a sort of inanimate, algorithmic result of the things that we do. Instead of allowing human politics to decide everything, we limit our reach and allow this more abstract mathematical thing, the marketplace, to sort out our affairs. So the interesting question is: where do you put the end of the human ego, and where do you let the algorithm sort out human affairs? I don’t think we should decide in advance where that line should be. Instead we should experiment to find it. In the old days before computation, this was the line between market and government.

Are there cases where giving up ego and letting algorithms play out is beneficial?

In the case of a marketplace, yes. But this is why it is so critical that marketplaces be honest rather than corrupt. The problem with our cloud software right now is that it does tend to be run by the person with the biggest computer on the network, and to serve certain interests more than others. It’s not an honest broker. We are constantly running into a situation where a company like Google is saying: we are the honest broker. Of course that is ridiculous, because they are a commercial concern. So in order for us to be rationally ready to cede control to some cloud software, it really does have to achieve some state of honesty. I believe that should look more like a real marketplace.

You criticize the culture of the tech world several times throughout this book, but you are also part of it: can you explain this paradox?

There are a lot of very positive things about the tech world. It’s remarkably unprejudiced, and I’ve never encountered racism in it. There are a lot of good qualities, so I don’t want to criticize it too much. I remain in it, and I enjoy it. However, there is a smugness, a kind of religious aspect to it. There is a sensibility that says: we have skills that other people don’t, therefore we are supermen and we deserve more. You run into the attitude that if ordinary people cannot set their Facebook privacy settings, then they deserve what is coming to them. There is a hacker superiority complex to this.

Do you think this culture of superiority in the tech world is making society less democratic?

Well, I think this culture really undermines our discipline, because to me the only proper way to describe the profession of engineering is as serving people; otherwise it’s not a sensible activity. There is no rational basis for it without people as the beneficiaries. Just in order for me to function sensibly, I need to believe in people, not robots. When we don’t put people at the centre of the world, I think we create rather bizarre technologies that don’t tend to make sense.

Describing ‘humanistic information economics’, you say everyone should be entitled to universal commercial rights when it comes to data. Can you explain how this would work?

Currently, if you have a big computer you get to keep your data secret, and you have tremendous rights: nobody can touch it, and the government can’t even see it. Getting at Google’s computers, for instance, requires an elaborate system of legal requests. Now, what I would like to see is a situation where everybody has commercial rights to data: so everyone is a first-class citizen who shares the same interests as anyone with a bigger computer.

How could this be done? 

I am advocating a certain kind of role for government in this scheme, for the simple reason that relying on a private concern like a Facebook or a Google to own your personal identity in the world makes you particularly vulnerable, primarily because companies die over time, and they also go through periods of corruption and dysfunction. So we cannot have [so-called] too-big-to-fail digital companies. People must have some self-determination and some social mobility, independently of whether some company is failing or not. Otherwise you cannot have an authentic market, and you cannot have real capitalism.

Many people — particularly in the US — are suspicious of government: do you think they will be convinced of your argument?

People think the idea of a big company owning your digital identity is better than the government doing it, but actually it’s just the opposite: it’s much more market-friendly to have the government cover those basics, because it creates continuity and lessens the dependency. There has to be a government function for some of the basics of digital identity. I don’t see any way around that.
