Are we suboptimal bots?
A conversation about technology and myth


Technology is developing at an incredible pace. When we think about how social media impacts our democratic elections, how our smart home devices listen in on us, or how wearables measure our every movement, we have to acknowledge that we crossed the creepy line a long time ago. So when I met university professor and Ethical IT Design advocate Sarah Spiekermann at this year’s Privacy Week, I knew I had to ask her about her take on digitalization.

In your talk at the Privacy Week you said that you wished for a future with more philosophers in IT. I’m sold – but how could one convince business-minded people of that?

Sarah Spiekermann: The problem is that business sciences are based on a very small spectrum of philosophy. If you read Adam Smith’s “The Theory of Moral Sentiments”, you will read “The Wealth of Nations” in a completely different way than before. Ricardo took just a few citations out of “The Wealth of Nations” to reinforce his own economic theory. The result is that we are forced to operate with very reduced concepts of economic thinking. As one Harvard professor once put it: “Bad management theory leads to bad management practice.” Executives today are educated towards profit maximization. If your only goal is short-term shareholder value, then everything that costs extra will be left out. Take the food industry, for example: it wanted to maximize profits by producing cheap food quickly, and the result was food that made people ill in the long term. Only now, as consumers demand organic food, is the food market changing.

Do you think that there might be a similar kind of movement when it comes to data protection?

Sarah Spiekermann: That’s definitely the wish of the Internet community. But I think that will still take decades. People need to start noticing that sharing that much data does them no good, because it backfires on their real lives: they pay more for a mortgage, they don’t get an apartment, they don’t get a job. People don’t yet understand these causalities, and it will take decades of awareness training until they do. In the case of the food industry, it took many people actually getting sick.

Are we leading the same old technocracy debate of the 1970s today, only with a new focus on digitalization?

Sarah Spiekermann: The questions haven’t changed. If you read Hans Jonas or think about Joseph Weizenbaum’s ELIZA experiment – all of this is still with us today. The debate has become more urgent because technology has caught up with us. We observe how people fall into dependent relationships with technology, and into an almost fanatical belief in it. Many people don’t give this topic much thought and suffer the consequences: attention deficits, constant self-interruption, or the FOMO phenomenon. But I don’t want to demonize smartphones and social media; I use those tools too. What I find far more dangerous is that there are people who believe artificial intelligence will save mankind from itself. They understand human beings as some kind of bot, but a suboptimal one, and they expect machines to bring us to the next stage of human evolution.

At this year’s Philosophicum Lech, a speaker finished her talk about mankind’s relation to technology with the words: “We ask ourselves if machines can replace humans. What we should be asking instead is: what has happened to human beings that they ask such a question?”

Sarah Spiekermann: Yes, absolutely. We grow up in a particular time, with a particular social background, in a particular society. This shapes our thinking. If we are educated to think in a certain fact-driven way, we train our brains to work that way. I think our thinking has turned towards the merely rational side, and of course machines are “better” in this regard – if we consider this to be “rationality”. Many would dispute that point of view and argue that only a holistic view of human existence can be truly “rational”. Machines depend on a set of data records and can therefore only assess small excerpts of reality.

Why is algorithm transparency so important?

Sarah Spiekermann: Algorithms are supposed to help us make decisions. An algorithm selects data according to specific rules. If we really want algorithms to assist us in making decisions, it has to be comprehensible how the machine arrives at its suggestion. We regard human autonomy as a precious good; it’s essential to human dignity. How can I, as the decision maker, stay autonomous if I don’t know how a suggestion came about? There’s a danger that people follow the suggestions of algorithms without really thinking about them, because they think of themselves as suboptimal processing systems.
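
To make that tangible, here is a toy sketch of what a “comprehensible” suggestion can look like in code. All names, rules, and thresholds are invented for illustration (they come from no real scoring system); the point is only that a transparent algorithm returns not just its suggestion but the rules that produced it, so the human decision maker can follow the machine’s reasoning – the mortgage example from above, made explicit:

```python
# Toy illustration only: a rule-based mortgage pre-check whose rules
# and thresholds are invented for this example, not any real system.
# It returns a suggestion *and* the human-readable reasons behind it.

from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # net monthly income in EUR
    monthly_debt: float    # existing monthly obligations in EUR
    years_employed: float

def assess(applicant: Applicant) -> tuple[str, list[str]]:
    """Return a suggestion plus the rules that led to it."""
    reasons = []
    ratio = applicant.monthly_debt / applicant.income
    if ratio > 0.4:
        reasons.append(f"debt-to-income ratio {ratio:.0%} exceeds 40%")
    if applicant.years_employed < 2:
        reasons.append("less than two years of stable employment")
    suggestion = "decline" if reasons else "approve"
    return suggestion, reasons or ["all rules passed"]

suggestion, reasons = assess(Applicant(income=2500, monthly_debt=1200, years_employed=5))
print(f"Suggestion: {suggestion}")   # Suggestion: decline
for reason in reasons:
    print(f"  because: {reason}")    # because: debt-to-income ratio 48% exceeds 40%
```

A person confronted with this suggestion can inspect, and dispute, each rule. With an opaque model, the “because” lines are exactly what is missing.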

That sounds as if humanity has no self-esteem.

Sarah Spiekermann: That’s true. Humanity has no self-esteem, because it has become a resource. In business sciences we are trained to perceive ourselves as a resource. But humans are not resources. Humans are not a means of production. There’s so much potential within us that we don’t yet understand, and this potential is uniquely different in each human being. We think of ourselves as inferior to machines, whereas the opposite is true. We develop machines as tools. But that’s all they can be. That’s why the demystification of technology is so important.

Photo credits: Cover image by Unsplash


About Verena Ehrnberger

Verena works as a data privacy legal expert and studies philosophy at the University of Vienna. Always juggling multiple projects, she is seriously addicted to coffee.
