Viktor Mayer-Schönberger and Daniel Dewey at TEDxVienna 2013

Few fields will have as enormous an impact on human society and history in the 21st century as information technology. It already penetrates every aspect of modern life: the way we shop, handle everyday tasks, interact with others, and make scientific progress. IT has become indispensable. It is the dawn of big data, and perhaps even of artificial intelligence. What are the benefits, costs, and risks of such developments? Two speakers at our UNLIMITED conference this November will provide deeper insight.

Viktor Mayer-Schönberger on Big Data

Mr Mayer-Schönberger, born in Zell am See, Austria, is Professor of Internet Governance and Regulation at the Oxford Internet Institute. He studied law in Salzburg, Cambridge, and Harvard, and later entered IT as a software entrepreneur, founding Ikarus Software with a focus on data security. This combination of law and IT makes him an outstanding expert in both fields and provides him with technical and legal perspectives that few others can offer.

In his latest, well-received book, Big Data: A Revolution That Will Transform How We Live, Work, and Think, he tackles important issues and moral conflicts arising from big data: a term describing the extremely large and complex data sets, hard to process, that confront us in many areas today. Extracting valuable information from this noisy data chaos is a hard task, but one that often seems worth the effort. A rush for the "gold nuggets" hidden within data has begun.

The company Inrix, for example, was able to find a strong correlation between road traffic and the health of the local economy, which is used for investment analysis of companies in such areas. With these methods, analysts could anticipate companies' financial performance even before their quarterly earnings announcements were published.
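The core of such an analysis is measuring how strongly two data series move together. A minimal sketch, using the Pearson correlation coefficient on hypothetical numbers (this is an illustration, not Inrix's actual method or data):

```python
def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical weekly data: cars counted near a retail park vs. its revenue.
traffic = [1200, 1350, 980, 1500, 1100, 1600, 1400]
revenue = [24000, 27500, 19000, 30500, 22000, 33000, 28000]

r = pearson_correlation(traffic, revenue)
print(f"correlation: {r:.2f}")  # a value near 1.0 means traffic tracks revenue
```

A coefficient close to 1 suggests the two series rise and fall together, which is exactly the kind of signal that makes traffic data interesting as an economic proxy.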

Big data can even save lives. At a Toronto hospital, researchers led by Dr. Carolyn McGregor analyzed streams of medical data, such as the heart and respiration rates of premature babies, and could successfully detect early signs of infection, enabling preventive treatment. Amazing, right?

But there are, of course, other sides to these technologies as well. Take the conflict with privacy when Google or Facebook process highly private information about our lives to turn it into profit. Or take the recent NSA scandal, where a monstrous amount of big data analysis was used to profile as many citizens as possible in the name of crime prevention. Prevention is better than punishment, isn't it? But what about the moral issue of trying to prevent a crime before it has been committed, just like in Minority Report?

Mr Mayer-Schönberger warns against fetishizing data and against a dictatorship of data. Numbers and statistics enjoy a powerful level of trust, but this appeal is dangerous: in the end they are still probabilities and always carry an inherent risk of uncertainty.

Daniel Dewey on intelligence explosion

Big data is a phenomenon that relies heavily on machine learning today. Machine learning has become a very powerful branch of IT, and the efficiency of its algorithms stems from their ability to change and improve themselves in order to achieve their instructed tasks. Many of these algorithms do this autonomously, because their environment of huge data sets is often too complex and too abstract for our brains to cope with.
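What "an algorithm improving itself" means can be made concrete with the simplest case: gradient descent, where the algorithm repeatedly adjusts its own parameter to reduce its error on the data, with no human tuning the parameter by hand. A minimal sketch with made-up data (the learning rate, step count, and toy data are illustrative assumptions):

```python
def learn_slope(xs, ys, lr=0.01, steps=200):
    """Fit y ~ w * x by iteratively nudging w against the error gradient."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # the algorithm updates its own parameter
    return w

# Data generated by the hidden rule y = 3x; the learner should recover w near 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
print(learn_slope(xs, ys))
```

Scaled up to millions of parameters and vast data sets, this same self-adjusting loop is what makes modern machine learning systems effective, and also what makes their behavior hard for humans to follow step by step.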

But if we extrapolate the need for autonomy in such systems up to the very likely point in the near future where we will be able to create true and powerful artificial intelligence, a serious question emerges: what would happen if an artificial intelligence either improved itself dramatically or created a more powerful "successor", a better AI, which then again created a better AI, and so forth? It could become an ultrafast chain reaction leading to some form of superintelligence. But could we then control such a powerful entity, and would we know by which motives it would guide itself? Could it develop its own motives, which might work against ours? This scenario is called an intelligence explosion, and in science fiction it rarely has good outcomes for humanity.

That's why great care and caution are needed in AI research. But how can we guide and control such highly autonomous systems? Analyzing the deep motivations and theoretical foundations of self-improving algorithms and the principles of AI research, Daniel Dewey's work at the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford, focuses on how to guarantee that intelligent systems will stick to their instructed motives and not pursue or develop others.

Such a problem cannot be solved by technology alone; it poses profound questions within the fields of decision theory, logic, and philosophy. How so, and what possible solutions there might be, Mr Dewey will explain in detail at TEDxVienna's UNLIMITED on the 2nd of November.

Unlimited possibilities

Want to go deeper? You have the chance to listen and talk to Viktor Mayer-Schönberger and Daniel Dewey, among many other amazing speakers, at our big conference. We have another batch of late bird student tickets and a limited 1+1 discounted combo category. So get your ticket now!

Header Image(s) from Pixabay & Gratisography
