Data vs. Humans: Why the Why Matters in Big Data


In their well-known book on big data, Viktor Mayer-Schönberger and Kenneth Cukier state the following: "the era of big data challenges the way we live and interact with the world. Most strikingly, society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what." According to the authors, in many situations knowing that something is happening, but not why it is happening, is "good enough". But what about the other situations, where that's just not the case?

Should we really trade the WHY for the WHAT?

Human minds work on the principle of causality. As humans, we always wonder what factors generated a given outcome. We want to know cause and effect. We want to know exactly what is going on around us, and we need to know why certain things are happening in order to be prepared for similar future events. We have created entire sciences to answer this one question: WHY? Now, Big Data wants to trade causality for correlation.

Big data analyses can be "predictive", but only in one very limited way: algorithms can tell us WHAT is happening, not WHY it is happening. Understanding this difference is crucial when judging the ethics of Big Data.

But what is an algorithm? An algorithm is simply an ordered list of steps that produces predictions about us if we feed it our data. The output of an algorithm depends on its input and on the way it processes that input. But the input and the rules for processing it are designed by humans. So, what if one of these rules is wrong? What if the order of the steps is wrong? What if the data we feed the algorithm is incomplete? We will never know what the algorithm is actually predicting unless it is transparent.
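To make this concrete, here is a minimal sketch of such an "ordered list of steps". Everything in it — the risk score, the rules, the thresholds — is invented for illustration; the point is only that a designer's rule (or silently missing input data) changes the prediction without the user ever seeing why.

```python
# A hypothetical "risk score" algorithm: ordered steps applied to input data.
def predict_risk(profile):
    """Predict a made-up 'risk score' from a user profile dict."""
    score = 0
    # Step 1: a designer-chosen rule -- age below 25 adds risk.
    if profile.get("age", 0) < 25:
        score += 2
    # Step 2: another rule -- a missing income field is silently treated
    # as 0, so incomplete input quietly inflates the score.
    if profile.get("income", 0) < 20000:
        score += 3
    return score

# The output is only as good as the rules and the data:
print(predict_risk({"age": 30}))                   # income missing -> 3
print(predict_risk({"age": 30, "income": 50000}))  # complete data  -> 0
```

The two calls describe the same person; only the completeness of the input differs. Without transparency, the person scored has no way to know which step produced the result.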

That's why, for many data privacy specialists, algorithm transparency is a prerequisite for predictive big data algorithms. We have to know how the algorithms that predict our behavior actually work. In the future we may base decisions on the output of algorithms, so we should be sure that an algorithm is designed in a way that considers all relevant factors. In short, the algorithm has to be perfect. But we live in an imperfect world.

We already rely on algorithms to facilitate our lives. The Facebook algorithm decides which posts we see in our timeline, and hence with whom we stay in contact. The Google algorithm decides which search results are most suited to our personality and filters out the rest. The Amazon algorithm makes recommendations based on what we have purchased and rarely shows us anything new. The Netflix algorithm shapes our viewing habits by matching our movie taste with the taste of others. Either we believe that those companies know what's best for us, or we demand transparent algorithms.
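These recommendation systems boil down to co-occurrence: "people whose purchases overlap with yours also bought X". The sketch below (with made-up users and items, not any company's actual algorithm) shows how such a system ranks items purely on WHAT correlates, with no notion of WHY anyone bought anything.

```python
from collections import Counter

# Hypothetical purchase histories (invented data).
purchases = {
    "alice": {"book", "lamp"},
    "bob":   {"book", "lamp", "mug"},
    "carol": {"book", "mug"},
}

def recommend(user):
    """Rank items that co-occur with the user's own purchases."""
    counts = Counter()
    mine = purchases[user]
    for other, items in purchases.items():
        if other != user and mine & items:   # any overlap in WHAT was bought
            counts.update(items - mine)      # tally items the user lacks
    return [item for item, _ in counts.most_common()]

print(recommend("alice"))  # ['mug'] -- pure correlation, no reason attached
```

The recommender never asks whether alice bought the lamp for herself or as a gift; the correlation is the entire model.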

Algorithms can be wrong.

Privacy activists try to demonstrate this crucial point of the big data movement through art projects. The art app pplkpr ("People Keeper"), for example, uses an algorithm to manage friendships based on our emotional reactions to a person. The pplkpr algorithm can auto-manage your social life: it sends messages to people who are good for you, so you can stay in touch, and it deletes people from your contact list who are bad for you, so you can get rid of emotionally exhausting friends. In theory, this sounds like a reasonable concept — provided you are comfortable leaving huge life decisions to an algorithm without knowing exactly how it operates.

The creators of pplkpr point out that the app can help sort out complicated relationships. So it's important to remember that big data algorithms can do a lot of positive things for us. Yet, as illustrated in a piece by WIRED magazine, the pplkpr creators also recount a story about two friends who were studying together for an exam, and whose stress levels rose. The pplkpr algorithm decided that stress isn't a desirable condition and blocked each other's contact information on their smartphones. The algorithm meant well, but it only considered WHAT was happening and didn't see the WHY of it.
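The WHAT-without-WHY failure above can be reduced to a few lines. This is not the actual pplkpr code — the readings, contacts, and threshold are all invented — but it shows how a rule like "block whoever correlates with high stress" inevitably blocks the study partner during exam week.

```python
# Invented biometric readings: high stress around the study partner is
# caused by the exam, but the algorithm cannot see that cause.
stress_readings = [
    {"contact": "study_partner", "stress": 0.9},
    {"contact": "study_partner", "stress": 0.8},
    {"contact": "close_friend",  "stress": 0.2},
]

def contacts_to_block(readings, threshold=0.7):
    """Block any contact whose average stress reading exceeds the threshold."""
    totals, counts = {}, {}
    for r in readings:
        totals[r["contact"]] = totals.get(r["contact"], 0) + r["stress"]
        counts[r["contact"]] = counts.get(r["contact"], 0) + 1
    return [c for c in totals if totals[c] / counts[c] > threshold]

print(contacts_to_block(stress_readings))  # ['study_partner']
```

The correlation (stress near this contact) is real; the conclusion (this contact is bad for you) is wrong, because the cause lives outside the data.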

How could it? Humans are complex beings. We do objectively “bad” things (the what) all the time, but often with a positive intention (the why). But big data algorithms are only able to see correlations, not causes. So, the algorithm often doesn’t get the big picture. Smart algorithms can be a useful tool. But we should keep in mind that human behaviour is more than the sum of our data. Big data consists of a what. Humans will always consist of a what, a why and a who.

Do you want to know more about algorithms? Watch these two TED talks:

Header Picture Source:

TJ Alshemaeree


About Verena Ehrnberger

Verena works as a data privacy legal expert and studies philosophy at the University of Vienna. Always juggling multiple projects, she is seriously addicted to coffee.




2 thoughts on “Data vs. Humans: Why the Why Matters in Big Data”

  • Andreas

    Great blog entry! So many things in our daily life are processed automatically in the background without our knowledge, and we don't question it as long as the results seem to fit our expectations. Although I think that making everything public would be an information overflow, it would definitely increase transparency and also (data) security.

    • Verena Ehrnberger Post author

      Absolutely right. The point is that expectation and reality differ immensely. We expect to see content from the friends we are close to in our Facebook newsfeeds, but in reality, if we don't chat with them for a couple of weeks (maybe because we actually see each other in real life…), they disappear from our newsfeeds. Those algorithms are based on assumptions that might be right for some people (in this case, maybe people who change acquaintances on a regular basis), but they simply aren't right for everybody. And whether or not an algorithm happens to fit our needs: we should all be aware of how these algorithms work if we allow them to make such huge decisions about our social lives, or our lives in general.