Artificial Intelligence and the Sorites Paradox


If you have a single grain of sand and keep adding further grains, at what point do you start calling them a heap or a hill? When does quantity change quality? Welcome to the continuum fallacy.
What does that have to do with artificial intelligence? A lot. But let me first dig deeper into the paradox.

Sorites Paradox

One version of the ancient Greek sorites paradox goes as follows. Assume two things:

1. A single grain of sand clearly is not a heap.

2. Adding a single grain of sand to a collection of grains that is not a heap does not turn it into a heap.

We can all agree on these assumptions, right? Now let's start with a single grain of sand. It's not a heap, according to assumption 1. We put one further grain on it. Is it a heap? According to assumption 2, no. Let's add another grain. Is it a heap yet? No again. Now we repeat this process over and over and let our amount of sand steadily grow.
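To make the induction explicit, here is a minimal Python sketch that follows the two assumptions completely literally (the function name and the one-million-grain figure are mine, chosen purely for illustration):

```python
# Minimal sketch: apply the two sorites assumptions literally, grain by grain.
def is_heap(grains: int) -> bool:
    heap = False              # assumption 1: a single grain is not a heap
    for _ in range(grains - 1):
        heap = heap           # assumption 2: adding one grain never changes the verdict
    return heap

print(is_heap(1))          # False, as expected
print(is_heap(1_000_000))  # still False, although a million grains is obviously a heap
```

No matter how large the number, the verdict never flips; the paradox is that our intuition insists it should.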

Eventually, at some size, we would call these grains of sand a heap. But from which point on? A single modification (adding a grain) doesn't change the whole, yet if we repeat it often enough, somewhere along the way it does. Where do we draw the line between these states? We can't pin it down, due to the fluid nature of quantity. Kinda frustrating, right? Things may not be black or white, but gray. And the same holds true for AI. How? We will get to that. But first off, what is AI anyway?

 

What is AI and intelligence?

It is very hard to define AI, mostly because intelligence itself is already hard to define.

Several approaches in cognitive science try to explain the underlying components of intelligence (like Spearman's influential g factor), but intelligence in general is closely linked to learning. The faster and better someone learns, the more intelligent we would call that person. Learning is necessary for intelligence; whether it is sufficient is another question. For now we will concentrate on the ability to learn and leave the question of sufficiency for a later date, since it would go beyond the scope of this post.

So, since learning is a necessary ingredient of intelligence, AI, regardless of its deeper structure, also requires the ability to learn. In computer science this is called machine learning, and it's already being done on a large scale today. Let's have a look at the state of the art in AI research.

 

State of the Art

Robots

Robots are a good first example of machine learning, not only because they are tangible but also because you can see how they mimic the natural movements of humans or animals.

This robot, developed at Cornell University, isn't programmed with a ready-to-use algorithm for how to walk but instead learns through trial and error, like infants do. It might not move with much grace, but it successfully demonstrates the ability to learn.
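The post doesn't spell out the robot's actual learning algorithm, but the trial-and-error idea can be sketched in a few lines: perturb the current gait parameters at random, keep the change if the robot walks farther, throw it away otherwise. The walk_distance function below is only a stand-in for a real robot trial, and the four gait parameters are made up for the sketch:

```python
import random

def walk_distance(gait):
    """Stand-in for a real robot trial; pretend the ideal gait is all 0.5s."""
    return -sum((g - 0.5) ** 2 for g in gait)

gait = [random.random() for _ in range(4)]        # four made-up gait parameters
best = walk_distance(gait)

for trial in range(1000):                         # trial and error, like an infant
    candidate = [g + random.gauss(0, 0.05) for g in gait]
    score = walk_distance(candidate)
    if score > best:                              # keep only changes that walk farther
        gait, best = candidate, score

print([round(g, 2) for g in gait])                # drifts towards the 'ideal' gait
```

That is essentially hill climbing; real robot-learning systems use more sophisticated methods (reinforcement learning, evolutionary search), but the keep-what-works principle is the same.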

The robots of Boston Dynamics are a bit more elegant. Their movements are again based on machine learning, though here the robots don't learn everything from scratch but are guided by their controllers. Here is one of their humanoid robots, named Petman and apparently dressed up to look as scary as possible (yep, they're funded by DARPA and designed for military purposes):

 

Deep learning

A relatively new branch of machine learning is deep learning. It simulates the principles of the neural networks in our brains far better than previous models and is able to form 'concepts' of the things it perceives. It's one of the best machine learning approaches to date and was recently tested by Google with a lot of computing power: they simply let it loose on more than 10 million thumbnails from YouTube videos and watched to see what patterns it would find on its own, without any human guidance at all.
The result? Cats. Lots of cats. Why? Because YouTube is full of cats. This result alone gives us insight into what we humans prioritize, but more interesting, as Dr. Dean from the research team said, is that "We never told it during the training, 'This is a cat' […] It basically invented the concept of a cat." Here's Peter Norvig, explaining it in a nutshell.
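Google's actual experiment used a very large network and enormous computing power, but the underlying idea of learning features from raw data without any labels can be hinted at with a toy autoencoder. The sketch below (sizes and data are made up) compresses random 16-pixel 'images' into four hidden features and learns only by trying to reconstruct its own input:

```python
import numpy as np

# Toy unsupervised learning: a tiny autoencoder. No labels anywhere -
# the network is only asked to reproduce its own input.
rng = np.random.default_rng(0)
X = rng.random((200, 16))                  # 200 made-up 4x4 "images"

W1 = rng.normal(0, 0.1, (16, 4))           # encoder: 16 pixels -> 4 features
W2 = rng.normal(0, 0.1, (4, 16))           # decoder: 4 features -> 16 pixels
lr = 0.1

for _ in range(2000):
    H = np.tanh(X @ W1)                    # hidden features ("concepts")
    X_hat = H @ W2                         # reconstruction of the input
    err = X_hat - X
    W2 -= lr * H.T @ err / len(X)          # backpropagate the reconstruction error
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)

print(np.mean(err ** 2))                   # the reconstruction error shrinks over time
```

Scale this up by many orders of magnitude and feed it natural images instead of random pixels, and some of those hidden features start to respond to recurring patterns; in Google's case, to cat faces.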

 

Watson

IBM's flagship in AI and natural language processing, called Watson, passed a nifty test in 2011: it defeated the best players of Jeopardy!, the popular US quiz show. In this quiz you are given an answer and have to find the appropriate question. For example: "Wanted for general evilness, last seen at the Tower of Barad-Dur", to which the right question would be "Who is Sauron?" That's a pretty hard task for a computer, since it requires not only knowledge but also the ability to process natural language and find the right semantic connections. And Watson definitely rules at that.
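Watson's real DeepQA pipeline combines hundreds of language-analysis and scoring components over huge text collections. Purely as an illustration of the basic generate-and-score idea, here is a toy version with a made-up three-entry knowledge base (nothing here reflects how Watson actually works):

```python
# Toy illustration of candidate scoring for a Jeopardy!-style clue.
knowledge = {
    "Sauron":  "dark lord of mordor evilness dwells in the tower of barad-dur",
    "Gandalf": "wizard of the fellowship fights the balrog at khazad-dum",
    "Frodo":   "hobbit ring bearer travels to mordor with samwise",
}

def score(clue: str, facts: str) -> int:
    """Crude relevance score: count clue words that also appear in the facts."""
    clue_words = set(clue.lower().replace(",", " ").replace(".", " ").split())
    return len(clue_words & set(facts.split()))

clue = "Wanted for general evilness, last seen at the Tower of Barad-Dur"
best = max(knowledge, key=lambda name: score(clue, knowledge[name]))
print(f"Who is {best}?")   # -> "Who is Sauron?"
```

The toy version only counts overlapping words; Watson additionally parses the clue, generates hundreds of candidates, gathers evidence for each one, and estimates its own confidence before buzzing in.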

Watson is now being used in hospital trials, where it assists doctors in evaluating patients' symptoms and diagnoses and even proposes treatments, supporting its judgments with scientific data and journal articles (and apparently doing so better than human doctors!). For more on Watson, watch this video.

 

Back to Sorites

These are some snapshots of the state of the art in machine learning and AI research. The developments are astonishing, and we can see how machines are now able to do things that were thought nearly impossible only a few years ago. They can be fed huge amounts of data, sort it, interpret the results and recognize patterns; in short, they are able to learn. But would we call them 'intelligent'? Watson surely behaves intelligently, but whether it actually is intelligent is another question entirely.

Regardless of this question, though, the quality of machine learning systems keeps improving, since hardware and software are constantly getting more powerful, even at an exponential rate.

And that’s where we get back to the Sorites paradox.

While we wouldn't perceive these systems as intelligent, they are continually getting better at learning and thus at behaving intelligently. The border between 'simple' machine learning algorithms and human-like AI will only get blurrier over the coming decades.

In most science fiction, AI is created at one exact point in time (like Skynet in Terminator or the Maschinenmensch in Metropolis). That is not how it works in the real world: AI in reality is fluid in both its nature and its development.

The big question now is where to draw the line between systems that merely behave intelligently and systems that actually are intelligent.

Do you want to find out more about AI and all the recent discoveries? Join the Tech Natives at their upcoming event on April 22 and meet great speakers like Bart de Witte, Healthcare Executive at IBM; Simon Colton from Imperial College London; Fumiya Iida, SNF professor for bio-inspired robotics at ETH Zurich; and Franz Wotawa from TU Graz.

Header Image(s) from Pixabay & Gratisography



4 thoughts on "Artificial Intelligence and the Sorites Paradox"

  • Bernhard Huemer

    Rodney Brooks, founder of MIT’s Humanoid Robotics Group, sort of argues that we’ll never get to the question whether artificial intelligence is now “fully intelligent”. The reason is that technology will be integrated into our own bodies beforehand, thereby making it incredibly difficult to argue about what part of it is actually doing the “thinking”.
    http://spectrum.ieee.org/computing/hardware/i-rodney-brooks-am-a-robot/0

    Having said that, I enjoyed reading your article nonetheless. 😉

    • Stefan Resch (post author)

      Hello Bernhard!

      I absolutely agree with your point. We will integrate intelligent systems into ourselves as well, regardless of the specific technology. Whether it's nanobots, brain-computer interfaces or natural speech recognition, they will be part of us. And if we agree on that, we can go further and ask to what degree we aren't already doing it today: Isn't our human intelligence already merging with machine intelligence when we use Google or GPS? They definitely take over some cognitive workload. So, yes: who is doing the thinking? 😉
      I often think of that as a sort of "hybrid" intelligence emerging, similar to what your link is describing. And this is pretty "gray" as well and gives us the same trouble of deciding where to draw the line again. 😉

      Thank you for the link and feedback!

  • Bernhard Huemer

    Thanks for your response. 🙂 It's true that there's a line to draw again, but that line is no longer an instance of the Sorites paradox – that's what I was getting at. The vagueness of the term intelligence is not an issue in that case. Just keep making the machine a little bit more intelligent each time and see what happens: in the "hybrid" case the line will move (the machine does more of the thinking than before, so the line has to move). As a result, you can infer that you will never cross that line by making the machine more intelligent, even without being able to tell where the line is.

    So to conclude again, yes, the Sorites paradox occurs in AI, but according to Rodney Brooks we won’t need to resolve that paradox. Drawing the line between different components of a “hybrid” intelligence is a different matter. 🙂

    • Stefan Resch (post author)

      You mean that we will never cross the line completely because AI wouldn't survive without human intelligence and vice versa? If so, yes, drawing the line between the components of intelligence (human vs. artificial parts) is a different matter than focusing on the whole thing, the ensuing hybrid intelligence, as Brooks argued.