If you have a single grain of sand and continually add further grains, at what point do you call them a heap or a hill? When does quantity change quality? Welcome to the continuum fallacy.
What does that have to do with artificial intelligence? A lot. But let me first dig deeper into the paradox.
A version of the ancient Greek sorites paradox goes as follows. Assume two things:
1. A single grain of sand is clearly not a heap.
2. Adding one grain of sand to a collection of grains that isn't a heap doesn't turn it into a heap.
We can all agree on these assumptions, right? So let's start with a single grain of sand. It's not a heap, according to assumption 1. We now put one further grain on it. Is it a heap? Because of assumption 2, no. Let's put another grain on it. Is it a heap yet? No again. Now we repeat this process over and over and let our amount of sand steadily grow.
Eventually, at some size, we would call these grains of sand a heap. But from what point on? A single modification (adding a grain) doesn't change the whole, yet if we repeat it often enough, somewhere it does. Where do we draw the line between these states? We can't pin it down, due to the 'fluid' nature of quantity. Kinda frustrating, right? Things may not be black or white, but gray. And the same holds true for AI. How? We will get to that. But first off, what is AI anyway?
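The paradox is easy to make concrete in code. The threshold below is a made-up number, and that is exactly the point: any sharp cutoff we program is arbitrary, yet a program forces us to pick one.

```python
# A toy illustration of the sorites paradox. The threshold of 10,000 grains
# is an arbitrary, made-up choice -- no particular value can be justified
# over its neighbors, which is the heart of the paradox.

HEAP_THRESHOLD = 10_000  # arbitrary

def is_heap(grains: int) -> bool:
    """A sharp-boundary definition of 'heap' -- deliberately unsatisfying."""
    return grains >= HEAP_THRESHOLD

# Assumption 2 says adding one grain never turns a non-heap into a heap,
# but with any fixed threshold there is exactly one grain that does:
grains = 0
while not is_heap(grains):
    grains += 1
print(f"Not a heap at {grains - 1} grains, but a heap at {grains}?")
```

Whatever number we choose, the program always exhibits one absurd single-grain jump from non-heap to heap.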
What is AI and intelligence?
It is very hard to define AI, mostly because intelligence alone is already hard to define.
Several approaches in cognitive science try to explain the underlying functions of intelligence (like Spearman's influential g factor), but intelligence in general is closely linked to learning. The faster and better someone learns, the more intelligent we would call that person. Learning is necessary for intelligence; whether it is sufficient is another question. We will concentrate on the ability to learn and leave the question of sufficiency for another day, since it would go beyond the scope of this blog post.
So, since learning is a necessary condition for intelligence, AI, regardless of its deeper structure, also requires the ability to learn. In computer science this is called machine learning, and it's already being done on a large scale today. Let's have a look at the state of the art in AI research.
State of the Art
A good first example of machine learning is robots, not only because they are tangible but also because you can see how they mimic the natural movements of humans or animals.
This robot, developed at Cornell University, isn't programmed with a ready-to-use algorithm for how to walk but instead learns through trial and error, like infants do. It might not move with much grace, but it successfully demonstrates the ability to learn:
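Trial-and-error learning can be sketched in a few lines. The following is only a loose illustration, not the Cornell robot's actual algorithm: an epsilon-greedy agent tries two hypothetical "gaits" (the names and reward values are invented) and gradually learns which one works better.

```python
import random

# A minimal sketch of trial-and-error learning (NOT the Cornell robot's
# actual method): an epsilon-greedy agent samples two made-up "gaits"
# and learns from reward which one succeeds more often.

random.seed(0)
true_reward = {"stumble": 0.2, "stride": 0.8}  # hidden from the agent
estimates = {"stumble": 0.0, "stride": 0.0}    # the agent's learned guesses
counts = {"stumble": 0, "stride": 0}
epsilon = 0.1  # fraction of trials spent exploring at random

for trial in range(2000):
    if random.random() < epsilon:
        action = random.choice(list(estimates))     # explore
    else:
        action = max(estimates, key=estimates.get)  # exploit best guess
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: the estimate drifts toward the observed rewards
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent should settle on "stride"
```

No one tells the agent which gait is better; like the robot, it discovers this purely from the consequences of its own attempts.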
The robots of Boston Dynamics are a bit more elegant. Their overall movements are again based on machine learning, though the robots don't learn everything from scratch but are instead guided by their controllers. Here is one of their humanoid robots, named Petman, apparently dressed up to make it as scary as possible (yep, they're funded by DARPA and designed for military purposes):
A relatively new branch of machine learning is deep learning. It simulates the principles of neural networks in brains far better than previous models, and it's able to form 'concepts' of the things it perceives. It's one of the best machine learning approaches to date and was recently tested by Google with enormous computing power. They simply let it loose on more than 10 million thumbnails of YouTube videos and watched to see what patterns it would find on its own, without any human guidance at all.
The result? Cats. Lots of cats. Why? Because YouTube is full of cats. This result alone gives us insight into what we humans prioritize, but more interesting, as Dr. Dean from the research team said, is that "We never told it during the training, 'This is a cat' […] It basically invented the concept of a cat." Here's Peter Norvig, explaining it in a nutshell.
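Google's network is vastly more sophisticated, but the core idea of unsupervised pattern discovery can be shown in miniature with the classic k-means algorithm: it finds groups in unlabeled data without ever being told what the groups are. The data below is synthetic, chosen only to make the two hidden groups obvious.

```python
import random

# Unsupervised learning in miniature: k-means finds structure in unlabeled
# points. Nobody labels the two groups -- the algorithm discovers them,
# just as Google's network discovered 'cat' as a recurring pattern.

random.seed(1)
# two unlabeled blobs of 1-D points, centered near 0.0 and near 10.0
data = [random.gauss(0.0, 1.0) for _ in range(50)] + \
       [random.gauss(10.0, 1.0) for _ in range(50)]

centers = [data[0], data[1]]  # naive init: pick two data points
for _ in range(20):
    # assignment step: each point joins its nearest center
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # update step: each center moves to the mean of its cluster
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(sorted(round(c, 1) for c in centers))  # close to [0.0, 10.0]
```

The algorithm recovers the two hidden groups from raw numbers alone; scale the data from 100 points to 10 million images and the principle stays the same, only the machinery grows.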
IBM's flagship in AI and natural language processing, called Watson, passed a remarkable test in 2011: it defeated the best players of Jeopardy!, the popular US quiz show. In this quiz you are given an answer and have to find the appropriate question. For example: "Wanted for general evilness, last seen at the Tower of Barad-Dur", to which the right question would be "Who is Sauron?" That's a pretty hard task for a computer, since it requires not only knowledge but also the ability to process natural language and find the right semantic connections. And Watson definitely excels at that.
Watson is now being used in trials at hospitals, where it assists doctors in evaluating patients' symptoms and diagnoses, and even proposes treatments, supporting its judgments with scientific data and journals (and apparently doing so better than human doctors!). For more on Watson, watch this video.
Back to Sorites
These are some snapshots of the state of the art in machine learning and AI research. The developments are astonishing, and we see how machines are now able to do things that were thought almost impossible just a few years ago. They can be fed huge amounts of data, sort it, interpret the results and recognize patterns; they are able to learn. But would we call them 'intelligent'? Watson surely behaves intelligently, but whether it actually is intelligent is another question entirely.
Regardless of this question, though, the quality of machine learning systems is continually improving, since hardware and software keep getting more powerful, even on an exponential scale.
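To get a feel for what "exponential" means here, assume, Moore's-law style, that computing power doubles every two years. The doubling period is an illustrative assumption, not a guarantee, but the compounding it produces is striking either way:

```python
# Compound growth under an assumed doubling period of two years
# (a rough Moore's-law-style illustration, not a law of nature).
doubling_period_years = 2
for years in (10, 20, 40):
    factor = 2 ** (years / doubling_period_years)
    print(f"after {years} years: {factor:,.0f}x the power")
# after 10 years: 32x
# after 20 years: 1,024x
# after 40 years: 1,048,576x
```

A linear trend would give 5x, 10x, and 20x over the same spans; exponential growth leaves that behind almost immediately, which is why the capabilities of these systems can change qualitatively within a single decade.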
And that’s where we get back to the Sorites paradox.
While we wouldn't perceive these systems as intelligent, they are continually getting better at learning, and so at behaving intelligently. The border between 'simple' machine learning algorithms and human-like AI will only get blurrier over the coming decades.
Unlike in most science fiction, where AI is created at one exact point in time (like Skynet in Terminator or the Maschinenmensch in Metropolis), AI in the real world is fluid in both its nature and its development.
The big question now is: where do we draw the line between systems that are merely 'behaving intelligently' and systems that are 'being intelligent'?
Do you want to find out more about AI and all the recent discoveries? Join the Tech Natives at their upcoming event on April 22 and meet great speakers like Bart de Witte, Healthcare Executive at IBM; Simon Colton from Imperial College London; Fumiya Iida, SNF professor for bio-inspired robotics at ETH Zurich; and Franz Wotawa from TU Graz.