Fears about AI
Publications and reports about the risks and dangers of AI development have proliferated in recent months. From tabloids to the most respected scientists, we hear warnings about jobs being eradicated and about the end of humanity in dystopian scenarios.
The reactions vary widely depending on the topic presented to the public. This is an important observation: it indicates that people have no clear picture of the actual risks and benefits of AI applications.
How to use a risky technology
A society that starts to use a promising but risky technology needs to strike a trade-off between the extent of beneficial use and the limitations required to mitigate possible downsides. Mankind has shown in many areas that the prudent creation of knowledge and rules of use can reduce risk to an acceptable level (although there are also examples of failure to do so). This process must be based on fundamental knowledge of the possible risks and damages. But what are the actual risks of AIs wreaking havoc?
An examination of disasters and risks
A look at the disasters and risks that are usually discussed reveals something interesting: only a very small fraction of these scenarios can be created by rampaging AIs alone. Most nightmare scenarios (e.g. extinction by technology) are already possible today, created either by the misuse of existing technology or simply by self-indulgent humans or organizations.
Time for action
For the time being, it seems it would be more productive to direct our reformative energy toward the problem areas we already have, as they need more immediate attention: the out-of-control financial system, permanent war, inequality, poverty, and overwhelmingly bad governance.
Yet a small community of people and institutions has already started to create rules for the safe construction of future AI systems. A very good talk about possible AI rules was presented at a TEDxVienna event. Given the potential risks of AI development, we need to build international organizations at least comparable to those we now have for nuclear technology (i.e. the International Atomic Energy Agency). We are far from that level today. The greatest challenge in this endeavor, however, is that we cannot imagine how a superintelligent machine would act once unleashed. Many of the scenarios currently discussed are more or less extensions of adverse human behavior; one might speculate that we are simply projecting fears about ourselves onto AIs.
The discussion of possible AI threats is further restricted by certain taboos. For example, only a few people consider machines becoming conscious a possible scenario. Many discourses in the AI community are about safe goal setting, yet a conscious machine would possibly create its own goals, and pursue them emphatically. As an AI enthusiast, I would love to see an AI system whose sole purpose is to continuously evaluate new risk scenarios, combined with a second AI that creates strategies to mitigate those risks.
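The two-AI idea above can be sketched as a simple loop: one component proposes risk scenarios, the other answers each with a mitigation strategy. This is a purely illustrative toy, assuming stubbed-out components; every class name, scenario, and rule here is a hypothetical placeholder, not a real AI system.

```python
# Illustrative sketch of the proposed two-AI safety loop.
# RiskScanner and Mitigator are hypothetical stand-ins; a real system
# would replace both with learned models.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    severity: int  # 1 (minor) .. 10 (catastrophic)


class RiskScanner:
    """First AI: continuously proposes new risk scenarios (stubbed here)."""

    def scan(self):
        yield Scenario("autonomous goal drift", 8)
        yield Scenario("misuse of existing technology", 6)


class Mitigator:
    """Second AI: creates a mitigation strategy for each scenario."""

    def mitigate(self, scenario: Scenario) -> str:
        if scenario.severity >= 7:
            return f"escalate '{scenario.name}' to international oversight"
        return f"monitor '{scenario.name}' and update safety rules"


def run_safety_loop() -> dict:
    """One pass of the scan-then-mitigate cycle: scenario -> strategy."""
    scanner, mitigator = RiskScanner(), Mitigator()
    return {s.name: mitigator.mitigate(s) for s in scanner.scan()}


if __name__ == "__main__":
    for name, plan in run_safety_loop().items():
        print(f"{name}: {plan}")
```

The point of the sketch is the division of labor: the scanner's output is the mitigator's input, so each half can be improved, audited, or replaced independently.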
This brings me to two developments that I think are important for the co-development of mankind and AIs. First, mankind will not build one singular AI system, but millions. An ecosystem of diverse AIs at many different levels will significantly reduce the risk of dominance by any single AI system. Biological systems follow a similar strategy: biodiversity increases resistance against pathogens. Open-sourcing AI software is a crucial step in this direction, and major organizations like Google, Facebook, and OpenAI are following this idea. Secretive development in military and intelligence communities works against it.
Second, I believe the era of purely evolutionary development of mankind is over. The introduction of CRISPR, which makes it possible to change the DNA code inside a living cell, has at least the same exponential potential as AI. Humans will soon start to improve themselves. Imagine a superhuman system, half human, half cybernetic, carrying humanity into a fascinating, if goose-bump-inducing, Borg future.