Recently, I have been attending and watching many talks online about Artificial Intelligence (AI), after a friend of mine recommended this article to me.
AI describes the intelligence exhibited by machines and software. While this can be very basic, as for example a Google search, it can also become really complex. An example of this is IBM's supercomputer Watson. It is fed with huge amounts of data regarding health problems and is already able, to some extent, to recognize disease patterns better than human doctors. Impressive, right?
So, machines are becoming smarter and smarter, that’s a fact. And they will continue to do so, because our processing and storage capacity keeps increasing while its cost keeps decreasing. Furthermore, AI is a very profitable business: imagine a company that doesn’t really have any assets besides some servers, its code, and its algorithms.
After hearing some talks on AI, I am sure the near future will be bright, and we will see many advances in the health sector, in self-driving cars, in teaching, in image detection, in the Internet of Things, and of course in automating easy and repetitive tasks.
But what comes afterwards?
At some point we will be able to create machines that are much smarter than we are. Such machines are called super-intelligent AI and are considered the last invention that humans will ever need to make, because from that point on the machine will learn at a much faster rate than humans can. Progress that would normally take humans a few years will be just a matter of months, and then days – an exponential development. By the way, super-intelligent AI is expected by some researchers to arrive between 2040 and 2075.
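To get a feel for why exponential development matters, here is a minimal sketch. All numbers are invented purely for illustration, and the `years_to_reach` helper is hypothetical; it just contrasts additive (human-style) progress with a capability that doubles every year.

```python
# Illustrative sketch only: compares linear progress with exponential
# self-improvement. All numbers are made up; this does not model real
# AI capabilities.

def years_to_reach(target, rate=1.0, growth=None):
    """Count the years needed for a capability starting at 1.0 to
    reach `target`.

    Linear mode: capability grows by `rate` per year.
    Exponential mode (if `growth` is given): capability multiplies
    by `growth` per year.
    """
    capability, years = 1.0, 0
    while capability < target:
        capability = capability * growth if growth else capability + rate
        years += 1
    return years

# Reaching 1000x the starting capability:
print(years_to_reach(1000, rate=1.0))    # linear: 999 years
print(years_to_reach(1000, growth=2.0))  # doubling: 10 years
```

The same gap the text describes – years of progress compressing into months, then days – falls out of the doubling curve: each step covers as much ground as all previous steps combined.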
While I am super excited about the changes that advances in AI will bring to humankind, I am also quite afraid. Let me explain why.
The problem with software and machines is that they only do what you tell them to do. They are completely rational and do not feel any emotions as we humans do – they are amoral. While research right now is mainly focused on developing super-intelligent AI, few people think about how to teach values and emotions to machines.
You are now probably thinking: well, then we just tell those machines that they shouldn’t kill humans, or something like that. But it is not as easy as it sounds. To a super-intelligent AI, we are just like ants. It won’t kill us on purpose and might even go out of its way not to harm us. But if our existence interferes with its goals, or if it thinks it could do something more efficiently without humans, that will be our end.
Now AI seems a little bit scarier, right?
I sincerely hope some smart people think about this in depth. Because if the super-intelligent AI turns out to be a good one, it will make sure we live forever.
[Sources and further readings:
The Future of Artificial Intelligence – a Techhub event at Google Campus, Madrid]