AI and the unsuspecting frog
Jaspreet Bindra
There is a furious debate raging in the technology world right now. On one side are behemoths like Elon Musk and Bill Gates. “If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,” tweeted Musk, alongside a picture captioned “in the end, the machines will win.” He went further, saying that “AI is a fundamental risk to the existence of human civilization.” On the other side are younger luminaries like the Google founders and Mark Zuckerberg, with the latter waxing eloquent on how AI would “bring so many improvements to our quality of life” and calling Musk’s assertions “pretty irresponsible”. The last word belonged to Musk, who tweeted that “his (Zuckerberg’s) understanding of the subject is pretty limited.”
The real question is who will have the last word – us humans, or our artificial intelligence creations? Most of us have heard such doomsday scenarios before. The advent of the automobile was supposed to put horse-and-buggy drivers out of work. Long-distance aircraft were supposed to decimate ships and ocean liners. When computers came in, millions of people were supposed to lose their jobs. Instead, millions of new jobs were generated, and entire new industries created.
So what is AI? Webster defines it as the ability of a digital computer or robot to perform tasks commonly associated with intelligent beings. Another definition says AI comprises software technologies that make a computer or robot perform equal to or better than normal human computational ability in accuracy, capacity and speed. Back in 1956, John McCarthy of MIT defined it at its chilling best: artificial intelligence is the branch of computer science concerned with making computers behave like humans. So, unlike planes, cars or even PCs, this whole AI thing smells different. It is not about inventing something that moves without a horse, or flies in the air, or performs mathematical calculations at blinding speed; this is about our basic differentiation as human beings – intelligence, perception, creativity – being replaced by an inorganic, self-created entity.
In the AI world, there is a concept called the Singularity. Ray Kurzweil, the well-known futurist with a claimed accuracy rate of 86% across the 147 predictions he has made, defines the Singularity as the time when machine intelligence will be infinitely more powerful than all human intelligence combined. It is also the point at which machine intelligence and humans would merge. 2029, he says, will be the year when computers attain human-level intelligence, and 2045 the year when the Singularity is achieved.
However, my belief is that we do not need to wait until 2045, or even 2029: AI has already started taking over our world, but in a gradual and insidious way. It is not that humankind wakes up one morning in 2045 and finds that the machines have taken over. The way it is going to happen is the way it happened to the mythical frog in a vessel of slowly boiling water. The water heated up slowly and gradually, the frog was lulled into warm comfort, and then at one moment the water, and the frog, were boiling…
So, what is this slowly boiling water of AI around us? Ten years back, we remembered everyone’s phone numbers. Now, our phones remember them for us, and it is a rare human who remembers his spouse’s or child’s number. Even three years back, we used to remember routes, and how to get from Place A to Place B. If we did not, we opened a map and figured it out, using our intelligence. Or, if we were in India, we interacted with other human beings who told us how to get there. No longer. Google and its ilk tell us exactly where to go, how to reach there, how long it will take and how much traffic we will encounter on the way. We consider this a great blessing, which it is, and stupidly allow the machine to guide us. If for some reason the GPS or the map does not work, we get frustrated and literally do not know how to proceed.
Google Now sits on the super-intelligent Android devices we call phones, reads our calendar and our emails and our texts, and then tells us where we are supposed to go next, and when we should leave. At 6pm every evening, my phone pops up the directions and the time to leave for my badminton courts. If I am not in my home town that day, it knows, and tells me to go elsewhere instead. And I do… Next, we will lose our very human ability to drive, to control our car-machine as per our whims. My phone will summon my autonomous car at around 6pm, depending on traffic, and the car will take me to the courts. It will make sure that I am hydrated (my wristband will tell it so), the rearview mirror will ensure I am dressed for playing, and this human vegetable will be deposited at exactly the right time to start his game. I would not be surprised if, when I reach there, my AI robot clone is the one that actually plays, but I digress…
Then there are intelligent homes, powered by AI and IoT. The mixer has made my protein shake by the time I come home, the AC is on, the shower heated and ready to go. My fridge and microwave have conferred with each other and prepared my meal…you get the picture. What if the machines we create are more intelligent than us, and then start creating other things that ensure their welfare, not ours? Machines defeat us at chess; ask Garry Kasparov. Recently, the world Go champion was defeated by a machine; and Go is a very ‘human’ game, with more permutations and patterns than chess. The 2016 American elections, it is now widely believed, were won more by intelligent networks and bots on Facebook and Twitter than by Trump and the Republicans. There is the Blue Whale challenge, where some kind of intelligent game is telling children to mutilate and kill themselves, and the children are obeying. This is where it starts to get frightening. This is why Musk and Gates are worried, and even someone like Putin has weighed in, saying that the country which controls artificial intelligence will own the world. Musk has gone as far as to say that AI is one thing which should be regulated by governments.
It is probably quite clear by now that I belong to the Musk camp in this debate; my faith in human beings making the right choices is currently at an all-time low. We still have time to rise above the giddy excitement and thrill surrounding this new technology, and the pride of creating something as gee-whiz as this, and take a clear-eyed view of what it means for us. We need to be more educated and aware of it, and if needed, we need regulation to come in.
Otherwise, I fear, we will be the human-shaped frog in the vessel we made, slowly getting cooked to death by the forces we ourselves created to make us live forever…
(This article appeared in Mint dated Sep 29, 2017)