The Battle to Define AI's Soul

Tech Leaders Disagree on the Dangers AI Presents to the Human Race

17.04.2018 | by Kezia Parkins

Tech tycoon Elon Musk has said that everybody should be worried if someone as intelligent as Steven Pinker doesn't understand the difference between narrow AI and general AI, and how costly that misunderstanding could be.

The rebuke came after Pinker took a swipe at Musk for constantly warning the public about the threat artificial intelligence could pose to humanity as a whole, whilst simultaneously profiting from the technology in the production of his own self-driving Tesla cars.

Back in 2015, Musk claimed that Google could create "a fleet of artificial intelligence-enhanced robots capable of destroying mankind." This mindset sounds similar to the kind of catastrophic thinking that doomsday philosopher Nick Bostrom — the director of the Future of Humanity Institute at Oxford — has become renowned for.

However, Musk remains a proponent of narrow AI, as opposed to a general, all-encompassing self-learning technology. Some of the world's biggest tech titans, and potential profiteers of AI, have weighed in on the debate.

Bill Gates was originally on the side of Musk, urging people to be wary of the growing “cause for concern” — but in the last few months, he has shown a change of heart and proclaimed that there is no need to panic. Gates even suggested that AI could, and should, be our friend.

Pinker’s solution to avoiding the problem is simple: build safeguards into the tech, just as we always have. He uses the example of cars to demonstrate his point, remarking on how we build in airbags and bumpers. He believes the fear of superintelligence taking over is based on Prometheus and Pandora myths, and is “confusing the idea of high intelligence with megalomaniacal goals.”

Pinker adds that this fear is a projection of alpha-male psychology onto the very concept of intelligence, and he sees no reason why robots would want to subjugate humanity. He surmises that human intelligence is a product of Darwinian evolution, which is inherently competitive, and that this accounts for the perceived link between high intelligence and power. However, if we create intelligence by design, it will not crave the same kind of power we humans do — unless, of course, we program it to.



Musk's pursuit of interplanetary travel and the ability to colonise Mars is one of his most notable goals, and he has reasoned that it could be a way for humanity to avoid mass extinction. After hearing this, Demis Hassabis issued a jovial response, claiming that the robots would simply follow us. Musk owned shares in Hassabis' DeepMind before Google acquired the company in 2014, but remarked that it was only to keep an eye on the tech's development.

Demis Hassabis, CEO of DeepMind

The AI market was valued at $525 million in 2015. By 2017, Statista reported that its value had reached $2.42 billion, and predicted it would rise to $36.8 billion by 2025. The market-data portal also deduced that the market would only grow as AI technology edges closer to human-level functionality. The market is clearly already experiencing exponential growth, and that growth will only continue as AI reaches more and more industries.

The UK government released a parliamentary report in April stating that "AI must be for the common good." Hopefully, the world's foremost experts on the matter know what they are doing and can guarantee that it will be. The human race may depend on it.


Photo by .lwpkommunikacio on flickr
"The underlying point held; experience as well as common sense indicated that the most reliable method of avoiding self-extinction was not to equip oneself with the means to accomplish it in the first place." — Iain M. Banks, Author (from Consider Phlebas)
