Will AI really lead to human extinction, as Hawking suggested at GMIC?

Marlon Luft

Could artificial intelligence spell the end for mankind? This was the question posed by world-famous physicist Stephen Hawking at the opening of the Global Mobile Internet Conference (GMIC) in Beijing on Thursday.

Rather than the internet, AI was the main topic on everybody’s mind, with academics and experts sharing their thoughts about the future of the technology. 

In a keynote speech delivered via big screen, Hawking warned that the development of AI may “conceivably destroy” humankind.

The rise of AI could be the best thing to happen to us, or could easily become the worst, if our AI systems don’t do what we want them to do, he said.

This is not the first time Hawking has challenged the rapid development of AI, but his comments sparked heated discussions among academics attending the conference.

When we hear about AI, various associations – from AlphaGo's mastery of the board game Go to virtual reality, and even R2-D2 in “Star Wars” – spring to mind. Many think of science-fiction films.

Indeed, the AI depicted in films shapes many of our expectations and definitions of artificial intelligence.

Experts commonly distinguish three types of AI: weak AI, strong AI and ultra AI. By this standard we are still in the weak AI phase, heading towards strong AI, which may explain why the potential threats AI poses do not yet feel pressing to us.

Hawking has often cautioned against the potential threats posed by machines, and on Thursday described AI’s potential to surpass human beings.

“AI would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” he said.

But not everyone agrees.

Lee delivered a speech on the theme “scientists starting businesses in the era of AI.”

Kai-Fu Lee, a venture capitalist, technology executive and computer scientist, told the conference after Hawking’s remarks that although AI may one day destroy humans, given the current level of technological advancement that outcome is still far away.

He emphasized four issues that should be addressed now to ensure AI development does not get out of control.

“Firstly, AI will generate tremendous wealth, a chance for humans to escape poverty; secondly, we should pay more attention to potential misconduct by companies with superior AI technologies and data; thirdly, if AI takes over 50 percent of jobs, what will those people do in 10 to 15 years? Finally, what will be the missions and opportunities for AI developers? Will they all become entrepreneurs? Or will they explore what we humans can do in the future?”

Lee noted the importance of cultivating versatile talent, given the complex learning process required to master AI. He argues that those who use AI as a complementary tool will not be displaced by the technology.

