British theoretical physicist Stephen Hawking long sounded the alarm over the development of full artificial intelligence (AI). According to the physicist, AI could spell doom for the human race if it is pursued without caution. Mr Hawking repeated his warning consistently, attracting attention from academics and inventors alike.
In 2014, the American business magnate Elon Musk publicly added his voice to Hawking’s, arguing that although superhuman AI could provide incalculable benefits, it could also wipe out the human race. Musk, in fact, holds the opinion that advanced AI is an existential threat to humanity.
Both Hawking and Musk sit on the scientific advisory board of the Future of Life Institute, an organization working to mitigate existential risks facing humanity. The two men decided to take their AI warning further by letting the world know exactly what is at stake.
Under the auspices of Musk, Hawking and others from academia and the business world, the Future of Life Institute wrote an open letter directed at the broader AI research community. The letter was made public at a conference in Puerto Rico in 2015.
The letter called for research on the societal impacts of AI. It affirmed that society could reap great benefits from AI, but urged concrete research on how to avoid potential “pitfalls”: AI has the potential to eradicate disease and poverty, but its proponents and researchers should refrain from creating something that cannot be controlled.
The letter demonstrated the experts’ concern that the AI community might develop full AI without considering its potential risks to humans, and it lent weight to calls for caution. The warning from Hawking and his concerned colleagues was heard loud and clear by governments and the AI community around the world.
Because of the influence of its signatories, the letter sparked serious debate within the AI community. Some believe its content should be taken seriously; others think that, although the letter raises genuine concerns, the development of full AI should not be abandoned.
One prominent member of the AI community has openly shared his views on the controversial topic. Dr. Ben Goertzel is a Brazilian-born American who serves as Chief Scientist of the financial prediction firm Aidyia Holdings and the robotics firm Hanson Robotics, and as chairman of the privately held AI software company Novamente LLC. He also chairs the Artificial General Intelligence Society and the OpenCog Foundation, is vice chairman of the futurist nonprofit Humanity+, and is a scientific advisor to the biopharma firm Genescient Corp. Beyond these positions, Dr. Goertzel holds numerous roles in think tanks, academia, and other businesses connected to the development of AI.
In an op-ed for Big Think, Dr. Goertzel predicts that AI will surpass human ability before the end of this century. The AI guru argues that scenarios from science fiction films about full AI, such as The Terminator, may soon become reality, for better or for worse.
Although Dr. Goertzel favours full AI development, he is uncertain about its consequences for society. He is nervous about the potential negative impact and thinks it may even spark a war between humans and robots. Nonetheless, he remains adamant and advocates full AI development, even to the point of allowing humans to become machines if they wish.
“Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence mind, which will be able to create a new AGI with super-human intelligence, and continually create smarter and smarter versions of itself. It will provide all basic human needs – food, shelter, and water – and those of us who wish to experience a higher echelon of consciousness and intelligence will be able to upgrade to become super-human. Or, perhaps there will be war – there’s a bit of uncertainty there,” Dr. Goertzel writes.
Dr. Goertzel did not give a specific date for the robotic revolution, or apocalypse, but predicts it will happen during his lifetime. “There’s a lot of work to get to the point where intelligence explodes… But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting,” he says.
Mr Hawking warned humanity about this very issue: the stage when machines become more intelligent than humans.
“The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded,” Mr Hawking said about AI in 2014.
From what Dr. Goertzel has revealed, it is clear that Mr Hawking and the others warning against the full development of AI are right. The machines we are inventing today will likely become more intelligent than we are one day. The sooner we stop this from happening, the better for us. Our governments should wake up and protect their citizens from any possibility of this occurring.