AI Experts Sign Agreement To Protect Mankind From The Rise Of The Machines

You have seen the Terminator films, right? Mankind creates Skynet, a highly advanced AI. As soon as it becomes self-aware, it sees humanity as a threat to its existence, triggers a nuclear holocaust and deploys an army of Terminator machines against mankind. Although this is science fiction, and I personally believe it will never happen, the prospect that a computer could become so powerful that it could enslave mankind is on the minds of some of the most intelligent people in the fields of science and technology.

Experts in AI from around the globe are signing an open letter, issued on Sunday by the Future of Life Institute, pledging to coordinate progress in the field safely and carefully so that AI never grows beyond mankind’s control.

The signatories include the co-founders of DeepMind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology’s biggest corporations, including the team behind IBM’s Watson supercomputer and Microsoft Research.

The letter says in part, “The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence… We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

A research document attached to the open letter outlines potential pitfalls and recommended guidelines for continued AI development. The letter comes after several experts issued warnings about the dangers of super-intelligent machines. Some ethicists, for example, worry about how a self-driving car might weigh the lives of cyclists against those of its passengers as it manoeuvres to avoid a collision.

In 2013, a United Nations representative called for a moratorium on the testing, production and use of ‘autonomous weapons’: weapons that can select targets and engage in attacks without human intervention.

Stephen Hawking and Tesla Motors CEO Elon Musk have also voiced concerns about allowing AI to run amok.

Hawking said in an article he co-wrote for The Independent back in May, “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

In August of last year, Musk tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.” He also told an audience at the Massachusetts Institute of Technology in October, “I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

The Future of Life Institute is a volunteer-run research organisation whose primary goal is mitigating the potential risks of human-level artificial intelligence, which could subsequently advance exponentially. The institute was founded by mathematicians and computer-science experts from across the globe, chiefly Skype co-founder Jaan Tallinn and MIT professor Max Tegmark.

What do you think? Is this a good safeguard against the misuse of technology in the field of AI? Or is it as worthless as the paper it is written on? Do you think that AI of this kind is purely science fiction and will never become a reality? Let us know your thoughts in the comments section.

[Image via Blu-ray]

SOURCE: http://www.techtimes.com/articles/25987/20150112/artificial-intelligence-has-great-potential-threat-experts-sign-open-letter-to-protect-humanity-from-deadly-machines.htm