Teaching Artificial Intelligence to reason, learn like humans, and grow a conscience may best be achieved through the power of analogy.

At least that’s according to new research currently being undertaken by Ken Forbus at Northwestern University.


The HAL 9000 series computer from the realms of science-fiction

Is this how Skynet starts?

Probably not, but it could be quite significant all the same.

Forbus has been working on a new way of creating AI that can make moral decisions: infusing the AI ‘mind’ with a ‘Structure Mapping Engine’ (SME) built on the latest theories from cognitive science.

The model currently under development relies on analogical problem solving, specifically attempting to mirror how humans take lessons from one learned experience and apply them, by analogy, to different situations.

“Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas…. In terms of thinking like humans, analogies are where it’s at,” Forbus and his team said.

The project has some quite important backers, including several branches of the US military, such as the Navy and the Air Force.

Why is this AI research different?

The answer lies in the core approach scientists take when trying to create Artificial Intelligence that can bridge the gap between mankind and computers.

Google’s own version of AI, AlphaGo, relies on deep learning, which in essence works on the hypothesis that the best way to make a computer more human-like and capable of reasoned decisions is to compute the probability that a given decision is the right one, based on the examination of massive amounts of data.

An SME-based AI, however, works on the premise that it can make good decisions from much less data, in the same way humans do, drawing analogies from just 10 or 12 stories.

At its most basic, it’s a bit like a child who puts their hand in a fire. Generally speaking, they learn fast that doing so burns.
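To make the idea concrete, here is a minimal toy sketch of analogy-based retrieval in Python. This is not Forbus’s actual SME, which performs full structural alignment over relational representations; it is only an illustration, under assumed data, of how a system might match a new situation against a small handful of remembered “stories” and reuse the closest one’s outcome.

```python
# Toy sketch of analogy-based retrieval (NOT the real Structure Mapping
# Engine). Each remembered "story" is a set of relational facts, written
# as (predicate, argument...) tuples, plus an outcome learned at the time.

def relational_overlap(situation, story_facts):
    """Score a story by how many relation types (predicates) it shares
    with the new situation, ignoring the specific objects involved."""
    situation_predicates = {fact[0] for fact in situation}
    story_predicates = {fact[0] for fact in story_facts}
    return len(situation_predicates & story_predicates)

def best_analogy(situation, memory):
    """Retrieve the remembered story whose relational structure best
    matches the new situation."""
    return max(memory, key=lambda story: relational_overlap(situation, story["facts"]))

# A tiny case base of roughly a dozen stories would go here; three shown.
memory = [
    {"facts": {("touches", "hand", "fire"), ("hot", "fire")},
     "outcome": "burn -> avoid"},
    {"facts": {("touches", "hand", "ice"), ("cold", "ice")},
     "outcome": "numb -> avoid"},
    {"facts": {("eats", "child", "apple"), ("sweet", "apple")},
     "outcome": "pleasant -> repeat"},
]

# A new situation: a hand near a hot stove. The objects differ from every
# stored story, but the relational structure matches the fire story.
new_situation = {("touches", "hand", "stove"), ("hot", "stove")}

match = best_analogy(new_situation, memory)
print(match["outcome"])  # the fire story wins: "burn -> avoid"
```

The key design point, and the contrast with deep learning, is that the match is made on shared relational structure (“something hot is being touched”) rather than on statistical patterns mined from millions of examples.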

Time will tell whether SME-based systems are the next step in the creation of a self-learning, self-aware computer capable of truly independent decisions, or whether they become just another footnote in the advancement of technology.