Google’s “Deep Learning” Computers Out-Smart The Creators

A Google software engineer has said that the company’s “deep learning” decision-making systems have been able to crack coding problems that the engineers who designed them can’t.

Quoc V. Le made the revelation at the Machine Learning Conference in San Francisco on Friday, where he also outlined how Google is able to use these large clusters of computers, known as “deep learning” systems.

Google’s technology is based around a layered architecture, with each successive layer building upon what all the layers beneath it have learnt. So the bottom-most layer of the neural network can detect changes in color in an image’s pixels, and then the layer above may be able to use that to recognize certain types of edges. After adding successive analysis layers, different branches of the system can develop detection methods for faces, rocking chairs, computers, and so on.
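To make the layering idea concrete, here is a minimal, purely illustrative sketch of a stacked network in Python: each layer transforms the output of the layer below it, so higher layers work with progressively more abstract features. The layer sizes, random weights, and activation function are assumptions for illustration only and are not Google’s actual system, where the weights would be learned from data.

```python
# A minimal, purely illustrative sketch of a layered ("deep") network:
# each layer transforms the output of the layer below it, so higher layers
# work with progressively more abstract features. Layer sizes, random
# weights, and the ReLU activation are assumptions for illustration only;
# in a real system the weights are learned from data, not drawn at random.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Fake "image": an 8x8 greyscale patch flattened into 64 pixel values.
pixels = rng.random(64)

# Random weight matrices stand in for the learned layers.
layer1 = rng.standard_normal((32, 64))  # in a trained net: responds to colour/intensity changes
layer2 = rng.standard_normal((16, 32))  # in a trained net: combines those into edge-like patterns
layer3 = rng.standard_normal((8, 16))   # in a trained net: combines edges into object-like parts

h1 = relu(layer1 @ pixels)  # first-layer features, computed from raw pixels
h2 = relu(layer2 @ h1)      # second-layer features, built on top of h1
h3 = relu(layer3 @ h2)      # top-level representation a classifier (faces, chairs, ...) would use

print(h3)
```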

This tech is used for services such as Google Translate and Android’s voice-controlled search.

But as Quoc V. Le explained, the software has actually learnt to pick out features in objects that humans struggle to see, such as a paper shredder.

Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.”

Quoc’s colleagues struggled to identify various models of paper shredders when shown photos of them, yet the computer system had no trouble and achieved a greater success rate. He admitted that he would be unable to write a program to do this.

“We had to rely on data to engineer the features for us, rather than engineer the features ourselves,” Quoc explained.

So for some things, Google researchers are no longer able to explain how the system has learnt to identify certain objects. It would seem that the program can think independently of those who created it, and that its complex learning processes are unfathomable. Although this “thinking” is limited to very specific situations, the proof is there that it can happen.

Cutting Back On The Human Experts

Google doesn’t think that this technology will ever develop into full-blown artificial intelligence. But what is so appealing to the tech giant is that with “deep learning”, the company can hire fewer human experts, because the systems will be able to solve problems the researchers can’t.

“Machine learning can be difficult because it turns out that even though in theory you could use logistic regression and so on, but in practice what happens is we spend a lot of time on data processing inventing features and so on. For every single problem we have to hire domain experts,” said Quoc.

“We want to move beyond that … there are certainly problems we can’t engineer features of and we want machines to do that.”
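To illustrate the distinction Quoc is drawing, here is a hedged sketch using scikit-learn’s digits dataset: one classifier gets a few hand-engineered summary features plus logistic regression, while the other is handed raw pixels and learns its own intermediate features. The dataset, the hand-made features, and the network size are assumptions chosen for illustration; neither pipeline is Google’s.

```python
# Hedged, illustrative contrast between hand-engineered features + logistic
# regression and a small neural network that learns features from raw pixels.
# Dataset, feature choices, and network size are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import numpy as np

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Engineer the features ourselves": a few crude hand-made summaries per image.
def hand_features(images):
    return np.column_stack([images.mean(axis=1),         # overall ink density
                            images.std(axis=1),          # contrast
                            (images > 8).sum(axis=1)])   # count of dark pixels

logreg = LogisticRegression(max_iter=1000).fit(hand_features(X_train), y_train)
print("hand-engineered features + logistic regression:",
      logreg.score(hand_features(X_test), y_test))

# "Rely on data to engineer the features for us": raw pixels in,
# hidden layers learn the intermediate representations themselves.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0).fit(X_train, y_train)
print("features learned from raw pixels:", mlp.score(X_test, y_test))
```

The point is not the exact accuracy numbers, but that in the second case no human had to decide which features matter.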

Although this is exciting stuff, it also has an air of Skynet about it, don’t you think? Should we be excited or afraid? Share your thoughts.

[Image via colleen sharen]

SOURCE: http://www.theregister.co.uk/2013/11/15/google_thinking_machines/