With all the advances in technology and artificial intelligence happening around us every day, what will life look like in the next 100 years? Just within the past year we've seen major strides in autonomous cars, robotic workers, even talk of robot armies — so where does it all stop? If robots can do everything so well, what's the need for humans? In theory, robots might make better employees: they don't need breaks or time off, and they don't even need a paycheck. They do exactly what you tell them to do without complaint. Sounds like a perfect employee to me! But what happens if, as robots grow smarter and gradually take over more of the world, they decide they'd be doing humans a favor by wiping us all out? That's the question Nell Watson — a futurist, CEO, and engineer — has been asking lately.
In a talk at The Conference in Sweden, she pointed out that however smart and human-like machines may become, they will still lack human ethics and morals. Hmm. That could be a problem down the road, especially if they ever outnumber us.
The quote of hers, courtesy of Gizmodo, that got me thinking is this one:
When we start to see super-intelligent artificial intelligences, are they going to be friendly, or are they going to be unfriendly? [. . .] Having a kind intelligence is not quite enough, because to paraphrase Arthur C. Clarke, "any sufficiently advanced benevolence is indistinguishable from malevolence." If you're really, really, really kind, that might be seen as really evil. A truly kind intelligence might decide that the kindest and best thing for humanity is to end us.
It's that last sentence that sticks with me. In other words, robots, with all of their artificial intelligence, might decide the merciful thing to do is to put the entire human race out of its misery.
What do you think? Should we be worried? Let us know your thoughts regarding this story in the comments section below.
[Image via WorldLeaks]