Grady Booch Says We Should Not Fear Superintelligent AI

Wednesday, March 22, 2017

Superintelligent AI

Artificial Intelligence

Rather than worrying about an unlikely existential threat from super-intelligent artificial intelligence, Grady Booch urges us to consider how the technology will enhance our lives.

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst science fiction fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Speaking at a recent TED event (video below), he argues that rather than worry about an unlikely existential threat, we should consider how artificial intelligence will enhance human life.

"We are on an incredible journey of co-evolution with our machines. The humans we are today are not the humans we will be then."
"We stand at a remarkable time in human history," states Booch, "where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining."

When it comes to the threat posed by artificial intelligence, Booch doesn't think that the warnings raised by the likes of Nick Bostrom and others will come to pass.

"With all due respect to these brilliant minds, I believe that they are fundamentally wrong," he says. "There are a lot of pieces of Dr. Bostrom's argument to unpack... but very briefly, consider this: super knowing is very different than super doing. [Stanley Kubrick's HAL 9000] was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world."

"This is the stuff of Skynet from the movie 'The Terminator,' in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen," continues Booch.

"We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end — don't tell Siri this — we can always unplug them."

Booch is optimistic about the future of artificial intelligence. He is currently Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition.

"We are on an incredible journey of co-evolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction, because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?"

That, says Booch, is the exciting thing: "The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning."


By 33rd Square
