Sam Harris Asks If We Can Control AI

Friday, September 30, 2016


Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some abstract theoretical way. We're going to build superhuman machines, says Harris, and our collective emotional and analytic response to the dangers is not where it should be.

TED has finally released the long-awaited talk by neuroscientist and author Sam Harris. (Video is below.) In his presentation, Harris emphatically urges the audience to consider just how important the development of artificial intelligence is, and how our emotional response so far is "not appropriate to the dangers that lie ahead." Even Harris admits, "I am unable to marshal this response, and I'm giving this talk."
"It's not that our machines will become spontaneously malevolent," Harris continues. "The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us."

The central point of Harris's argument is:
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines. 

We don't stand on a peak of intelligence, or anywhere near it, argues Harris. "And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable."

Citing the intelligence explosion hypothesis of I.J. Good, Harris speaks of how our minds cannot really fathom the exponential development of intelligence in such a scenario.

"It seems overwhelmingly likely that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine," he says.

Harris also pushes back on the idea that superintelligence is too distant to worry about: "No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely."

Harris relates how the computer scientist Stuart Russell once said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just count down the months until the mothership lands? "We would feel a little more urgency than we do," says Harris.

"I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence," says Harris.

Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

Harris is the author of five New York Times bestsellers. His books include Waking Up: A Guide to Spirituality Without Religion, The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, and Lying. The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other journals. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.

Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.


By 33rd Square
