
Voices in AI – Episode 76: A Conversation with Rudy Rucker

Thursday, December 27, 2018, 02:00 PM, from The Apple Blog
Today's leading minds talk AI with host Byron Reese



About this Episode
Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker discussing the future of AGI, the metaphysics involved in AGI, and whether that future will be for humanity’s good or ill. Rudy Rucker is a mathematician, a computer scientist, and a writer of fiction and nonfiction; the first two books in his Ware Tetralogy won Philip K. Dick Awards.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick Awards. Welcome to the show, Rudy.
Rudy Rucker: It’s nice to be here Byron. This seems like a very interesting series you have and I’m glad to hold forth on my thoughts about AI.
Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?
Well a good working definition has always been the Turing test. If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent.
So it has to master conversation? It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI?
No, those other things are also a big part of it. You’d want it to be able to write a novel, ideally, or to develop scientific theories—to do the kinds of things that we do, in an interesting way.
Well, let me try a different tack, what do you think intelligence is?
I think intelligence is to have a sort of complex interplay with what’s happening around you. You don’t want the old cliché of the robotic voice, or the screen with capital letters on it that’s not even able to use contractions: “do not help me.” You want something that’s flexible and playful in its intelligence. I mean, even in movies, when you look at the actors, you often get a sense that this person is deeply unintelligent or this person has an interesting mind. It’s a richness of behavior, a sort of complexity that engages your imagination.
And do you think it’s artificial? Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any, there’s no one actually home?
Right, well I think the word artificial is misleading. You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation. Once you have universal computation, it seems like, in principle, you might be able to get intelligent behavior emerging even if it’s not programmed. So then it’s not clear that there’s some bright line that separates human intelligence from the rest of intelligence. I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something that we could bring into being, either by designing it or, probably more likely, by evolving it in a laboratory setting.
So, on the Stephen Wolfram thread, his view is that everything’s computation and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in there is essentially a giant clockwork running its program, and it’s all really computational equivalence, it’s all kind of the same in the end. Do you subscribe to that?
Yeah I’m a convert. I wouldn’t use the word ‘clockwork’ that you use because that already slips in an assumption that a computation is in some way clunky and with gears and teeth, because we can have things—
But it’s deterministic, isn’t it?
It’s deterministic, yes, so I guess in that sense it’s like clockwork.
So Stephen believes, and you hate to paraphrase something as big as his view on science, but he believes that everything is—not a clockwork, I won’t use that word—but everything is deterministic. But even the most deterministic things, when you iterate them, become unpredictable. They’re not unpredictable inherently, from some universal standpoint; they’re unpredictable because of how finite our minds are.
They’re in practice unpredictable?
Correct.
So, take a lot of natural processes. When you take Physics I, you say, oh, I can predict where an artillery shot is going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds. And then when you get into reality, well, shells don’t actually travel on perfect parabolas; they follow an odd-shaped curve due to air friction, and that’s not linear, it depends on how fast they’re going. And then you slip into saying, “Well, I really would have to simulate this.”
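To make that point concrete, here is a minimal Python sketch (not from the interview): the drag-free parabola has a closed-form range, but once air friction depends on speed, you end up stepping the motion forward in time, which is to say simulating it. The muzzle speed, launch angle and drag coefficient are made-up illustrative values.

# Illustrative sketch (not from the interview): the drag-free parabola has a
# closed-form range, but with speed-dependent air friction you end up stepping
# the motion forward in time, i.e. simulating it. All numbers are made up.
import math

g = 9.81                              # gravity, m/s^2
k = 0.0005                            # quadratic drag coefficient per unit mass, 1/m (assumed)
v0, angle = 300.0, math.radians(45)   # muzzle speed and elevation (assumed)

# Closed-form range of the ideal, drag-free parabola
ideal_range = v0**2 * math.sin(2 * angle) / g

# Numerical simulation with drag proportional to speed squared
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
dt = 0.01
while y >= 0.0:
    speed = math.hypot(vx, vy)
    ax = -k * speed * vx              # drag opposes velocity
    ay = -g - k * speed * vy
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"ideal parabola range: {ideal_range:,.0f} m")
print(f"simulated range with drag: {x:,.0f} m")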
And then when you get into saying you have to predict something by simulating the process, then the event itself is simulating itself already, and in practice, the simulation is not going to run appreciably faster than just waiting for the event to unfold, and that’s the catch. We can take a natural process and it’s computational in the sense that it’s deterministic, so you think well, cool, I’ll just find out the rule it’s using and then I’ll use some math tricks and I’ll predict what it’s going to do.
For most processes, it turns out there aren’t any quick shortcuts. That goes back to Alan Turing, who proved that you can’t effectively get extreme speed-ups of universal processes. So then we’re stuck with saying, maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, “well, we’re not like deterministic processes, because nobody can predict what we do.” And the thing is, if you get a really good AI program that’s running at its top level, then you’re not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability’s enough.
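As a toy illustration of that “no shortcut” idea, in the spirit of Wolfram’s work but not code from the interview, here is a sketch of an elementary cellular automaton, Rule 110. The update rule is completely deterministic, yet in general the only known way to learn what row n looks like is to compute every row before it.

# Toy illustration (not from the interview): Rule 110 is a deterministic
# one-dimensional cellular automaton, yet no known shortcut predicts row n
# without computing the rows before it -- you end up simulating the process.
RULE = 110

def step(cells):
    """Apply one synchronous update of the elementary CA rule to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right   # 3-bit neighborhood index
        out.append((RULE >> pattern) & 1)               # look up the rule bit
    return out

# Start from a single live cell and just run it forward -- that's the point.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)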
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Voices in AI: Visit VoicesInAI.com to access the podcast, or subscribe via iTunes, Play, Stitcher, or RSS.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
https://gigaom.com/2018/12/27/voices-in-ai-episode-76-a-conversation-with-rudy-rucker/