
Voices in AI – Episode 75: A Conversation with Kevin Kelly

Thursday, December 13, 2018, 02:00 PM, from The Apple Blog
Today's leading minds talk AI with host Byron Reese



About this Episode
Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly discussing the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability. Kevin has written books such as ‘The New Rules for a New Economy’, ‘What Technology Wants’, and ‘The Inevitable’. Kevin also started Wired Magazine, an internet and print magazine of tech and culture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I am so excited we have as our guest Kevin Kelly. You know, when I was writing the biography for Kevin, I didn’t even know where to start or where to end. He’s perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things in an amazing career [path]. He has written a number of books: The New Rules for a New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future. I’m super excited to have him on the show. Welcome, Kevin.
Kevin Kelly: It’s a real delight to be here, thanks for inviting me.
So what is inevitable?
There’s a hard version and a soft version, and I kind of adhere to the soft version. The hard version is kind of a total deterministic world in which if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, still leaving the particulars, the specifics, the species to be completely, inherently, unpredictable and stochastic and random.
So that would say that on any planet that has water and life, you’ll find fish; or, if you rewound the tape of life, you’d probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology. Any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. And the internet’s inevitable, but Google’s not. AI’s inevitable, but the particular variety or character, the specific species of AI, is not. That’s what I mean by inevitable: there are these biases, built into the very nature of chemistry and physics, that will bend things in certain directions.
And what are some examples of those that you discuss in your book?
So, technology is basically an extension of the same forces that drive life; it’s a kind of accelerated evolution. So if you ask the question about what the larger forces in evolution are: we have a movement towards complexity; we have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex.
The idea that there’s any kind of simplification going on in technology is completely erroneous; there isn’t. It’s not that the iPhone is any simpler; there’s a simple interface. It’s like an egg: a very simple interface, but inside it’s very complex. The inside of an iPhone continues to get more and more complicated, so there is a drive that, all things being equal, technology will become more complex, and year after year it will also become more and more specialized.
So, the history of technology in photography went like this: there was one kind of camera. Then there was a special kind of camera for high speed; maybe there was another kind of camera that could go underwater; maybe there was a kind that could do infrared; and then eventually we would make a high-speed, underwater, infrared camera. So, all these things become more and more specialized, and that’s also going to be true of AI: we will have more and more specialized varieties of AI.
So let’s talk a little bit about [AI]. Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence? And in what sense is AI artificial?
Yes. So the big hairy challenge for that question is that we humans, collectively as a species at this point in time, have no idea what intelligence really is. We think we know it when we see it, but we don’t really, and as we try to make artificial, synthetic versions of it, we are, again and again, coming up against the realization that we don’t really know how it works and what it is. The best guess right now is that there are many different subtypes of cognition that collectively interact with each other, are codependent on each other, and together form the total output of our minds, and of course other animal minds. And so I think the best way to think of this is that we have a ‘zoo’ of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world and so…
That collection is still being mapped, and we know that there’s something like symbolic reasoning. We know that there’s a kind of deductive logic, that there’s something about spatial navigation as a kind of intelligence. We know that there’s mathematical-type thinking; we know that there’s emotional intelligence; we know that there’s perception; and so far, all the AI that we have been ‘wowed’ by in the last 5 years is really a synthesis of only one of those types of cognition, which is perception.
So all the deep learning neural net stuff that we’re doing is really just varieties of perception, of perceiving patterns, whether they’re audio patterns or image patterns; that’s really as far as we’ve gotten. But there are all these other types, and in fact we don’t even know what all the varieties of types [are]. We don’t know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope that we need to look into our minds to figure out how they work. So it’s not just that we’re creating artificial minds; it’s the fact that that creation, that process, is the scope that we’re going to use to discover what our minds are made of.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Visit VoicesInAI.com to access the podcast, or subscribe now:




iTunes | Play | Stitcher | RSS
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
https://gigaom.com/2018/12/13/voices-in-ai-episode-75-a-conversation-with-kevin-kelly/