Voices in AI – Episode 87: A Conversation with Sameer Maskey

Thursday, May 16, 2019, 02:00 PM, from The Apple Blog
About this Episode
Episode 87 of Voices in AI features Byron speaking with Sameer Maskey of Fusemachines about the development of machine learning, languages and AI capabilities.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Sameer Maskey. He is the founder and CEO of Fusemachines and he’s an adjunct assistant professor at Columbia. He holds an undergraduate degree in Math and Physics from Bates College and a PhD in Computer Science from Columbia University as well. Welcome to the show, Sameer.
Sameer Maskey: Thanks Byron, glad to be here.
Can you recall the first time you ever heard the term ‘artificial intelligence’ or has it always just been kind of a fixture of your life?
It’s always been a fixture of my life. But the first time I heard about it in the way it is understood in today’s world of what AI is, was in my first year of undergrad, when I was thinking of building talking machines. That was my dream, building a machine that can sort of converse with you. And in doing that research I happened to run into several books on AI, and particularly a book called Voice and Speech Synthesis, and that’s how my journey in AI came to fruition.
So a conversational AI… I assume early on you heard about the Turing Test and thought, ‘I wonder how you would build a device that could pass that.’ Is that fair to say?
Yeah, I’d heard about the Turing Test, but my interest stemmed from being able to build a machine that could just talk: read a book and then talk with you about it. And I was particularly interested in being able to build that machine in Nepal. I grew up in Nepal and I was always interested in building machines that can talk in Nepali. So more than the Turing Test, it was just this notion of ‘can we build a machine that can talk in Nepali and converse with you?’
Would that require a general intelligence, or are we not anywhere near a general intelligence? For it to be able to, like, read a book and then have a conversation with you about The Great Gatsby or whatever. Would that require general intelligence?
Being able to build a machine that can read a book and then just talk about it would require, I guess, what is being termed artificial general intelligence. That begs many other kinds of questions about what AGI is and how it’s different from AI and in what form. But we are still quite a long way from being able to build a machine that can just read a novel or a history book and then just be able to sit down with you and discuss it. I think we are quite far away from it, even though there’s a lot of research being done from a conversational AI perspective.
Yeah I mean the minute a computer can learn something, you can just point it at the Internet and say “go learn everything” right?
Exactly. And we’re not there, at all.
Pedro Domingos wrote a book called The Master Algorithm. He said he believes there is some uber algorithm we haven’t yet discovered which accounts for intelligence in all of its variants, and part of the reason he believes that is, we’re made with shockingly little code, DNA. And the amount of that code which is different from a chimp’s, say, may only be six or seven megabytes. That tiny bit of code doesn’t have intelligence obviously, but it knows how to build intelligence. So is it possible that… do you think that that level of artificial intelligence, whether you want to call it AGI or not, but that level of AI, do you think that might be a really simple thing that we just haven’t… that’s like right in front of us and we can’t see it? Or do you think it’s going to be a long hard slog to finally get there and it’ll be a piece at a time?
To answer that question, and to be able to say maybe there is this Master Algorithm that just hasn’t been discovered, I think it’s hard to claim anything towards it, because we as human beings, even neurologically, even neuroscientists and so forth, don’t fully understand how all the pieces of cognition work. Like how my four and a half year old kid is just able to learn from a couple of different words, put them together and start having conversations about it. So I think we don’t even understand how human brains work. I get a little nervous when people claim or suggest there’s this one master algorithm that’s just yet to be discovered.
We have this one trick that is working now, where we take a bunch of data about the past and we study it with computers and we look for patterns, and we use those patterns to predict the future. And that’s kind of what we do. I mean that’s machine learning in a nutshell. And it’s hard for me, for instance, to see how that will ever write The Great Gatsby, let alone read it and understand it, but how could it ever be creative? But maybe it can be.
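[Editor’s note: as a concrete illustration of the “find patterns in past data, predict the future” loop Byron describes, here is a minimal sketch in Python. It assumes scikit-learn is available; the toy spam-filter features and numbers are invented purely for illustration.]

# A minimal sketch of "learn patterns from past data, predict the future"
# (assumes scikit-learn; the toy spam features and data are invented).
from sklearn.linear_model import LogisticRegression

# Historical data: [number of links, count of the word "free"] for past emails
past_emails = [[0, 0], [1, 0], [5, 3], [7, 4], [0, 1], [6, 2]]
was_spam    = [0, 0, 1, 1, 0, 1]   # labels observed in the past

model = LogisticRegression()
model.fit(past_emails, was_spam)   # learn patterns from historical data

new_email = [[4, 3]]               # an email the model has never seen
print(model.predict(new_email))    # predict, based only on those past patterns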
Through one lens we’re not that far along with AI, so why do you think it’s turning out to be so hard? I guess that’s my question. Why is AI so hard? We’re intelligent, we can kind of reflect on our own intelligence and kind of figure out how we learn things, but we have this brute force way of just cramming a bunch of data down the machine’s throat, and then it can spot spam email or route you through traffic and nothing else. So why is AI turning out to be so hard?
Because I think the machinery that’s been built over many, many years, the way AI has evolved to where it is right now, is, like you pointed out, still a lot of systems looking at a lot of historical data, building models that find patterns in it and making predictions from it, and that requires a lot of data. And one of the reasons deep learning is working very well is that there’s so much data right now.
We haven’t figured out how, with a very little bit of data, you can generalize the patterns to be able to do things. And that piece, how to build a machine that can generalize its decision-making process based on just a few pieces of information… we haven’t figured that out. And until we figure that out, it is still going to be very hard to make AGI, or a system that can just write The Great Gatsby. And I don’t know how long it will be until we figure that part out.
A lot of times people think that a general intelligence is just an evolutionary product of narrow intelligence. We get narrow, then we get… first it can play Go, and then it can play all games, all strategy games. And then it can do this, and it gets better and better, and then one day it’s general.
Is it possible that what we know how to do now has absolutely nothing to do with general intelligence? Like we haven’t even started working on that problem, it’s a completely different problem. All we’re able to do is make things that can fake intelligence, but we don’t know how to make anything that’s really intelligent. Or do you think we are on a path that’s going to just get better and better and better until one day we have something that can make coffee and play Go and compose sonnets?
There is some new research being done on AGI, but the path right now, where we train on more and more data with bigger and bigger architectures and sort of simulate or fake intelligence, I don’t think that would probably lead to solutions that have general intelligence the way we are talking about. It is still a very similar model to what we’ve been using before, and that was invented a long time ago.
They are much more popular right now because they can do more, with more data and more compute power and so forth. So when a system is able to drive a car based on computer vision and a neural net and the learning behind it, it simulates intelligence. But it’s probably not really the way we describe human intelligence, such that it can write books and write poetry. So are we on the path to AGI? I don’t think the current evolution of the machinery is going to lead you to AGI. There are probably some fundamentally new ways of exploring things, and of framing the problem, that are required to get to how general intelligence works.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
https://gigaom.com/2019/05/16/voices-in-ai-episode-87-a-conversation-with-sameer-maskey/