Voices in AI – Episode 27: A Conversation with Josh Sutton
Wednesday, February 14, 2018
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese, and today our guest is Josh Sutton. He is a partner at a venture firm called AI.vc. Before that he had a long career at Publicis.Sapient, where he was the global head of data and AI. He holds a degree from MIT. Welcome to the show, Josh.
Josh Sutton: Thanks Byron, great to be here.
So what made you decide to leave the corporate world, I guess, and go into the venture world?
Well, it’s really a natural extension of how I viewed my career from the outset. When I left MIT, I joined Sapient, and I joined Sapient because the company’s value proposition was very compelling to me. The vision was to change the way the world worked, and that resonated with me at a very visceral level. Through twenty-plus years at that company, it really was a driving force behind how I thought about my day-to-day activities and how I tried to prioritize what I would do.
Over the past five or so years, as I’ve been spending more and more time focused on applied AI—looking at what artificial intelligence is just beginning to enable in society at large—it is so compelling to me, and it really is that natural extension of changing the way the world works. For me, it is stepping into a role where I continue to further that vision, and do it in a very active way—I’m investing in companies that I think are driving the realization of that.
And in terms of timing, why not a year ago or a year from now? Is there something special about this moment, like it’s the time where it’s matured enough? Speak to me about timing.
I do think it’s a special time right now. Look at prior iterations of technology that really changed the fabric of how society worked. I’ll go back to the Internet as a good example of that. There were two real waves of innovation that happened. The first was around the underlying technology—as you looked at browsers, looked at some of the core platforms—and then the next was in the application of those technologies to all of the businesses that could be transformed.
While there was a significant amount of value created in that first wave, the real value creation, from a long-term point of view, came in the transformation of industries—the Amazon.coms of the world, which used that technology to fundamentally shift the way something was done and make it better. When we look at AI right now, I think we’re just coming out of, or are still in the middle of, that first wave of companies that are deploying horizontal platform plays, and they’re starting to create tremendous value.
But I believe the real value creation is yet to come, and we’re starting to see that in the very early days of some seed-stage, and Series A-stage companies, which are using AI to transform existing businesses. And they’re doing it in a way that, I believe, will make them dramatically better than they are today, and look fundamentally different a decade from now than what we see around us every day.
So, I tried to quantify the value of the Internet—I know that’s a fool’s errand—but, you know, you start by adding up the value of all Internet companies, and then try to figure out what value it’s added to other companies, and, just on the back of the envelope, I came up with twenty-five trillion dollars. This, in a world where the combined wealth of everybody is two hundred and fifty trillion dollars.
Nobody ever thought that was going to happen. Nobody said, “Hey, if you bolt together a bunch of computers and let them share a common communication protocol, HTTP and all that, you’ll create twenty-five trillion in wealth.”
Nobody saw that coming, so how do you think AI compares to that? If the Internet made twenty-five trillion in wealth, if it’s worth that, how do you think AI compares to that?
I think if you use a comparable metric, AI’s going to have a bigger impact. There are a lot of very respected organizations coming out with numbers that are significant in and of themselves. I think PwC just came out with a number saying that by 2030, AI technologies will contribute over fifteen trillion dollars to the global economy—and that’s from an actual revenue point of view, not a market cap point of view.
So, when I look at the impact that AI can have, I actually think it’s going to be more substantial, because the Internet impacted, primarily, consumer-driven interaction—and, yes, there are definitely parts of B2B, parts of agriculture, parts of other core industries that make up a large portion of the global GDP, that it impacted in some way that maybe aren’t as significant as the way it impacted retail and consumer communication. But when you look at AI, and you look at what a cognitive multiplier can do across pretty much every industry on the planet, I think it’s going to be as impactful as the Internet was to advertising and retail—but across every industry on the planet, so I think it’s going to be larger.
The world-wide GDP is about seventy-five trillion dollars, and so if PricewaterhouseCoopers is right, fifteen trillion is, what, one fifth of it, twenty percent of that?
Yeah, I think they were predicting a fourteen percent increase, based on whatever numbers they were using.
And that seems like a reasonable thesis to you, that AI can improve the revenue of everything on the planet by fourteen percent? That passes the sniff test?
It does, with a pause—because that’s a big number no matter where you slice it—and I think that there’s going to be a tremendous amount of disruption associated with that, some of which will be very positive, some of which will be very negative and very frustrating. But I do think that the net increase in productivity is going to be rather amazing.
The fourth Industrial Revolution construct has been floating around for a while, which I like, but a different way of talking about it that I like even more is that there have been a couple of step changes in how our society has functioned. One was in using steam and electricity—so, like kind of a combination of the first two Industrial Revolutions—to serve as a magnifying factor on how people could perform things that traditionally required manual labor. So, really a magnifying factor of what people could do from a physical point of view.
Then you had digital as really the third Industrial Revolution, which focused on the prevalence of digital technologies. But really, to me, the impact of that—and we encapsulate it a little bit in our talk over the Internet here—was the ability to communicate instantaneously and share information around the globe. So it was really a communication vehicle, more so than anything else.
When you look at artificial intelligence, I think AI bears much more similarity to the first and second Industrial Revolutions, in that it’s taking our cognitive capability and putting a multiplicative factor on top of that. So, it’s enabling us to be smarter in every single business that we do, and change the way that we analyze information, and process information, and make decisions.
I can’t tell you right now whether, you know, a fifteen percent GDP increase is the right number, or whether it’s five percent or whether it’s fifty percent. But I can tell you that I believe that the value derived in a knowledge-based economy, which is what we’re becoming more and more of, is going to be significantly higher than it is today. And I think that when you put those pieces together in that way, it does pass the sniff test for me.
You made passing reference in your caveat that disruption will happen, “Some will be good, some will be frustrating.” Talk to me about each of those.
It’s going to change the way that we work today, so there are a lot of people talking about the end of work, or jobs going away. And I personally do not believe that to be true in any way, shape, or form. I actually think that when we look back fifty years from now, there’ll be as much work or more work than we have today, but it’ll be very different.
What AI’s going to do is change the nature of the work that needs to be done, and what we value. Certain tasks that people are paid for today—manual labor, or other work that can be automated—are going to go away. Entire jobs probably won’t go away, with a few exceptions, but a lot of the tasks within individual jobs will, so people will get much more efficient. And there’ll be new demand created for a number of different things that we all of a sudden have time for, prioritize, and put greater value on.
So when you tie that together, what’s going to happen is, we’re going to see a lot of jobs go away, but we’re going to see a lot of new jobs get created. But that’s going to happen in a shorter time frame than we’ve historically seen. Instead of happening over multiple generations, it’s going to happen over the span of a generation or two, which means that there’s going to be a lot of very frustrated people who’ve spent their lives learning how to do something that is no longer as relevant as they expected it to be. And the retooling and retraining of those individuals is going to be very, very challenging. I think it’s actually going to be the biggest social issue that we face as a society over the next twenty to thirty years.
Well, you know the country went from generating five percent of its power with steam to eighty-five percent in twenty-two years. Electrification of industry happened very quickly, too. It’s always been the case that when something new comes along, businesses race to adopt it, because it gives them an edge. I mean those were speedy adoptions of those technologies. Will AI be adopted faster, and why do you think it’s different?
So, when you look at steam and electricity, I agree with you on the timeframe of the actual deployment of that as a source of energy. What took much longer was the application of that new energy to different uses. Today, in what is largely an on-demand, knowledge-driven society, we’re seeing the application of AI happen almost instantaneously.
Whereas when you look at electricity, you know, just because you had electricity wired to almost every city across the world, or at least across a country, didn’t mean people knew how to use it and apply it. With AI, we’re already seeing that application proceed faster than, frankly, most people had projected.
It actually was shocking, even to me—who is a staunch believer in the power of AI and automation—to see a report that McKinsey came out with earlier this year that said that forty-nine percent of the tasks that people are paid to do today can already be automated with existing proven technology. So when you look at that, I think it’s happening faster from an adoption point of view than we’ve seen in historical transformations.
I applaud you that you were very precise about the wording, “forty-nine percent of the tasks that people do in their job,” but wouldn’t that be the same as when the PC came along? If you were an office worker before the PC, and then it comes along, it probably changed a big chunk of your job. And the Internet, you know, now you don’t type letters anymore, and you don’t wait for the mail in the morning and all of that…
It did, and things are changing, so there’s no debate about that. But I think that the way it’s changing is different. The Internet, at the end of the day, dramatically improved our ability to communicate—in some ways good, in some ways bad—but there’s no debating that. The sharing of information is dramatically better facilitated via the Internet than it ever was before. But it didn’t fundamentally shift how work got done.
I think something that was more along those lines might have been the earlier days of the computer—as you looked at spreadsheets, as you looked at digital technology as an overall piece of the puzzle—but that advanced at a very slow rate. What we were able to do with computers in the early days was very rudimentary, and it didn’t touch every business. It was very scientific in its nature. It started in academia, and it moved into engineering, and over time moved into various business domains. It really took, probably, a good fifty years from the time that computers came out to the time they were prevalent across the workforce.
When you look at the rate at which we’re adopting new technologies and new innovations today, that’s getting progressively shorter and shorter, which means that that ability to adapt and learn is also decreasing. Just looking at adoption of AI-powered devices, or digital devices in general, one example that caught me a little bit by surprise, actually, was the rate at which in-home devices like Amazon Alexa and Google Home have been adopted. They just passed, in the US, the ten percent adoption threshold, where over ten percent of the households have one in their homes. And that happened in the timespan of about two-and-a-half years, which is half of the time that it took for smartphones to be adopted.
So, as a society, we’re becoming more and more comfortable with new innovations changing the way we work, and we’re adopting them faster. What used to be a long process from leading-edge adopters to mass adoption is getting shorter and shorter, and the impact that’s having on the way our jobs get done is arriving faster as well, which is leading to a number of businesses being transformed—and in many cases put out of business—much sooner than they would have been otherwise. You can look at the macro theme of that in the average time a company stays in the S&P 500: that used to be seventy-five years, now it’s down to fifteen years, and it’s continuing to decrease.
For this book I have coming out, I tried really hard to figure out the half-life of a job, and it’s a hard thing to do. But the conclusion I came to, what I really believe, is that if you look at the period of 1950 to 2000, I think half the jobs vanished. But let’s say it’s a third—I’m very convinced it’s over a third—many of those were manufacturing jobs. If you look at the period of 1900 to 1950, it looked to me like fifty percent—mostly farm jobs—but let’s call it a third. From 1850 to 1900, I saw about the same—because you had the Industrial Revolution, you had the train and all of that coming along. So, I came to this conclusion that about every fifty years, half the jobs are gone, or forty percent or something.
If you were a betting man, and I know I’m putting you on the spot here, it sounds like you think that AI is going to accelerate that, that maybe instead of it taking fifty years to eliminate a third of the jobs, it might take X years, is that true?
Potentially, potentially not. I think it comes down to what a given job actually represents. So if you go back to the 1800s or early 1900s, if you were a bank teller, you knew exactly what you did, and over the entire course of your career, that didn’t change much. In today’s world—and I think this is progressively true, and I don’t have any data on this, but I’d be fascinated by a study about it—the rate of change within an existing job role increases over those same time periods.
I look at myself as a relatively odd example of somebody that spent over twenty years in one company, but I can definitively tell you that what I did varied dramatically over those twenty years. The job I was originally hired to do had nothing to do with the job that I did over the entire duration of my time there. And what the people hired into that same title twenty years later were doing bore little-to-no resemblance to what I was hired to do. So while that job still existed, what it represented was fundamentally different.
I think that acceleration is going faster than we’ve seen in previous times, and I think AI’s going to continue that acceleration. So while we might only see a third of the actual jobs disappear over that time period, I think that what a given job represents across almost all industries is going to be more fundamentally changed than it has been in prior times.
The last question I’ll ask along these lines—and then I’d love to talk more about how businesses are adopting the technology—is… I think everybody listening has had the experience of, you know, getting a new assignment at work, something they’ve not had to do before—like what happened to you over the course of twenty years—and they get online, Google it, read the Wikipedia entry, download some stuff, and basically teach themselves the new thing.
But you’re right, people didn’t have to do that before. I think of my father’s generation, he had one job for thirty-five years that didn’t change very much. But don’t you think we’re all up to that challenge? Like, yes the job can change, but we’ll just change with it—we aren’t straining at the capabilities of human beings to learn new things, are we?
No, I don’t think we’re straining at the capabilities of human beings to learn new things. I think what we’re doing is shifting where value is going to be created. This goes back to the original question around, “How much value is going to be created by AI?”
Take AI and proceed on the assumption that I believe to be true: a lot of the early applications that are successful are going to be in the narrow AI space, which augments humans rather than replacing them, and removes the onus of performing a lot of tasks that had traditionally been relatively low-level manual work—tasks that maybe required a bit of common sense, a little rote memorization or pattern matching, but weren’t things that really stretched our creativity or imagination, or forced us to push boundaries. As you automate that lower level of cognitive and knowledge work, what you’re doing is freeing up everybody in their given job to perform new activities—activities that should be higher value-add than what was being automated and replaced by AI.
I think it’s actually a good thing overall, and I think in many industries you’re going to wind up seeing people have more work than they do today. Because they have more options and more flexibility, and a lot of the baseline work that had prevented them from doing other things is going to be automated.
You know, Mark Cuban said the first trillionaires are going to be minted from artificial intelligence companies, would you agree with that?
I would agree with that. I think that when you look at the value creation possibilities for the ways that AI can be applied, I struggle to see a scenario where we don’t have trillionaires coming out of that space. Whether they’re the first, or the next incarnation of Bezos beats them to it, remains to be seen. But I really do believe that the first companies that span into multi-trillion-dollar market caps are going to be companies that are powered by AI in a meaningful way, and they’re going to be doing things that transform society. So I don’t necessarily think they’re going to be pure AI horizontal plays, but I think they will be doing things that create value that would not have been possible without artificial intelligence technologies.
Well, that would be so exciting, because we’ve never had a single company worth a trillion dollars, though some are getting close to it.
Exactly, although by the time this airs, who knows. If Apple keeps going…
Right, right. So, that would bespeak an enormous amount of wealth being generated. To switch gears a little bit, tell me about your investment thesis. When you look at companies and what they’re doing in AI, what is it that you think is valuable, that you want to get involved with?
When we look at companies, it really isn’t fundamentally different than how I would look at companies from an enterprise adoption point of view, either as a consumer or as an investor in another space. The things that matter to me most are, what fundamental problems are the companies solving? Are they doing it in a way that’s creating outsized value for their customers and for the company itself and their investors? And do they have truly differentiated advantages that are enabling them to create that value in a way that would be difficult, if not impossible, for other companies to replicate in any meaningful timeframe?
So, when I think about that in the AI space, I really get excited about companies in applied AI. I think that there’s a lot in the horizontal space—I think that a lot of amazing work is being done by the Googles and Microsofts and Amazons and IBMs of the world. But the application of that to specific businesses is relatively untapped. As we look at companies that are identifying problems which historically couldn’t be solved via normal technologies, and creating unique value propositions that have fiscal value associated with them, as well as better outcome and experiential results—that’s what I get excited about from an investment thesis.
Where do you think the low-hanging fruit is? Is it in the medical field? Is it in using AI to improve business processes? Is it in scoring sales leads, or where do you think there are easy wins?
I think that there are easy wins across most industries. I mean you just hit on a bunch—healthcare, finance, sales automation; in ad-tech there’s a lot, and even in auto there’s a lot. But I think that the wins get definitively easier to quantify as companies try to tackle specific problems. A big red flag that I have when I look at potential companies is when they’re trying to boil the ocean. I don’t think anybody has succeeded at boiling the ocean. If Google or Microsoft can’t boil the ocean yet, then I have very little faith that a twenty-person team that’s trying to do something brand new is going to have that same level of success.
I get excited when I see focus, and scared when I see wide-ranging breadth. So, when I look at the areas that I think have the most opportunity, healthcare is a huge one. You look at the healthcare demands in this country, and I think if we could increase the effectiveness and efficiency of our healthcare system by five hundred percent, or one thousand percent, there would still be too much work to be done. From a research point of view, from a treatment point of view, from an ongoing wellness point of view—that’s a relatively greenfield suite of opportunities.
Financial services is such a data-driven industry that there’s a tremendous amount of optimization possible there. I don’t necessarily think about a tremendous amount of incremental revenue generation there, but I do think about the removal of a lot of inefficiencies from the system. And I think the entire paradigm around transportation is changing, with autos and autonomous vehicles—that’s going to be a very fascinating space, looking at companies in the applied area and asking: what does it mean to be in a vehicle when you’re no longer driving?
Would you agree that there’s a labor shortage of practitioners in the field that is hampering the growth of a lot of these applied solutions you want to see?
Absolutely, there’s a massive labor shortage of people in a few different areas. Everyone immediately jumps to the shortage of data scientists and AI scientists—machine learning and deep learning experts. And I agree with that wholeheartedly; I think there is a shortage of supply there, and that’s a group of people who are in very high demand and will likely remain so for the foreseeable future.
What I also think is that there’s a shortage, to an even greater degree, of people who have a deep understanding of a given business area, a well-grounded understanding of what is and isn’t possible with AI technologies, and the ability to bridge those two things together to identify which solutions are possible and which would have a meaningful impact on a given business.
That is the diamond in the rough that I’m looking for—people that can combine those two pieces of knowledge together, and create a positive outcome from that, and I think that’s even in shorter supply than machine learning scientists right now.
Talk to me a little bit about geography. Where are you basing AI.vc?
AI.vc, and also AI Capital—we go by both, AI Capital for a wide range of things. We are based, I jokingly say, on an airplane. We have operations out of New York and Denver, but our belief is that companies deploying AI solutions, more often than not, are going to be based in and around the industries that they’re trying to disrupt. So if you’re a finance company, it’s likely that you’re going to be in New York; if you’re a healthcare company, you’re going to be around hospital networks, like a Hopkins or somewhere like that. If you’re in the agriculture industry, you’re going to be in the Plains states or Denver.
There’s a tremendous amount of benefit that these companies derive from being near the industries that they’re disrupting, and that’s a very different paradigm than what we’ve seen over the past couple of decades, with the center of the venture universe—the startup universe—being in Silicon Valley. I think Silicon Valley served an amazing purpose for what it was, which was creating a culture of innovation, and creating a new paradigm for how people could communicate and work with one another. But as we look at companies using AI to transform industries, I think that it’s much more about transformation than outright disruption and elimination of the way something worked. I believe—based on what I’ve seen in our portfolio companies as well—that the best-performing companies are working hand-in-hand with the industries that they’re aligned with in those geographies. So, even though we’ve got our headquarters in New York and Denver, we’re traveling the country every day, going to where the best-performing companies are.
You know, you made that comment about the S&P 500—that the average time is down to fifteen years—and of the original Dow Industrial stocks, only one is still on the list, and that’s GE, which got dropped twice along the way. Do you think that large corporations are up to the challenge of seeing the potential in this technology and adopting it? Or is it going to be a lot of overthrowing the old guard going on?
I do think that there’ll be a significant amount of overthrowing the old guard. I think there are some very real and meaningful advantages to being an incumbent in a number of industries—especially areas like healthcare and finance, which are heavily regulated. They are, I don’t want to say insurmountable, but there are very real advantages for the incumbents. And I think that what we’ll see a lot of is some of the incumbents that are the best at adopting the power of what AI can do to improve their businesses becoming acquirers of the laggards.
So we’ll see some consolidation of the old guard, and in parallel with that, we will see, obviously, some new companies come into the mix, that create value propositions and grow very quickly and change the dynamic of a given industry. I think it’ll be a mix. I do think that if you take that rate-of-change paradigm that’s playing out right now, and hold it steady for the next decade, it means three out of every four companies or so on the S&P 500 will have changed. And I think that’s directionally correct, and it’s probably going to come half from new companies and half from M&A of existing companies that are sub-performing compared to their peers, being bought and rolled in.
What do you see around the world in terms of the technology? Vladimir Putin said whoever controls AI runs the world. The Chinese have committed to investing an enormous amount of money in strategic technology. Do you think the United States holds the lead in the science with it? Are you only going to invest in US-based companies?
We’re primarily investing in US-based companies, but if there’s an amazing company outside of the US, we’re not going to limit ourselves. I think especially as you look at the UK, Canada, and some other areas near us, there are some great companies there, and I think you’re going to see some amazing things.
Back to the first part of the question—does the US hold a lead right now, and is that going to remain?—I think absolutely the US holds the lead today, and it’s a fairly significant lead. But I believe that we are under-investing in many ways compared to other countries.
You have two different forces in play. One is the corporate investment ecosystem. I still think that the US corporate investment ecosystem is at, or above, the level of anywhere else in the world—you have your market leaders in the US that are committed to AI as a transformative component of the future, and are beginning to invest really heavily in that.
From a nation-state point of view, however, I think that we’re lagging. I think that there’s more that we could be doing, from a US government point of view, to foster innovation around artificial intelligence. If you look at Canada, as a great example, they are beginning to poach a lot of top talent from the US, because of the government’s funding and backing of major AI initiatives. I’m friends with a number of very talented individuals who have moved from New York to Toronto as a direct result of the opportunities that were created, either directly or indirectly, by the Canadian government.
Similarly I think, as you said, you’re seeing China invest remarkably aggressively in this space, and it’s only a matter of time before they continue to produce outsized results as well. I think it remains to be seen whether the corporate investment ecosystem of the US is enough to maintain the lead, even with the governmental investment in other locations, but it’s by no means a given. It could go either way in my opinion.
So do you think it’s likely the US would, at some government level, somehow have incentives for developing the technology? Or is that just kind of not part of our DNA in this country?
It’s a great question. I think it’s possible, but I think it would have to be in the context of applications of AI. If you look at the US historically, I think that we’ve done a lot of great work on early innovations in AI through the government, through vehicles like DARPA, and I think that could continue. And as you look at the next wave of AI, where AI goes above and beyond deep learning, I think the US can very much be in the driver’s seat—via government investment that, for better or worse, will come the way the government tends to invest in new technology: through a lot of the three-letter agencies and the DARPAs of the world, which are focused more on defense spending than anything else. The natural trickle-down from that, though, is a tremendous number of practical applications coming out.
I do think that, from a regulatory point of view, it’s somewhat interesting, in that the relatively lax policy that the US has around data is in some ways actually accretive to our ability to develop advanced AI solutions faster than others. You know, if you compare the ability to leverage data assets in the US to that in Europe, it’s a definitive advantage. The amount of access we have to data in the US—you can take any side of the debate as to whether our laissez-faire attitude towards data ownership is a good thing or not—but the fact is it’s easier to get access to a wide range of data sources here than it is elsewhere.
And, you know, in Europe they also have, I think it’s an EU-wide, or it’s about to be, this “right to know.”
Right. Wouldn’t that also be an inhibitor to innovation? Because if you were to ask Google, “Why am I number six in this search, and they’re number five?” I assume at this point they’d be like, “I dunno.”
Exactly. Now, GDPR is going to be, I think, one of the biggest challenges for Europe, from a business point of view, as it relates to the application of deep learning, and machine learning more broadly. Because what GDPR stipulates is that, if you’re making decisions related to finance or health or a fairly wide range of activities, you can’t do it with an automated system unless you can provide a human-explainable rationale as to why that decision was made.
And I’m sure I just butchered that, but directionally speaking, that’s what GDPR requires as it relates to that portion of it, and the fines for it are incredibly steep. I believe it’s up to four percent of a company’s global revenue per infringement, so it’s got incredibly meaningful teeth behind it. And my fear, for Europe, is that fear of that legislation is going to prevent people from adopting certain components, driven by machine learning, that could dramatically improve their business. It will make them, in many ways, vulnerable to companies coming in from outside the EU to offer services there—companies that don’t have the same issues to deal with as they develop their product offerings.
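The explainability requirement Josh describes can be made concrete with a small sketch. The scorecard below is entirely hypothetical (the feature names, weights, and threshold are all invented for illustration), but it shows the shape of an automated decision that records a human-readable rationale: each feature’s contribution to the score is logged and ranked, so a reviewer can see why the system decided as it did.

```python
# Hypothetical sketch: an automated credit decision that emits a
# human-explainable rationale alongside its outcome. All feature names,
# weights, and the approval threshold are invented for illustration.

WEIGHTS = {"income_band": 2.0, "missed_payments": -3.0, "years_at_address": 0.5}
THRESHOLD = 4.0

def decide(applicant):
    """Return (approved, rationale) for a loan application."""
    # Per-feature contribution to the overall score.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    # Rank features by how strongly they pushed the decision, so a human
    # reviewer can see *why* the outcome came out the way it did.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    rationale = "; ".join(f"{name}: {value:+.1f}" for name, value in ranked)
    return score >= THRESHOLD, rationale

approved, why = decide({"income_band": 3, "missed_payments": 1, "years_at_address": 2})
print(approved, "|", why)
```

A transparent additive model like this is one way to satisfy a “human-explainable rationale” requirement; the harder problem, which the conversation turns to, is producing such a rationale from a deep learning model that is not inherently interpretable.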
We were talking about the talent shortage earlier, and one response has been to, kind of, raid universities. I’m sure it’s a great time to be a professor of artificial intelligence or data at a major university. Do you have any opinion on how that’s worked out? The people from academia who’ve gone into this industry—have they thought like business people, in terms of building products, shipping products, making profitable products, and so forth?
I think it’s too early to tell, but my belief—and I could be wrong on this—is that most people who enjoy teaching and enjoy being professors enjoy it because they love the purity of the science and the purity of what they’re doing. And the reality of the corporate world is, it’s messy, it’s dirty, and it’s never as clean as anyone would like it to be, and there are a lot of tradeoffs that need to be made in the corporate world to make a company successful.
So I’d be surprised if you saw a lot of people make a pivot from being a professor to being a leader in a corporation. Because the mentality that’s required to teach somebody the way the world should work, and the way the world does work, is very different from the mentality that’s required to get a company off the ground and be successful, and to deal with all the volatility of the public markets and the fickleness of consumers, whether they’re corporate or individual retail consumers.
As long as there’s been cryptography, there’s been this ongoing battle between the people that make the codes and the people that try to break them—and there’s still a debate about who has the easier job. We’re seeing more and more data breaches, or at least we’re hearing about more and more of them. Do you think that that is something that will continue, as this technology can be used for that? Or is it just as likely that artificial intelligence will be used to defend against these sorts of attacks in the future? Should we just kind of count on nothing being private or safe or secure?
Well, I think that question, in my mind, has very little to do with artificial intelligence, because artificial intelligence is a consumer of data. I think the root of that question is a shift that we’re going through as a society: we are more data-driven than we ever have been, and our trajectory is to become even more data-driven in the future. So given that, I think it’s only natural that we’ll continue to see data breaches and challenges with it, because we’re more reliant on data, and there’s more of it than we’ve ever seen in history.
I think AI will be a force on both sides of that, for good and for ill, as people try to use it both to hack into systems and to defend them. And that will, unfortunately, create a relatively inefficient ecosystem, with people spending a lot of money on both sides of something just to maintain the status quo.
We were talking earlier about the creation of wealth from this, that it will affect every industry and so forth—and you’ve seen lots of examples of how enterprises have implemented artificial intelligence. If there are listeners, and I’m sure there are, who’re like, “Okay I’m convinced. My organization needs to start being very serious about this.” What kind of advice do you give them?
Is it like it was in ‘95, when everybody made a web department? Now you make an AI department? Where is it led, who does it, and all of that?
No. So, the way I think about it—and I’ve walked very large numbers of companies through this, going back to my days at Publicis.Sapient and consulting there—is going through a very structured process, in a very short time-frame, about how to think about AI and the enterprise. And the first step in that is to sit down with your team, and brainstorm and identify all of the different things that you could theoretically transform with AI. Not that you’re going to, but what in your business could change as a result of leveraging the various types of AI technologies—from natural language understanding to machine learning to machine vision, all of the different flavors. And I tend to bucket them into the macro use cases that I think about—conversational technologies, insight generation, and task-level automation—as the three big buckets that help me frame a context for the different ways to assess what could change in a business.
Once you’ve done that, then the next step is to take a look at all those things, and figure out, what are the data assets that I’d need for this? How does all of this fit together with data that I already have in-house, or data that I’d need to get externally? That then paints a picture of, “Here’s the full range of how my business could change, and here’s the full range of data that might be needed to do that.”
From there you can start to decompose that into, “What are the individual services that would be required?” You know, I know I need a natural language understanding capability to handle these twelve different use cases I identified. I know I need a deep learning capability, because a lot of it’s insight generation from these different sources of data. Then you can take that to say, “Okay great, now that I understand the services I need, let me look across the different technologies that are out there—both solutions that are industry-specific and pre-packaged, as well as the platform plays—so I can pull together what I need.” And the nice thing there is, you’ve decomposed it into services, so it’s fairly straightforward to get a small sampling of technologies that are going to address most of what you need to do. I’d love to say there’s one company out there that can do everything, but I haven’t seen it yet. I’d love it if that plays out someday, but it’s not there today.
Then, and only then, can you start—and by the way, many companies can go through the entire four-step process that I’ve just laid out in a period of days, so it’s not a big onerous task; but I think it’s very important to start with why you’re trying to do things, then the data you need, and then move into the technology—to go into very tight iterative developments, using a term that I stole from a gentleman at Lloyds who runs their machine learning program: “proof of value.”
A lot of people talk about pilots, or proof of concept, and the problem with that is that there’s an implication that it doesn’t need to produce value. I love the phrase “proof of value,” because that’s what you’re trying to do in a very short time-frame, is take a specific use case and demonstrate that, with AI, you can produce a real outcome that’s going to influence the business. Pick a number of those that you can execute over a period of typically weeks rather than months, and build those out, learn from them, and then start to get really focused on—as you learn what works and what doesn’t—how do you create an experiential design around that, so that your systems are adopted and used.
That’s one of the areas where I think a lot of enterprises fail: they get caught up in the solution, and forget about the experience. And that experiential design is a huge component of what I’ve seen make solutions successful in the enterprise—making it easy for people to adopt. Then it’s really just iterating, and accepting that there are going to be mistakes and failures, and people are going to do things that confuse and annoy you; and you’re going to learn things about your business that you didn’t expect to as well. So it’s going to be a very iterative process, and I think enterprises need to think about it in that way as well. I apologize, because that was kind of a long-winded answer, but that’s generally how I think about walking a corporation through deploying artificial intelligence as a transformative agent within the company.
The things that make the headlines are when artificial intelligence beats the best player of some game—you had Deep Blue and Kasparov in ‘97, you had Ken Jennings and Jeopardy, you had Lee Sedol and Go—and the reason games work so well is, they’re kind of constrained universes with defined rules and all of that. Is it a useful way to look around your enterprise, at what looks like a game?
Like, we have employees that get great performance reviews, and we have employees that get bad ones, and we have a bunch of applicants—how do we pick the applicants that look like these other ones? Getting from Point A to Point B can look like a game. Is that a useful metaphor for a company, or not particularly?
I think it’s actually a useful metaphor, for today, as companies get started. And it’s not necessarily what you can do as a game, but narrowing the scope of the solution you’re trying to build, as something that’s very definable. I’ll give you an example: One of our portfolio companies, Luminosa, does a great job at taking all sorts of customer feedback, and written things, and tying that to specific outcomes—like customer churn—and just ripping through all of that and identifying, “Here are the top fifteen things that are driving customer churn, and the relative correlation of each of them.” So it’s a tight problem set, and you can create some meaningful insights out of that.
I think that ability to define the narrow problem set that you’re trying to solve, define the answers and outcomes you’re trying to get, and have a clear vision of what “winning” looks like—in the terms of a game—is a nice way to frame it.
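The churn analysis described above can be sketched in a few lines: given per-customer feature columns and a binary churn outcome, rank the features by the strength of their correlation with churn. The data and feature names below are invented for illustration; a real system of the kind Josh describes would derive features from customer feedback text and operate at far larger scale.

```python
# A minimal, hypothetical sketch of ranking churn drivers by correlation.
# The feature columns and sample data are made up for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def churn_drivers(features, churned, top_n=3):
    """Return (feature, correlation) pairs sorted by |correlation| with churn."""
    scored = [(name, pearson(col, churned)) for name, col in features.items()]
    return sorted(scored, key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

features = {
    "support_tickets": [5, 1, 4, 0, 6, 2],
    "tenure_months":   [3, 24, 6, 36, 2, 18],
    "logins_per_week": [1, 7, 2, 9, 1, 5],
}
churned = [1, 0, 1, 0, 1, 0]  # 1 = customer churned

for name, corr in churn_drivers(features, churned):
    print(f"{name}: {corr:+.2f}")
```

On this toy data, support-ticket volume correlates positively with churn while tenure and login frequency correlate negatively, which is the kind of ranked, “top drivers of churn with relative correlation” output the conversation describes.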
But I think as we move forward, it’s also going to become very powerful to look at—as you gain additional insights into the business—what are some things you could do today that you couldn’t do yesterday, because of either cost pressures or a lack of knowledge that was preventing you before.
Why do you think artificial intelligence is so hard? I mean, we’re almost like pleasantly surprised when it works. Is it because intelligence itself is hard? Or we don’t know what we’re doing yet? I mean, why are chatbots so poor, and as a general rule, the experience of interacting with the systems doesn’t wow me?
I think it’s a combination of a few different things. The first is, very few companies that are building AI solutions and deploying them think about experience design, and experience design is so important to adoption. If constraints and people’s expectations were set properly, a lot of what we’ve produced would be incredibly positively received; but since we don’t set expectations and don’t design for experience, it winds up being viewed negatively. The initial rollout of Siri, I think, is one of the best examples of that. From a company that traditionally is incredibly good at design, they still missed the boat, because they didn’t set expectations about what Siri could and couldn’t do.
Another contributor to the problem is the way that artificial intelligence has been portrayed in the media, from movies through to advertisements. I think IBM has done a great job of putting AI on everybody’s radar with Watson and Deep Blue and all of that, but they’ve created an advertising campaign that created a perception in people’s minds about what AI can do that is more aspirational than reality, at this point in time. And I think that, combined with movies, has created a perception that AI is this magic silver bullet that can do everything, which is simply not true yet. It can do a lot of amazing things, but it’s not what you believe it to be, based on what you’ve seen in the movies, so that also creates a disconnect in people’s minds.
The third thing is—and you alluded to this in the question itself—we’re still figuring AI out. If you rewound seven or eight years and talked to any data scientist about neural nets and deep learning, they would have laughed at you and said, “Yeah, Minsky proved that wrong years ago.” Clearly not the case, as we’re finding out. But the reality is that we’re still learning, and we’re in our infancy in figuring out how to deploy AI technologies. And I think we’re going to go through another decade-plus of learning curves of different types of AI technologies, and I actually believe that we’ll see, probably, some other technologies that are from the past, that we have ruled out, that come back into play, as we have incremental data and processing power to capitalize on.
I read something recently that I think almost every guest I’ve had on the show would disagree with. It was somebody who was trying to quantify and say that all of the advances we’ve seen in AI could be attributed to Moore’s Law, and that it’s just the fact that the processors are doubling and doubling and doubling—and that’s basically the success we’re seeing in artificial intelligence. I assume you think it’s a little more than that, right?
Oh, it’s significantly more than that. Moore’s Law is a straight computational extension. What I think we’re seeing in artificial intelligence is the realization of a number of different ways of processing information, and analyzing information, and creating new insights, and deriving information that we’ve known about, from a theoretical sense, for a number of decades—really going back to the earliest days of AI, back in the ‘50s, and then as it was really developed further in the ‘60s and ‘70s, by the giants of the field, like Minsky—that’s continued to push forward. And we’re just starting to see the realization of some of those insights.
Just like with the invention of many transformative technologies, it takes a while for the application to catch up, and AI as a concept is fifty years old. Actually it’s a lot more than fifty years old.
Yeah, 1956, and we’re just starting to see the realization of that today. And I think it has nothing to do with any increases in computational power in that sense. I do think, as it relates to deep learning specifically, the advancement in computational power, and the increase in available data, has allowed us to demonstrate that what we thought was theoretically possible decades ago is indeed real today. And Hinton, and everyone who’s worked with him, have really been at the forefront of demonstrating that to the world. And I think we’re going to see incremental advances from other players as well.
So, I, along with every other guest, would disagree with that, because I think what we’re looking at is more of a new idea and a new concept of how to apply different ways of analyzing and processing data, that has really very little to do with computational power—other than the fact that it’s an enabler of something that we’ve wanted to be able to do for awhile.
I, of course, completely agree with you.
So as real as AI is, I’m sure in your role you see that every startup out raising money figures out a way to kind of work AI into the deck somewhere, right? What kind of litmus test do you use to say, “Okay, that’s real, and this is not”? Is it that they’re building learning systems, or—
The way I look at it is, I’m much more concerned with the application of AI. Is it enabling and doing something that you couldn’t do otherwise? And I’ve got a little bit of advantage over other investors in that I’ve got a very strong tech background, and spent a number of years in the role I was in, where I was looking at applying AI solutions to real problems for corporations. So when I think about a company, and start to dig in, it comes down to the fundamental question of, how are you using this technology? And what are you doing that would differentiate it from how you could do it with human staff, and is that a meaningful differentiator?
Using AI just to be an AI company, if it doesn’t provide an advantage, is frankly a reason I’d take money away from a company rather than give it to them, because they’re wasting time and effort. I actually had this debate with a colleague at another investment company the other week: between the two of us, we put the share of companies that claim to be AI companies and are actually using AI in a real or meaningful way somewhere in the five to twenty percent range. I was on the twenty side, but as I’ve thought about it more since the conversation, I think he might have been right that it’s closer to five percent.
We’re nearing the end of our time, and this has been a really unusual episode, because I knew going into it that I wanted to spend my time with you talking about the here and the now, how do you do things, and that kind of thing.
Often, I spend a bunch of the show exploring the future. In like the last three minutes, tell me what you think is the net of this at a society level, are you bullish? Are you optimistic that life in twenty years or thirty years or forty years is going to be better than it is now? Or not? And how do you see the future unfolding?
I’m incredibly bullish. I look at every major cycle of transformation that we’ve been through, and I think that this is going to be one of the larger ones. And every single one of them, while it’s had issues along the way, has resulted in a dramatically higher quality of life for everybody on the planet than was the case before. I think that this is going to be no exception to that. And as we look at what AI transforming every business on the planet is going to mean to us, it’s going to mean a much more open society. It’s going to mean that we’re able to process and analyze information in a way that is fundamentally better and differentiated from anything that we’re used to today.
And as we look at solving problems, and trying to address the biggest challenges we have in the world—you know, hunger, poverty, health—these are things that AI will be a force for good in.
And I’ve always been an optimist, but I really find myself thinking more positively about the future at this point in time, than I ever have in the past, so I’m unashamedly bullish on the future.
Well that is a wonderful place to leave it. And I want to thank you for being on the show, you’re invited to come back any time you like, Josh. It’s fascinating talking to you.
Thanks for having me Byron, great to talk to you as always.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI: Visit VoicesInAI.com to access the podcast, or subscribe now.