AI was expected to revolutionize the way we do just about everything, but the changes that were promised haven’t materialized as quickly as expected. What’s holding AI back?
On this episode, host John Stackhouse sits down with Ajay Agrawal to dig into this question and more. Ajay is a professor at the University of Toronto’s Rotman School of Management; he was named to The Order of Canada this year for his contributions to enhance Canada’s productivity, competitiveness, and prosperity through innovation and entrepreneurship, and he’s the founder of the Creative Destruction Lab, an early proponent of AI ingenuity.
Ajay is also the author of two books about AI. His latest, Power and Prediction: The Disruptive Economics of Artificial Intelligence, co-written with fellow Rotman professors Joshua Gans and Avi Goldfarb, focuses on the fact that AI hasn’t lived up to the excitement that he himself helped create. When he looked back at the predictions made in his 2018 bestseller, Prediction Machines: The Simple Economics of Artificial Intelligence, he realized it was time to shift focus away from AI as a technology and instead look at the economics of the systems in which it operates.
This episode also features an exciting AI technology called GPT-3, which uses deep learning to produce text that reads like it was written by a human. It was created by OpenAI, an organization founded in San Francisco in 2015. Ilya Sutskever, their chief scientist, is Canadian and a U of T alum.
GPT-3 even provided three brief summaries of John and Ajay’s conversation:
“AI is reducing the cost of prediction, but has not had as big of an impact as many people thought it would. This is because there are many other parts of the system that need to change in order for the benefits of AI to be realized. One example of this is Uber, which is a system-level change that was enabled by AI.
AI is not going to figure out the complexities of health care, it’s a matter of time for organizations to experiment with A.I. to figure out how to best use it. A.I. is a prediction tool to help organizations optimize against an objective function.
A.I. will have a profound impact on the workforce, but it is important to remember that humans will still be needed in many jobs. A.I. has the potential to reduce discrimination by identifying and correcting bias. Canada is doing well on the research side of A.I., but there is still room for improvement on the application side.”
“Creative Destruction Lab was designed to address the market failure of commercializing early stage science. The program helps entrepreneurs with the judgment they need to turn their scientific innovation into a business. AI is characterized as a drop in the cost of prediction.
AI is not going to figure out the complexities of health care. There are many barriers to deploying AI in health care, including system frictions that are not aligned with the incentives of hospitals, doctors, and insurers. It is difficult to experiment with AI in health care because of the need for a system-level overhaul.
AI has the potential to help reduce discrimination by making it easier to detect and then fix. However, too much regulation of AI has the potential to stifle innovation. Canada is doing well on the research side of AI, but there is room for improvement on the application side.”
“AI is not as prevalent as we thought it would be, and the reason is that we severely underestimated when we wrote our first book, the importance of all the other parts of the system that need to change above and beyond the actual prediction tool created by the AI.
The book is about A.I. and its impact on the workforce. A.I. is not going to figure out complexities that people will need to in order to make health care cheaper, faster, and more accessible. There is a need for a culture of experiments with a purpose in order to accelerate change. A.I. can help with optimizing for a thing like an objective function, such as resilience.
A.I. may actually help reduce discrimination by providing a more objective way to detect and fix bias. However, too much regulation of A.I. could stifle innovation. Canada is doing well on the research side of A.I., but there is room for improvement on the industrial/application side.”
Amazingly concise! This episode also features an AI-generated John Stackhouse, so listen in and see if you can tell the difference.
To read Ajay Agrawal’s newest book, “Power and Prediction: The Disruptive Economics of Artificial Intelligence”, co-written with fellow Rotman School of Management professors Joshua Gans and Avi Goldfarb, click here. Follow this link to the University of Toronto’s article about testing out GPT-3, and this one for more about OpenAI, GPT-3 and DALL·E 2. Some background on IBM Watson can be found here.
Speaker 1 [00:00:01] Hi, it’s John here. On this show, we’ll be talking about artificial intelligence, or AI. But first, I want you to meet someone. Or should I say something? Hi. It’s an AI version of John here. It’s remarkable how much I sound like the real thing. Another remarkable thing I can do is write podcast episode titles. For example, our friends at the University of Toronto were kind enough to feed the interview you are about to hear through GPT-3. Hi, it’s the real John again. GPT-3 stands for Generative Pre-trained Transformer, and it was created by OpenAI, an organization founded in San Francisco in 2015 by Elon Musk and Y Combinator’s Sam Altman. Ilya Sutskever, their chief scientist, is Canadian and a U of T alum. He’s also a former grad student of AI pioneer Geoffrey Hinton, who we’ll hear about a few times in this episode. GPT-3 uses deep learning to produce text that reads like it was written by a human, and their DALL·E 2 app is making headlines for its ability to turn text descriptions into hyper-realistic images. We wanted to test this tech marvel, so we took the interview that you’re about to hear, gave it to GPT-3, and it came up with a clever title: “AI: We Weren’t as Prepared as We Thought.” No kidding. Hi, it’s me again, AI John. AI has proven itself capable of many things, like reading and delivering a podcast introduction and writing great titles. But has it proven to be the game changer many thought and promised it would be? I would say no, or at least not yet. Thanks for that, AI John. GPT-3 also generated concise summaries of the interview you’re about to hear. One in particular reads: “AI is not going to figure out the complexities of health care. It’s a matter of time for organizations to experiment with AI to figure out how best to use it. AI is a prediction tool to help organizations optimize against an objective function.” But you decide. See if you agree with GPT-3’s thoughts on the topic.
This is Disruptors, an RBC podcast. I’m your host, John Stackhouse. Today’s guest is one of the many people who sees the potential but knows we still have a long way to go. Ajay Agrawal is a professor at the University of Toronto’s Rotman School of Management, a global center of research and teaching excellence at the heart of Canada’s commercial capital. Ajay was named to the Order of Canada this year for his contributions to enhance Canada’s productivity, competitiveness and prosperity through innovation and entrepreneurship. Looking back, Ajay admits that when he wrote his 2018 bestseller Prediction Machines, he overestimated how quickly AI-led changes would come. This was the impetus for Power and Prediction, his latest book, which looks at the economics of the systems in which technology operates. Ajay may also be a familiar voice to our listeners, as he was a guest on our special two-part series called The Creativity Economy, which we produced last year. We recently got back together to talk about how Canada can stay at the forefront of AI developments and how businesses of all sizes can stay ahead of the curve. Here’s our conversation. Ajay, welcome to Disruptors.
Speaker 2 [00:03:29] Thanks very much, John. Happy to be here.
Speaker 1 [00:03:31] You’re the founder of the Creative Destruction Lab. For those listeners who may not be familiar with CDL, can you give us a quick sense of what it is and what motivated you to launch it?
Speaker 2 [00:03:41] Sure. So Creative Destruction Lab is a not-for-profit program founded at the University of Toronto’s Rotman School, and the mission is to enhance the commercialization of science for the betterment of humankind. My Ph.D. dissertation was on the economics of commercializing early-stage science, and I spent a few years at MIT, and my first faculty job was at Queen’s. So I moved back to Canada. This had become a topical issue in Ottawa, that Canada was doing a good job on the science side but not a great job at commercializing the science. Given my topic area, I started getting invited to various policy meetings and white papers and roundtables and so on, and I thought that was great. After about ten years of doing that, I realized, wait a minute, I’ve been doing this for ten years, and everyone’s still talking about exactly the same thing as when I first arrived in Ontario. At that point, I decided I was never going to do another roundtable or breakfast meeting. I wanted to focus on actually doing something. When we launched Creative Destruction Lab, it was very small: there were 25 companies that came into the program. Today it’s much bigger; this year we took in 650 startups from around the world. But when we started, it was 25 companies. And the idea was that there was a missing market for what we call judgment. Judgment is simply this: when an entrepreneur wakes up in the morning, they have a thousand things they could be doing, and they don’t have the bandwidth to do all those thousand things. Let’s say they can do three things. How do they pick from the list? That’s judgment. You can’t go down to Bay Street and buy five units of judgment. It’s not for sale. And that’s what in economics we call a market failure, where there are willing buyers and willing sellers, but somehow there’s a friction that prevents the market from clearing.
And so Creative Destruction Lab was designed to address that market failure: we bring together the inventors who need the judgment and the people who have the judgment.
Speaker 1 [00:05:31] Is there a natural extension from that into AI, which CDL has become fairly well known for, and which we’ll get into more deeply in this conversation?
Speaker 2 [00:05:40] Yeah, so that was our first introduction to the modern incarnation of AI, which is through machine learning. Some graduate students of Geoff Hinton, who’s a well-known professor and pioneer in deep learning, came in 2012 into the first year of Creative Destruction Lab, and they introduced us to this new technique. In the beginning, we didn’t fully appreciate it, but they came to Creative Destruction Lab as a way to help them turn their scientific innovation into a business. In fact, the first one was a student named Abraham Heifets, and he had come up with a way of using this new technique, using AI, to predict which molecule would most effectively bind with which protein, and he created a company called Atomwise. So he brought that into Creative Destruction Lab and in the process introduced us to what would become a revolution in artificial intelligence.
Speaker 1 [00:06:29] I suppose I should just ask for your definition of AI, artificial intelligence.
Speaker 2 [00:06:35] Sure. In terms of the characterization, this was the definition that we had in our first book, from an economics perspective: we characterize a rise in AI as a drop in the cost of prediction. So effectively, it’s the technology that makes prediction cheap. In economics, we do that with all technology. We strip away the technical parts of a technology, and we always ask the same question about every technology: what does this reduce the cost of? So semiconductors reduce the cost of arithmetic, making arithmetic cheap; the Internet reduces search costs and the cost of digitally distributing goods and services; and AI reduces the cost of prediction.
Speaker 1 [00:07:21] Over the past ten years, there have been incredible advances in AI, but it’s also not played out in the ways that many of us thought it might. Ajay, what over the last five years of AI’s development has surprised you most?
Speaker 2 [00:07:37] On the negative side, I would have thought that it would be far more prevalent by now. It hasn’t transformed the world the way we thought it would when we wrote that book in 2018; we thought we would be further along in that transformation. On the positive side, there are some things like, for example, the large language models, DALL·E and the various other permutations, where you can type in a sentence describing a scene and it creates the picture, and GPT-3, where you type in a prompt and it writes a paragraph or a page or multiple pages. It’s like autofill, except rather than just filling in the rest of the sentence, it fills in the rest of the story. Those things are, I would say, better than what I would have expected by this time. But from an impact-on-the-economy perspective, it’s been significantly less than we expected, and that’s really what motivated the second book.
Speaker 1 [00:08:32] You have some really startling statistics in the book. One is that only 11% of corporations that have invested in AI have achieved success in scaling AI applications. Only 11%. These are big companies with lots of really smart people and capital and data to work with, and most of them are failing. Why is that?
Speaker 2 [00:08:54] Yes, that’s really the motivating question, the puzzle in the book: what happened? As we poked and prodded and met with lots of people in industry who are working with AI, we came to the conclusion that we had severely underestimated the importance of all the other parts of the system that need to change above and beyond the actual prediction tool created by the AI. In my view, this is really the key insight of the book. We make a comparison to electricity and how long electricity took to take off. The original value proposition for electricity was that it would save energy. So let’s say you have a factory: it didn’t make any economic sense to tear out your existing infrastructure and replace it with distributed electricity. But as new factories came online and some people were willing to experiment with this new electricity, they started to discover benefits above and beyond fuel savings. For example, in order to actually transmit the power to the machines in the factory, it would come into the building via a big steel shaft that would turn and power the machines. That thing was very heavy, so it required a lot of bracing and quite significant structural requirements in the building. As soon as you removed that, buildings could be constructed much lighter, and so it lowered the capital costs of building a factory. Secondly, because they wanted to keep those heavy shafts as short as possible, they would put all the machines as close as they could to the wall and they would build vertically. Once you removed that constraint, they realized they could make single-storey buildings, again much cheaper to construct. And then perhaps the biggest advantage they discovered: before, they would power everything off a single shaft, so that when one machine went down, the entire factory came to a halt. With this new distributed electricity, each machine had its own power source.
And so when one machine went down, the others could keep operating, and that made the whole thing a lot more productive. When you start adding up all those benefits, they became much greater than the energy savings. So what they realized was that the key benefit was not so much energy saving, which was the original value proposition, but instead decoupling the machine from the power source. That was the real value proposition.
Speaker 1 [00:11:24] So the challenge really is the organization; it’s the systems, it’s not the task. One of the other examples I’ve been reflecting on a lot from your book is IBM Watson. Does everyone remember IBM Watson? It was kind of supposed to change everything just because it won Jeopardy. But there was a sense that health care particularly would be where IBM Watson would make the most profound change, as we all know the inefficiencies of health care. Wouldn’t it be great if AI-rooted and AI-driven technologies could get us better health care for less money? It never really happened, because of the complexities, the systems, of health care. Is that a forever thing with a complex system like health care, or is it just a matter of time for AI to figure out the complexities?
Speaker 2 [00:12:11] Well, AI is definitely not going to figure out those complexities on its own. So, John, about three weeks ago, my coauthors, Avi Goldfarb and Joshua Gans, as well as one of our colleagues at MIT named Catherine Tucker, and I organized a conference in Toronto with some of the top economists in the world focused on AI. And the second day, since you raised the issue of health, was focused entirely on AI and health. If I were to characterize that day, I would say it was a mix of both elation and depression. The elation came from all the experiments that people were reporting, of AIs that were performing really miraculously in health care applications. Superhuman, better than doctors in all kinds of, for example, diagnostic capabilities: being able to read medical images and pathology slides, AIs that are better able to predict mental health crises and attempted suicides than mental health experts, all kinds of things that would make health care cheaper, faster and more accessible, especially in lower-income environments. So that’s on the positive side. On the negative side, the reason it was depressing was that so few of these things have been demonstrated in any kind of application setting outside of the research labs, because there are so many barriers to deploying them. There are system frictions: the fixes are not aligned with the incentives of hospitals, of doctors, of insurers. There are all kinds of frictions that basically require an entire system-level overhaul. These people who have been working in this area see the benefits; they’ve measured how effective these tools can be, and yet they’re seeing them dismissed when it comes to application. So the question people were asking is, what’s it going to take for a system overhaul? Maybe it happens in some other countries, maybe in lower-income countries where there are fewer regulatory barriers and people can try things more easily. We don’t know.
Speaker 1 [00:14:09] You also make the point in the book that AI’s success often requires experimentation. That’s probably true for all technologies, but especially for data-based technologies and software. That’s harder to do in business, harder still in a business that involves people, and even harder in a regulated business where human safety or privacy or other legitimate concerns are at play. Do we just have to give this more time than we may have thought five years ago? Or are there greater challenges than time?
Speaker 2 [00:14:40] Well, certainly time is one. And also a disposition and willingness to experiment. I think a number of things need to come together in terms of leadership, whether it’s of a hospital or a health system and the capital and the willingness to experiment. And I suspect there’ll be some kind of catalyzing event where somewhere in one of the OECD countries, somebody will really push the boundaries, then they’ll demonstrate the benefits, and then others will follow.
Speaker 1 [00:15:11] If I’m running an organization, big or small, in any sector, if I’m listening to this and thinking about how this might affect whatever it is I do, how can I move faster without having to wait for that catalytic event or a crisis?
Speaker 2 [00:15:26] I think it’s creating an environment with a very purposeful process for running experiments. In most corporations there is not a culture of experiments, of experiments with a purpose. In other words: we’re running this experiment to test this particular hypothesis; once we learn the outcome, that reduces the risk for scaling it to the next step, and then the next step. In other words, there is a North Star that we are pursuing. Let me give you an example. Imagine going to the doctor for your annual checkup, and the doctor does the checkup and then says, you know, I think you’re going to get really sick in about three years. So thank you for coming in, and we’ll see you at your next checkup. You would say, wait, wait, wait a minute. What do you mean, I’m going to get sick in three years? Aren’t you going to tell me what’s the matter with me? And aren’t you going to give me some kind of treatment plan? And imagine the doctor said no. You would think that’s crazy; it doesn’t make any sense. But that’s what the insurance industry does all the time. Let’s say you and I both want home insurance, and let’s say my premiums are 25% more than yours. That’s because the insurance company has made some prediction that I’m more likely to file a claim than you. And their capabilities today are so much greater than they were before. Now they’re able to make predictions down at the sub-peril level. In other words, the likelihood that you’re going to have a leaky pipe that would cause a basement flood, or an electrical fire: they’re down at that level of precision in their predictive capability. And given that they’ve got that kind of predictive capability, they could figure out whether it’s worth it for, let’s say, you or me to buy a $500 device for early detection of a leaky pipe, or a device that you can plug into your wall that gives you early detection of electrical fire. Those devices exist.
You and I might see them advertised on late-night TV, and we don’t know whether it’s worth it for us to buy one. But they know: hey, I know what the risk is; I’m pricing your risk. John, it wouldn’t be worth it for you to pay the $500, but Ajay, it would be worth it for you, because you’re at a higher risk for that kind of peril. So they have that kind of information, and yet, for the most part, the insurance industry does not do risk mitigation. Part of that is that for the agents who sell us insurance, it’s not in their interest to offer us things that will lower our premiums. That’s going to change. It means changing the cost structure; it means investing in risk mitigation solutions or partnering with risk mitigation solution companies, and so on. It’s a different business model, but I suspect senior leadership teams at large corporations in financial services like banking, insurance, and automobiles will go there. It’s just hard to imagine a world, John, where we won’t ultimately go there. It will likely take time because of all the system changes that are required, but the AIs are laying the foundation for value propositions that are very different than what they were in the absence of AI.
Speaker 1 [00:18:42] In just a moment, Professor Agrawal will give us his thoughts on how Ottawa is regulating and stimulating the innovation economy and how businesses can make the most of the power of AI predictions. Back in a minute.
Speaker 3 [00:18:59] You’re listening to Disruptors, an RBC podcast. I’m Trinh Theresa Do. I’d like to share with you our latest agriculture report from RBC Economics and Thought Leadership, called The Transformative Seven: Technologies That Can Drive Canada’s Next Green Revolution. In it, we identify seven key agtech innovations we believe can both meaningfully reduce emissions and present opportunities for Canada to lead. Some, like anaerobic digesters, carbon capture and precision technology, are ready to scale now. Others, like vertical farms, plant science and cellular agriculture, will be key solutions for the future. In every case, maximizing their potential will mean building the right platforms for collaboration among not just farmers and entrepreneurs, but also investors, corporates and governments. To learn more, visit rbc.com slash thought leadership.
Speaker 1 [00:19:57] Welcome back. Today on Disruptors, we’re speaking with Professor Ajay Agrawal on the successes and failures of AI and how industries can make the most of this powerful technology. One of the takeaways for me from the book is the need to focus on value rather than cost: too much focus has been on cost rather than value creation. And I’m wondering how that changes in a post-pandemic reality, where resilience is as important for many organizations as, say, efficiency is. How do you think about the power of AI in that kind of complex economy?
Speaker 2 [00:20:36] I’m interpreting your question as: resilience is more important today, because we just went through COVID, than it was pre-COVID. We care about it more because it’s more salient for us, so there’s an increased emphasis on resilience today compared to, let’s say, five years ago. So what role does AI play in that? I would say: think of AI as a prediction tool to help you optimize against an objective function. No matter what you put in the objective function, the AI’s job is to make predictions to optimize against that thing. In other words, you’re asking it to make a trade-off. You say, hey, resilience is more important to me now; I’m willing to trade off some speed, or trade off something else, in order to get more resilience. And the AI says, okay, I’m going to optimize for that now. Its job is not to decide what the objective is; that’s our job. But once we set the objective, the AI’s job is to do the statistical calculations and make its predictions accordingly.
Speaker 1 [00:21:37] This can also be really helpful in terms of how we think about jobs and the evolution of work. You mentioned early on in the conversation the great Geoffrey Hinton, one of the godfathers of modern AI. I still remember something he said, maybe five years ago, that I think was wrong, and it was to do with radiologists. He said, declaratively: we should stop training radiologists right now; they will not be needed. And you have a fascinating chart in the book of all the tasks that a professional radiologist is required to do. I think it was 30, and of the 30, only one could be replaced by machine learning; the rest involve humans. So we’re actually going to need radiologists for a very long time. How are you thinking differently about AI and its impact on the workforce?
Speaker 2 [00:22:27] The reason it was reasonable for Professor Hinton to make his comment is that while the radiologist has these 30 different tasks, image recognition, the one where there’s been a lot of advance, is the one where, as a radiology student, you spend the majority of your time training. It’s kind of the defining task of the field. That’s, I think, why he said it. But the computer scientist is not a manager, and I don’t think he would claim he has any expertise in change management. What we all underestimated was how hard that is. If you were to take away the image recognition requirement from radiologists, you could probably cut out several years of schooling, and maybe ultimately that job could be done by someone who has more basic medical training. But his remark severely underestimated the system change needed in order to do that. He was imagining that everything would change at the pace of the AI’s predictive capabilities. His claim was: I bet we can train AIs to be as good at image recognition as a human within five years. He’s probably right. But he fully underestimated, as did many people, I suspect, including the three authors of our book, how hard it is to change the system.
Speaker 1 [00:23:44] There’s been a lot said over the last five, ten years about what AI is going to do to society, particularly how it may fuel discrimination and lead to other negative social outcomes. You make an intriguing argument towards the end of the book about how AI may actually turn that tide and may already be reducing discrimination.
Speaker 2 [00:24:05] Yeah. So I think the two basic elements to reducing discrimination are, step one, detecting it, and step two, fixing it. For example, Amazon had an AI it was using for HR, and it was trained on human data and became very biased in favor of males. So much so that if you were an applicant who even mentioned the word “women” on your CV, like saying that you were the coach of a women’s soccer team, it would disqualify you. And so that became a very high-profile, disastrous case of applying AI and amplifying bias. But I think once we push harder on this, and there’s already been quite a bit of evidence for this, we’ll find that AIs can be much more scrutable than humans, because we can ask them so many questions. And once we find evidence of bias, we can fix it in a much more effective way than we can fix humans. With an AI, once you find evidence of bias, you can go in and fix it; with humans, it is not at all obvious that we can do that. There’s been a lot of effort at various kinds of training for so-called unconscious, systematic bias, and the evidence is very mixed on whether that works at all.
Speaker 1 [00:25:20] Are we wrong, then, to try to regulate A.I.?
Speaker 2 [00:25:23] No, not at all. I think AI can be very dangerous, like these examples we’ve seen.
Speaker 1 [00:25:28] But those were corrected without regulation. And technology can also self-correct. One might argue that too much regulation is going to stymie the innovation that you’ve been making the case for.
Speaker 2 [00:25:39] I think that, obviously, you never want to overregulate, but I do think that some regulation would be not just good for society, for example by protecting people who might be discriminated against, but actually good for innovation, because once we set the guardrails, it will spur more innovation and more use of AI. Probably the greatest example of that is the FDA. Before the FDA, nobody would invest the kind of money that was required to create drugs, which is why that industry, before the FDA, was really snake-oil salespeople: as a citizen, if you got sick, you had no way of evaluating whether something was real or snake oil, so everyone just assumed there was a 50% chance that whatever they got would be snake oil. Once the FDA came along, that regulation, while many people think of it as stifling innovation because it’s so burdensome, also created the incentive for people to make very significant investments in pharmaceuticals, because they knew that once they cleared that bar, there was third-party verification that the thing works.
Speaker 1 [00:26:42] Ajay, as we move towards a close, I’d love to get your perspective on how Canada is doing in AI. It’s been roughly five years now since the launch of a national AI strategy. The federal government has invested hundreds of millions of dollars and committed more. Five years is a very short period of time with which to assess any policy impact, but generally, how do you think we’re doing?
Speaker 2 [00:27:07] I think we’re still doing very well on the research side, and I think the government deserves a lot of credit for that. We had some early successes in this field, with, as you already mentioned, Geoff Hinton in Toronto, Yoshua Bengio in Montreal, Rich Sutton at the University of Alberta, and then a whole host of others, both faculty and graduate students. And then the development of various research centers: Vector in Toronto, Mila in Montreal, and similarly Amii in Alberta. We’ve done very well, punching above our weight, in terms of attracting students to Canada to develop expertise in applied statistics and machine learning and so on. It’s more mixed on the industrial side, on the application side. Part of that is that we don’t own a lot of the infrastructure that the AI needs to be embedded into; that’s a challenge. If I were to ask which billion-dollar businesses have been created in Canada that are predicated on our AI expertise, you know, there are some, but it’s not a huge number. Verafin is one; it might have been the first one out of the gate to really achieve a significant valuation, and it was acquired by Nasdaq. There’s a handful of others across the country now, but not a huge number. I think we’re still in the early innings, but we probably have a fair amount of improvement we could make in terms of leaning hard into the application side in Canada.
Speaker 1 [00:28:32] What do you think would be the single best thing we could do?
Speaker 2 [00:28:35] The single best thing we could do, I think, is develop a muscle for experimenting. A large organization shouldn't have one or two experiments; it should have dozens, and it should really be leaning hard into saying, okay, this is not just business as usual, we need to rethink our R&D budget. This is probably a bad time for companies to be rethinking their R&D budgets, given that many are trying to cut costs. But now is the time, because in every industry someone's going to do it.
Speaker 1 [00:29:09] In fact, it’s probably the best time to do that because your competitors may be tightening up and this is where you can really make some moves, at least for the daring. What a great inspirational call to action. AJ Let the age of experimentation begin. Thank you for being on Disruptors.
Speaker 2 [00:29:24] My pleasure, John. Thanks for having me and thanks for your interest in our new book.
Speaker 1 [00:29:29] My guest today was Ajay Agrawal, professor at the University of Toronto's Rotman School of Management and the Geoffrey Taber Chair in Entrepreneurship and Innovation. His latest book is called Power and Prediction: The Disruptive Economics of Artificial Intelligence, which he co-wrote with fellow Rotman profs Joshua Gans and Avi Goldfarb. If you'd like to read GPT-3's full episode summaries, which are actually quite good, please visit rbc.com/disruptors. And to be fair, since the AI took hours to learn my voice, why not let it close out the show? Thanks, John. I'm John Stackhouse, and this is Disruptors, an RBC podcast. Talk to you soon.
Speaker 3 [00:30:13] Disruptors, an RBC podcast, is created by the RBC Thought Leadership Group and does not constitute a recommendation for any organization, product or service. It's produced and recorded by JAR Audio. For more Disruptors content, like or subscribe wherever you get your podcasts and visit rbc.com/disruptors.
Jennifer Marron produces “Disruptors, an RBC podcast”. Prior to joining RBC, Jennifer spent five years as Community Manager at MaRS Discovery District and cultivated a large network of industry leaders, entrepreneurs and partners to support the Canadian startup ecosystem. Her writing has appeared in The National Post, Financial Post, Techvibes, IT Business, CWTA Magazine and Procter & Gamble’s magazine, Rouge. Follow her on Twitter @J_Marron.
This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.