The World Economic Forum (WEF) is known for its annual gathering of global leaders, policymakers, CEOs, and academics — convening to address the most pressing issues facing people and the planet.

“Rebuilding Trust” was the theme this year, though the topic of the week was artificial intelligence, which dominated the dialogue on the world stage and the advertising space in the streets.

The promenade was draped with banners proclaiming the positive power of AI, with plenty of stores converted into trade pavilions for the week to promote artificial intelligence. With the pursuit of productivity on everyone’s mind, every AI session was jam-packed; even the overflow rooms were turning people away.

Sam Altman attracted a bigger crowd than any world leader, and Bill Gates predicted that AI is going to be bigger than the internet.

While AI’s role in the world is ever evolving and its path is still being paved, how we move from demo to product will be a critical step in widespread adoption. And as we enter a year of elections — in at least 50 countries with more than two billion voters — there’s going to be a lot of debate around information and disinformation.

This tech revolution promises to enhance human capabilities, drive innovation, and address complex societal challenges — but we must also consider the ethical implications and associated risks.

But how much of it is hype? And is the rhetoric outpacing reality?

We’re speaking with some of the pioneers and visionaries on the ground in Davos including Erik Brynjolfsson (Stanford Digital Economy Lab); Michelle Zatlyn (Cloudflare); Ashvin Parmar (Capgemini); Anna Paula Assis (IBM); Nicholas Thompson (The Atlantic); and Andrew Ng (Coursera).

On this episode of Disruptors, John Stackhouse is joined by Trinh Theresa Do to help make sense of it all.

What’s clear is that AI’s role in the world is going to be disruptive to nearly every aspect of society and will require careful consideration and management — in a way that doesn’t stifle innovation.

To read “Davos 2024: A year of creative destruction, or just destruction?”, click here.

Speaker 1 [00:00:02] Hi, it’s John here. I was recently in Davos, Switzerland for the World Economic Forum. And you may know the forum for its big stage where world leaders speak. But when I think of Davos, I often think of its main street, which is known for much of the year as the promenade. It’s a picturesque, winding road through a valley in the Alps, lined with boutiques and cafes. But for this one week in January, this street is taken over by the crowd. There are those world leaders and global CEOs who make the news, but there’s also a whole range of entrepreneurs, activists, artists, celebrities and scientists. And there on the promenade, there’s a host of businesses and even nations trying to grab some of that global attention by taking over one of those shops for the week. To Davos regulars, the themes of the promenade have sometimes become a bit of a running joke. A few years ago there were so many blockchain pavilions it was nicknamed Blockchain Boulevard. Last year, I think it was called Crypto Corridor. This year it was hard not to see why it was called AI Avenue. One evening, I stood in front of my house, which was sponsored by a bunch of tech companies, and stared across the street at a range of storefronts with slogans like “Let’s get real about AI” and “Who says AI can’t be robust and reliable?” And there was one slogan that maybe could have been the slogan for Davos 2024: “The future is AI, the future is humans.” Back in the Congress Center, I spent the week jumping from one AI session to another, listening to scientists and entrepreneurs, CEOs, ethicists and regulators. Bill Gates was there and told us he thinks AI is going to be bigger than the internet. Sam Altman arrived and dropped a bit of a bomb, saying ChatGPT is going to make us all more uncomfortable. I’ve tried to capture some of the many AI voices that were at Davos, and we’ll share some of them on this episode. Is this a hype cycle? Perhaps another crypto? Will AI wipe out millions of jobs? Or will it make all our jobs more interesting and rewarding? The future, as that sign said, may be AI and human, but it will take all of us to ensure we get the balance right. I’m John Stackhouse, and this is Disruptors, an RBC podcast. AI was definitely the topic of the week at Davos. Every AI session I came across was jammed, and even the overflow rooms were turning people away. It just shows the degree of hype there is right now. And today we’re going to get beyond the hype and speak with some of the pioneers and visionaries who are doing real work. To help make sense of it all, I’m joined by my colleague and former co-host of Disruptors, Trinh Theresa Do. Theresa, it’s great to have you back on the podcast.
Speaker 2 [00:03:00] Hey, John, it’s great to be back. I am excited to dig into the hype versus reality of AI.
Speaker 1 [00:03:06] Well, I’m excited to have brought home some really interesting voices, which we’re going to share on today’s pod, starting with Erik Brynjolfsson. He’s a big name in AI, and a decade ago, actually, in 2013, Erik gave a TED Talk on the implications of artificial intelligence that seems to resonate still today. He argued that the key to making AI work was to use it to augment human capabilities, rather than to replace them. Erik is considered a leading voice in AI. He teaches a graduate course at Stanford called The AI Awakening. He’s written nine books. He’s testified about AI in front of the U.S. Congress and participated in AI summits at the White House, and plenty of other places. I got to sit with him at dinner one night, and here’s a clip from our conversation.
Speaker 3 [00:03:53] When you walk down the street here in Davos, every sign is about AI. I’ve never seen anything like it. A few years ago I’d go to Capitol Hill in Washington or talk to CEOs around the world, and they were sort of interested in AI, but it wasn’t necessarily their top interest. Now they hang on every word. They’ve read my papers in advance before I meet with them. It’s a complete sea change in terms of how seriously they’re taking AI, and the tone here at Davos is just an affirmation of that, that this is the thing that everyone’s focused on right now. And 2024 will be the year that AI gets a body and gets put into action. It’s not just AI writing funny stories or memos, but AI changing the way companies run businesses and workers do their jobs. And that will start translating into higher productivity, better business performance, and real changes in the economy. In one study we did, they rolled it out in a call center, and within a few months they were getting 14% productivity improvements, some people 35%. You just don’t see that. And I think it’s simply a very capable technology compared to some of the other ones. What’s your biggest worry? Well, there are so many, but in 2024, like everyone, I’m worried that the elections will be polarizing, will be hijacked. There may be billions of bots out there giving customized misinformation, or what I call nut picking, which is things that are technically true but are very misleading or misrepresented. And AI may actually be superhuman in persuasion before it’s superhuman in intelligence, and that’s going to be a very bad combination.
Speaker 1 [00:05:23] Theresa, it was so interesting to hear Erik talk about persuasion as well as productivity. In fact, productivity, I think, was the word of the week. But the point about persuasion is also critical, and not just out there in the public square and on social media as the world moves into a year of elections, where there’s going to be a lot of debate around information and disinformation. Companies need to think about this, too. How is AI helping your productivity? But at the same time, is it becoming a persuasion tool that also might need more management?
Speaker 2 [00:05:54] Yeah, I think that’s such an interesting point, because the 14 to 35% productivity enhancement that Erik cites is obviously a big boon to businesses. But when you factor in the risk of misinformation and hallucinations, and making sure that the information the AI generates is correct, what then happens to overall productivity? I’m not really sure. There are so many more resources and hours that will have to be expended to fact-check the AI’s work, to detect and flag that misinformation, to continuously monitor. So with all of that taken into account, I’m not sure we will yet see the productivity that we want in the near term. I think it’ll take a bit more of a learning curve for corporations to get to the point where, okay, sure, this new technology can be used in the most productive way, but only by everyone getting up to speed on what the risks are and what the benefits are.
Speaker 1 [00:06:51] So there’s no free productivity lift here, in other words. And in some ways, that’s the way of innovation, right? Trial and error means you spend a lot of time on things that aren’t going to pay off, but you test and learn and you keep developing to get to a better place. I’m interested to hear a lot of people talk about copilots, which has become a bit of a buzzword in AI circles. So think of AI as your copilot. And that’s an interesting metaphor as well as application, because copilots allowed pilots to fly longer and farther. So there’s a productivity lift. But flying also requires a lot of on-the-ground monitoring and governance, surveillance even, to make sure that the pilot and copilot are flying safely. I’m not sure that metaphor plays out entirely in every application, but it’s a good reminder that whether we’re flying alone or we have a copilot in whatever we do, it’s not just us and the machine; there are others who are going to be involved.
Speaker 2 [00:07:47] And then something that you brought up in the past, John: could we hearken back to another moment when technology was set to drastically change corporations, and that’s software in the 70s and 80s?
Speaker 1 [00:07:59] Yeah, a lot of AI today is really just enterprise software on steroids. Enterprise software really took off in the 70s and 80s. At the time, there were a lot of predictions that enterprise software would mean the end of all sorts of office jobs. And of course it did. But we have more white collar workers, more people working in offices than we had 50 years ago, and they’re a lot more productive. AI offers the same opportunity, but again, on steroids. It will be at a level and scale far beyond Excel or whatever we’re used to working with.
Speaker 2 [00:08:36] Yeah. And it will of course vary by industry and by the ability of workers to actually adapt to those changes. But yeah, huge transformations to come.
Speaker 1 [00:08:45] And those transformations are going to require a lot of what techies call product. AI right now has a lot of ambitions, a lot of aspirations, but it hasn’t been turned into that Excel or that Outlook product that then gets distributed and applied at scale. And there are few people I know who understand this better than an old Disruptors guest, Michelle Zatlyn, a Canadian who is the co-founder of Cloudflare. Cloudflare is an enterprise software application and product that allows companies and governments to protect themselves in the cyber wars. Her company protects 20% of the world’s internet traffic and blocks an average of 140 billion threats per day. I ran into Michelle at Davos, and here’s what she had to say about what may be on the horizon for the next big tech revolution.
Speaker 4 [00:09:37] AI is as big as the invention of electricity and the internet. That’s how big of a deal it is. And that’s why it’s everywhere at Davos. So one thing I’ve learned about AI is that right now we have a lot of demos, and I think a lot of people mistake the demo for a product. A year from now, what we’re going to be talking about is, holy smokes, building the products is a lot harder using AI than the demos. There’s a lot to understand. Even the way that you build code today is in a very deterministic fashion. With AI, it’s non-deterministic, and that’s going to be a shift in how we think, how we work, how we manage. It’s going to be a lot closer to the policy side, which everyone’s really thought about. But a year from now, I think we’re going to be talking about, why don’t we have more AI built into products, why aren’t they solving real use cases for businesses yet? And it’s not because AI is going away. It’s really, really important. It’s going to take a lot longer. So I guess I’m trying to say we’re at peak hype right now and it’s going to take a little bit longer to bake. But I’m very bullish on AI in the long term.
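For readers who build software, here is a minimal sketch of the deterministic-versus-non-deterministic shift Michelle describes. The `sample_llm` helper below is hypothetical, a stand-in for any real model call, not a specific API:

```python
import random

# Traditional code is deterministic: same inputs, same output, every run.
def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

assert apply_discount(100.0, 0.15) == 85.0  # an exact test holds on every run

# An LLM-backed feature is non-deterministic: the same prompt can produce
# different text on different runs. (sample_llm is a hypothetical stand-in
# for a real model call, which samples from a probability distribution.)
def sample_llm(prompt: str) -> str:
    return random.choice([
        "That comes to $85.00 after your 15% discount.",
        "You save $15, so the total is $85.",
    ])

reply = sample_llm("Explain a 15% discount on a $100 item.")
# A product can't assert an exact string; it has to validate properties
# of the output, which changes how features are tested, monitored and managed.
assert "85" in reply
```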
Speaker 1 [00:10:28] That’s a great perspective from someone who’s actually built a multi-billion dollar company, a unicorn in the digital space. The point I think Michelle was trying to stress is that while AI may change the world, it’s actually going to require a lot of people, entrepreneurs especially, to do the hard work of creating products that we can all use in our daily lives, in the work that we do, and in the way that we communicate. There’s tremendous opportunity for a lot of companies, and it’ll be interesting to see who comes out this year with the kind of applications that certainly ChatGPT showed can have a really powerful impact on the market.
Speaker 2 [00:11:05] Demos are the easy part, and now comes the grunt work of actually taking an idea and a vision and scaling it up, building it, testing it, maintaining it, making sure that users can access it intuitively. And companies have to make sure they have, you know, robust data pipelines. They’ve got to refine their models, procure the right hardware and chips, and make sure there’s the right talent and labour. It’s no secret that the pool of AI talent is fairly small and demand is massive. And on the point of hardware, let’s not forget that the entire global AI industry has a single point of failure in the form of TSMC in Taiwan, which could completely upend the industry and grind things to a halt unless we see other manufacturers like Samsung and Intel ramp up to the same degree of volume. All of which is to say, there are so many challenges in the way of building proper AI products.
Speaker 1 [00:11:58] It’s a really interesting point you make, Theresa, about market concentration, and there are a lot of worries right now that the generative AI universe is going to be dominated by the cloud companies: Amazon, Google and Microsoft, principally. Those are the companies that control all the data that we share from our phones, or whatever we’re using, when we send it into the cloud. They’ve got the scale to do things with AI that very few others do. So that could lead to even more market concentration. But as we’ve seen through generations of tech disruption, it’s often the incumbents who get disrupted. Yes, AI may end up in the hands of a few, and we’ll have that concentration and those single points of failure. But it also could lead to new ecosystems of developers and products that are going to do things that none of us, including the incumbents, can see at this point in time.
Speaker 2 [00:12:53] Yeah, I’d love to see the misfits and rebels of AI, and if they are able to compete with these large language models that the incumbents have.
Speaker 1 [00:13:01] Well, it’s going to take a different approach to business and innovation in a lot of places. And that’s a wonderful thing about technology. I got to hear a bit more about this from some people at Capgemini, the big consulting firm. They had the coolest display at Davos, actually, right at the main entrance to the big Congress hall: a Peugeot sports car that is powered by AI to change the way race car drivers drive, but also to change the way all of us may drive one day. Capgemini released a report during Davos that found 96% of organizations say generative AI is on their boardroom agenda. Capgemini is doing a lot of work with those big tech giants, whether it’s Google or Microsoft or Salesforce or Oracle, but also working with a lot of entrepreneurs and smaller operators around the world. They have an AI Center of Excellence that is focused on empowering organizations through AI solutions. I met the head of that center, Ashvin Parmar, when I went in to see that race car, and here’s some of what he had to say.
Speaker 3 [00:14:04] I think AI is pervasive across all industries, and there’s a tremendous amount of interest in terms of applying gen AI to drive better forecasting, better predictability, reducing the overhead to make better decisions and drive value. There is a tremendous amount of apprehension about the safety and soundness, as well as about the ethics and trust implications, of AI. So responsible AI is top of mind for everybody, to make sure that they embrace it in a meaningful way, but do that in a responsible way. I think a year from now, you’ll be seeing a lot of new and novel ways people will have implemented AI to transform their businesses and to really create new and innovative solutions across the board. The Capgemini Research Institute just issued research in terms of where some of the key investments are going in digital tools and technology, and what they found is that 88% of executives are looking to focus spending on AI-driven innovation, which is a significant amount.
Speaker 1 [00:15:08] Theresa, I keep thinking about that race car, because the car doesn’t drive itself. There is a human driver, a highly skilled human driver, who is getting better and better information about when to clutch, when to turn, how the wheels are burning. And they figure in a 300-lap race, a great driver can shave a full lap off their time. So that gets back to that efficiency and productivity question. It reminds me of a Disruptors episode we did with Ajay Agrawal from the Creative Destruction Lab, when we dug into the question: what’s holding AI back? And I think Ashvin’s point really reiterates the message that we need to get beyond predictions. So maybe it’s time to shift the focus away a bit from AI as a technology, and instead look at the economics and the human dynamics of the systems in which it operates.
Speaker 2 [00:15:57] Yeah, as a Formula 1 enthusiast, I’m just imagining the 2025 champion will be Max Verstappen and his AI copilot. I mean, AI would be just another input into how workers manage their days and their tasks and organize their lives, right? And to your point about the economics and the human part of it, as anyone who’s worked on an enterprise-wide transformation would know, and I’ve worked on a couple, the most important thing is being able to bring your employees with you every step of that journey. There are so many technology implementations that have failed because employees weren’t consulted or trained, or asked to contribute to identifying how that new technology could add value to their jobs and to their teams. And so this is going to be the same thing with AI as organizations seek to implement it and to engender that trust.
Speaker 1 [00:16:52] What a great challenge, Theresa, and a great call for any leader, or anyone, to think about how AI can empower groups. It’s not plug and play, and we have to think of it as much about culture as about technology. As you were talking, I was thinking of that great business school case study. I can’t remember the name of it, but it’s about, tragically, a plane crash, and the recording from the black box shows a real disconnect between the pilots; it was as much human error as technology error. And that’s important for AI, because a lot of people may think, why do we need pilots to begin with? Machines can do it, and AI can do it very well. And any pilot will tell you that their job is as necessary as ever. It’s just that what they do up in the cockpit is very different, and that includes the way they communicate with each other as well as with the machine. So it does come back to us as people as well as to the technology.
Speaker 2 [00:17:48] Absolutely. I actually just read something interesting on pilot and airplane safety, referencing the concept of a cascade of errors: that what leads to a fatal plane crash is often not just one single mistake. And I think that is a crucial lesson for leaders as we go through this big AI transformative journey: making sure that you identify what it is that the AI can actually offer you, and not misinterpret that, because a misstep sets off a domino effect across the rest of that transformation.
Speaker 1 [00:18:17] Well, let’s hear from someone who is working with many of the world’s biggest organizations on all of these challenges. Anna Paula Assis was listed by Forbes as one of the most powerful women in Brazil. She’s taken her tech talents around the globe, from general manager at IBM in Latin America to other major markets like the U.S. and China, where she’s focused on digital transformation, AI and hybrid cloud strategy. Today, Anna is the chair and general manager for IBM in Europe, the Middle East and Africa. I got to listen to her on an AI panel, and she spoke with me afterwards.
Speaker 2 [00:18:54] AI can really drive tremendous productivity. We expect up to $4.4 trillion of annual productivity impact coming from AI. So it’s a big opportunity, and it’s time now to really understand how this is going to be incorporated into our processes and how we work to generate those benefits. These are early stages, and companies are still trying to understand how to get the value out of AI and, at the same time, how to make it safe. I’m excited about improving the productivity that is so needed in the world right now.
Speaker 1 [00:19:25] So there’s that word productivity again. But I think Anna would also stress that productivity is not really so much about what technology can do, but what humans can do with the technology. And it was fascinating to hear from organizations at Davos about what they’re doing in terms of sales force enablement, call centers and even coding. Very few executives I spoke to were looking to cut their workforces in those areas; rather, they want to use AI to enhance the capabilities of a salesperson, someone in a call center, or even their elite coders. And they’re finding that people can do significantly more with AI by cutting out a lot of those mundane things that all of us have to do. I’m looking forward to the day when AI can take care of my inbox. That’s just the kind of time saver that will allow all of us to focus on the more inspiring parts of our jobs, and also the parts that create more value, not just for organizations, but maybe for society.
Speaker 2 [00:20:26] Yeah, that would be the dream. But I think that in order for an organization to get to that point, data governance and hygiene are so key. As companies are thinking about, do I invest in AI? Where do I invest in AI? Before you even get to those questions, I think it’s important to get the data that your company owns and generates in order, and to bring in or train the necessary talent to manage those technical requirements. Companies are only going to get value from AI tools and investments that are optimized and tweaked for their own industries and sectors. And again, that’s about managing the data that you have and being able to clean it up properly. So yeah, there’s so much legwork that needs to go in before we can really extract the value of these big AI investments.
Speaker 1 [00:21:11] You’re probably very familiar with some of the use cases out there. We’re all familiar with that situation when you call up a call center and get agitated. There are now AI applications that help call center staff manage people no matter what state they’re in. It’s also helping a lot of salespeople deal with no. As one company explained to me, most of us say no to an initial pitch. Well, AI is actually figuring out ways to get beyond no, and that’s helping salespeople not only increase their productivity, but probably enjoy selling a bit more when they’re not dealing with no all the time. But as you were saying, Theresa, it all comes down to data. And data isn’t just words and numbers. There was a fascinating discussion that I listened to at Davos about visual data. Let’s hear from Nicholas Thompson. He’s the CEO of The Atlantic, which is, of course, one of the most respected news organizations in the world, and he’s also the co-founder of Speakeasy AI. He chaired this fascinating discussion, which included Aidan Gomez from the Canadian startup Cohere, which we’ve had on Disruptors. And here’s some of what he had to say after his panel discussion.
Speaker 3 [00:22:20] AI is probably at the center of a lot of those conversations, because people now recognize that it is genuinely changing jobs. It is genuinely changing the way we work, the way we relate to each other. It is genuinely changing democracy. They want to figure out where it’s going so they can plan their next actions. Well, I just moderated a fantastic panel where we covered three main questions. The first was, will it continue to improve as quickly as it has? And the general consensus was yes, it will change in certain ways. We will have to add video in order to improve it. You have to change the way the models are constructed and the data inputs, but they will improve rapidly. Second question was AGI, should artificial intelligence researchers be trying to build something that is more intelligent than humans? And the consensus was, yeah, you’ve got to do that. You also need to do other things. You need to make it as good as possible at certain tasks, like helping with human biology. And then the third question was a very controversial one of open source. So the panel pretty much agreed that open source models, models that people can modify, that can be shared, are better for increasing access to AI and increasing equality. I think a year from now, we’ll have more examples of how AI has changed industries and changed companies and changed individual lives. Right now, it’s still mostly theoretical and next year it will be more real.
Speaker 2 [00:23:37] So, John, I’m curious, why is the consensus that AI has to be smarter than humans?
Speaker 1 [00:23:44] That was a raging debate on this panel, and it was a fantastic panel that included Yann LeCun, the AI scientist at Meta. And the consensus, after a pretty vigorous debate, is that you can never pull back or rein in intelligence. That is the lesson through millennia of human progress. There are all sorts of aspects of human intelligence that even the most sophisticated AI models are nowhere near replicating. So let’s continue to push our own frontiers of intelligence, but also push machine intelligence, because, like the great space explorers have always said, we just have to explore. We just have to find new frontiers and then decide what to make of it. So maybe it’s a bit of that Buzz Lightyear motto to infinity and beyond.
Speaker 2 [00:24:30] Yeah, and I guess with the large language models that generative AI systems are built upon now, they use mostly text as a way to learn. And, you know, we’ve had discussions about the limitations of learning through just text. I think I had read that the average four-year-old’s ability to learn is better than the most complex LLMs that exist today. And so what other inputs are required to get to that next stage of intelligence, as you mentioned? Video, I think, is a crucial one, but how do these models incorporate video? I’m not sure.
Speaker 1 [00:25:06] Well, that’s a real shortcoming of AI. In fact, one of my takeaways, and great hopes, from this panel was a conviction from some of the world’s leading AI scientists that humans are still way smarter than the best AI out there. So maybe that’s a bit of relief there. But one of the reasons is that we are visual learners and visual communicators. Machines are great with data sets of text and numbers, but have no real capacity right now to learn from video or visual records. And that’s actually how we all learned as babies and infants. By the time you’re three or four years old, you’ve learned, as Yann LeCun said, more than you might learn through the rest of your life. And you can’t read a word. Machines are nowhere near there yet.
Speaker 2 [00:25:53] You know, for posterity, the risk-averse side of me wants to ensure that with the accumulation and development of this intelligence, we also build in the proper guardrails and the human values that can steer this intelligence, rather than just having intelligence for intelligence’s sake, housed in a psychopathic model.
Speaker 1 [00:26:14] I love that, a psychopathic model. Well, there are few people who have thought more about that question than Andrew Ng. He has spearheaded so many efforts to democratize deep learning, and he’s taught millions of students through online courses. One of the co-founders of Coursera, Andrew has coauthored more than 300 publications on machine learning and robotics, and his work in computer vision and deep learning has been awarded on many occasions. Andrew has been named one of Fast Company’s Most Creative People, named to Time magazine’s list of the 100 Most Influential People in the world, and, most recently, named one of the most influential people in AI. Andrew is the founder of DeepLearning.AI and Landing AI, a general partner at AI Fund, and an adjunct professor in Stanford’s computer science department. I got to talk to him in the hallways at Davos. Here’s his take.
Speaker 3 [00:27:07] AI is relevant to almost everyone here at Davos. I see a lot of excitement about identifying and building, just finding the right use cases for everyone here. It’s been exciting to speak to a lot of people about what they’re doing: workforce upskilling, and identifying use cases for how to build and deploy responsible AI in their organizations. The other dimension of the conversations has been on the regulatory track; how regulators can play a constructive role in accelerating safe adoption of the technology is something still being worked out.
Speaker 1 [00:27:36] One of Andrew’s most provocative lines was that you cannot regulate intelligence. And I think that’s the challenge as we think about AI: it’s going to have to have guardrails, as you say, Theresa, but we can’t have regulations that somehow limit those frontiers of intelligence, both human intelligence and machine intelligence. How we get there is open to debate.
Speaker 2 [00:27:58] Yeah, this is a very challenging area. I mean, I don’t know the answer to that question either. I had the opportunity to chat with Steve Wozniak last year, and he calls for more regulation of the AI industry, but looking specifically at the things that are created by AI: the responsibility for anything created by AI and then shared with the public has to rest with the person or organization that created it. You know, the genie’s out of the bottle, but we need to make sure that users are aware of whatever magic is created, so that they can be educated accordingly and behave and react accordingly. So I think that is a very elegant solution to what we have right now. And we’ll see in the next few years what else emerges.
Speaker 1 [00:28:40] What an amazing opportunity to have a conversation with Woz, the brains behind so much at Apple. I’m curious, Theresa, what you think, and what he thinks, of regulating what hasn’t been created.
Speaker 2 [00:28:54] Oh wow, what a deep question. How do you put guardrails around something that doesn’t yet exist? I don’t know, how would you answer that question?
Speaker 1 [00:29:03] Well, I don’t think we can. We can only have principles, and certainly repercussions for bad actors, for people who knowingly cause damage to others. There was a lot of debate at Davos, especially between the Europeans and the Americans, as to whether to take a prescriptive and even preventative approach or more of an innovation mindset to this new frontier of AI. I think we’ll see the Europeans continue to try to prevent missteps, whereas the Americans would prefer to let people take steps, and if there are missteps, then correct for those with whatever mechanisms society has. There’s no perfect way at it, and we as Canadians will probably find a middle path there. But the world is moving rapidly into a new era of AI, and we all have to be aware of both the opportunities and those risks.
Speaker 2 [00:29:55] Yeah, I think of that early Google ethos: don’t be evil. I think you can’t quite regulate something that doesn’t yet exist, but you can set an intention for what you hope to create.
Speaker 1 [00:30:07] What a great word to end with: “intention”. We all have to be more intentional when we take on AI. Theresa, thanks so much for being on Disruptors.
Speaker 2 [00:30:16] Thanks for having me John. This has been awesome.
Speaker 1 [00:30:20] This is Disruptors, an RBC podcast. I’m John Stackhouse. Talk to you soon.
Speaker 2 [00:30:30] Disruptors, an RBC podcast, is created by the RBC Thought Leadership group and does not constitute a recommendation for any organization, product or service. For more Disruptors content, visit RBC.com/Disruptors and leave us a five-star rating if you like our show.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.