With cybercriminals leveraging AI to fuel scams and misinformation, how do we verify what’s real? Joined by cybersecurity experts Shuman Ghosemajumder (former Global Head of Product Trust at Google and Co-founder of Reken), and Ken Nickerson (Inventor and Entrepreneur, iBinary, ex-Microsoft, ex-Rogers, ex-Kobo, ex-OMERS, and behind Sealed, a tool designed to verify digital content), John Stackhouse and Sonia Sennik confront a startling new reality where AI-generated deepfakes can mimic voices, images, and even entire identities with frightening accuracy.
Together, they unpack the rapidly shifting landscape of AI-driven fraud, explore the concept of “zero trust,” and highlight innovative solutions that could help us navigate an era where digital deception is the norm.
They explore how to protect democracy, businesses, and personal identities in a world where proof of authenticity is harder than ever.
John Stackhouse: Welcome to Disruptors x CDL: The Innovation Era. I’m John Stackhouse.
Sonia Sennik: And Sonia.
Sonia Sennik: All right, John, let’s start light today. What’s the meaning of life?
John Stackhouse: Such an easy question to kick things off.
Sonia Sennik: Exactly. No small talk here, straight to the big existential dilemmas.
John Stackhouse: All right. If I had to take a swing at it, I think life is about experiences, seeing new places, meeting interesting people, and eating incredible food.
How about you? What’s your take?
Sonia Sennik: I think it’s connection. Finding people who get you, who make the weirdness of life feel a little less weird. Creating bonds, especially with your cat.
John Stackhouse: That’s insightful. And what may I ask is your cat’s name?
Sonia Sennik: It’s Salt.
John Stackhouse: I totally forgot. My bad. And, uh, what was the name of the street you grew up on?
Or perhaps your favorite teacher?
Sonia Sennik: This conversation is taking a turn.
John Stackhouse: Maybe it’s because we’re not real. [00:01:00]
So Sonia, that was bizarre, because that was not you and me talking. Those were deepfakes.
Sonia Sennik: So John, is this you right now?
John Stackhouse: Of course it’s me, but you may want to verify that.
Sonia Sennik: That’s exactly what your AI would say, John.
John Stackhouse: It’s remarkable that we’re suddenly in this age where maybe we have to prove who we are and verify who we are and also ask people, even those who we know really well, to also verify that it’s really them.
And this has become much more than what we used to call a parlor game. This is a growing source of international crime, of challenges to democracy, and even disruptions to our communities and society.
Sonia Sennik: Establishing trust is really difficult in a world where you can recreate someone’s voice, their image or their likeness in seconds.
I’m sure you’ve heard of that story in Hong Kong recently. A bunch of folks thought they were on a Zoom call with their chief financial officer, who was directing them to send $25 million to an offshore bank account. They verified it [00:02:00] was him; they thought they were on the call with him. And of course, the money went walking out the door.
John Stackhouse: And I fear that sort of thing is happening probably on a daily basis, maybe at a smaller scale, but increasingly prevalent all over the world. And I think we have to anticipate that those challenges are going to get more intense. It’s not just money, either. We may see, in a likely spring election this year, foreign interference using deepfakes.
Fortunately, we have two outstanding thinkers on all things deepfake and much more on this episode.
Sonia Sennik: Leaders in the emerging technology space who are looking to harness best-in-class technology to re-establish trust in the systems we use every day, and in the conversations and transactions we have every hour.
John Stackhouse: Long-time listeners may remember Shuman Ghosemajumder, who was on the podcast in 2019 talking about what was then just this emerging idea of deepfakes. Shuman is one of the world’s leading experts in cybersecurity, fraud prevention, and AI-driven [00:03:00] threats. He was Google’s global head of product trust and safety, and played a key role there in tackling some of the internet’s biggest fraud challenges.
Now, as the co-founder of Reken, he’s at the forefront of using AI to combat AI-driven fraud.
Sonia Sennik: We’re also joined by Ken Nickerson. Ken is an inventor, coder, entrepreneur, investor, and a veteran in the cyber security world. Ken was one of the earliest mentors at Creative Destruction Lab. He has spent years co-founding emerging technology companies.
Ken is also the co-founder of Sealed, an open source method to prove ownership of media.
John Stackhouse: Shuman, Ken, welcome to the podcast.
Shuman Ghosemajumder: Good to be here.
Ken Nickerson: Thank you.
John Stackhouse: Shuman, let’s start with you, because you were on Disruptors in 2019, when we were talking about deepfakes. What has changed in the last six years?
Shuman Ghosemajumder: Well, you might remember that in 2019 we actually made a demo of you playing the role of Simon Cowell, changing his face into your face. And it was [00:04:00] amazing that, after three days of computation, one of my engineers was able to generate something that looked halfway decent, though it was actually pretty poor quality. And so I think the big thing that’s changed in the years since is the ability of the technology to create deepfakes that are hyper-realistic in a very short amount of time, to really democratize this technology so that anyone can use it, and to make it usable across many different types of media.
So, being able to take two seconds of somebody’s voice and clone it absolutely realistically. Being able to translate their words into a completely different language and then clone their voice so that you can match it up and make it look like they’re actually speaking in another language in their own accent.
These are things that were just theoretical science fiction five or six years ago, and now they’re technologies that are being used by people [00:05:00] all over the world. And if you think out six years, where do you think we’ll be? There’s this great quote from William Gibson: the future is already here, it’s just not evenly distributed. So the ability to create highly realistic deepfakes did exist six years ago, or even before that, for folks who had enough computing power and enough skill and talent. But it was really painstaking work. And now what’s happened is that it’s really easy for folks without GPU power, without talent, and without skill to do much higher quality work.
And we’re just going to see an extension of that six years from now, where it’s going to be built into all kinds of tools and products we have, and it’s going to change the way we think about content generation in ways that are currently difficult to imagine. But we know it’s going to become more ubiquitous.
So an example of this is that every single time [00:06:00] Apple or Google launches a new phone, they’re now talking about AI capabilities that allow you to modify an image in ways that would have been inconceivable 10 years ago.
John Stackhouse: Ken, you’ve been watching this, and we’ve been talking about this for years. I wonder if you can tell our listeners a bit about your work with someone who is definitely not fake, Margaret Atwood, because it’s a fascinating window into this challenge.
Ken Nickerson: Yes, so I’ll keep it as shallow as possible. When I had breakfast a couple of years ago with Ms. Atwood, we were both speaking at a conference, and I asked her what her biggest worry was. She was concerned that people were taking the cover artwork from her books, and possibly her text, and using them for so-called AI content generation. And, you know, it bothered me. I already had an idea for a project I had thought about a few years earlier, so I hired a summer student and built it. It’s publicly available in open source; it’s called sealed.ch. [00:07:00]
The thing about deepfake versus deep real, versus real versus fake, is that it’s really hard to prove a negative. It’s known as an NP-hard problem; it can be a really hard problem to figure out that something’s fake. But you can prove that something’s real. And the concept actually came from the Musée d’Orsay in Paris. I was there about 13 years earlier, on a kind of private tour underground, where I found out that the way they authenticated the art was by taking the frame off and photographing the edges.
So I took that 100-year-old technology, applied it using a fairly modern programming language called Rust, and made it open source. Anyone can take an image, a video, a text, or an audio file, run it through Sealed, and compile it on their own, for full disclosure and trust, and then be able to prove beyond the shadow of a doubt, in many, many court cases now, that they were the original owners of that content.
And so if you flip the problem on its head, what you could do is make it so that all browsers, [00:08:00] chat tools, and applications would have to see, like HTTPS with an SSL certificate, proof that something is real, rather than trying to verify that something is unreal, which is, quite frankly, essentially an uncomputable problem.
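To make the idea concrete, here is a minimal sketch of a Sealed-style ownership record: hash the content, pair the digest with a timestamp, and anyone can later recompute the hash to confirm the file is unchanged. This is illustrative Rust only, not the actual sealed.ch implementation; it assumes the widely used sha2 crate as a dependency.

```rust
use sha2::{Digest, Sha256}; // assumed dependency: sha2 crate in Cargo.toml
use std::time::{SystemTime, UNIX_EPOCH};
use std::{env, fs};

fn main() -> std::io::Result<()> {
    // Read the media file (image, video, text, or audio) as raw bytes.
    let path = env::args().nth(1).expect("usage: seal <file>");
    let bytes = fs::read(&path)?;

    // Cryptographic digest of the content: anyone can recompute it later
    // to confirm the file is bit-for-bit identical to what was sealed.
    let digest = Sha256::digest(&bytes);

    // Pair the digest with a timestamp. Publishing this record somewhere
    // public lets the owner later prove they held the content at this time.
    let sealed_at = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before 1970")
        .as_secs();
    println!("{path} sha256={digest:x} sealed_at={sealed_at}");
    Ok(())
}
```

The edge-photograph trick from the Musée d’Orsay maps onto the same idea: record something about the original that a later copy or crop cannot reproduce, and verification becomes a simple comparison.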
Sonia Sennik: Shuman, many refer to AI as the ultimate fraud machine because, as the examples you just gave us show, it offers an endless sea of opportunities to create, as Ken mentioned, things that are not real. What do you see for verification? How do you think about authentication, or harnessing emerging technology to verify media or content?
Shuman Ghosemajumder: I think it’s critically important. You are constantly looking for opportunities, like Ken was mentioning, to authenticate identity and to authenticate content, and you want to make that as implicit as possible in [00:09:00] every single communication and every single type of content that you’re consuming.
The challenge is that we have this entire back catalog of the internet that was built without any of that. We have this entire back catalog of all of human civilization that was built without any of that. And so how do you harmonize those two things? How do you take all of the images and videos that have existed historically and verify that something really is a true historical record and not something that was faked?
Because now, with generative AI, we could show a behind-the-scenes video that demonstrates how the moon landing was faked. If we wanted, we could create a 60-minute documentary that films, inside of NASA and from the viewer’s point of view, how the moon landing was faked. And that’s going to look completely realistic.
And then if someone asks, well, what’s the [00:10:00] watermark that shows this is a real video? You say, well, it was filmed in the 1960s; we don’t have a watermark. How do you know what’s real and what’s not for any content that is excluded from the ability to be authenticated that way?
Sonia Sennik: Ken, traditional verification and authentication methods just aren’t keeping up. From your experience working with small businesses, medium businesses, and large enterprises, what would you say is the biggest blind spot that needs to be addressed right now?
Ken Nickerson: Yeah, so the way that things are done today, obviously, in a digital world, is that they’re computed.
And so we have things like certs, certificates, and things called CRCs, cyclic redundancy checks. And whether it’s quantum computing or racks of hundreds of thousands of GPUs, there is the potential for abuse: essentially reverse-engineering a photo or a video or whatever, putting it back together with fake information, and then re-CRCing it [00:11:00] so that it looks authentic.
And so the computational challenge to authenticity is quite severe. My guess is that there’ll be an analog component to the future of authenticity. And by analog, I mean things that are, you know, more in line with nature, things that are harder to compute using the raw horsepower of computation. The ability to reverse-engineer and create these fakes is accelerating.
You’ll soon be able to do it on your phone; certainly people do it today with filters on any of the social media tools. Finding a method of verifying authenticity that is incomputable is the problem that needs to be solved. Folks like Adobe, and in particular Leica, the camera company, have gotten together with a large consortium to create something called Content Credentials.
There’s an API and software. But, you know, the reality is that anything that can be computed can probably be reversed. And so I’m looking for solutions beyond the kind of current standards for [00:12:00] authenticity.
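Ken’s point that computed checks can be re-computed by an attacker has a simple illustration: a checksum like CRC-32 is linear over XOR, so doctored content can be adjusted to keep the check value, whereas a cryptographic hash has no such structure. The sketch below is a toy demonstration in plain Rust, not how any particular product computes its checks.

```rust
// Bitwise CRC-32 (IEEE polynomial), no external crates.
fn crc32(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFFu32;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            crc = if crc & 1 != 0 { (crc >> 1) ^ 0xEDB8_8320 } else { crc >> 1 };
        }
    }
    !crc
}

fn main() {
    // Three equal-length messages (padding keeps the lengths identical).
    let a = b"send $100 to Alice..............";
    let b = b"send $999 to Mallory............";
    let c = b"send $100 to Alice..............";

    // CRC is linear over XOR: the check value of a combination of messages
    // is predictable from the check values of the parts. That structure is
    // what lets an attacker splice fake content into a file and then
    // re-CRC it so the checksum still matches.
    let combined: Vec<u8> = a
        .iter()
        .zip(b.iter())
        .zip(c.iter())
        .map(|((&x, &y), &z)| x ^ y ^ z)
        .collect();
    assert_eq!(crc32(&combined), crc32(a) ^ crc32(b) ^ crc32(c));
    println!("CRC-32 is linear, so it can be forged; cryptographic hashes resist this.");
}
```

Standards like Content Credentials rely on cryptographic signatures rather than checksums for exactly this reason, though Ken’s deeper worry, that anything computable may eventually be reversed, still stands.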
John Stackhouse: Ken, this is a really interesting idea, that computation may not be sufficient and you’re looking for something in nature.
Take us a bit deeper into that, and whether it’s even possible to capture non-digital solutions in a digital universe.
Ken Nickerson: Yeah, so there are kind of cheating ways: leaving out information that can’t be computed no matter how much horsepower you have. Think about cropping a photo. Say you took a photo of Sonia and me standing shaking hands, but I have a knife over my head; it’s a wide-angle shot, and you crop that out.
So it’s just us smiling and shaking hands. And there are a bunch of famous perception images like that. The point is that the cropping has changed the narrative and has left information out, and therefore it has literally changed the story of the image. It’s leaving out information, because the outside of that frame cannot be computed.
And that’s basically how Sealed works. If you look beyond that, in the analog realm there are things you [00:13:00] can do that are very hard to reverse, because they’re not based on pure math. For example, you could sequence someone’s DNA. If you do sequence your DNA, it’s about a 512-megabyte file. Then you could say: here’s an offset from the start of that file, and here are the first six DNA strands. Now tell me the next 20. These are not things that can be computed. They’re hidden information.
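Ken’s offset challenge is easy to see in code. The sketch below is a hypothetical illustration of that challenge-response idea, with all names and numbers made up for the example; it stands in for any large secret reference that both parties hold and an impersonator does not.

```rust
// Returns the publicly shown prefix and the secret continuation the
// prover must supply. All parameter choices here are illustrative.
fn challenge(
    reference: &[u8],
    offset: usize,
    shown: usize,
    asked: usize,
) -> Option<(&[u8], &[u8])> {
    let end = offset.checked_add(shown)?.checked_add(asked)?;
    if end > reference.len() {
        return None;
    }
    Some((
        &reference[offset..offset + shown],
        &reference[offset + shown..end],
    ))
}

fn main() {
    // Stand-in for the ~512 MB genome file; only the two parties hold the real one.
    let genome: Vec<u8> = b"ACGTACGGTTCAGATTACAGGCATTACCGGATCCAAGT".repeat(1_000);

    // Verifier: "here's an offset and the first six bases; tell me the next 20."
    let (prefix, expected) = challenge(&genome, 12_345, 6, 20).unwrap();
    println!("shown prefix:     {}", String::from_utf8_lossy(prefix));
    println!("expected next 20: {}", String::from_utf8_lossy(expected));

    // A remote party who answers with `expected` demonstrates possession of
    // the same reference: information that can't be derived from scratch,
    // no matter how much GPU horsepower an impersonator has.
}
```

The security rests on the reference staying secret and each offset being used only once; reuse would let an eavesdropper replay answers.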
Sonia Sennik: So Shuman, just to pivot to your latest venture, what is unique about Reken’s approach to tackling AI-driven fraud?
Shuman Ghosemajumder: Well, I think you were alluding to it before, in terms of how we’ve called AI the ultimate fraud machine. What we have embarked upon as an industry, in trying to realize the dream of artificial general intelligence, is not an extension of the types of AI that [00:14:00] we had in the 1990s and earlier, where we were trying to teach machines how to reason. Instead, we’re taking things like large language models, which allow us to simulate what realistic reasoning machines would actually produce.
And so we know some of the drawbacks of that: sometimes the large language models will hallucinate, and they’ll come up with answers that are completely wrong. However, what they’re always doing is generating answers that look highly plausible to the viewer. And that’s the case for image generation models and for audio generation models.
A model isn’t actually generating a true sample of someone’s voice or a true image of that person, but it is generating something that can look highly plausible to the viewer who is consuming it. So on one hand, you’ve got legitimate enterprises trying to take generative AI and solve the problems of hallucinations and [00:15:00] errors that are introduced by that approach.
And then they discover, in certain cases, that if they actually hand the keys over, in terms of decision making, to content that is coming from a generative model, it can lead to disastrous results when that model hallucinates. But there’s another side to this, which is that cybercriminals have been looking for a way to automate their operations for the last 20 years, and they’ve been succeeding at greater and greater levels, with different federated groups of cybercriminals who specialize in different aspects of
a cyberattack, creating the cybercriminal equivalent of the open-source ecosystem, where they can collaborate to build more sophisticated attacks than any cybercriminal could individually. But there was always a last-mile problem, if you want to call it that, where certain operations still required humans.
And now, with generative AI, what they’ve discovered [00:16:00] is that their systems can understand natural language for the first time and can generate realistic audio and video for the very first time. And unlike legitimate enterprises struggling with the limitations of generative AI as far as hallucinations are concerned, for cybercriminals, none of that is a problem.
Because essentially, when you’re engaged in fraud, everything is a hallucination. And so, with generative AI generating output that’s plausible to their victims, cybercriminals are adopting it at massive scale. And so this is what we’re focused on: how do you deal with cybercriminals being able to create more realistic fraud than we’ve ever seen before, and to distribute it at scale?
And as we were discussing, the problem is much greater than just being able to identify that content is AI-generated. The problem extends to identity, and being able to verify that someone is actually who they claim to be.
Sonia Sennik: Ken, do [00:17:00] you agree with that overview of how large language models, today’s latest and greatest version of AI, are being harnessed by cybercriminals? And is there a fix for deepfakes?
Ken Nickerson: Yeah, so anyone with an agenda is going to find a tool to meet that agenda; it just so happens this tool is sharper than the last one. But the question on identity is a little different. On identity, you can follow the trail. And the key things in identity that I look for, you know, and I talk about this all the time, are: do you have authority?
Do you have accountability? And then lastly, is there an audit? There are lots of methods of doing that at massive scale; certainly the blockchain is one interesting aspect. But I think the reality is going to be that, whether we like it or not, there’s always an agenda. I think we could easily see a time, by the end of this decade, when there’ll be some form of identity required to log on to the internet to begin with.
And so that truth chain that’ll [00:18:00] start to be formed will start with your ability to get on. So I don’t think the tools for making things more sophisticated and more believable are going to go away. I think they’re actually going to get incredibly aggressive over the next 12 to 24 months. But the ability to trace things back to the source may become more possible.
You can prove that something’s true, but not that something’s fake. But you may be able to discover the trail that led to that fake. And there’s some hope in that, I would say.
Sonia Sennik: Shuman, earlier in our conversation, we mentioned the term zero trust. For listeners who aren’t familiar with the encryption space or the cybersecurity space, can you explain the concept of zero trust and why it’s really useful for transactions?
Shuman Ghosemajumder: Sure. What it refers to is the idea that you shouldn’t just have an authentication mechanism, like asking someone for a password, and then give that person or account the ability to do whatever they [00:19:00] want because you fully trust them.
The idea of zero trust is that you never provide absolute trust, but are constantly looking at the behavior that occurs post-authentication, or after whatever transaction serves as a dividing point, and you look for signs of fraud or abuse that might not have been evident in that authentication step. Just because somebody provides a password doesn’t necessarily mean they’re the rightful holder of that password. Just because somebody has a token doesn’t necessarily mean they should have the access that token grants. And because of that, you have to constantly analyze their behavior.
Sonia Sennik: So, simply put, Zero Trust means every time I speak to Ken, Ken has to prove to me he’s Ken, and I have to prove to Ken I’m Sonia.
Shuman Ghosemajumder: Yeah, absolutely. And it basically never ends. So, at the beginning of that conversation you have with Ken, you first have to decide whether or not you’ve contacted the real Ken. [00:20:00] And he has to decide whether or not he’s contacted the real Sonia. Then maybe you exchange some information to give yourselves a greater sense of trust, but there’s never any point at which you fully trust, because there might be something that the “Ken,” in quotation marks, says 10 minutes into the conversation that makes you think: hold on a second, I thought it was Ken, but maybe this is just the next-level technology representation of a synthetic version of Ken, and it fooled me in the first few steps.
That’s really what zero trust is all about: constantly looking for signs that there may be a new, sophisticated technology that eluded your previous ability to detect it.
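To make the concept concrete, here is a minimal sketch of a zero-trust session in Rust. The signals, weights, and thresholds are all illustrative assumptions, not anyone’s production logic; the point is only that trust is re-scored on every observation and never granted permanently.

```rust
// A session accumulates (and loses) trust as behavior is observed.
struct Session {
    trust: f32, // 0.0 = no trust, 1.0 = maximum; values are illustrative
}

enum Signal {
    PasswordOk,       // knowing the password is only weak evidence
    FamiliarDevice,   // consistent with the real user
    ImpossibleTravel, // logins from two distant locations within minutes
    UnusualTransfer,  // e.g. a sudden $25M wire request
}

impl Session {
    fn observe(&mut self, s: Signal) {
        self.trust += match s {
            Signal::PasswordOk => 0.2,
            Signal::FamiliarDevice => 0.2,
            Signal::ImpossibleTravel => -0.6,
            Signal::UnusualTransfer => -0.5,
        };
        self.trust = self.trust.clamp(0.0, 1.0);
    }

    fn allowed(&self, action: &str) -> bool {
        // High-risk actions demand more accumulated trust; nothing is
        // permanently trusted just because login succeeded earlier.
        let required = if action == "wire_transfer" { 0.8 } else { 0.3 };
        self.trust >= required
    }
}

fn main() {
    let mut s = Session { trust: 0.0 };
    s.observe(Signal::PasswordOk);
    s.observe(Signal::FamiliarDevice);
    assert!(s.allowed("read_mail")); // routine action: fine for now

    // Post-authentication behavior still counts; trust is never permanent.
    s.observe(Signal::ImpossibleTravel);
    s.observe(Signal::UnusualTransfer);
    assert!(!s.allowed("wire_transfer")); // high-risk action now blocked
}
```

Real zero-trust systems score far richer signals, from device fingerprints to typing cadence, but the loop is the same: observe, re-score, and gate every action on current trust.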
Sonia Sennik: So Ken, 24/7 authentication, what does it look like?
Ken Nickerson: The key thing is that we’re moving into this kind of digital twin world.
You know, over many decades we’ve developed tools for trust in the analog world. And we jerry-rigged those tools a little bit for the digital world, you know, “what’s the [00:21:00] password” and stuff like that. But we’re on a process of discovery. We have to redefine what it means to live half your life in the analog world and half your life in the digital world.
What protocols exist? A handshake: oh, they’ve got a pulse and they’re right in front of me. That’s the analog world; I guess I can trust Sonia, she’s in front of me. In the digital world, what are the models I can use for establishing trust, and not just at the start of the call? If I were, say, a sophisticated agency, I would allow the call to be established and then cut in seamlessly during the call, past the protocol, and you wouldn’t even realize it.
So how do you continuously requalify trust in a digital world? I suspect it’ll look like something very peer-to-peer, where we have an agent process that, every few moments while we’re in conversation, trying to be social in a digital setting, re-establishes that we both exist.
And there are a lot of ways to do that. One would be: we’re in this call right [00:22:00] now, and maybe a three-letter agency has already replaced me, and you’re going to go, “that doesn’t sound like Ken.” It’s a physical act, an analog act, participating in the digital world to re-establish a bridge of trust between analog and digital.
One model that I worked on years ago, after reading, of all things, a science fiction book, was that we’d all start smoking, because the calculations for the smoke would be just so expensive that nobody could afford to replicate smoking in a video call. You probably wouldn’t spend a hundred thousand dollars to replicate the smoke coming out of my mouth.
Shuman Ghosemajumder: I think science fiction points the way. It’s really been science fiction authors who have thought very deeply about future societies that have technology that isn’t available to us yet. And what we’re trending towards, in terms of being able to authenticate that someone is human, for instance, which is a problem that I worked on at my previous company and at Google before that, is the [00:23:00] Voight-Kampff test from Blade Runner.
Analyzing someone’s reactions to different questions, asking them to tell you about their mother, and all the different ways that humans would respond differently than what we think a simulated human might do. So, an example of this is CAPTCHA. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart.
So this is something that has been trying to establish humanness by detecting, essentially, any mechanical system that can’t pass a Turing test. But the problem is that we now have generative AI that’s capable of effectively passing most of those Turing tests that we’ve conceived of. And so CAPTCHA today is doing the exact opposite of what it was intended to do.
It’s become a huge impediment for real humans, who can no longer identify whether or not, you know, that looks like a bicycle [00:24:00] or that looks like the edge of a car in the image they’re shown. But for machine-learning-based optical character recognition and other CAPTCHA-solving mechanisms, cybercriminals have no problem at all solving those CAPTCHAs.
John Stackhouse: So you guys mentioned science fiction. My mind’s also going to old spy novels and spy movies, and the John le Carré conceits and devices to masquerade people. I’m wondering how much this is going to require us to change as people. Maybe we become more distrusting, or maybe we just become more astute in observation.
Ken Nickerson: You know, in the 1980s, I forget where I was, but I was at some research place, and I got my first piece of spam, and I answered it very politely: dear spam, oh, I’m sorry, you’ve got the wrong guy, let me know if I can help you. And then you get the second one, and you’re like, ah, yeah. So I would say kids are [00:25:00] far more astute than we are online.
I mean, their perception of the world is, you know, both analog and digital; they’re very comfortable in a digital space. With GPT and LLMs doing fake images, two years ago I could have told you right away what was fake. Today it’s getting harder. By the end of this year, I’ve got to be honest, I’m not sure I’ll be able to tell you what’s real or what’s fake; I don’t expect to be able to.
And so I’m going to have to just accept that worldview. Kids are going to come up with their own models for that discovery, I think far sooner than us. Maybe they’ll have to teach us, just like they had to teach us how to reset the clock flashing 12:00 on VCRs when, you know, we were kids.
Shuman Ghosemajumder: I totally agree.
I think that what is changing in society from generation to generation is the expectation of how much technological change you’re going to experience in your lifetime. A hundred years ago, your expectation was that you weren’t going to experience a whole lot of it.
Whereas now everyone expects that the technology we use on a day-to-day basis is going to look very [00:26:00] different 10 years from now than it does today. And so if you’re born into that, if you’re Generation Alpha or Generation Beta, you’re expecting artificial intelligence to change every aspect of your job or your life on a pretty frequent basis, and you’re constantly learning and adjusting to that. But there is an adjustment period there.
There is that period where you have to come up to speed. And so one of the differences with younger generations versus older generations is that nothing has generally happened to them at that point in their life that has made them that paranoid yet. And so when you look at the stats, and when I talk to IT departments at universities, for instance, what they consistently say is that Gen Z falls for scams at a much greater rate than their boomer grandparents do.
Which is astonishing when you consider how otherwise technologically sophisticated they are.
Ken Nickerson: If you go back to the analog world, when I was a kid, there [00:27:00] was, you know, the walk home from school: we knew the bully kids’ houses and we didn’t walk by those; we took different routes. And so you developed what’s known as street savvy.
The equivalent has to happen in the digital world. It’s the same world, just flipped upside down; you know, we’re through the looking glass. We have to develop that digital savviness. I doubt anyone on this call, or probably the large majority of your listeners, will develop that to any really sophisticated level, but their kids or grandkids will, immediately.
And I think that’s a good thing. It changes us, but I don’t think it’s a change for the worse. I think it brings back critical thinking and situational awareness in the digital sphere that just quite frankly doesn’t exist for the vast majority of us today.
John Stackhouse: So we started this conversation with some pretty dire outlooks, and I’m sensing a bit more positivity from you both.
Are you more hopeful about where this is taking us?
Shuman Ghosemajumder: Absolutely. To expand upon what Ken was saying, I think that people [00:28:00] are capable of evolving, and societies are capable of evolving, in a way that allows them to protect themselves more effectively without making life a drag where you have to be paranoid and scared all the time.
I think that there are many instances of society going through difficult times and emerging stronger as a result of that. And I think that right now, there are a whole bunch of new technologies that are challenging the way that we think about the information that we consume and are challenging the way that we think about how much we can trust different communications.
But this is one of the reasons that we’ve started our company, because we think that technology has a role to play in terms of being able to address those problems while allowing society to actually be a lot more positive. And so I think that we’re going to discover exactly how those types of solutions integrate into every aspect of how we live our lives in the coming years.
And it’s going to allow us to be [00:29:00] more positive in the future.
Ken Nickerson: Whether it’s digital or analog, hope can be demonstrated every night when you set your alarm clock for the next morning. So, we are a hopeful species by default. It takes a lot to squeeze hope out of someone. And these are new tools and new worlds to discover.
I am super hopeful, especially in the way of education. If I had to pick the one thing that I’m most excited about with this evolution of digital twins and VR and XR and AI, it’s the hope that there’s going to be a total sea change in education. I really think that the average 10-year-old in 2040 will have twice the IQ that I could ever hope to aspire to, because they won’t just have learned something; they’re going to have literally the experience of walking through Shakespeare’s Macbeth and being one of the assassins, or flying an airplane through the Alps.
I’m super excited. I don’t know if I’ll [00:30:00] live long enough for that, but I genuinely think we’re at a step-function potential. I think this next step function is actually evolving us rapidly now to kind of a human 2.0, and for any kid born 2030 or after, I have nothing but massive hopes for what they have the capacity to become.
John Stackhouse: That was very deep and not fake. Thanks for being on the podcast.
Shuman Ghosemajumder: That’s fine.
Ken Nickerson: Take care.
John Stackhouse: Sonia, I didn’t imagine we would end that podcast on some very strong notes of hope.
Sonia Sennik: I think Ken and Shuman were speaking to this evolution of computation, an evolution of how we use our technology, and actually building a safer world for the next generations and beyond.
John Stackhouse: I guess if there’s a message though in it, that safer world, especially the safer digital world, ain’t gonna happen on its own. It’s not gonna program itself. Humans are going to have to program it and continue to reprogram it with [00:31:00] principles and direction that avoids those horrific traps that we described at the beginning of the episode.
Sonia Sennik: And what an opportunity for creativity, innovation, and tackling challenging problems in a totally new way. I think it’s going to take a lot of different types of people to solve this problem.
John Stackhouse: Well, maybe that’s a good note to wrap up with and offer a truly human, non fake goodbye. Trusted and verified.
I’m John Stackhouse, and this is Disruptors, an RBC podcast.
Sonia Sennik: And I’m Sonia Sennik. Thanks for listening.
John Stackhouse: Talk to you soon.
This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.