Canada may have a competitive edge in advancing artificial intelligence for good.

In this interview series, John Stackhouse speaks to Foteini Agrafioti, the Chief Science Officer at RBC and Head of Borealis AI, RBC’s Research Institute in Artificial Intelligence. Their conversation has been edited for length and clarity.

John: Can you give us an example of where you’ve seen bias in AI?

Foteini: One example that has been so interesting to me is gender bias in the Google translation engine. It wouldn’t have been immediately obvious to me beforehand, which shows how far our understanding has come. The way we used to translate text from one language to another was by matching word to word, say from English to French.

However, when machine learning became more mainstream, we took a different approach. We fed documents – a lot of long texts – to deep networks in both English and French, and basically asked the machines to learn how the languages corresponded. The performance in translation increased dramatically, beyond what the scientific community anticipated.

What we didn’t anticipate is that bias was built into the documents about how the different genders are depicted. And that was learned by the machine and then perpetuated. So, a typical example now is that if you translate from English to French something with a doctor who is a woman and a nurse who is a man, the machine will mistake the genders because it has learned that more doctors are men and more nurses are women.
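
To make this concrete, here is a minimal sketch of how one might probe an off-the-shelf neural translation model for the gender bias described above. The Helsinki-NLP/opus-mt-en-fr model and the probe sentences are illustrative assumptions, not the system being discussed.

```python
# A minimal sketch of probing a translation model for gender bias.
# The model and sentences are illustrative, not Google's system.
from transformers import pipeline  # pip install transformers sentencepiece

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

# English leaves the gender explicit in the pronoun; French must
# also gender the noun ("le docteur" vs "la docteure", etc.).
probes = [
    "The doctor said she would arrive soon.",
    "The nurse said he would arrive soon.",
]

for sentence in probes:
    result = translator(sentence)[0]["translation_text"]
    print(f"{sentence!r} -> {result!r}")
    # A biased model may render the doctor as masculine and the
    # nurse as feminine, overriding the source pronouns -- the
    # behaviour described in the paragraph above.
```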

John: So how as a scientist do you address this?

Foteini: It’s a tough one to address. If I could break down the problem here: primarily, the bias already exists in the data. It was baked into the real-world information collected by real people, about real people. In many cases it simply reflects our civic life and all the social structures in place that keep our societies afloat – for better and, sadly, for worse. It’s not necessarily that there are bad actors manipulating the levers. The people at Google who built the system are good human beings. This was not intentional in any way on their part. It’s just that this bias was built into the most precious resource that we have.

So, number one is being aware of this pre-existing risk. Don’t assume that data is an objective entity. And I think companies are beginning to recognize that that is the case. It’s the recognition that AI is not only a technology that allows you to transform the way you operate, but it’s one that truly opens up brand new risks. Some of them you can anticipate — but you have to also remember that a lot of them can blindside you. Simply put, we don’t know what we don’t know – yet. So, until we do it’s imperative to have a backup plan for how you’re going to both anticipate and mitigate these risks.

John: One of the interesting emerging areas for bias is biometrics. What should we understand about the risks of bias when it comes to biometrics?

Foteini: Biometric security has always been a very sensitive area. By definition, biometric data contain personally identifiable information, which is extremely sensitive if compromised.

What AI brought into the mix is unprecedented accuracy in authenticating people through biometric modalities – facial recognition, for example. The way biometrics used to work, if you wanted to recognize somebody consistently, you would want them to maintain the same facial expression, not grow a beard, never put on makeup – keep everything consistent.

But now, with deep learning, all of that variability can be tolerated. Even if you grow a beard or change your hairstyle, we can still recognize you.

So, now in addition to sensitivity, we have to worry about bias. There is a systemic under-representation of certain ethnic backgrounds in the data sets these systems are trained on. The recognition technology is very powerful, but because of imbalances in the range of skin tones the models were trained on, it doesn’t work for certain people, or the algorithm will systematically single out a particular ethnicity.
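
One simple way to surface this kind of bias is to compare a model’s error rate across demographic groups. The sketch below uses made-up evaluation records and hypothetical group labels purely for illustration; in practice the records would come from scoring the model on a labelled test set.

```python
# A minimal sketch: compare a face-recognition model's error rate
# across demographic groups. Data and labels are hypothetical.
from collections import defaultdict

# Each record: (group label, was the model's prediction correct?)
# These values are made up for illustration only.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} samples")
# A large gap between groups (33% vs 67% in this toy data) is the
# signal that some populations were under-represented in training.
```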

John: Just earlier this month, San Francisco became the first major city to ban the use of facial recognition technology by local government agencies. What do you think of one of the most tech-savvy cities in the world making this move?

Foteini: With regards to the decision in San Francisco, this type of ethical dilemma is really what we should be thinking about, across the public and private sector.

From my purview, this was an excellent, bold move, recognizing that there are risks with the technology and not just jumping on board before taking stock of the potential misuses. It was clear they put public safety first, despite the many useful applications of the technology. So, taking a step back and deciding where you’re going to draw the line? That was a really, really big deal.

John: But what will we be giving up as citizens and consumers, with these tighter controls, if they are indeed expanded?

Foteini: Security and convenience are usually what biometric systems enable.

There was a very interesting case of this a few years back here in Ontario with the casinos, where face recognition was being deployed in order to automatically recognize and stop problem gamblers from entering casinos. So people would self-register with the system and the expectation was that if they showed up at a casino in Ontario, they would be stopped from entering by security. Now that list grew very big. The security guards weren’t able to memorize all those faces. So facial recognition came into play as an interesting way of potentially recognizing them and stopping them from entering. It was a huge privacy concern at the time, and I think the privacy commissioner has done an incredible job figuring out how to enable privacy in that context.

John: What risk is there, that other countries — and I’m thinking specifically of China — will move ahead much more quickly scientifically? Because they don’t have these sorts of restrictions or even concerns about individual rights.

Foteini: Well, I want to believe that they do. And I certainly see the scientific community across the world putting pressure on how these systems are being designed, deployed, and adopted broadly. But assuming that that is not the case and you can just do anything you like…it’s not just a head start. It’s leapfrogged many, many, many steps ahead.

The ability to have unconstrained access to data, test systems, deploy them in the real world, get feedback and iterate without any consideration for things that can go wrong? I mean, I would learn a lot more about how to build a better surveillance system if I’m able to deploy a surveillance system and get data and insights out of that.

John: I’m thinking of an analogy of athletes who have to compete with athletes from other countries that use steroids. I don’t know if that’s a fair comparison. But as a scientist, do you feel like you’re losing ground to your Chinese peers?

Foteini: I’m generally trying to stay on the positive side.

One great thing that the academic community has done, especially in the machine learning and computer science space, is to require any new state-of-the-art machine learning system to be benchmarked, tested, and proven on a public data set. So we’re all working to the same standard, and people are held accountable to it.
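
For illustration, here is a minimal sketch of that benchmarking practice: training a model and reporting its score on a fixed, public data set so anyone can reproduce the comparison. The model and data set are illustrative stand-ins, not a specific system mentioned in the interview.

```python
# A minimal sketch of benchmarking on a shared public data set.
# The model and data set are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # a small, public benchmark set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0  # fixed split for reproducibility
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Reporting accuracy on the same held-out public data lets anyone
# reproduce and compare the result -- the accountability mechanism
# described above.
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```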

John: Do you believe that the AI community will be able to smooth out the challenges of both the data collection that you have referred to and the initial coding of algorithms, to a degree that bias will be reasonably contained?

Foteini: It’s hard, to be honest. And I don’t want to be pessimistic about it. But it’s very hard to solve.

And in many cases you may not want to solve for it. Some biases are good.

John: Give us an example.

Foteini: The decisions that we make when we’re driving our vehicles. I know we’re constantly talking about responsible self-driving, autonomous vehicles. But sometimes it is human bias that saves us from accidents, like a driver’s bias towards driving slower than the speed limit when it’s wet or when they’re rounding a corner.

Or in trading, I can build a system that trades in a consistent way, or I could benefit from the special intuition that an expert trader has — that’s a place where we may want to keep that bias.

John: Is there a competitive advantage for RBC or indeed for Canada in having this approach to bias?

Foteini: What’s beautiful about Canada is that most of the data that we have, organically, is very diverse. If you’re building the next-generation diagnostic system using MRI technology and you’re using data that was collected in Canada, you’re very likely to be working with data in which different ethnic backgrounds are well represented. So that’s our home advantage.

The other aspect that is unique here, and part of our Canadian values, is respect for each other and tolerance – and I think we apply a very critical lens when we look at AI technologies and what they could do to our society. I think we have a lower tolerance for risk in embedding these technologies in our lives. We’re being critical of them, and that’s good. That type of pressure is sometimes necessary to get the scientific community, and companies as well, to push in the right direction when developing the next technology.

John: This is great Foteini. Thank you.

Foteini: Yes, thank you.

Listen to our conversation on the RBC Disruptors podcast about the potential of artificial intelligence.
As Senior Vice-President, Office of the CEO, John advises the executive leadership on emerging trends in Canada’s economy, providing insights grounded in his travels across the country and around the world. His work focuses on technological change and innovation, examining how to successfully navigate the new economy so more people can thrive in the age of disruption. Prior to joining RBC, John spent nearly 25 years at the Globe and Mail, where he served as editor-in-chief, editor of Report on Business, and a foreign correspondent in New Delhi, India. Having interviewed a range of prominent world leaders and figures, including Vladimir Putin, Kofi Annan, and Benazir Bhutto, he possesses a deep understanding of national and international affairs. In the community, John serves as a Senior Fellow at the Munk School of Global Affairs and the C.D. Howe Institute, and is a member of the advisory council for both the Wilson Center’s Canada Institute and the Canadian International Council. John is the author of four books: Out of Poverty; Timbit Nation; Mass Disruption: Thirty Years on the Front Lines of a Media Revolution; and Planet Canada: How Our Expats Are Shaping the Future.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.