From recommending music to helping manage a global pandemic, Artificial Intelligence (AI) technologies have woven themselves into our personal and professional lives.

But Foteini Agrafioti and Alex LaPlante say there is much work to be done to ensure that these technologies are used for the good of humanity. Agrafioti is chief science officer at RBC and head of Borealis AI; LaPlante is director of business development and product management at Borealis AI.

In an opinion piece published in the Globe and Mail, they write: “Many have rightfully questioned how data is being harnessed by multibillion-dollar enterprises. Or whether AI will further entrench systemic bias, discrimination and misinformation, creating further divisions within our societies. Even more worrisome is the public perception that businesses are not upholding their core responsibilities of accountability and transparency.”

The authors lay out a path for Canada to step into a leadership role in driving the development and implementation of ethical and humane AI. They cite Canada’s banks and financial regulators as natural “early adopters” in this effort, given that the sector has long operated under rules designed to ensure AI is safe to use.

More broadly, however, many Canadian businesses see implementing AI ethically as an obstacle. A survey found 93 per cent of respondents “experience barriers to implementing AI in an ethical and responsible way, with many citing cost, time and lack of understanding as the fundamental issues.”

To address this problem, RBC and Borealis AI created Respect AI, a joint program that includes open-source research code, tutorials, academic research and lectures for individuals and businesses. These resources can help streamline the process of building fair models, enabling leaders to uncover bias by auditing models at each step of development.
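To make the idea of auditing a model for bias more concrete, here is a minimal sketch of one check such a toolkit might run: measuring whether a model’s approval rate differs across demographic groups, sometimes called a demographic parity gap. The data, group labels and tolerance below are hypothetical and purely illustrative; this is not code from the Respect AI program.

```python
# Illustrative sketch of a simple bias audit: compare a model's rate of
# positive decisions (e.g. loan approvals) across demographic groups.
# All data and the 0.05 tolerance are hypothetical, for demonstration only.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) and a protected attribute per applicant.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
predictions = (rng.random(1000) < np.where(groups == "A", 0.55, 0.45)).astype(int)

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.3f}")

# A gap well above a chosen tolerance (e.g. 0.05) would flag the model for
# review at this stage of development, before it moves closer to production.
```

Running a check like this at every stage of development, rather than only at launch, is the kind of ongoing audit the authors describe.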

Through Respect AI, Agrafioti and LaPlante note four key avenues for cultivating a Canadian AI ecosystem that works to eliminate bias and is based on accountability, responsibility, and trust:

  • Sharing industry knowledge and best practices, especially from regulated environments such as financial industries or health care, on how to approach responsible AI development.
  • Using technology to expose bias. It is imperative that companies audit their AI for bias not just on launch day, but every step of the way.
  • Diversifying the industry and our data. Bringing more diverse voices into the industry and ensuring that data is representative of our population will fuel the creation of technology that works for everyone.
  • Educating the public. People need to be aware of the positive implications – and risks – that AI will have for their lives. The more our future generations understand AI and its societal and ethical implications, the better prepared they will be to ask tough questions of our leaders.

This article offers general information only and is not intended as legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. While information presented is believed to be factual and current, its accuracy is not guaranteed and it should not be regarded as a complete analysis of the subject matter discussed. All expressions of opinion reflect the judgment of the author(s) as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or its affiliates.