Solving bias in machine learning will take a human touch.

Ahead of the next RBC Disruptors event on May 23, “Battling Bias in AI,” our Thought Leadership team is examining the societal and ethical implications of artificial intelligence. In this interview series, John Stackhouse asks Layla El Asri, a Research Manager at Microsoft Research Montréal, about the role of humans in solving AI bias.

John: When did you first become aware of and concerned about bias in AI?

Layla: I’ve really only been thinking about it for the last three or four years. I joined Microsoft at about the same time that news stories were breaking about different issues surrounding bias and AI. There was the famous example with Google software, where their algorithm misclassified an African American as a gorilla. That was one example of bias in a product because the data was not representative enough. It was very striking for me because it was really terrible.

Then there was this example, also with Google software, where if you typed a name that was mostly used within the African American community, you would get ads about searching for a criminal record. And that’s bias in the system, because of bias in the way humans were using the system.

So those examples were really striking and really showed that things could go wrong if we weren’t more careful with the data that we were using in the models that we were putting out there.

The model just tries to optimize its performance. It doesn't ask, am I being fair?

John: As a scientist, how do you think about bias?

Layla: Your model can only be as biased as your data.

The model just tries to optimize its performance. It doesn’t ask, am I being fair? There are ways to put this into the model, but it has to be put in deliberately. From a scientific point of view, that’s a matter of changing the objective of the model so that it not only tries to maximize performance, but is also incentivized not to amplify bias, or to reduce bias if possible.
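In code, “changing the objective” can look something like the sketch below: the usual loss term is kept, and a second term is added that grows when the model serves one group much worse than another. This is purely an illustration under assumed inputs; the gap penalty and the lambda_fair weight are hypothetical choices, not a description of any specific Microsoft system.

```python
# Hypothetical sketch of a fairness-aware training objective in PyTorch.
# The gap penalty and lambda_fair weight are illustrative assumptions,
# not a specific production technique.
import torch
import torch.nn.functional as F

def fairness_aware_loss(logits, labels, group_ids, lambda_fair=1.0):
    # Standard performance term: average cross-entropy over all examples.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    task_loss = per_example.mean()

    # Average loss within each demographic group present in the batch.
    group_losses = torch.stack(
        [per_example[group_ids == g].mean() for g in torch.unique(group_ids)]
    )

    # Penalty: the gap between the worst-served and best-served groups.
    # It is differentiable, so gradient descent shrinks the gap along
    # with the overall error instead of only maximizing accuracy.
    fairness_gap = group_losses.max() - group_losses.min()

    return task_loss + lambda_fair * fairness_gap
```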

John: So that opportunity to debug bias, if I can put it that way, is it as straightforward as you’re laying out? Just a matter of recoding?

Layla: You know, it really depends. It is possible in certain cases to try to re-engineer the models so that they become less biased or unbiased. But in certain cases, it is just impossible. If you have data, for instance, that comes from human decisions, you might not even know of the bias that is present in those human decisions in the first place. And then it’s a matter of really testing the model to see if it’s biased.

Sometimes, in order to make a model unbiased, you just need more data, especially when you have a problem with under-representation, like when you don’t have data for darker skin tones in computer vision. There’s nothing you can do except collect data for darker skin tones and then retrain your model so that it learns from that data.
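Testing a model for this kind of bias often comes down to slicing the evaluation data by group and comparing error rates, which is also how under-representation tends to show up. A minimal, hypothetical sketch, assuming group labels are available for the evaluation set (the example data below is invented for illustration):

```python
# Hypothetical sketch: auditing a trained model by comparing its error
# rate on each demographic group separately. Group labels are assumed
# to be available for the evaluation set.
from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """predictions, labels, groups: equal-length sequences."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / totals[g] for g in totals}

# A model that looks fine on average can still fail badly on an
# under-represented group; that only shows up when you slice by group.
rates = error_rate_by_group(
    predictions=["cat", "dog", "dog", "cat", "dog"],
    labels=["cat", "dog", "cat", "cat", "cat"],
    groups=["A", "A", "B", "A", "B"],
)
print(rates)  # {'A': 0.0, 'B': 1.0}
```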

John: How do you at Microsoft come to grips with these challenges?

Layla: The way it’s been tackled here happens at different levels. There are research groups that are dedicated to researching these questions: fairness, accountability, transparency and ethics. So, questions like, what does it mean for a machine learning model to be fair? Fundamental questions that are yet to be answered. That’s at the research level.

And then there is also a committee within Microsoft which is called the AETHER Committee (AI, Ethics, and Effects in Engineering and Research). This committee serves as a kind of consulting branch for product teams and leadership within Microsoft.

These groups know the technical issues with machine learning models; they know what the models can and cannot do. And it’s really important to advise product teams and leadership about this, so we can all make an educated decision about whether or not it is safe to release a machine learning model at this stage.

Those are the kinds of things that have been put in place within Microsoft as safeguards for the safe and ethical use of AI. And auditing has been a really impactful thing to do, too.

John: What kind of people are on these committees?

Layla: It’s actually a good mix of technical people and people who have more of a sociological background. We have historians, we have anthropologists, sociologists, all working on these questions to try to understand the potential sociological consequences of certain machine learning technology.

I think we have to find a way to have a good working relationship between machine learning models and human beings, so that we can leverage the amazing adaptation capabilities of human beings and the amazing computational and kind of number-crunching capabilities of machine learning models.

John: You were talking earlier about the unintended consequences of machine learning. We have lots of unintended consequences with human learning and human decision-making. Should we be more confident in the ability of science to minimize the unintended consequences on the machine side?

Layla: You know, in the future, I want to be optimistic that the answer will be yes.

But one really important flaw that I see right now, and which makes me lean towards no, is that the models are trained on historical data, so they cannot change unless you change their learning objectives or the data they were trained on. And currently they need a lot of data to learn something new.

Human beings, on the other hand, adapt very quickly. So if there is a problem with bias within your organization, you can talk to people and educate them and they will be able to react very quickly. A machine learning model right now will not be able to react very quickly. It will need a lot of new data to be retrained on.

So I think that, for the time being, as long as machine learning models cannot really adapt quickly and learn new things quickly, they have to work with human beings. And I think we have to find a way to have a good working relationship between machine learning models and human beings, so that we can leverage the amazing adaptation capabilities of human beings and the amazing computational and kind of number-crunching capabilities of machine learning models.

John: Maybe I can wrap up with a question about your own research in dialogue systems. Are voice and text biased? What should we, as consumers or producers of voice and text information, be thinking about?

Layla: If your model understands only certain people and certain voices and doesn’t understand, for instance, the elderly or different accents, then you have a biased system because it doesn’t work for everybody. The good thing with human beings is that we kind of work with everybody; we kind of understand all sorts of accents. Machine learning models might not always.

And even in text, if your model only understands really well-formed English and doesn’t understand certain idioms or slang that might be used by certain communities, then you have a product that doesn’t really work for everybody, and then you have a problem with bias.

You need to be able to understand all the people that you want to serve, really — all the people that you want your product to work for.

John: That’s fantastic. Really great insights. Thank you.

Layla: Great. Thank you.

Listen to our conversation on the RBC Disruptors podcast about the potential of artificial intelligence.

Listen on Apple Podcasts, Google Podcasts, Spotify or Simplecast


As Senior Vice-President, Office of the CEO, John advises the executive leadership on emerging trends in Canada’s economy, providing insights grounded in his travels across the country and around the world. His work focuses on technological change and innovation, examining how to successfully navigate the new economy so more people can thrive in the age of disruption. Prior to joining RBC, John spent nearly 25 years at the Globe and Mail, where he served as editor-in-chief, editor of Report on Business, and a foreign correspondent in New Delhi, India. Having interviewed a range of prominent world leaders and figures, including Vladimir Putin, Kofi Annan, and Benazir Bhutto, he possesses a deep understanding of national and international affairs. In the community, John serves as a Senior Fellow at the Munk School of Global Affairs and the C.D. Howe Institute, and is a member of the advisory council for both the Wilson Center’s Canada Institute and the Canadian International Council. John is the author of four books: Out of Poverty, Timbit Nation, Mass Disruption: Thirty Years on the Front Lines of a Media Revolution, and Planet Canada: How Our Expats Are Shaping the Future.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.