TechTalk: Responsible AI
As the role of Artificial Intelligence (AI) in everyday life continues to grow, organizations face questions about how to apply it responsibly.
By Insight Editor / 13 Oct 2020 / Topics: Data and AI
Using AI in ethical, secure ways is a priority for modern businesses. In this TechTalk, Insight Digital Innovation teammates Ken Seier, Kristina Heinz and Michael Griffin discuss best practices for responsible AI.
To experience this week’s episode, listen on the player above, watch the conversation below, or scroll down to read a complete transcript. You can also subscribe to Insight TechTalk on Apple Podcasts, Pandora, and Spotify.
Audio transcript:
Published October 12, 2020
KEN
Hello, thanks for joining us for another Insight TechTalk, I'm Ken Seier, National Practice Lead for Data and AI at Insight. I'm joined today by Kristina Heinz, AI Center of Excellence Lead, and Dr. Michael Griffin, Lead Data Scientist.
With the growing impact of artificial intelligence on our everyday lives, it is increasingly important that we use AI responsibly. But defining what responsible AI use is, and ensuring that deployed AI remains responsible, is a challenging task for many organizations.
Kristina, can you talk a little bit about what responsible AI is?
KRISTINA
Sure. So, if we think about AI as a mechanism for decision support, it's a little bit easier for me to talk about it. We all know that decisions have consequences. So that means responsible AI is really about mitigating the risks of negative consequences in the decisions that we make. At the highest level, that means not intentionally using AI in a way that isn't in support of positive human outcomes, and most people don't fall into that category. We all intend to do no harm.
So it's really about ensuring that we take care not to do harm accidentally or unknowingly. And that can come in the form of privacy and security risks, different types of data being leaked, understanding how we're using the data, what decisions are being made and why they're being made, as well as being sure not to have bias, making sure that our models are diverse. It's a really long checklist, but at the end of it, it's really just about ensuring that we know why we're making the decisions that we are, and that those decisions, to the best of our ability, aren't causing negative consequences.
KEN
We hear a lot about bias in artificial intelligence in the news. Mike, could you tell us a little bit about how bias gets into a model, and what kind of steps we might be able to take to remove bias from a model?
MICHAEL
I think it's important to understand that all AI models are biased, because the data that's used to train AI is biased, and there's just no getting around it. First, it's important to acknowledge that. And second, it's important to understand where those biases are. In fact, I think it's okay to have bias in your models, because your models serve a community, and your community might be all white like me, or it might be brown, or it might be high income, low income, male, female, all kinds of different communities you serve. So, you want your models to be biased toward the communities they serve.
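A first step toward understanding where those biases live, as Michael suggests, is simply auditing how each group is represented in the training data and how outcomes differ across groups. Here is a minimal sketch of that kind of check; the column names and data are hypothetical, not from any project discussed here:

```python
import pandas as pd

# Hypothetical training data with a group attribute and an outcome label.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "C"],
    "approved": [1,   1,   0,   0,   0,   1],
})

# How well is each group represented in the training set?
representation = df["group"].value_counts(normalize=True)
print(representation)

# How does the base rate of the outcome differ across groups?
base_rates = df.groupby("group")["approved"].mean()
print(base_rates)
```

Skewed representation or sharply different base rates don't prove a model is unfair, but they tell you where to look before the model is ever trained.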
KEN
All right. And what is the human impact of responsible AI?
KRISTINA
So, I'll take a first shot, Mike, and then you can have your turn.
If we think about the severe impacts, right? Things like self-driving cars, where we really want to understand the decisions, the calculations that go into a decision to slow down or to stop, right? Because it matters, it matters a great deal. But even in lower-risk scenarios, there are still consequences for people. Deciding whether or not to grant someone credit is very consequential to them. So we do need to understand what went into that decision, and make sure that it's fair, make sure that it's well thought out, that it's reasoned. And having a human component to those decisions around the AI really helps to facilitate that.
MICHAEL
And I think the consequences of irresponsible AI are going to become bigger and bigger with time. Right now, every one of us consumes AI to some extent. We all have one of these in our pockets, and those things are AI powered. And all of the web applications that we use are AI powered, for the most part. It's already everywhere, and it will become even more pervasive. And it will also be more and more invisible to us. We won't know when AI is impacting our lives. So it's going to be largely the burden of the developers and the distributors of AI, not necessarily the consumers, to develop and deliver responsible AI. And you can imagine, if it's used irresponsibly, there can be a lot of negative social consequences.
KRISTINA
So Mike can you...
KEN
Sorry.
KRISTINA
No, that's okay. I was just going to add to that: there are more and more regulations these days about the right to know, the right to know how your data is used. And that plays a big role in responsible AI, going back to explainability: how is my information being used to make decisions that impact my life? So, in addition to having a responsibility just ethically, there are regulatory responsibilities that need to be factored in. And I think that's good.
KEN
Many major organizations are lobbying for responsible AI right now, and that includes explainable AI. Mike, can you talk a little bit about why explainable AI is important right now?
MICHAEL
There are many reasons, but I can give you a really good example. I work a lot in the health and life sciences space, and we deliver clinical decision support to doctors and care providers. And AI will not be used in the clinical setting if it's not understood, for very good reason, right? Doctors go to school, they have training, and they have many years of work behind them to understand how they make decisions. And if they're presented with a possible decision from an AI application, they want to know why that application made that call. Why does this AI application predict that this patient will develop sepsis in the next 24 hours? Then they can do something about it; they're not just flying blind.
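One common way to give clinicians the "why" Michael describes is to report which inputs most influence a model's predictions. The sketch below uses scikit-learn's permutation importance on a toy model; the sepsis-style feature names and data are invented for illustration, not a real clinical system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical vitals for a sepsis-risk model: illustrative data only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: heart_rate, temperature, wbc_count
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["heart_rate", "temperature", "wbc_count"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A global importance ranking like this is only one piece of explainability; per-patient explanations (for example, SHAP-style attributions) answer the "why this call, for this patient" question more directly.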
KEN
So, we've talked a lot about what responsible AI is and its impact. We know that it involves mitigating or understanding the bias in your models, making sure the model performs responsibly and safely, securing the model and the information it generates, and explaining how the model came to its conclusion. Can you talk about how you can help an organization go from beginning its AI journey to enabling consistent, responsible AI?
KRISTINA
Yeah, so, there are more and more tools available to help with this problem, thankfully, right? Everything from evaluations of fairness and bias within models, to explainability.
I think that starting with a policy that prioritizes responsibility, and really defining what your values are as an organization, is something that's really important, and something that we can definitely help with. And then putting some policies and processes around those values. That could be the implementation of a variety of tools, and the decision to really understand not just what your models are deciding, and when, and why, but also why we're using our models to begin with. So starting with good intent, starting with a good policy, and then making sure that we're diligent in tracking over time, not just when we implement but afterward, what those decisions are doing in the real world. I think that's a good start.
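One of the simplest fairness evaluations Kristina alludes to is checking whether a model grants positive outcomes at similar rates across groups, often called demographic parity. A minimal sketch, with hypothetical predictions and group labels:

```python
import numpy as np

# Hypothetical model outputs (1 = approved, 0 = denied)
# alongside each applicant's group membership.
predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Demographic parity: compare the rate of positive decisions per group.
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
print(rates)  # here: {'A': 0.75, 'B': 0.25}

# A large gap is a signal to investigate the model and its training data.
parity_gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {parity_gap:.2f}")
```

In practice a check like this would run not just at implementation time but continuously, which is exactly the ongoing tracking Kristina recommends.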
MICHAEL
And I think another thing that we can do, and that we are doing with some of our clients, is helping them to sort of automate responsible AI. So, AI applications don't make decisions for you. They just provide you with some evidence that something might happen. And you, the subject matter expert, have to make a decision based on that prediction. And you might have to act on hundreds or thousands of predictions a day, and certainly in a week or a year. What we can do is provide a layer on top of that, an optimization layer, that allows you to set a goal. Say, I want to implement responsible AI. I want to serve my community in the most beneficial way possible, or I want to make the most money, or I want to provide the safest possible services I can.
All of those things can be quantified. And all of the decisions that you make can be optimized in such a way that they lead you to achieve those goals. And that, I think, is probably the single most valuable thing we can do. Because so many decisions need to be made, and it's so easy to go wrong a little bit here, a little bit there, that real optimization can help you meet your goals in a responsible way over the long haul.
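The simplest version of the optimization layer Michael describes is sweeping a decision threshold over model scores and choosing the one that maximizes a quantified goal. The payoffs, scores, and outcomes below are all assumed values for illustration:

```python
import numpy as np

# Hypothetical model scores (probability an event happens) and true outcomes.
rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)
outcomes = (rng.uniform(size=1000) < scores).astype(int)

# Quantify the goal: value of acting on a true positive, cost of a false alarm.
VALUE_TRUE_POSITIVE = 10.0  # assumed payoff for a correct intervention
COST_FALSE_POSITIVE = 3.0   # assumed cost of an unnecessary one

def expected_value(threshold):
    act = scores >= threshold
    tp = np.sum(act & (outcomes == 1))
    fp = np.sum(act & (outcomes == 0))
    return tp * VALUE_TRUE_POSITIVE - fp * COST_FALSE_POSITIVE

# Sweep thresholds and choose the one that best serves the stated goal.
thresholds = np.linspace(0, 1, 101)
best = max(thresholds, key=expected_value)
print(f"best threshold: {best:.2f}, expected value: {expected_value(best):.1f}")
```

Swapping in a different objective, say a safety or fairness target instead of dollars, changes only the function being maximized, which is what makes the goal-setting layer so flexible.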
KEN
Great. Thank you both for your insights today, Kristina and Michael. You can find more information and insights for the future of business in our digital magazine, the Tech Journal. Read and subscribe at insight.com/techjournal.
MICHAEL
Thank you.
KRISTINA
Thank you.
[Music]