Knowledge Institute Podcasts
Ahead in the Cloud: Understanding the Ethics of Algorithms, AI, and Automation with Emre Kazim
Holistic AI founder, Emre Kazim, discusses the importance of getting ethics, trust and transparency right in the early days of the algorithmic age.
Hosted by Chad Watt, researcher and writer with the Infosys Knowledge Institute.
“The last decade was dominated by data, and this decade will be dominated by algorithms.”
“The marriage of the computational power with the data sets is really what's giving birth to this current AI paradigm, the specific area of AI that we're doing.”
“If you ask people how many times are you using an algorithm or what's an AI, for some reason Google doesn't spring to mind, but every one of us is doing a search.”
- Emre Kazim
- Studying the ethics of algorithms and not doing it in industry or in the world is the equivalent of studying medicine and not spending a day in the hospital. We really wanted to have an impact. We wanted to do this in the real world.
- I think the problem that people get wrong with AI is they think it's something that's still in the science fiction realm. But it's just automation. And this automation is the logical conclusion of the digital transformation; the next wave will be the algorithmic age. So the last decade was dominated by data, and this decade will be dominated by algorithms.
- We've got this explosion in algorithmic use. Companies have to automate to remain competitive. But automating brings with it existential risks. And the trustworthy AI side is necessary for the adoption of algorithms. As the use of algorithms scales, the risk management frameworks will scale too.
- One of the things that's really interesting is: if you ask people how many times they're using an algorithm or what's an AI, for some reason Google doesn't spring to mind. But every one of us is doing a search, many times a day. So we've been using algorithms from very early on. It's the kind of explosion of these digital technologies occurring at the same time.
- When you ask someone what Google does, most people can understand it. It's able to search information on your behalf and rank that information in the way it thinks is most consumable. But understanding what it does behind the screen, what's happening in the black box, is a different kind of question. That's the kind of question you need expertise to answer, which is what we are doing.
- What's happening now, through regulation, market motivations, and customer concern, is an impetus to maximize transparency in this space so that we can understand why Google's algorithm is ranking this search above another, why this term above another term, why it conjugates these words with each other, and so on. So the real question is really being able to maximize transparency.
- One of the things that we've found as a business is: we often go in and the first question we ask is, how many systems do you actually have? And seldom is anyone actually able to answer that question. So before doing the risk management, a company has to know that answer.
- We see through the numbers that AI adoption is moving up exponentially. And this is having a significant impact on technology adoption and on the implications of these systems in sectors outside of traditional technology.
Chad introduces himself and Emre
What is the current focus of your research work at UCL?
What is AI?
In the context of artificial intelligence, what is an algorithm?
Tell me a little bit about Holistic AI. What led you and Adriano Koshiyama to start this enterprise?
So between your academic research and your business activities, what is it that business leaders most commonly get wrong about AI?
Why are we talking about this so much right now? How do we get from the information age to this algorithmic age, the time where every business is jumping into AI somehow?
Automation predates artificial intelligence. Computers and programming have been doing some forms of automation. What is different about AI driven automation compared with kind of process programmatic automation?
Some of our research at the Knowledge Institute led us to the position that AI itself is not new, but it is very much newly democratized and available to companies of a wide range of technological proficiency. Do you concur and what's the thing that led to this?
Does that proximity really lead to understanding? Do we really understand what AI is doing?
How do you make AI ethics that's proactive and not just kind of reacting and bandaiding over things that we've seen go wrong?
How do you prove that these are the proper steps and that they are actually in place and functioning in an organization?
So you mentioned risk management. As we've said, AI ethics is a new field; risk management is something that we have experience with. Are there some other established areas you can use to build the foundation of AI ethics and bring business practices or known quantities to build a foundation?
Emre talks about AI algorithms and processes.
Where do you see a major AI breakthrough? We've been talking about generative AI already. Is that the area or is there some other area?
In the context of business AI, what progress do you expect from business AI, particularly among non-technology companies, not the Googles and the Apples in the world, these generalists who are getting in?
Chad Watt: Welcome to Ahead in the Cloud where business leaders share what they've learned on their cloud journey. I'm Chad Watt, Infosys Knowledge Institute researcher and writer. Here today with Emre Kazim, an AI research fellow at University College of London and co-founder of Holistic AI, an AI risk management and auditing firm. We're going to discuss artificial intelligence and ethics. Emre, welcome.
Emre Kazim: Thanks for having me. Really excited to join this chat.
Chad Watt: What is the current focus on your research work at UCL?
Emre Kazim: So at UCL, we're really interested in foundational questions about how you go about assessing an algorithm. It's a new thing, right? I mean, first, people will say, "What is AI?" And then secondly, what does it mean to do an assessment, a risk assessment, or an ethical assessment of an algorithm? So we're really interested in those kinds of questions, and also the questions around how to regulate algorithms.
Chad Watt: Emre, let's go there. Uh, what is AI?
Emre Kazim: So we can think of AI in a number of ways. And probably the simplest way of thinking about AI is just automation: systems that automate or replicate decisions and activities that would traditionally have been the purview of human discretion or decision-making. That's, at a very high level, effectively what we're describing. When we talk about it in a more technical way, then we go into different kinds of systems, different kinds of technologies, and so on. And then we have more specific definitions for things like advanced statistics, machine learning, and then artificial intelligence, and then general artificial intelligence. But really, the way we're using it generally is quite lay and just about automation.
Chad Watt: Got it. Let me extend on that real quick. In the context of artificial intelligence, what is an algorithm?
Emre Kazim: It's just literally, if you will, a sequence of steps that you take to get to a particular result, or a methodology for getting to particular kinds of results.
Chad Watt: Tell me a little bit about Holistic AI. What led you and Adriano Koshiyama to start this enterprise called Holistic AI?
Emre Kazim: Chad, we were doing this research, and the analogy I make with our students and other people is that studying the ethics of algorithms and not doing it in industry or in the world is the equivalent of studying medicine and not spending a day in the hospital. We really wanted to have impact; we wanted to do this in the real world. And we were already engaging and doing stuff in industry. We said, "Hey, look, we really wanna make an impact, we really wanna get this to have maximum coverage, and be able to truly explore this." So we spun the company out of the back of that.
Chad Watt: So, between your academic research and your business activities, what is it that business leaders most commonly get wrong about AI?
Emre Kazim: I think the problem they get wrong is they think it's something that's still in the science fiction world, this kind of really weird and wacky technology of robots taking over. It's just automation. And understanding that this is automation means seeing it as the logical conclusion of the digital transformation. The next wave is, if you will, the algorithmic age: the last decade was dominated by data, and this decade will be dominated by algorithms.
Chad Watt: That's a fascinating thought. That makes a lot of sense. What was the catalyst? Why are we talking about this so much right now? How did we get from the information age to this algorithmic age, the time when every business is jumping into AI somehow?
Emre Kazim: So, you know what? It's a couple of things. I think the first thing is to say that there's been a maturity in the technology. We've got these vast datasets, huge datasets, that have emerged as a result of the digital infrastructure that's been in place since the internet, effectively. And all of a sudden you've got all you would need for a machine learning model, which is really robust and powerful training data. So that's the first thing. Secondly, we've got computational power that we've never had before. And the marriage of the computational power with the datasets is really what's giving birth to this current AI paradigm, this specific area of AI that we're doing.
The second thing that goes along with that is this explosion in algorithmic use; it just makes sense, you know. Companies have to automate to remain competitive, but automating brings with it existential risks, and the trustworthy AI side, if you will, is a natural corollary to the adoption of algorithms. As algorithm use scales, the risk management frameworks are gonna scale too.
Chad Watt: I wanna come back to automation. Automation predates artificial intelligence. Computers and programming have been doing some forms of automation. What is different about AI-driven automation compared with what I would call process, programmatic automation?
Emre Kazim: The way I try to step back on that is to say it's really just about the capabilities of the modern algorithmic techniques as compared to the traditional ones. So you're absolutely right. We can imagine how robust the use of algorithms was, for example, when they put the rocket onto the moon, or into space, in the aeronautical industries, and so on. So, yes, absolutely, you know. Turing was doing this, what was it, 60 or so years ago.
Chad Watt: Right, right. And in that context, some of our research at the Knowledge Institute led us to the position that AI itself is not new, but it is very much newly democratized and available to companies of a wide range of technological proficiency. Do you concur? And what's the thing that led to this?
Emre Kazim: Yeah. So I think one of the things that's really interesting is that if you ask people how many times they're using an algorithm or what's an AI, for some reason Google doesn't spring to mind. But every one of us is doing a search God knows how many times a day. So we've been using algorithms, as you said, from very early on. I think it's the impact that it's having, given its kind of acceleration. It's a kind of Cambrian explosion of these digital technologies occurring at the same time.
Chad Watt: You know, you've got me thinking, Emre, about the pre-Google searches we would do on the internet. I'm old enough to remember you had to have a little bit of a Boolean logic and you had to kind of finesse your search to get any decent result. And then Google came along and you could pretty much speak normal to it, or type normal. And now we're getting to the stage where you're having conversations with AI.
Emre Kazim: Yeah. This generative stuff has been really interesting because it's caught the imagination; it's been a kind of real litmus test and explosion there. I was testing it myself, out of my own curiosity, and we actually put in the question of what's ethical AI. And one of the funniest stories I remember is hearing from someone in Silicon Valley, where people were saying, "How are you going to deal with the problem of AI ethics?" And they were like, "It's not a problem. No worries. All we're gonna do is build a powerful enough algorithm, and then we're gonna ask it, 'How do you behave ethically?' or, 'What is the ethical status?' And it's gonna spit the answer back to us."
Chad Watt: That's like letting your child set their own bedtime and dinner schedule.
Emre Kazim: Yeah. Of course it's ridiculous, but it was funny to hear that. It's also interesting because people are playing with it. And what we notice, and this is a hypothesis, is that there's a relationship between the proximity you have to a technology's development, your understanding of the technology, and how much you fear it. The communities that technology was traditionally being done upon are the kind of communities that are fearful of the algorithmic age, if you will. They've had a legacy of bad experiences, maybe through surveillance or other means, and they feel the same way about this. Whereas, let's say, engineers who are playing with this tool every day don't have that kind of fear. So there is a relationship between proximity to the development, or ownership, of the technology and fear of it. An inverse relationship, I should say.
Chad Watt: Does that proximity really lead to understanding? Do we really understand what AI is doing?
Emre Kazim: So there are two kinds of questions, I think. There's one question about what it is doing: in a way, if you ask someone what Google does, most people would understand it, right? It's able to search information on your behalf, and rank or order that information in the way it thinks is most consumable. Understanding what it does in the sense of what the hell is going on behind the screen, what's happening in the black box, is a different kind of question. That's the kind of question where you do need to have expertise, which is the kind of stuff that we're doing. So that's really where we are.
What's happening now, through regulation, market motivations, and customer concern, is an impetus to maximize the transparency in this space so that we can understand why Google's algorithm is ranking this search above another, why this term above another term, why it conjugates these words with each other, and so on and so forth. So the real question is really being able to maximize transparency.
Chad Watt: So when I think about AI ethics, you guys were thinking about digital ethics, but a lot of the regulation, I think, and this is always the case with regulation and oversight, is a reaction to a bad outcome. We hear stories of biased AI or AI going wrong, or language models that are just inherently racist about who goes into a bar and what it says about them. With these sorts of models, how do you make AI ethics that's proactive and not just kind of reacting and bandaiding over things that we've seen go wrong?
Emre Kazim: Fantastic question, Chad. What I've noticed is that sometimes you can have, for want of a better phrase, a moral panic, where you have a real high-profile case of harm. It could be a manipulation of the democratic process. There are other cases where algorithms were used, for example in criminal justice, to determine how long people's sentences should be. People were saying, "Oh my God, these are horribly biased systems." They were sending people from particular demographics to prison, and recommending they go to prison for much longer than other, let's say more protected, or less discriminated-against, groups. And then there's another one, which is the use of algorithms in CV sifting: systematically recommending, let's say, men over women for a particular role.
So you've got all these kinds of cases which legitimately make people say, "Hey, hey, hey, we don't wanna have this. We don't wanna have algorithms being used in this way." And this precipitates those kinds of regulatory interventions, which are reactive. We can talk about that; for example, the EU laws are, in some ways, very similar to that. But the technology is probably always ahead of the regulation. So it's really about pressuring industry and maintaining our kind of consciousness: demanding that we wanna know how these systems are being used, what controls are in place, and how you can validate that these systems are indeed working well.
Chad Watt: How do you prove that these are the proper steps and that they are actually in place and functioning in an organization?
Emre Kazim: So I think the first thing to do is to take the problem seriously. The way to take the problem seriously is basically to put in good processes that are able to evidence and validate that you have taken the problem seriously. One way to do that, for example, is to ask, "Do you have a risk management process in place regarding the use of your algorithms?" Secondly, are you aware of the use of algorithms across your business? One of the things that we've found as a company is that we often go in and the first question we ask is, "How many systems do you actually have?" And I can tell you, seldom is anyone able to actually answer that question. (laughs) So we have to get that answer before we're able to do the risk management.
Chad Watt: So you mentioned risk management. As we've said, AI ethics is a new field; risk management is something that we have experience with. Are there some other established areas you can use to build the foundation of AI ethics and bring business practices or known quantities to build a foundation?
Emre Kazim: So one thing that we've found is that in a business context it's top-down. You need a mandate from the C-suite to say, "Look, take this problem seriously and act on it." I think the reason for that is that all companies, you know, big industry, have risk management processes in place. And risk management is generally considered a negative thing from an innovation and an operational perspective because it's-
Chad Watt: Right. It's compliance. It slows you down. Yeah.
Emre Kazim: Yeah. Yeah. It's a negative aspect. But what we've found is that we've actually had all of our fruits and positive engagements by working with the innovation teams in companies and saying, "Hey, hang on a minute. Being able to say you've got good control over your algorithmic systems, that you're able to justify that they're used responsibly, is a technical question. It's a good kind of thing to put into your technical architecture rather than just simply a compli-" It's like, yes, we do compliance, but we could do so much more.
Chad Watt: In our research report, Data and AI Radar, we found that companies with good confidence in their ethical practices were more satisfied with their AI outputs, and we had seven different measures of AI ethics. We kind of deconstructed this. One, are you getting clear, useful outputs? Two, are your algorithms explainable? Do you have processes to detect bias, and incentives to detect bias? Can you clearly show where the data came from? Do you have good data stewardship? And do the models make sense; are they understandable to the outside? Of those seven things, did we identify the right ones? Did we leave anything out? And which ones matter most to you?
Emre Kazim: Yeah. So I think it probably just depends on where the systems are being used. For example, the processes to detect bias are super critical in contexts where they're really gonna have an impact on customers' or individuals' life prospects. It just depends on where things are being used. But generally speaking, the first one, clear and useful outputs, critically and fundamentally ensuring that the systems are working, is obviously the principal objective. If you've got a system that's unreliable, that's consistently producing, if you will, results that just don't replicate, then you can't really do any of the other assessments. It's just not a good system. So good engineering is really at the root of all of this.
Chad Watt: Where do you see a major AI breakthrough? We've been talking about generative AI already. Is that the area, or is there some other area?
Emre Kazim: In 2023, I think the big event will probably be those generative AI models; let's see what their consequences are more generally on the ecosystem. But I think the big story in AI is probably the regulation that's coming in. It's likely that the EU AI Act will pass next year, and there are gonna be intended and unintended consequences for the AI marketplace. It could mean that ecosystems outside of the UK and Europe, Europe in particular, really take the lead on the innovative side of AI.
Chad Watt: And in the context of business AI, what progress do you expect from business AI, particularly among non-technology companies? Not the Googles and the Apples of the world, but these generalists who are getting into AI.
Emre Kazim: Yeah. I think that's really where the real growth is. It's the automation in companies that traditionally were not using such technologies. I think that's really being understood and seen. And we see through the numbers that AI adoption is moving up exponentially, and this is having a significant impact on technology adoption and on the implications of these systems in sectors outside of the traditional technology that we know of.
Chad Watt: Good. Good. All right. Emre, are you ready for a lightning round?
Emre Kazim: Go for it. Yeah. (laughs).
Chad Watt: Okay. Okay. Let's start in chemistry. What's your favorite element or compound?
Emre Kazim: Silicon. (laughs).
Chad Watt: Good answer. Philosophy, is human nature fundamentally good or evil?
Emre Kazim: Oh absolutely good. No doubt in my mind about that.
Chad Watt: Why? How? Prove it.
Emre Kazim: If you look at it from an aggregate level, if you look at it generally, I think if humanity was in its core nefarious, I don't think we would've got this far.
Chad Watt: All right. Ethics. What are our moral obligations to each other?
Emre Kazim: Foundational respect and empathy. So I'm a Kantian in that respect.
Chad Watt: Okay. Gotcha. Gotcha. (laughs). All right. AI question. Will we see artificial general intelligence before the turn of the next century?
Emre Kazim: No. No.
Chad Watt: Okay.
Emre Kazim: (laughs).
Chad Watt: That's fine by me. And in that same sort of timeframe, will society develop, agree, and abide by clear definitions of AI ethics?
Emre Kazim: No. Ethics is the realm of substantive moral difference and that will continue until no human beings exist anymore.
Chad Watt: So philosophy majors can't have a job.
Emre Kazim: Well, you know, if philosophers had definitive answers, then yeah, they would be out of a job, so probably we should remain in the realm of ambiguity.
Chad Watt: Got it. Got it. Thank you, Emre Kazim for your time and your insights today. This was fun.
Emre Kazim: Awesome. Thank you so much, Chad. Really appreciate it.
Chad Watt: This podcast is part of our collaboration with MIT Tech Review, in partnership with Infosys Cobalt. Visit our content hub on technologyreview.com to learn more about how businesses across the globe are moving from cloud chaos to cloud clarity. Be sure to follow Ahead in the Cloud wherever you get your podcasts. You can find more details in our show notes and transcripts at infosys.com/iki in our podcast section. Thanks to our producers Catherine Burdette, Christine Calhoun, and Yulia Debari. Dode Bigley is our audio technician. I'm Chad Watt with the Infosys Knowledge Institute signing off. Until next time, keep learning and keep sharing.
About Emre Kazim
Dr. Emre Kazim is co-founder and COO of Holistic AI, the leading platform provider for AI risk management.
Connect with Emre Kazim
- On LinkedIn

Mentioned in the podcast
- “About the Infosys Knowledge Institute,” Infosys Knowledge Institute
- MIT Technology Review
- “Technology and Taxation: How New Technology is Disrupting the Nation State,” by Emre Kazim