Responsible AI and the Future Workforce
Insights
- Responsible AI turns governance into a strategic accelerator, enabling safer innovation and stronger business outcomes.
- Transparency about how AI works and why it’s used is becoming a core driver of trust for employees, customers, and stakeholders.
- AI is shifting work from task execution to higher-order judgment, creativity, and contextual decision-making.
- While AI removes repetitive tasks, it simultaneously creates new roles in AI-embedded design, domain-driven development, and cross-functional innovation.
Moderator: Linda Gossage, AVP, Global Alliances, Infosys Ltd.
Panelists:
- Hemant Ahire, Worldwide Head of Technology, Amazon Web Services (AWS)
- Sanjmeet Abrol, HR Transformation, IBM
- Spencer Beemiller, AMS Innovation Officer, ServiceNow
- Vidya Gugnani, Americas AI Lead, AI Partner Product Management, SAP
- Dylan Cosper, Program Manager and Researcher, Infosys Knowledge Institute
This panel brings together thought leaders from leading technology organizations to explore how AI transformation is reshaping security, governance, and the future workforce. The conversation examines responsible AI adoption, evolving governance models, and the emerging human–AI collaboration paradigm that is redefining roles, skills, and organizational structures. Panelists discuss the ethical, strategic, and human implications of AI in the workplace and share their perspectives on what a future shaped by intelligent technologies will look like.
Explore more videos:
- Exploring AI’s Role in the Future of Work with Stanford's Dr. Ruyu Chen
- Scaling GenAI and the Future of Work
- AI Revolution: Is AI Driving a New Industrial Revolution?
- Engineering the Future: HDR’s Mitch Dabling on AI, Infrastructure, and Human-Centered Transformation
- Humans in the Loop: Foot Locker’s Approach to Quality, Innovation, and AI
Linda Gossage:
Thank you all for joining. I am really excited for this panel today. Our distinguished guests have actually flown in to be here with us. And I think this is a really opportune time for us to be having this conversation, right? When you look at all of the international collaboration going on around AI, governance, responsible AI, and setting the standards to make sure these agents are human-centric and unbiased, it has really shaped regulations throughout the world: Europe, China, the United States, and down to the state level, with California, Texas, and Utah, to name just a few, implementing very strict guidelines on AI use in employment and advancement. So I just want to kick it off with that. It's really exciting, and I think it's a great time to be having this conversation. We have representatives here today from AWS, IBM, SAP, our very own Infosys Knowledge Institute, and ServiceNow.
What does responsible AI mean to you in the context of your organization at AWS?
Hemant Ahire:
Thank you. Before I talk about responsible AI, I just wanted to mention, and everybody knows this, that if you look at generative AI as a technology, it is one of the most transformative technologies since the advent of the internet. And as technologists, as we start to roll out these new GenAI technologies that are changing our lifestyle, the way we work, and the way we play, it's becoming more and more important that it's not only about building new solutions with generative AI or agentic AI, but about how we do it responsibly. Now, at AWS, we are committed to developing AI responsibly. We take a very people-centric approach, prioritizing things like science, education, and, of course, our customers; that's most important. And we look to integrate those responsible AI dimensions throughout the end-to-end lifecycle of AI, from data collection to monitoring model performance and a lot of the post-deployment aspects. I would say that's our approach. The other thing is that we're looking to help all of our customers transform responsible AI from theory into practice. The way we do that today is through a set of tools, very prescriptive guidance, and the resources they need to be successful in today's market. I'm sure a lot of you are aware of the portfolio of AI and GenAI services from AWS, things like Amazon Bedrock Guardrails and the Bedrock evaluation capabilities. We also have a whole slew of SageMaker services, with SageMaker Clarify and machine learning governance within SageMaker AI itself. So there are a ton of capabilities we could talk about, but today I just wanted to highlight some of the key services where we have taken a very deliberate responsible AI lens. From our perspective, Amazon in general and AWS are always looking to innovate and invest in new features that move the needle on responsible AI. The first is how we enhance safeguards within one of our flagship services, Amazon Bedrock, which I'm sure you are all aware of, and how we improve trust, security, and transparency with it. We have a feature called Amazon Bedrock Guardrails, which essentially helps customers implement application-level safeguards, not just infrastructure-level ones, and leverage them based on their organization's specific AI policies and specific use cases. For example, one of the things we announced recently is automated reasoning checks as part of Amazon Bedrock Guardrails. That really helps uncover the hallucinations coming out of foundation models and reduce the number of factual errors. So that's one example. The other is how we improve the quality of generative AI application responses using some of the new capabilities, one of which is the Amazon Bedrock evaluation capability.
Within that, as part of our culture of continuous innovation, we have announced things like LLM-as-a-judge, which can run tests and evaluate other models at close to human-level quality, judging those models against the customer's own data sets. And the third is Amazon Nova, which I'm sure you have all heard about, our own next generation of foundation models. It has a slew of features, and I don't want to get into too many details, but they range from removing content that is irrelevant or harmful, to rejecting user inputs that come in the form of prompt injection, to filtering outputs that contain inappropriate content, whether that's malicious content, violence, or things like that. So, as you can start to see, these are all capabilities that the AWS GenAI portfolio of services, including Amazon Bedrock and others, provides to our customers from a responsible AI perspective. Happy to take more questions later if we have time, but hopefully that gives you a sense of how AWS is contributing today from a responsible AI perspective.
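For readers who want to see what an application-level safeguard looks like in practice, here is a minimal sketch, not an official AWS sample, of creating an Amazon Bedrock guardrail and applying it on a model call with boto3. The policy values, blocked-response messages, and model ID are illustrative assumptions; real guardrail policies should come from your organization's own AI policy.

```python
# Minimal sketch: creating and applying an Amazon Bedrock guardrail with boto3.
# Policy values, names, and the model ID below are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock")            # control plane: manage guardrails
runtime = boto3.client("bedrock-runtime")    # data plane: invoke models

# Define an application-level guardrail with content filters and a denied topic.
guardrail = bedrock.create_guardrail(
    name="hr-assistant-guardrail",
    description="Safeguards for an internal HR assistant (example only)",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    topicPolicyConfig={
        "topicsConfig": [
            {"name": "LegalAdvice",
             "definition": "Requests for binding legal advice.",
             "type": "DENY"}
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't share that response.",
)

# Apply the guardrail on a model call via the Converse API.
response = runtime.converse(
    modelId="amazon.nova-lite-v1:0",  # example model ID; availability varies by region
    messages=[{"role": "user", "content": [{"text": "Summarize our travel policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail["guardrailId"],
        "guardrailVersion": guardrail["version"],
        "trace": "enabled",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```

The point of the sketch is that the guardrail identifier can be attached to every model invocation, so the safeguard travels with the application rather than with any single model.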
Linda Gossage:
Absolutely. And we see that across the board, right? Technology partners are really building out solutions to address responsible AI. So I think it'd be interesting to hear from Dylan. We're not a products company, we're a services company. So what does responsible AI mean to you personally and to Infosys?
Dylan Cosper:
Yeah, absolutely. And, you know, it's a pretty large and complex topic, but I think myself and Infosys are really aligned on what it means. Responsible AI is the intentional development, deployment, and, of course, governance of artificial intelligence in a way that isn't just ethical and transparent, but also grounded in human values. And in a lot of the research that we've been doing, we see a common thread: companies that prioritize and implement responsible AI practices have, on average, fewer incidents, things like privacy violations, fines, and loss of brand trust. But it's not just the avoidance of negative consequences; they actually experience better returns from their AI initiatives. So this isn't about, oh, I don't want to get in trouble. It's something that can really drive your business forward. Another thing that we've seen is that leaders believe in responsible AI. Executives believe in it. In nearly 80% of the companies that we've studied, executives have said that responsible AI will be core to driving long-standing business value for their companies. And then there's adoption.
What's interesting about that is there's a lot of belief. Nearly 80%, that's almost everybody, right? There are a few stragglers. But when it comes to actually doing it, it's less than 40%. Less than 40% of companies have implemented responsible AI frameworks across the enterprise. That's quite a divide. And this isn't meant to be doom and gloom, but there is one near-universal finding that isn't great and that should push people toward responsible AI: nearly all of the companies that we've studied have experienced some sort of negative consequence from poor AI usage, either from having poor responsible AI practices or from simply not having them. Again, that's not to be doom and gloom, but to illustrate why there is such a demand and an opportunity for responsible AI adoption today and in the future.
Linda Gossage:
Thank you, Dylan. And I think that's a great segue into the question that I have for you, Sanjmeet. Under 40% of companies? Under 40%. So I think a lot of companies are kind of sitting it out, waiting and trying to figure out what responsible AI is, how to govern it, and how to ensure the right outcomes. We talk a lot about safety, transparency, and trust when it comes to responsible AI. So Sanjmeet, I want to ask you, from your viewpoint and IBM's viewpoint, what is AI governance and what are its key components?
Sanjmeet Abrol:
Yeah, it's a very related topic, and between everything that Hemant and Dylan said, you've already heard the components of governance. But essentially, if you had to separate governance from responsible AI, and it's not really a separation, governance is the operating system of responsible AI. Very practically speaking, it's how you put structure around AI design, AI development, AI monitoring, and AI evaluation. It starts with principles, like Dylan said: any AI that is deployed needs to be fair, needs to be accountable, needs to be ethical. Then it moves on to processes and roles. Who owns the model? How is risk evaluated? What are the checks you do before deployment, and what are the checks you do after? In terms of IBM, practically speaking, one of the core parts of governance is our AI Ethics Board, now called the Responsible Technology Board. This board publishes the framework for any AI that is deployed within IBM or to our clients. And they don't just say that your AI needs to be fair; they spell out exactly what fairness means and how it will be evaluated. They also publish a list of approved use cases for enterprise AI, and new use cases go through board review. The board is not just a person from software or a person from sales or an exec from IBM Research; it's a cohort. You've got people from many different disciplines coming together, looking at use cases, and discussing how they stand against the principles. But that's just the initiation part. Then, when you get to design and development, there are all the conversations Hemant described. Say you're building a solution on AWS: how will you manage hallucination, how do you assess risk, how do you do your bias testing, how do you monitor drift? And once you've deployed it, besides monitoring, we also publish AI fact sheets so there's a full audit trail of the purpose of the AI solution. Because often, not always, but often, what started as one idea becomes something else once it's deployed. You have to remember what the purpose was, you have to remember what data you collected, and you have to show what the performance is after deployment. So, coming back to it: governance is the operating system, and you have to be intentional about it, because your operating system will determine how responsible AI manifests in the enterprise. But do jump in if there's anything others here want to add.
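To make the fact-sheet idea concrete, here is a minimal, generic sketch of the kind of record such an audit artifact might capture. This is an illustration in Python, not IBM's FactSheets tooling, and every field name here is an assumption.

```python
# Illustrative sketch of a governance "fact sheet" record for a deployed AI use case.
# This is a generic example, not IBM's FactSheets product; all fields are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIFactSheet:
    use_case: str                              # the approved use case this model serves
    intended_purpose: str                      # what the system was built to do
    model_owner: str                           # accountable owner of the model
    training_data_sources: list[str]           # what data was collected and from where
    pre_deployment_checks: dict[str, bool]     # e.g. bias testing, risk and legal review
    post_deployment_metrics: dict[str, float]  # e.g. accuracy, drift, bias scores
    last_reviewed: date = field(default_factory=date.today)

    def audit_summary(self) -> str:
        """One-line summary for board review: purpose plus latest monitored metrics."""
        metrics = ", ".join(f"{k}={v:.2f}" for k, v in self.post_deployment_metrics.items())
        return f"{self.use_case}: {self.intended_purpose} | {metrics}"

sheet = AIFactSheet(
    use_case="AskHR benefits assistant",
    intended_purpose="Answer employee benefits questions from approved policy documents",
    model_owner="HR Technology team",
    training_data_sources=["internal benefits policies", "anonymized HR tickets"],
    pre_deployment_checks={"bias_testing": True, "legal_review": True},
    post_deployment_metrics={"answer_accuracy": 0.93, "drift_score": 0.04},
)
print(sheet.audit_summary())
```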
Linda Gossage:
It's a very complex governance system, right? And so I think when we talk about governance, IBM has created this whole infrastructure, for lack of a better word, to support it. But there are also a lot of conversations around who should regulate governance and responsible AI. Should it be self-governed? Should it be governed by regulation? So Spencer, I wanted to broach that question with you: who should be responsible for monitoring AI systems for ethical compliance?
Spencer Beemiller:
Yeah, let's talk about it. That was a perfectly articulated framework, extremely well put; I think it could go straight onto LinkedIn and help the many companies that don't yet have a governance protocol. I'd also ask the audience here: who here likes to be governed? Zero hands. We've got one. It's funny. We don't necessarily want to be governed, but we do want what happens after the governance comes into play. Think about the brake on your car. What is that? It's a governing agent that keeps you safe within the three-dimensional space of driving your car. You also take advantage, hopefully never, of being able to call 911. That's a governing body that we as a society have all agreed to, saying yes, we want this service to help us when a catastrophe happens. So what is governance in that case, really? It's something I heard from a customer once, and I'd love to coin it as a phrase: participatory leadership. Everybody in this room who adheres to the AI governance protocol set up within the framework of the organization is participating in that company's ambition to be an AI-forward or AI-first company. So the long-winded answer to the question is that it's really everybody's responsibility, in participation with how your leadership sets up the structure. Leadership has to say first: we're going to do AI, because it's happening in this world whether we like it or not, we want to stay relevant, and we want you to come with us. They set up the framework and the guardrails accordingly, and then every time you open up your laptop or your phone to type something in, you are participating in that conversation with your leadership, saying: I am also going to adhere to the AI governance set up in my organization. So it's really everybody. And the beautiful thing about AI is that it's finally breaking down the silos we've been talking about in organizations for decades. Even the digital transformations of the past haven't really broken those silos down and made us work together. Now these governance frameworks are requiring us to, and I'm actually quite excited about that.
Linda Gossage:
That's really interesting, because I've never heard somebody answer it that way. It's up to everybody, so it's up to all of us. However, we all come from different backgrounds, different cultures, different senses of what might be fair, what might be right, what might be wrong, what might be corporate policy. So Vidya, that leads me to your question. Given that we are all responsible for being responsible AI people, or bots, or whatever: how do we ensure transparency and fairness in AI systems and in how we develop them, given our multicultural environment?
Vidya Gugnani:
I think there are fundamental differences in how you perceive AI as a consumer versus as an enterprise. As consumers, many of you use Instagram, WhatsApp, and Google, and the moment you open them and type a prompt or ask a question, the first thing you see is an AI-generated response, or a prompt asking whether you want to leverage AI. So there's this in-your-face signal telling you that AI is in action; that kind of consumer transparency is already there when you're using a lot of these tools. It becomes a little more difficult at the enterprise level. When you say transparency, fairness, ethics, governance, in principle all of us at this table can say that IBM, SAP, ServiceNow, Infosys, and AWS have the best guardrails in the world, that they have all of these ethics and governance committees in place. But when it comes to an enterprise, the first thing the customer sitting across the table asks is: hey, are you going to use my data to train your models? I don't want you to do it. And we get this response from almost every customer who talks to us. That's why I say there's a difference between you as a consumer and enterprise data, because the amount of fairness and transparency you bring in requires regulation, whether that's your internal ethics and governance policy or a global one. It's like the United Nations: you also have UNESCO. For those of you who don't know it, UNESCO a few years ago laid out a 13-point charter for AI ethics and compliance. Whether it's human oversight, fairness, or responses that could lead to harassment or other negative outcomes, all of those are covered in these guiding principles governed by UNESCO. Most of our organizations adhere to this; they call it the AI ethics policy. We at SAP, for example, have to adhere to it in our product development, so that when a customer sees something they use from SAP, they know it carries something like an ISO certification: an AI-ethics-compliant tag on how it gives you a response, handles your data, and returns information when you give it a prompt. For those of you who have used the SAP UI, it's honestly not the best at most times; if you've navigated between the workbench screens and the others, life becomes really difficult. So we introduced something called Joule, which is what we call the AI for UX. Joule is now a prevalent chatbot, an interface that talks across all of these different SAP systems. And the first question our customers asked us was: where is Joule getting that information from? Can I opt in, can I opt out? So that's the kind of transparency you have to provide. Even when you as a customer purchase an AI solution from SAP, such as Joule, there is a clause, a clear declaration of intent, that we are not going to use third-party data to train LLMs, or take the LLM output plus your data and leverage it purely for our own benefit. But of course, most organizations, especially all at this table, do use data for development, and we have to declare that to you.
So we have to tell you if we are leveraging this information for our own product development, because we will. We need data for AI development; it literally builds on all of your information, your interactions, your prompt responses. We declare that, and we give you that information. So, to everything Spencer, Dylan, Sanjmeet, and Hemant said: when you talk about transparency, fairness, and responsibility, you do have these governance models in place. But I think the key word here is transparency. If you are a consumer, you need to know that AI is soliciting, handling, and working with your information. And that also comes with a degree of human oversight. You cannot simply leave it to someone and say, yes, everyone is responsible. In the end, you as the human are the one bringing in that transparency and fairness and adhering to that ethics policy, if that makes sense.
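As an illustration of the opt-in and declaration-of-intent point above, here is a minimal sketch of a consent gate a vendor might apply before reusing customer interactions for model improvement. The class, fields, and purposes are entirely hypothetical; real contracts and platforms differ.

```python
# Minimal sketch of a consent check before a vendor reuses customer interactions
# to improve its models. All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class CustomerDataPolicy:
    customer_id: str
    allows_training_use: bool      # the opt-in/opt-out flag declared in the contract
    declared_purposes: list[str]   # uses that were explicitly declared to the customer

training_store: list[dict] = []    # stand-in for wherever training data would land

def record_for_training(policy: CustomerDataPolicy, prompt: str, response: str) -> bool:
    """Retain an interaction for model improvement only if the customer opted in
    and that purpose was explicitly declared to them."""
    if not policy.allows_training_use:
        return False  # respect the opt-out: nothing is retained for training
    if "product improvement" not in policy.declared_purposes:
        return False  # the purpose was never declared, so the data is not reused
    training_store.append({"customer": policy.customer_id,
                           "prompt": prompt,
                           "response": response})
    return True

policy = CustomerDataPolicy("acme-corp", allows_training_use=False,
                            declared_purposes=["support"])
print(record_for_training(policy, "Where is my invoice?", "It was sent to your inbox."))  # False
```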
Linda Gossage:
You know what, that is absolutely spot on. I know we've talked a lot about responsible AI and governance, but how does all of that oversight, structure, and focus in this specific area impact innovation? So Vidya, I want to ask you: how do we balance all of these concerns and regulations while fostering innovation? And then Hemant, I would like you to join that dialogue with Vidya.
Vidya Gugnani:
I think it's a given reality that we now use transformative AI for almost all of our development. I think I heard from HR that you have your performance management system. I know at SAP, my managers have started generating AI responses to what I've been doing every single day and keeping track of it. We don't like it, but they do it. And then there's this scorecard that now gets generated in the background. So we're using what they call workforce innovation on AI, with these new tools being brought in for productivity. For those of you who have used SAP's Concur expense management tool: now, the moment you enter where you want to go, that you want to take a taxi or book a hotel, based on your journey type, you don't have too much freedom in how much you will spend. AI gives you an estimated cost that you need to stay under. So you see these kinds of workforce innovation tools being built and baked in, where you see the productivity. But at the same time, in the spirit of what we want to do in AI, I believe the biggest transformative factor is partnerships: our partnerships with key partners like ServiceNow, Infosys, AWS, and IBM. We encourage each other to use our platforms, of course, but I think it was about a week and a half ago that we announced our agents that work with ServiceNow to trigger the entire ticketing system. The IBM HR agent that you spoke about, Sanjmeet, is built to integrate with SAP Joule. We've launched a coordination program with AWS, which Infosys is a part of, where we actually fund programs and try to create these kinds of innovative, cross-transformative projects. So, like Dylan said, it's not all doom and gloom. Every time four things you did in a performance management system are now handled by an AI agent in the background, you have four other opportunities, through these expanded partnerships, that pop up, which you can leverage and take part in during your day. SAP has a partnership with everyone, whether it's NVIDIA, Mistral, OpenAI, Microsoft, you name it. That partnership, that engagement, is there. Why is it there? Of course, every organization is selfish; they want to have those logos. But it is also because, at the end of the day, you know that you cannot drive this innovation and productivity gain in AI alone. We need these players, the players at this table, to work with us. And maybe a shameless plug: SAP runs 85% of the world's transactions. Hey, if not us, who's going to drive productivity?
Hemant Ahire:
I couldn't agree more with everything you said. From an AWS perspective, and especially coming from somebody like me who has been a technologist all my career, this is such a timely and relevant question that we're all facing and grappling with: how do you balance innovation with that sense of responsibility and accountability?
At AWS, we definitely look at both of these aspects not as competing priorities but as a set of complementary forces that strengthen each other. At the end of the day, from an accountability perspective, it's not so much about what you do when things go wrong; it's more about how you build systems and processes that help you get ahead of issues and make sure they don't happen in the first place. That's going to be the key. Then you really start looking at how you build innovation into your programs and into your company's culture, to drive innovation that ultimately benefits our customers. Now, I will say a couple of other things from a first-principles perspective, looking at the AWS approach. We follow certain best practices. The first is building guardrails, which a lot of my fellow panelists have talked about. AWS's approach is to build guardrails directly into all of our AI, GenAI, and agentic services. Security cannot be a bolt-on; we all know that. It has to be an integral part of anything we design and architect. Going back to the example of Amazon Bedrock, we have a set of built-in tools that help you measure the performance of the models available in Bedrock, and you can also look at things like latency, accuracy, and bias. Sanjmeet talked about how you measure bias and things like that; those things are really important. And then, of course, our approach and philosophy at AWS has always been to give customers control over their data and over their AI applications. The way we look at it, we give you a single API for accessing all of the underlying first-party foundation models, like Amazon Nova and Titan, but we also have the third-party models, whether it's Meta's Llama, Cohere, or Anthropic; the list is ever-growing. At the end of the day, it's about giving customers choice, so they can fine-tune those foundation models to their specific needs and drive the use cases that help them. So that's really the fundamental approach. If I have to summarize, I will leave you with this thought: when it comes to responsibility and accountability, it has to be an integral part of the innovation customers want to build and of the benefits delivered to those customers. It cannot be an afterthought.
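To illustrate the single-API, many-models point, here is a minimal sketch using the Bedrock Converse API through boto3. The model IDs are examples only; which models you can call depends on your account, region, and model access settings.

```python
# Minimal sketch: one Converse API call pattern reused across different foundation models.
# Model IDs are examples only; availability depends on your account, region, and access.
import boto3

runtime = boto3.client("bedrock-runtime")

MODEL_IDS = [
    "amazon.nova-lite-v1:0",                    # first-party Amazon model (example)
    "anthropic.claude-3-haiku-20240307-v1:0",   # third-party model (example)
    "meta.llama3-8b-instruct-v1:0",             # third-party model (example)
]

def ask(model_id: str, question: str) -> str:
    """Send the same question through the same API, only swapping the model ID."""
    response = runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

for model_id in MODEL_IDS:
    print(model_id, "->", ask(model_id, "In one sentence, what is responsible AI?"))
```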
Linda Gossage:
Thank you for that. So, I'm going to switch gears a little bit with you, Dylan. I have teenagers. What are they, Gen Xers? Gen Zers? They don't trust AI. They don't like it, they don't use it, their friends don't like it, their friends don't use it. They are very anti-AI, and it's all because of trust. So can you talk to us a little bit about what we're doing here at Infosys to build trust among our employees, our customers, and our stakeholders?
Dylan Cosper:
Yeah. I would say, when the recording comes out, just rewind to where Vidya and Hemant were discussing, because I think a lot of that is what builds trust. It's about reaching understanding of what's happening with AI. At least here at Infosys, trust is the foundation, and I don't think that really changes whether we're talking about employees, customers, or stakeholders. In the current era, defined and driven by AI, and in the last decade, driven by constant change, trust isn't something that's just declared. I'd make the argument that it's easier to lose trust today than it is to gain it. It's just like with humans.
Linda Gossage:
Exactly. I think if companies truly want to earn trust, it's about consistency, transparency, and acting with human values in mind. I know we're talking about technology, but we can't lose sight of the human side, especially with how fast the technology is moving and how fast AI is evolving. You're going to hear a lot of advice today, and the advice keeps changing as we all learn more. I think it's about trying to reach understanding, and one way of doing that is keying in on AI explainability. Do you have a way of communicating to your customers, your employees, and your greater stakeholders why you're using AI, how you're using it, and what's informing it? There's a lot of opportunity if you can really reach that level of understanding, particularly through AI explainability. Just take customers, for example: in a recent study of ours, we found that over 50% of consumers will buy more products from a company whose AI they perceive as ethical and transparent. This isn't all about making the big bucks, but what it shows is that responsible AI isn't just about insulating yourself from consequences, from keeping your hand from being smacked. It can truly turn trust from just something you have into a brand differentiator.
I think one of the most difficult areas when we talk about trust in AI is with consumers. I'm a victim of it: I Google a fishing pole for my son, and all of a sudden all I'm getting is fishing pole ads in my feed. So for me as a consumer of AI, that's probably going to be one of the most challenging spaces we need to address. Okay, we have talked a lot about governance and responsibility, so let's shift gears and talk about the future workforce. I would love for Sanjmeet to kick that off. I worked at IBM for 17 years, and for the last seven years I was there, IBM was one of the first pioneers to integrate AI into its internal HR systems. It was a big adjustment for me as an IBM employee. I was used to talking to a human; I could pick up the phone and call my HR rep. All of a sudden I'm being told to click on the bottom right-hand corner, a chat box will come up, you ask your question there, and you'll get your answer. 70% of the time the answers were right, but since it was in beta, 30% of the time it was, what is this? Anyhow, it has come a long way at IBM, and it would be great, Sanjmeet, if you could talk to us a little bit about how AI is reshaping what you're doing at IBM and the future workforce.
Sanjmeet Abrol:
Yeah, before I talk about IBM, let's talk about what's happening right now, what's been happening since ChatGPT came around. Everyone went ahead and started writing emails with it. I remember I used to have calendar blocks: okay, I'll use this time to draft that email I was having a lot of difficulty putting my thoughts into, and it has to go to this very tricky SVP, and I have to make sure the tone is right. And now it's: hey ChatGPT, I want to send an email to an SVP, and I want to make sure it doesn't come across as too blunt a declaration that I don't agree with you, but I want to make this point. And I have the email. I got married last year. Almost the entire wedding planning, from my friends flying in from the US to India, to what I needed to discuss with the decorators, to what I needed to negotiate on, all of that happened with the help of LLMs. So when you have a tool that saves that much time, it's obvious that what it means to be skilled at work will change. It can't be that you have these technologies at your hands that help you move with speed and boost productivity, and I still judge you on the same parameters. That can't happen. The re-skilling, the definition of what it means to be skilled at work, will change. What's happening right now is that across the workforce, we are moving away from process execution to being creative, to being able to judge things. If you were a master at doing something, you knew how to do it; now the mastery is about knowing what to ask the AI so that it's able to do the same thing, and then validating whether the output is good. I heard that earlier, right? We were trusting ChatGPT blindly, and then we realized it's not doing what it's supposed to do. So the art is in how you put context front and center: the context of your work, the context of work that happens in the enterprise. Knowing that context is extremely important. And then, how do you judge or evaluate the work when it's done by another system, in this case AI? If I had to put that into a skills framework, three things come up, and none of them will surprise anyone here. The first is AI literacy: how good are you at utilizing AI systems? I lead a team of data scientists and product managers. The starting entry level for data scientists at IBM is a band six. Take a band six or band seven data scientist joining my team today, given a project with, say, 900K rows of data. Earlier, maybe in 2021 or 2022, I would have expected them to have some experience but to learn from the seniors on the team that if they're using pandas in a Jupyter notebook to process 900K rows, it'll take a lot of time. Today, if they observe that it's taking a lot of time, I expect them to have that conversation with our internal LLM platforms, and I know for sure that the moment they say they're working with 900K rows, the first thing that'll come back is: hey, why are you using pandas? Why aren't you using Polars or a Parquet file to speed up the process? So that's my expectation for the band sixes and band sevens who are just coming in: use AI as a sounding board. So that's AI literacy, and that applies at different levels of how you utilize AI systems. The second skill set is domain expertise.
And what that means is that it's not just the technical or technology teams that need to sharpen their tools and know how to use AI. Domain expertise is fundamental. When we were building AskHR, and I'll come to the AskHR example now, we were working with the business leaders across the board, the HR leaders. One key input we wanted from them was the personality of the AI. That cannot come in if you do not understand your domain well: the benefits leader, the travel and expense leader, the compensation leader. The culture they have, the essence of the compensation strategy at IBM, needs to be translated into these AI systems through domain expertise. Otherwise, yes, some computation will happen, and yes, some answers will come up, but the systems won't flow the way they are supposed to if the context isn't there.
And then the third is all the human-centered capabilities we know humans are good at. Those are even more relevant now. Ethical judgment, for example: bringing ethics into these spaces. We've talked so much about responsibility and governance; those ideas came in because there were, and are, brilliant humans who said, we can't just hand things over, there need to be these systems, these human values. Then connection. Vidya talked about partnerships at the grand scale. At the team scale, I'll bring it back to AskHR again. Yes, we involved all the business leaders in the HR space when we were building it, but we also brought legal to the table. They gave the input: have you thought about the apologetic nature of generative AI? If you're putting out something like an AskHR agent, which represents HR, you can't have it apologizing in situations where we do not train humans to be apologetic, where we listen out of empathy, because an apology from the system can be interpreted as an apology from IBM. And that input couldn't have come in if we hadn't had this fusion of cross-functional teams working together, because no one person, no one subject matter expert, can build these systems. So, coming back to Linda's question about reskilling: reskilling is happening, the tools are amazing, but there are certain skills that are becoming extremely relevant in the workforce these days.
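As a small aside on the pandas-versus-Polars suggestion in the answer above, here is a minimal sketch of what that switch can look like. The file name and columns are hypothetical; the point is simply that a columnar Parquet file plus Polars' lazy engine typically handles several hundred thousand rows much faster than row-oriented work in pandas.

```python
# Minimal sketch of the suggestion Sanjmeet describes: switching from pandas to Polars
# (and a Parquet file) for a ~900K-row dataset. File name and columns are hypothetical.
import pandas as pd   # to_parquet requires pyarrow or fastparquet to be installed
import polars as pl

# One-time conversion: CSV read with pandas, written out as a columnar Parquet file.
pd.read_csv("tickets.csv").to_parquet("tickets.parquet", index=False)

# Polars lazy query: only the needed columns are scanned and the plan is optimized
# before execution, which is typically much faster than iterating rows in pandas.
result = (
    pl.scan_parquet("tickets.parquet")
      .filter(pl.col("status") == "open")
      .group_by("team")
      .agg(pl.col("resolution_hours").mean().alias("avg_resolution_hours"))
      .collect()
)
print(result)
```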
Linda Gossage:
Thank you for that. And unfortunately, there are skills that corporations are viewing as replaceable. We've seen that very recently, right, with all of the layoffs that have been happening, most recently at Amazon, where those resources are being replaced with AI. It makes me sad. But Spencer, how do you see roles changing or being augmented by AI, and how is that impacting our workforce?
Spencer Beemiller:
It's not all doom and gloom until you get the question of who's going to lose their job. Well, the good news is we've done a lot of research on this. Those of you with your laptop open, if you Google, or ask ChatGPT about, ServiceNow workplace skills, we've done an extensive index report surveying 5,600 jobs across 10 different countries and nine different industries to ask that question: what's still relevant, what's changing, what's shifting? We found that, and this number is a bit crazy, eight million of those jobs are going to be affected in some way by AI, with some augmentation component to them. And you can do your own independent research on this: Gartner has plenty on it, and the World Economic Forum has plenty of numbers as well. The maximum I've seen is that around 80% of the jobs we know as white-collar jobs will be affected in some way. Now, that can feel a little scary, especially if you're sitting in a job with a lot of repetitive manual work. Our CEO likes to call it the drudgery of the day, the things we don't love about how we do what we do. But a few of these roles actually sit in a warmer light. Take a change management or implementation specialist; this gets specifically called out in the research. A fair amount of that work is repetitive, but the expectation is that about 15 of that person's weekly 40 hours gets saved. So what does that do? Well, if you have a proper upskilling component in the organization and these individuals understand they have a growth path, they know they can take those 15 hours and double down: I can actually get more efficient with the change process, and I can retroactively go back and see how I could do it differently and better with the AI tools at my fingertips. I can plug that same change management process in, say I can do it better by X, Y, and Z, and run what-if scenarios. So now that change agent or implementation specialist in the software world is that much more efficient and able to handle that many more change processes or implementations. The trickier ones are secretarial scheduling, which will almost entirely be handled by agents, personal agents, or Copilot-style agents, and the data entry roles where somebody is, to use the old-school term, swivel-chairing, or what I like to call app-switching in the modern world, between different systems to put one entry in here and another in there. Those probably will not exist within a matter of years. And what's important there is, back to the leadership component: if you don't have a way for those people to see their skills growing into the next generation of leveraging these tools at their fingertips, they're going to be scared. They might quit of their own accord, or they might trickle out into retirement. And that's a fearful state. None of us really wants to work for a fearful company, I can imagine.
But if we have the leadership in place to say we're AI-first, we're AI-forward, whatever terminology you want to use, and we care about you as the workforce working for us in service of the customer, then we have multiple options for what you're going to do now that that data entry is automated. And as a result, you might have a higher-touch interaction with the customer you're talking to. Take a medical scheduling nurse, for instance, who previously might have been calling insurance to make sure you're covered for the doctor's appointment you're about to have. Now some of that, or most of it, will be handled agentically, so that nurse can spend that much more time welcoming the patient in and making sure they understand everything they're getting into. It's back to that humanistic approach. Those human skills are going to be even more valued and even more needed in a world where we're leveraging these agentic components at our fingertips. So it'll just expand from there.
Linda Gossage:
That is a great example, because that's one place where I see AI actually improving human interaction. I went to my annual physical exam, and my doctor at Stanford said, is it okay? We're using AI now; it's going to transcribe everything you say so that I can actually have a conversation with you. And I was like, absolutely.
Spencer Beemiller:
What a better experience, right? I was at the HIMSS conference this year in Vegas; HIMSS is the largest healthcare technology conference in the world. And one of the biggest topics there is actually scribing in the exam rooms. If you've been to a doctor's appointment in the past five or ten years, most of the time they kind of have their back toward you while they listen to you and type away, recording everything you're saying. Something as simple as turning on a listening agent, a scribe, in the room now allows that doctor to be entirely present with you.
Linda Gossage:
It was a completely different experience; it was wonderful. Which is a great segue, so thank you for that. Vidya, for you: yes, a lot of jobs are being eliminated, and the swivel-chair function will no longer exist, but there are also opportunities for AI to actually create innovation, or how should I say this, opportunities for professional growth and development. So from your perspective, what are some of the jobs and opportunities that can be created in parallel with the jobs being eliminated by AI?
Vidya Gugnani:
I'm going to speak from my personal experience, but before I start, there's one quote that stays with me; I think a friend said it. I don't want AI to do my work so that I'm left doing the laundry; I want AI to do my laundry so that I can focus on my work. So I think the paradox is that with a lot of the AI automation going on right now, the nature of what we do at work is being questioned. That's true for me as well. I've switched between three geographies. Most of us who've been at SAP started as ABAP developers. Nobody knows the word ABAP anymore in this world, and the ABAP developer got displaced because we created tools like SAP Build Code and ABAP development in Eclipse, where much of the code is AI-generated. What consultants were previously paid for, you now get at the click of a button: you type, and all of the source code gets generated; you want to write a test class, and that gets generated too. So in that sense, the role I knew, the role I was trained and conversant in, with the maturity of domain expertise that Sanjmeet talked about, in one sense no longer exists. However, I am the one who knows the source code and the function modules that go into an S/4 system, and therefore I use that knowledge. I use it to develop the new AI functionality, for example the analytics within SAP, the SAP Analytics Cloud. Some statistics from within SAP: SAP has not been a stranger to these recent reorganizations and changes, and a lot of our skill force has been impacted, both globally and regionally. However, for every one position we've seen impacted or displaced, statistically there have been at least four new opportunities created. What have those new opportunities been? Whether it's under product development or under understanding how AI itself can be embedded, and I think embedded is the word we use at SAP, in all of the functionality we ship, that's the role most of our skill force is now assuming. Whether you're a long-time developer or acclimatized to many SAP tools, our roles and functions have evolved toward leveraging what we know in order to apply AI to the same. There's nobody better than us, the humans sitting here, who knows more. AI does what we as humans tell it, and I think sometimes many of us forget that a bot, an agent, a business process can ultimately only do what this brain is thinking. If I have to tell it, that is a job, that is a skill. So ultimately those additional new jobs being generated, for innovation and for creativity, are tapping into what is embedded within your skills to create something embedded within AI in each of the tools you use. For me, the biggest differentiator is this: I think we are all fearful of the changes, and there's no harm in accepting that, no shame in saying it. This is a new world. But beyond the training, the literacy, I think that's the word you used, and the domain expertise you carry, you have to believe that your skills, your knowledge, and your experience will carry you forward in generating those agents, generating those business processes, and identifying where AI can actually lead to value creation within your organization.
Most of our customers who are sitting with you, working with you, are not setting out on the AI journey thinking, my first job is to take away 10,000 people. But yes, their job is to ask, how can I reduce $10,000 a day in my operating expenses? So if, through upskilling and reskilling, the additional jobs being generated can provide that value, that is where the whole workforce shift will come in. Now, I was saying this to my son: we sometimes use the words job re-skilling and job up-skilling when we talk about AI because HR mandates that we take three training and certification courses on AI. I think she'll hate me after this, but mostly it's still a way for you to understand that this embedded functionality, this feature shipment from you as an individual, as a human, on AI, can only come when you are training yourself and attuning your mind to become relevant to the new realities of the world. When I go to a job opening within SAP now, or you open LinkedIn, almost all of the positions you see are whatever you did earlier, plus AI. If it's HR, you have to know the AI systems that are there. If you're in supply chain, you need to know the AI systems or AI agents that are now relevant. That is a job. That is a skill. That is the workforce innovation we are looking for. So it's a job generator, in my opinion. It's not just job re-skilling or job up-skilling; it is job generation that we need to look at when we deal with the overall impact of AI on workforce innovation.
Linda Gossage:
That was absolutely beautiful. I actually had additional questions after that, but we are out of time, and I thought that was the perfect wrap-up and an inspirational note for all of us on the opportunities that AI actually opens up. Thank you for that, Vidya. That was a beautiful close.
Yes, so HR needs to generate jobs.
I want to take the next few minutes and ask each of you for one sentence, something you want this audience to take away from today's amazing panel discussion. Hemant, I'm going to start with you. One sentence.
Hemant Ahire:
I'll say: don't hesitate. Play with the technology, get your hands dirty, play with it and learn, and don't try to be perfect. Start small, get some small wins under your belt, and then just continue to iterate until you get to the point where it meets your expectations and delivers the benefits for your customers.
Linda Gossage:
Thank you, Hemant. Sanjmeet, please.
Sanjmeet Abrol:
I would add on to that: just be curious and optimistic, because if you're able to learn all of these things, you're already ahead. And look deep inside yourself; there's so much that everyone has already gained in the work experience we've had. Just be curious enough to apply what you know, and know your value. Don't wait for anyone else to define your value. You should be the one setting it.
Linda Gossage:
Vidya?
Vidya Gugnani:
I think I mostly covered it, but again, I'll leave you with that one statement. It's not human in the loop. The human is the loop in AI.
Linda Gossage:
Dylan?
Dylan Cosper:
I would say one of the things that has been a through line in this whole conversation, from responsibility to workforce readiness, is transparency and trust. Both are extremely important when we talk about AI. But if there's one thought I want to leave everyone with, it's that transparency and trust are different. Transparency is about telling people what is happening; it's really the first step on the journey to trust. Trust involves not just telling people what's happening, but explaining how it's happening, what it means to them as an individual, to you as an employee, to you as a customer, and then measuring and mapping the benefits and the risks of that thing, be it a new AI use case, a new chatbot, or something else. But don't confuse the two with one another. There's a difference.
Linda Gossage:
All right, last but not least, Spencer.
Spencer Beemiller:
To round it out simply: AI won't replace people. AI is going to take away the tasks that weren't meant to be human in the first place.
Linda Gossage:
Love that. All right.