Episode 11, Part 2

AI Beyond the Hype - The Long Game, Operational Wins, and Leadership Shifts



Transcript

  • Venky Ananth
    0:09
    Venky Ananth:

    Hello and welcome to the next episode of PaceSetters. I'm Venky Ananth, and I lead the healthcare business for Infosys. Today, I have a special guest, Paul Hlivko, Executive Vice President and CIO of Wellmark. Well, a lot of firms out there are treating AI as SaaS, software as a service. Why is that a problem?

  • Paul Hlivko
    0:33
    Paul Hlivko:

    Well, software as a service is constructed under the premise that I can solve 70 to 80% of a given business process, either in an industry or across industries, and I can sell that product pretty widely into businesses, right? Take HR processes: if I get them 70 to 80% correct, I can cut across every industry, like Workday, right? Or CRM processes, whether it's Salesforce or Microsoft, if I get them 70 to 80% correct, I can sell that product across all industries. AI is going to show up differently. 70 to 80% correct on a business process, I don't think it's going to work. I think it'll work in some use cases where that level of proficiency is acceptable. But in many cases, the more difficult the task, the more complex the business process, the more fine-grained and niche the actual industry is, the more work there's going to be in actually deploying AI for it. This is exactly why the economists at MIT suggested that as the complexity of tasks increases, it's going to be harder to deploy profitable automation through AI. So I think that is different from software as a service. Even for a complex process, as long as you can get 70 to 80% of it correct, you can broadly deploy SaaS solutions. It's a different problem space, and we're not going to see AI show up in the same way, because the last-mile problem of getting it to work for a specific company in a specific environment is going to be actual heavy-lifting work, not just configuring a SaaS product.

  • Venky Ananth
    2:21
    Venky Ananth:

    But you don't think SaaS products will have AI features natively?

  • Paul Hlivko
    2:25
    Paul Hlivko:

    They will, yeah.

  • Venky Ananth
    2:26
    Venky Ananth:

    Right. That's another way to manifest AI into your business.

  • Paul Hlivko
    2:30
    Paul Hlivko:

    Yeah. So SaaS products are going to have AI native. They already have AI-native features. That's not a debate.

  • Venky Ananth
    2:36
    Venky Ananth:

    Right

  • Paul Hlivko
    2:37
    Paul Hlivko:

    The question is, if you look at all tasks performed, what portion of them are actually going to apply general purpose AI consistently across all companies in an industry, right? So non-differentiated things, or across industries in total. I think that's going to end up being a smaller percentage. So if you're looking at a given enterprise, the large volume of their AI use is still going to come through SaaS partnerships or software providers, for sure. And a portion of it is going to be built by the individual company. But given the business you're in, the integration of those two, even with enterprise software, is a large lift. Buying commercial software today and getting it to actual production value is a big lift. AI will be no different.

  • Venky Ananth
    3:27
    Venky Ananth:

    Absolutely. It brings me to the question, and we've discussed this in the past, you always have this view that AI is a long game. So what does it look like? How long is this long game going to play out? And what do you mean by it's a long game?

  • Paul Hlivko
    3:46
    Paul Hlivko:

    Well, because the diffusion is going to take so long, because it's a general purpose technology, we need to focus on what we can do now, right? We should focus on training people rather than focusing all of our energy on training models. We should focus on building systems, and I don't mean systems as in software, I mean systems in an organization that allow change to happen and allow change to be absorbed effectively. We should focus on building those types of systems rather than building demos and pilots. And we should focus on margin performance rather than the moon shot of the next AI advancement. So there's this whole set of things that organizational leaders can do. Yes, still pilot, because that's part of your learning cycles and cadence. But there's a lot of inherent mindset and skill development that we can focus on that's durable, whether you're building that type of environment and culture for AI, or for the next emerging technology, or just normal technology that you're trying to deploy in your enterprise. Those muscles are useful in general.

  • Venky Ananth
    4:58
    Venky Ananth:

    So Paul, if AI is really, as you call it, almost becoming a commodity layer, where is the competitive edge for enterprises? How can you drive that secret sauce for yourself as a firm? What's your view?

  • Paul Hlivko
    5:17
    Paul Hlivko:

    Well, to back up, AI is going to be pretty evenly deployed, which is how most GPTs, general purpose technologies, work. Competitive advantage has almost never come from a single technology or a single capability in a company. Generally, how competitive advantage works in a healthy market is that it's a collection of unique capabilities at varying degrees of maturity that are assembled together. And those unique capabilities in a company, at a certain assembly, create a specific advantage in a segment of a market, right? Finding a lonely place in a market tends to be where companies catch customers and win. So it's not one thing. And the reason I think AI is not going to be a primary factor is that most GPTs are evenly dispersed across companies eventually. The Internet is evenly dispersed, electricity is evenly dispersed, and AI is likely to be the same. Then you ask, are you able to provide other forms of protection? Patents would be a good form of protection, sure, and there are certain cases where you could possibly patent advances in this space, but math can't be patented; it never could be. And a large portion of AI is math. The other thing pushing against AI being a competitive differentiator is that AI's history has been open source: in the 50s, 60s, and 70s there was a community around artificial intelligence, and educational institutions and government research backed it. Pulling away from open, collaborative, community-based scientific progress, which is how AI has worked over the last 50-plus years, to all of a sudden close the doors and have only one person or one company benefit from it, I don't see that happening. History has demonstrated it won't.
    And then, treating it as a general purpose technology, history has also proven that a GPT doesn't typically show up as a competitive advantage.

  • Venky Ananth
    7:40
    Venky Ananth:

    Next one: operational wins over moon shots. Why do you think that's an important mindset? Because it sounds almost contradictory to the earlier point that was made, which is that this is a long game, but then you're also shooting for quick wins. So just help us understand that.

  • Paul Hlivko
    8:04
    Paul Hlivko:

    Well, I think any given organization and leader needs to figure out: are they in the moon shot business or are they in the operational wins business? And it's a ratio; it doesn't have to be 100% one or the other. That's back to the question of whether you're in the scientific discovery business, which is moon shots, or in the actual innovation business, which is operational wins. I think most enterprises in a mature market are in the operational wins business. Understanding moon shots and using them as a way to generate motion and excitement in an organization is entirely useful. But I think we need to think about speed to learning in this case rather than speed to market. We need to think about learning cycles, how fast we're learning in an organization.

  • Venky Ananth
    8:53
    Venky Ananth:

    Speed to learning, rather than speed to market.

  • Paul Hlivko
    8:54
    Paul Hlivko:

    Yeah. I think this is a way of thinking about operational wins, right? If you think more about how fast you're learning rather than just about how fast you're getting to market, and they're both important, you don't lose sight of building muscles in an organization that can be useful not just for AI but for all sorts of change management. That's important to how you can ultimately compete: the faster you can inject change, right? So a learning cycle rather than a release cycle. We focus a lot on how fast we can build and release things, great, but how fast we can learn allows us to react to changes in our market. The healthcare market is always moving broadly, technology markets are always moving at a fast pace, and the intersection of those two requires companies to be able to do the sensing of what's happening in the market, and the sense-making to figure out what to go do about it. If you build those muscles in an organization, you're going to get more operational wins. There isn't any doubt about it. Where you get fewer operational wins is when you're chasing a single moon shot and you're not thinking about the actual business value of the technology that's being introduced.

  • Venky Ananth
    10:08
    Venky Ananth:

    That's a good segue, Paul. How do you build these teams? How do you build that culture? Because like you said, it's not just the technical aspect of it, right? There's an emotional dimension to it. There's also a safety net that you need to lay in for an organization. So tell us, what have you done, or what's your view on this?

  • Paul Hlivko
    10:32
    Paul Hlivko:

    First off, I think you have to acknowledge how people are feeling, provide legitimacy to where people are at, and meet them wherever they happen to show up. Some people will be on the I'm-ready-to-go end of the curve. Some people are going to be a bit skeptical, and some will be on the other end, wondering and thinking about the loss aversion, back to one of the other topics we were covering. So meet them where they're at. Empathy matters a lot in organizations. Psychological safety matters a lot in an organization, and you cannot create that overnight; that is a bunch of intentional choices by leadership to create a safe learning environment. And AI just happens to be the next thing that's showing up in an enterprise. The other thing I would suggest on the people front is find your builders, find the creators. Every organization has them. What's fascinating about artificial intelligence is that the builders aren't going to be entirely the engineers sitting in the technology organization anymore. This technology is allowing other people to be creators and builders. It's democratizing knowledge, so you no longer have to wait in a queue to go talk to such-and-such expert. You can use this technology to refine your skills and build new capabilities for an organization, whether you're in an engineering builder capacity or a citizen-development building capacity. So I would encourage organizations to find their builders, because those tend to be the people who are finding ways to create value, and other people tend to follow them or take their lead and come along for the journey. But you've got to start with empathy, and then you've got to find the people who are actually leaning into it and are going to be your builders of the future.

  • Venky Ananth
    12:30
    Venky Ananth:

    And what about leaders? What are the biggest shifts that they need to make? And I'm talking really about the business leaders, technology leaders, operational leaders, finance, HR. We have all these groups, and each of them has its own leadership. What are the biggest shifts that they need to make from their perspective?

  • Paul Hlivko
    12:54
    Paul Hlivko:

    Yeah, let me talk about technology leaders to start, but I do think this can apply more broadly, and I'll blend that in. Leaders in technology in particular, and this is for CIOs or their leadership teams, need to shift from controlling to orchestrating. If you think about the history of the technology profession, there's been a lot of centralization and control, and that comes from a variety of reasons. Some of it's regulatory, there are certain standards that need to be met, and some of it's risk management oriented. Those all still need to be met, but we need to think about our roles now more as orchestrators. What I mean by that is, if we're going to have more builders in different parts of the organization, we need to help orchestrate transformation and innovation rather than control it, right? We may not be the only ones able to build software anymore; that will become a more ubiquitous task, whether it's AI helping do it or fourth-generation low-code, no-code solutions helping do it. So shifting from control to orchestration is going to be one of the big factors. Also, leaders used to provide answers, right? There would be some ambiguity and we would generate answers or solutions to the ambiguity. I think we need to shift from the answers we're providing to the questions we're asking, because often, especially when you're working with generative AI, the better you can structure a question and iterate on the problem definition, the more insight you're going to get out of these models anyway. So focus less on answers, because guess who has a lot of the answers now: generative AI models. Focus more on the quality of the questions and the quality of the problem definition. And then the other shift leaders are going to have to work through is that speed will always matter,
    but when you're dealing with emerging technology, durability matters too, right? You have to balance how fast you're going with how durable the solutions you're creating are for your customers, for the enterprise, for your employees. So out of those three, I think control to orchestrator is the biggest shift, because knowledge is so diffused with these models and almost everyone has access to almost every profession. Being able to orchestrate rather than control is going to be one of the biggest shifts for leaders.

  • Venky Ananth
    15:31
    Venky Ananth:

    Control to orchestration. Do you think that's going to be hard, or what's your sense?

  • Paul Hlivko
    15:35
    Paul Hlivko:

    I mean, human nature. I mean, we all love controlling things.

  • Venky Ananth
    15:38
    Venky Ananth:

    Exactly.

  • Paul Hlivko
    15:39
    Paul Hlivko:

    Yes, we're all going to struggle giving it up, there's no doubt about it.

  • Venky Ananth
    15:43
    Venky Ananth:

    And you get to become a leader by knowing a lot of things, right? And now you're suddenly saying you're no longer the one who's telling how things should be done or what should be done.

  • Paul Hlivko
    15:54
    Paul Hlivko:

    There are different ways to get to a leadership role. Yes, in some organizations knowing a lot of things is a path to leadership. I think that's the wrong path, to be frank. I think the ability to lead through others and orchestrate value, even in the past before knowledge was available to everyone, is a stronger set of competencies than the knowledge you possess. The knowledge you possess does not exist in only one person, right? Even in the past, you could go find someone else with a roughly similar knowledge set to mine. How you lead and how you get to value matters much more, and I think that's going to get further emphasized by the fact that knowledge is now truly ubiquitous.

  • Venky Ananth
    16:35
    Venky Ananth:

    Correct.

  • Paul Hlivko
    16:36
    Paul Hlivko:

    Like, you can get access to almost any profession now through the use of a Gen AI model, right.

  • Venky Ananth
    16:43
    Venky Ananth:

    All right, let's talk about healthcare then. This is probably the longest I have gone in a PaceSetters episode without speaking about healthcare at all. So let's talk about it. You mentioned emerging tech, especially AI, and real-world complexity. If you take a healthcare lens to it, where do you see the biggest source of friction today?

  • Paul Hlivko
    17:10
    Paul Hlivko:

    That's a great question. Let me use an example. I think you'll remember this, and I'm sure everyone that listens probably will too: IBM Watson in healthcare. Almost everyone knows it was great at Jeopardy, and then it went after healthcare, right? Beat Ken Jennings, now it's time to go solve healthcare. And it doesn't exist today; for those that don't know, it was sold for parts. That wasn't a failure of the actual technical progress. IBM Watson in healthcare, technically, as pure AI and tech, was a success. Where I think IBM failed is everything we've been talking about: it didn't understand the market, didn't understand the people it was actually being sold to, didn't understand the friction around adoption and adoption cycles. The difficulty of taking an invention to innovation, that's a perfect use case, right? It was great as an invention. It was terrible as innovation. And timing matters too; maybe it was just ahead of its time as well. Timing is an important aspect to think about: are all the ingredients lined up for the end consumer, whether in that case it's the doctor or the hospital administrator? Were they ready for artificial intelligence, or were they still struggling with basic data analysis and infrastructure? So I think timing matters a lot. The other thing in healthcare in particular that I think we will collectively push back and forth on is do no harm, which is healthcare, versus move fast and break things, which is tech. There's a push and pull: we want to ensure that as we're adopting new technologies, whether it's a pharmaceutical innovation or AI in clinical workflows, do no harm still holds true,
    and arguably is improved, which I think it can be in many cases, balanced against move fast and break things. Tech is constantly pushing the envelope, and we need to make sure that risk and safety and patient outcomes are top of mind. That's a great way to frame this.

  • Venky Ananth
    19:39
    Venky Ananth:

    And when you frame it like that, where Watson was an invention but the innovation piece was missing, it kind of, you know,

  • Paul Hlivko
    19:52
    Paul Hlivko:

    It's black and white.

  • Venky Ananth
    19:52
    Venky Ananth:

    Yeah, Exactly. Right.

  • Paul Hlivko
    19:53
    Paul Hlivko:

    It's hard to argue against, correct. I'm sure there are stories inside of IBM that I'm missing.

  • Venky Ananth
    19:58
    Venky Ananth:

    Exactly.

  • Paul Hlivko
    19:58
    Paul Hlivko:

    They're welcome to tell those stories and we would probably equally be fascinated.

  • Venky Ananth
    20:03
    Venky Ananth:

    The other piece I want to talk about, Paul, is this. When we speak about AI in healthcare, most people automatically tend to go towards molecule discovery, protein structures, clinical decision support, the popular ones. But what is also overlooked is how we can bring in AI to drive far more impact in the industry in core administration, in terms of benefit administration or operational areas. Just share your perspective on what we are missing by not giving this area enough focus, if you will.

  • Paul Hlivko
    20:50
    Paul Hlivko:

    Well, I can comment on that area, but let me share some thoughts on where I think it's going to matter the most. Our aim has always been to improve the quality of life and extend life, to keep it super simple, and I think that's where AI is actually the most exciting in healthcare. Imagine being able to find every possible permutation of a protein structure in a drug discovery process. How many more drugs are we going to then find that actually have an impact on quality of life or on extending life? I would assume more than we find currently on an annualized basis. One of the barriers and burdens is the cost of that drug discovery. AI, and generative AI in particular, is probably going to speed up that process and lower the burden of discovery, which is going to ultimately save cost in the actual process of getting from research to FDA approval. And that cost translates into what a given company or a given member or consumer actually pays out of pocket, right? So if we can improve the drug discovery process, it's a win-win: it's more cost effective to get new things to market, but it's also saving lives, extending lives, improving quality of life. So drug discovery alone is one of the areas I'm most excited about for the deployment of AI. A couple of others: there's a company, I think, if I recall right, it's Emerald. Imagine a case where you have a Wi-Fi-like device that sits in a room in your house. Wi-Fi goes through walls, and it can observe the electromagnetic frequencies that everybody's body gives off and see if you have dementia in advance of you even knowing it. With Alzheimer's, can we treat sooner to accomplish the same outcomes, extending life and improving quality of life? That is something in flight right now, in tests to validate.
    That is taking electromagnetic waves and using Gen AI to turn them into predictive results, to then change how we actually treat diseases. What else can we do in a home, passively observing people, to identify opportunities to improve their health and their health outcomes? Imagine having access to the best radiologist in the world at all times, not whoever's in the city you live by, but the best radiologist in the entire world to detect what type of tumor that is in a scan. Imaging AI has been around for quite a while, but the combination of Gen AI and imaging, making sure that everyone gets the very best answer at all times, is quite impactful. I'm personally more excited about those opportunities. Is it going to help improve the efficiency of every industry? Yes, whether it's benefit administration or other processes that we manage, for sure. Is it going to help people understand complex jargon, which exists in every industry? Healthcare is no different; it's a very complex thing for consumers, our members, to understand. Absolutely. There are all sorts of use cases to lower the cognitive load and lower the barrier of access. But I think the big things, which all of us are probably most interested in, are: can we live longer and can we live better? And I think AI is going to contribute to both.

  • Venky Ananth
    24:31
    Venky Ananth:

    Awesome. So if there is one piece of advice that you would give to a CEO or a CFO who is thinking about funding the next AI initiative, what would it be?

  • Paul Hlivko
    24:47
    Paul Hlivko:

    For a CEO and CFO, I would run AI like a business. Don't ignore margins, don't ignore ROI. Yes, invest some in startups, which is the moon-shot side of your investment equation, but also invest in operational wins. So you need to balance both, and decipher when it's a moon shot you're investing in and when it's margins, and know what you're investing in. The second piece of advice, specifically for CFOs, and this is not unique to artificial intelligence: when we build business cases, especially on emerging tech or on the foundational infrastructure investments that are going to be required for AI, especially around your data, those tend not to hold water and show a return on investment for the first idea, right? Your discounted cash flows for three to four years out might be upside down. CFOs can help people use option value and option pricing to think about idea two, idea three, ideas four and five, which are only possible because you did idea one. And technically, then, the business case for idea one is the negative business case of idea one plus the positive option value of ideas two, three, four, and five. Cloud's a good example. There are many enterprises today still struggling with cloud adoption because the three-to-five-year discounted cash flow doesn't make sense for their organization. What they're missing is all the option value. We were early adopters of cloud infrastructure. I didn't necessarily see AI showing up on the scene almost immediately after we were done with our cloud migrations, but that is option value, because guess what you can't do without cloud infrastructure: artificial intelligence. So now companies are facing this, I have to adopt cloud, while they're also figuring out how to deal with artificial intelligence coming on the scene. We're well past that point ourselves, right?
    So I think CFOs in particular can help enterprise leaders build business cases for both the near-term value and the option value that's going to show up in the future for the next set of ideas, right?
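Editor's note: the option-value framing Paul describes can be sketched numerically. The minimal Python illustration below shows a foundational idea one that is upside down on its own discounted cash flows, but positive once the probability-weighted value of the follow-on ideas it enables is counted. All cash flows, the discount rate, and the probabilities are invented for the example, not figures from the conversation.

```python
# Hypothetical business case: idea 1 (e.g. a foundational data/cloud
# investment) has a negative standalone NPV, but it unlocks follow-on
# ideas whose probability-weighted value belongs in the same case.

def npv(cash_flows, rate):
    """Net present value, where cash_flows[0] is the year-0 flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.10  # illustrative discount rate

# Idea 1: large upfront cost, modest direct returns over four years.
idea_1 = npv([-500_000, 80_000, 100_000, 120_000, 120_000], RATE)

# Follow-on ideas only possible once idea 1 exists, each weighted by a
# rough probability of being pursued and succeeding (a crude option proxy).
follow_ons = [
    (0.6, npv([0, 0, 150_000, 200_000, 200_000], RATE)),  # idea 2
    (0.4, npv([0, 0, 0, 250_000, 300_000], RATE)),        # idea 3
]
option_value = sum(p * v for p, v in follow_ons)

print(f"idea 1 alone:       {idea_1:>12,.0f}")   # negative on its own
print(f"option value:       {option_value:>12,.0f}")
print(f"combined case:      {idea_1 + option_value:>12,.0f}")  # positive
```

With these made-up numbers, idea one alone loses money over the horizon, yet the combined case clears zero, which is the shape of the cloud-adoption argument Paul makes.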

  • Venky Ananth
    27:04
    Venky Ananth:

    Fascinating. All right, now one last question, that's all we have time for. If there is one thing you think we will be laughing at five years from now, what would that be? One thing we're going to be laughing at five years from now.

  • Paul Hlivko
    27:22
    Paul Hlivko:

    In the media cycle recently, I'd say the last couple of months, I've started to catch the idea that we're going to have fewer software engineers than we have today. I think we're going to be laughing at that five years from now.

  • Venky Ananth
    27:33
    Venky Ananth:

    Oh, that's interesting.

  • Paul Hlivko
    27:35
    Paul Hlivko:

    So you've probably heard that there are a number of talk tracks floating around in technology circles right now, and to some extent CEOs are reading about it as well. There's a premise that artificial intelligence is going to write all the software in the future. I have a different perspective, and we'll see who's right or wrong five years from now; I could be wrong. The limiting factor for writing software today is the number of people we graduate out of institutions who can write software. That's the first limiting factor. The second is the amount of capital going towards it. Those two things today limit the amount of software in the world. Well, we just eliminated one of them, right? I no longer have the constraint of software engineers having to write software. So what's going to happen is an explosion of software. If we ever had a hockey stick moment, I think we're about to see it in the production of software. The limiting factor isn't ingenuity or problems to solve with software; that backlog is immense, and we all have an appetite to solve the next problem with software. We still have a bunch of objects in this world that are not smart, so software is going to end up in more places than we can even imagine today. And I think that fact alone is going to drive more software engineers. So will AI be writing software? Absolutely. But we're still going to need engineers to write the AI software that writes the actual software. We're also going to need engineers to manage the fleet of software that's going to exist, because not all the problems are going to be solved by the AI figuring out what the problem in the software is. So I think the thing we're going to be laughing at most is the claim that we'll have fewer software engineers in the future. I don't think that's true. We've been able to do language translation for, what, a decade?
    We have more employed linguists now than we've ever had, yet AI can translate, right? I think the thing we continue to struggle with is how fast technology diffuses. And I'll leave you with one last sound bite that amused me a couple of weeks ago: we shouldn't underestimate the things that slow down adoption. You got here traveling with luggage with wheels on it. It took 100 years to put wheels on luggage, 100 years, and the only reason we didn't do it sooner was masculinity; we thought it was masculine to carry around luggage, right? So with AI and the adoption of AI in markets, there's a long list of things that are actually going to regulate how fast it gets into a market and into a business. But it is going to be disruptive, and it's going to create a ton of value. It's just a matter of what the timing is.

  • Venky Ananth
    Venky Ananth:

    That's such a fresh take on AI, and a very different talk track than what you normally find, with everybody breathlessly talking about how AI is going to change everything for us. So while it's going to change everything, like you said, I think you tempered it with a lot of practical wisdom and knowledge. I appreciate that. Thank you so much, I totally enjoyed this conversation, Paul.

  • Paul Hlivko
    31:01
    Paul Hlivko:

    Likewise, I appreciate the opportunity.

  • Venky Ananth
    31:03
    Venky Ananth:

    Thank you. And hopefully you enjoyed that conversation and the deep insights from Paul. Thank you and take care.

  • Paul Hlivko
    31:12
    Paul Hlivko:

    Great. Thank you.