Intel on Enterprise AI, Edge Computing & Scale
Insights
- Enterprise AI is entering a scaling phase where success depends on extending intelligence from centralized cloud environments to distributed edge systems.
- The next wave of AI value will come from leveraging existing enterprise compute infrastructure rather than relying solely on new, high-cost investments.
- Strategic partnerships that combine hardware, software, and platform capabilities are essential to move AI from pilots to production at scale.
At MWC 2026, leaders from Intel and Infosys explore how AI is moving from experimentation to enterprise-scale deployment. Greg Ernst of Intel explains how organizations are shifting from isolated use cases to multifunctional AI adoption across operations, IT, and customer service. The conversation highlights the growing importance of edge computing, the untapped AI potential within existing enterprise infrastructure, and the need to balance performance, cost, and power efficiency. They also examine how partnerships between Intel and Infosys are enabling organizations to scale AI from pilots to production through optimized platforms like Topaz Fabric, unlocking real-time decision-making and distributed intelligence across the enterprise.
Greg Ernst:
The opportunities that enterprises are facing right now are incredible. My company, your company, and all of our clients really understand the power that AI can bring to tackle some of the most human-intensive workflows in their operations. (01:19)
At Intel, one of the things we spend a lot of time on today is translating the output from our factories into sellable, committable supply for our clients.
That's a space where we're using AI ourselves, taking a workflow that traditionally requires hundreds of human hours and getting it down to real-time decisions that help our clients.
2026 could be the year enterprise AI scales
Anand Santhanam:
This year, 2026, is the year of scaling adoption in our view. That hockey-stick curve takes AI from use in a single function to multiple functions, meaning AI in the factories, in IT, in customer service, in core operations, and that scaling is going to happen this year. As enterprises scale, they want power consumption and cost to go down and performance to go up. Those are the three curves we are going to follow.
AI is shifting from cloud to edge computing
Samad Masood:
Is it supplementing or replacing the current cloud AI capability? You mentioned Intel is in a lot of corporate data centers, and obviously behind a lot of the cloud as well, and now you're moving it to the edge. Does that then become an additional processing resource for AI?
Himanshu Shivam:
What I would say is that it is supplementing, not replacing. You will always need training and inferencing there over time. But there is a movement toward edge AI, so that you bring AI into your day-to-day activities and day-to-day functions, and the productivity gains happen on premises.
Real-world AI is starting with customer operations
Anand Santhanam:
Customer service is one of the seven proofs of value we have seen: you have an agent in a contact center in a multilingual scenario, listening to conversations in multiple languages, with semantic understanding and then language neutralization happening on the edge, right then and there. That's a phenomenal use case, and we are ready to roll it out. As for the industries where we would see it land, it's any industry with a massive amount of customer service: insurance, retail, CPG, and the financial sector. Those are where we would see it go phenomenally forward.
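To make that concrete, here is a minimal sketch of what such an edge pipeline could look like. The packages and models (openai-whisper, Helsinki-NLP/opus-mt-mul-en) are illustrative assumptions, not the deployed solution:

```python
# Hypothetical edge pipeline: transcribe a call locally, then neutralize the language to English.
# Assumes the open-source openai-whisper and transformers packages; small CPU-friendly models.
import whisper
from transformers import pipeline

asr = whisper.load_model("base")  # small multilingual speech-to-text model
to_english = pipeline("translation", model="Helsinki-NLP/opus-mt-mul-en")

result = asr.transcribe("agent_call.wav")                    # hypothetical recorded call
neutral = to_english(result["text"])[0]["translation_text"]  # language neutralization
print(neutral)  # language-neutral text, ready for downstream semantic analysis
```

Everything here runs on local hardware, which is what keeps the latency low and the data resident at the edge.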
Himanshu Shivam:
One thing that comes up with the concept of the AI-enabled PC is the assumption that you always need very high-powered compute inside the PC to do AI, which may not be true. For inferencing, you can divide the workloads into multiple categories. If you are running, say, a 2.5-billion-parameter model, you can run it on an Intel-powered laptop.
One of the proofs of concept we did recently with Intel showed how we can run an initial copilot for software development on a normal Intel laptop. It does not necessarily have to be an AI PC or have stronger compute. That frees up huge capacity for the enterprise: your existing compute availability turns into an AI-enabled capability.
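As a minimal sketch of that idea, assuming the Hugging Face transformers library and a small open model of roughly that size (microsoft/phi-2, about 2.7 billion parameters, is an illustrative choice, not the model from the proof of concept):

```python
# Sketch: run a ~2.5B-parameter model on a plain laptop CPU for code assistance.
# The model choice is an assumption for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # ~2.7B parameters, small enough for CPU inference
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

prompt = "# Python function that checks whether a string is a palindrome\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Generation is slower than on a GPU, but for a developer copilot the point stands: the laptop the enterprise already owns can serve the model.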
What if every enterprise laptop could run AI?
Anand Santhanam:
Every single device, object, and thing with sensors, connected at home, at power plants, at factory sites, with users, is generating massive amounts of data. Every one of these endpoints has been generating data, but now we have the opportunity to infer from it and make decisions. The faster we make decisions, with low latency, the better we are able to predict issues and the better we are able to resolve them. So all across that chain, we are distributing intelligence and inferencing, as we call it, which is the ability to make decisions.
Enterprises already own a massive hidden AI infrastructure
Greg Ernst:
Take all the PCs deployed in the world across enterprises; together they are the equivalent of 40 large data centers. It's an incredible compute footprint that every enterprise already owns in its fleet. So one, it does augment the cloud. And there is a simple reality: tokens generated from the cloud cost money.
Whether it's through your enterprise's cloud model or the enterprise paying directly, that cost adds up. So there is a reality of using the compute footprint that you have. Part of that has meant making conscious decisions at Intel to integrate inference compute into the client through an integrated GPU, a CPU, and an NPU. That's a huge value, and Infosys is the best at really understanding and helping enterprises use that footprint.
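Here is a minimal sketch of what targeting that client footprint can look like, assuming the open-source OpenVINO toolkit and a model already converted to its IR format (both assumptions beyond the conversation):

```python
# Sketch: pick an on-device engine (NPU, integrated GPU, or CPU) for local inference.
# Assumes the open-source OpenVINO toolkit; model.xml is a hypothetical converted model.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a recent Intel client

# Prefer the NPU for low-power inference, fall back to the integrated GPU, then the CPU.
for device in ("NPU", "GPU", "CPU"):
    if device in core.available_devices:
        compiled = core.compile_model("model.xml", device_name=device)
        print(f"Running inference on {device}")
        break
```

The same model can land on whichever engine a given machine has, which is what turns a mixed PC fleet into usable inference capacity.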
Scaling AI requires new enterprise technology partnerships
Samad Masood:
Himanshu, tell us a bit about the partnership from your perspective and the strength of that.
Himanshu Shivam:
We have a multi-year partnership. We have been working together, solving problems one by one, going after enterprises across different verticals. The key is how Infosys and Intel together move enterprises from pilots to production, where we scale AI through broader open-source adoption and bring a much stronger execution mechanism to our partnership and to our customers, which also drives the kind of change management that is needed on the enterprise side.
Samad Masood:
How do Intel and Infosys work together? What are the two companies' products doing?
Greg Ernst:
The product we're most well-known for is Xeon, which is what 90% of the world's data centers are built on. That's the technology we gravitate around, working with Infosys and Topaz with a software stack on top. These models have changed a lot, even through 2025; if you look at the enterprise AI models and the offerings from all the suppliers, they really have changed and moved into this agentic format. So our partnership with Infosys has really been built around that. Many cases still require a GPU, but the key is the Xeons around it: they keep the GPU fed, handle the orchestration, and take the outputs from the GPU, tokenize them, and send them back in the user's format. That's where Infosys and Intel have really been focused. And that gets at exactly what you said clients need: minimize the hardware spend and the cost of the stack so that everything is as efficient as possible. Infosys and Intel have been working together to tune that, and we will continue to tune it as these models and the software stacks evolve.
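As a rough illustration of that division of labor between the CPU and the GPU (the framework and model below are assumptions for the sketch, not the companies' actual stack):

```python
# Sketch: the CPU handles tokenization and orchestration; the GPU runs the model forward pass.
# Framework and model are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("gpt2")              # tokenization stays on the CPU
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

def answer(prompt: str, max_new_tokens: int = 64) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(device)  # CPU prepares, then feeds the GPU
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output[0, inputs["input_ids"].shape[-1]:]      # CPU strips the prompt tokens...
    return tokenizer.decode(new_tokens.cpu(), skip_special_tokens=True)  # ...and decodes for the user

print(answer("Explain edge inference in one sentence:"))
```

Only the forward pass sits on the accelerator; everything before and after it is host-side work of exactly the kind the Xeons are described as handling.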
Building the platform layer for enterprise AI
Anand Santhanam:
Topaz Fabric is a set of multimodal, model-agnostic AI components that Infosys has stacked together to give enterprises the opportunity to leverage and harness the power of AI. There are two phases in which this happens. Phase one is training, which requires a lot of compute power. But in production, and we have been talking about scaling in production, and Himanshu mentioned that this is the year, you need a lot of inferencing. You want the compute power to be non-linear to the amount of inferencing that happens when live use cases run in production: in customer service, in IT, on factory floors. Topaz Fabric allows that, and it has been hyper-tuned on Intel's Xeon and Gaudi accelerators. When we take that full end-to-end stack, we get a highly efficient, low-cost, high-performance set of capabilities that enterprises can now leverage.
And that now makes sense for the CFO from an economic perspective, for the CTO on performance, and for the lines of business because the range of use cases opens up.
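Topaz Fabric's internals are not public, so here is a purely hypothetical sketch of the model-agnostic pattern described above (every name in it is invented for illustration):

```python
# Hypothetical sketch of a model-agnostic component layer; not the Topaz Fabric API.
from typing import Protocol

class InferenceBackend(Protocol):
    """Anything that can turn a prompt into text, regardless of model or hardware."""
    def generate(self, prompt: str) -> str: ...

class XeonLocalBackend:
    """Placeholder for a small model served on Xeon CPUs close to the workload."""
    def generate(self, prompt: str) -> str:
        return f"[xeon-local] {prompt}"

class GaudiClusterBackend:
    """Placeholder for a larger model served on Gaudi accelerators."""
    def generate(self, prompt: str) -> str:
        return f"[gaudi-cluster] {prompt}"

def route(prompt: str, latency_sensitive: bool) -> str:
    """Send latency-sensitive traffic to local compute and heavier work to the cluster."""
    backend: InferenceBackend = XeonLocalBackend() if latency_sensitive else GaudiClusterBackend()
    return backend.generate(prompt)

print(route("Summarize today's factory alerts", latency_sensitive=True))
```

Keeping the components behind one interface is what makes them model-agnostic: the routing layer, not the use case, decides where each inference runs.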
The future of AI is distributed across every enterprise
Anand Santhanam:
And that's the magic of what we are able to do together.