AI/MLOPS

Trend 9: AI models deployed at the edge for better customer experience

Enterprises are increasingly deploying AI models at the edge, often for emerging use cases. To deliver a transformative customer experience, these models process data from diverse sensing devices in near-real time.

The various available frameworks (such as NVIDIA DeepStream, Triton, OpenVINO, and Azure Edge) and hardware options (such as Intel Edge VPUs, FPGAs, NVIDIA Jetson, and Google's Edge TPU) will soon be joined by additional choices. Solutions should therefore evolve with the technology and adhere to open, community-driven standards.
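As an illustration of building on open, community-driven standards, the sketch below exports a model to ONNX, a format that several of the edge runtimes named above can consume. The MobileNetV2 model, tensor shapes, and file name are placeholders, not a prescribed approach.

```python
# Minimal sketch: exporting a trained model to ONNX, an open community-driven
# format, so the same artifact can be served by multiple edge runtimes
# (e.g., ONNX Runtime, OpenVINO, Triton). Model and shapes are illustrative.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None)  # placeholder edge-friendly model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example camera-frame tensor
torch.onnx.export(
    model,
    dummy_input,
    "edge_model.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size at the edge
)
```

Exporting once to a standard format keeps the deployment choice open as new frameworks and accelerators appear.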

With the growth of Internet of Things (IoT) and smart devices, deploying models on autonomous and low-powered devices will bring significant new MLOps challenges. Customers will have to evaluate multiple frameworks, hardware devices, and deployment models to implement a future-proof solution. They will also have to address privacy, device management, security, federated deployments, and the implementation of open standards across devices.
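As one hedged illustration of the federated-deployment and privacy challenge, the sketch below shows federated averaging (FedAvg), in which devices share model weights rather than raw data. The layer shapes, sample counts, and the absence of secure aggregation and device authentication are simplifications.

```python
# Minimal sketch of federated averaging (FedAvg): each edge device trains
# locally and only shares model weights, not raw data. Weights are plain
# NumPy arrays here; a real system would add secure aggregation, device
# management, and versioned rollout.
import numpy as np

def federated_average(device_weights, device_sample_counts):
    """Weighted average of per-device model weights by local sample count."""
    total = sum(device_sample_counts)
    num_layers = len(device_weights[0])
    averaged = []
    for layer in range(num_layers):
        acc = np.zeros_like(device_weights[0][layer])
        for weights, count in zip(device_weights, device_sample_counts):
            acc += weights[layer] * (count / total)
        averaged.append(acc)
    return averaged

# Example: three edge devices, each with a two-layer model and different data volumes.
devices = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
counts = [120, 80, 200]
global_weights = federated_average(devices, counts)
```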

A Fortune 500 oil and gas company wants to capitalize on opportunities from alternative energy sources. It has partnered with Infosys to pilot an AI-enabled business model for autonomous stores, based on video analytics and IoT. In the current pilot phase, the project is expected to provide customers a frictionless checkout experience, drawing on real-time AI inference over video from more than 50 cameras and signals from more than 100 IoT sensors installed in a store.
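The client's actual pipeline is not shown here; the sketch below only illustrates the general pattern of near-real-time inference on a single camera stream with ONNX Runtime and OpenCV. The model file, input name, and preprocessing are assumptions, and a store-scale deployment would multiplex many streams and fuse the IoT sensor signals.

```python
# Illustrative sketch (not the client's pipeline): scoring frames from one
# camera stream in near-real time with ONNX Runtime.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("edge_model.onnx")  # assumed model artifact

def infer_frame(frame_bgr):
    """Resize, normalize, and score a single camera frame."""
    resized = cv2.resize(frame_bgr, (224, 224))
    tensor = resized.astype(np.float32).transpose(2, 0, 1)[None] / 255.0
    (logits,) = session.run(None, {"input": tensor})
    return logits

capture = cv2.VideoCapture(0)  # one camera; a real deployment fans out to 50+ streams
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    scores = infer_frame(frame)
```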


Trend 10: Integrated MLOps practices enhance AI capabilities

Companies are standardizing MLOps practices to scale AI adoption. Having quickly identified use cases and run early experiments with AI applications, they are realizing the need for an end-to-end pipeline spanning data sourcing, model training, deployment, and monitoring. Enterprises are also looking at creating a central model repository and adopting trustworthy AI practices.
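One way to picture such an end-to-end pipeline is as an orchestrated sequence of tasks. The sketch below assumes Apache Airflow 2.4+ as the orchestrator (one of several possible choices) and uses placeholder task bodies for sourcing, training, deployment, and monitoring.

```python
# Minimal sketch of an end-to-end ML lifecycle as an Airflow DAG.
# Task bodies are placeholders; the schedule and DAG name are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def source_data():
    print("pull and validate training data")

def train_model():
    print("train and evaluate a candidate model")

def deploy_model():
    print("register and roll out the approved model")

def monitor_model():
    print("check drift and serving metrics")

with DAG(
    dag_id="ml_lifecycle",
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",  # assumes Airflow 2.4+; pick a cadence that fits the use case
    catchup=False,
) as dag:
    sourcing = PythonOperator(task_id="source_data", python_callable=source_data)
    training = PythonOperator(task_id="train_model", python_callable=train_model)
    deployment = PythonOperator(task_id="deploy_model", python_callable=deploy_model)
    monitoring = PythonOperator(task_id="monitor_model", python_callable=monitor_model)

    sourcing >> training >> deployment >> monitoring
```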

Technologies such as Azure ML, AWS SageMaker, Airflow, Kubernetes, Databricks Delta Lake, NVIDIA Triton, and MLflow, along with products such as DataRobot and Iguazio, are emerging as options for model management, deployment, and training-data management. Meanwhile, many customers are indicating the need for online and offline feature stores for ML data management and model monitoring.
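As a small, hedged example of the central model repository idea, the sketch below logs a training run and registers the resulting model with MLflow's model registry. The experiment name, model name, dataset, and SQLite-backed tracking store are illustrative assumptions, not a client configuration.

```python
# Minimal sketch: track a training run and register the model in MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A database-backed store is assumed so the model registry is available.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("churn-baseline")  # illustrative experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # creates/updates a registry entry
    )
```

A registry entry like this gives deployment and monitoring stages a single, versioned source of truth for which model is in production.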

With multiple technology options available to realize these architectures, solutions and program road maps are evolving to cater to the specific implementation needs and priorities of individual organizations.

A large North American telco wanted to standardize its MLOps architecture to enhance AI development life cycle management. Infosys helped the client develop a platform that standardizes model deployment, monitoring, and governance. The platform runs on Azure and uses technologies such as Delta Lake and Spark. By implementing an end-to-end MLOps architecture built on cloud-native principles, the customer expects to replicate it across different business lines and multiple cloud providers.
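The client's implementation details are not described here; the sketch below merely illustrates the kind of pattern such a Delta Lake and Spark stack enables: appending model scores to a versioned Delta table so a monitoring job can query them later. The table path, schema, and session configuration are assumptions, and the delta-spark package is assumed to be on the Spark classpath.

```python
# Illustrative sketch: log model predictions to a Delta Lake table on Spark
# so downstream monitoring jobs can track them over time.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("mlops-monitoring")
    # Assumes the delta-spark package is available to the session.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

predictions = spark.createDataFrame(
    [("cust-001", 0.82), ("cust-002", 0.13)],  # toy scores, not client data
    ["customer_id", "churn_score"],
).withColumn("scored_at", F.current_timestamp())

# Append scores to a versioned Delta table that monitoring jobs can query.
predictions.write.format("delta").mode("append").save("/tmp/monitoring/churn_scores")

# A monitoring job might later summarize scores per day to watch for drift.
history = spark.read.format("delta").load("/tmp/monitoring/churn_scores")
history.groupBy(F.window("scored_at", "1 day")).agg(F.avg("churn_score")).show()
```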