Microsoft and NVIDIA experts talk AI infrastructure

This post has been co-authored by Sheila Mueller, Senior GBB HPC+AI Specialist, Microsoft; Gabrielle Davelaar, Senior GBB AI Specialist, Microsoft; Gabriel Sallah, Senior HPC Specialist, Microsoft; Annamalai Chockalingam, Product Marketing Manager, NVIDIA; J Kent Altena, Principal GBB HPC+AI Specialist, Microsoft; Dr. Lukasz Miroslaw, Senior HPC Specialist, Microsoft; Uttara Kumar, Senior Product Marketing Manager, NVIDIA; Sooyoung Moon, Senior HPC + AI Specialist, Microsoft.

As AI emerges as a crucial tool across so many sectors, it's clear that the need for optimized AI infrastructure is growing. Beyond GPU-based clusters alone, cloud infrastructure that provides low-latency, high-bandwidth interconnects and high-performance storage can help organizations handle AI workloads more efficiently and produce results faster.

HPCwire recently sat down with Microsoft Azure and NVIDIA’s AI and cloud infrastructure specialists and asked a series of questions to uncover AI infrastructure insights, trends, and advice based on their engagements with customers worldwide.

How are your most interesting AI use cases dependent on infrastructure?

Sheila Mueller, Senior GBB HPC+AI Specialist, Healthcare & Life Sciences, Microsoft: Some of the most interesting AI use cases are in patient health care, both clinical and research. Research in science, engineering, and health is driving significant improvements in patient care, enabled by high-performance computing and AI insights. Common use cases include molecular modeling, therapeutics, genomics, and health treatments. Predictive analytics and AI, coupled with cloud infrastructure purpose-built for AI, are the backbone for improvements and simulations in these use cases and can lead to faster prognoses and the ability to research cures. See how Elekta brings hope to more patients around the world with the promise of AI-powered radiation therapy.

Gabrielle Davelaar, Senior GBB AI Specialist, Microsoft: Many manufacturing companies need to train and run inference with models at scale while staying compliant with strict local and European-level regulations. AI is deployed at the edge, backed by high-performance compute. Full traceability, with strict rules on privacy and security, is critical. This can be a tricky process, as every step must be recorded for reproducibility: from simple things like dataset versions to more complex things such as knowing which environment was used, with which machine learning (ML) libraries at which specific versions. Machine learning operations (MLOps) for data and model auditability now make this possible, as the sketch below illustrates. See how BMW uses machine learning-supported robots to provide flexibility in quality control for automotive manufacturing.
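To make that traceability concrete, here is a minimal, illustrative sketch using only the Python standard library: it fingerprints the exact dataset and pins library versions so a run can be reproduced later. The file names, the numpy dependency, and helper names like run_manifest are assumptions for the example; in practice an MLOps platform such as Azure Machine Learning or MLflow handles this bookkeeping.

```python
# Minimal sketch of run traceability: hash the dataset and pin library
# versions so a training run can be reproduced later. Illustrative only;
# production setups typically use an MLOps platform for this bookkeeping.
import hashlib
import json
import sys
from importlib import metadata
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """Content hash that identifies the exact dataset version used."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_manifest(dataset: Path, libraries: list[str]) -> dict:
    """Record what is needed to reproduce this run's environment."""
    return {
        "python": sys.version,
        "dataset_sha256": dataset_fingerprint(dataset),
        # Assumes the named libraries are installed in this environment.
        "library_versions": {lib: metadata.version(lib) for lib in libraries},
    }

if __name__ == "__main__":
    # "train.csv" and "numpy" are placeholders for this example.
    manifest = run_manifest(Path("train.csv"), ["numpy"])
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```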

Gabriel Sallah, Senior HPC Specialist, Automotive Lead, Microsoft: We've worked with car makers to develop advanced driver assistance systems (ADAS) and automated driving systems (ADS) platforms in the cloud, using integrated services to build a highly scalable deep learning pipeline for creating AI/ML models. HPC techniques were applied to schedule, scale, and provision compute resources while ensuring effective monitoring, cost management, and data traceability. The result: faster simulation and training times than their existing solutions, thanks to the close integration of data inputs, compute runs, and data outputs.

Annamalai Chockalingam, Product Marketing Manager, Large Language Models & Deep Learning Products, NVIDIA: Progress in AI has led to the explosion of generative AI, particularly with advancements in large language models (LLMs) and diffusion-based transformer architectures. Trained on massive datasets, these models can now recognize, summarize, translate, predict, and generate language, images, videos, code, and even protein sequences with little to no task-specific training or supervision. Early use cases include improved customer experiences through dynamic virtual assistants, AI-assisted content generation for blogs, advertising, and marketing, and AI-assisted code generation. Infrastructure purpose-built for AI that can handle the compute power and scalability demands is key.

What AI challenges are customers facing, and how does the right infrastructure help?

John Lee, Azure AI Platforms & Infrastructure Principal Lead, Microsoft: When companies try to scale AI training models beyond a single node to tens and hundreds of nodes, they quickly realize that AI infrastructure matters. Not all accelerators are alike. Optimized scale-up node-level architecture matters. How the host CPUs connect to groups of accelerators matters. When scaling beyond a single node, the scale-out architecture of your cluster matters. Selecting a cloud partner that provides AI-optimized infrastructure can be the difference between an AI project's success and failure. Read the blog: AI and the need for purpose-built cloud infrastructure.

Annamalai Chockalingam: AI models are becoming increasingly powerful due to a proliferation of data, continued advancements in GPU compute infrastructure, and improvements in techniques across both training and inference of AI workloads. Yet, combining the trifecta of data, compute infrastructure, and algorithms at scale remains challenging. Developers and AI researchers require systems and frameworks that can scale, orchestrate, crunch mountains of data, and manage MLOps to optimally create deep learning models. End-to-end tools for production-grade systems incorporating fault tolerance for building and deploying large-scale models for specific workflows are scarce.

Kent Altena, Principal GBB HPC+AI Specialist, Financial Services, Microsoft: A common challenge is deciding between architectures: the open flexibility of a true HPC environment versus the robust MLOps pipelines and capabilities of a machine learning platform. Traditional HPC approaches, whether scheduled by a legacy scheduler like HPC Pack or SLURM or a cloud-native scheduler like Azure Batch, are great when workloads need to scale to hundreds of GPUs; but in many cases, AI environments also need a DevOps approach to AI model management and control over which models are authorized, or conversely need overall workflow management. A scheduler-agnostic training entry point, sketched below, can help keep both options open.
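As a hedged illustration of that flexibility, the following PyTorch sketch initializes multi-GPU training entirely from environment variables, so the same script can run under SLURM, Azure Batch, or a plain torchrun launch. The setup_ddp helper name and the launch command are illustrative assumptions, not a prescribed pattern.

```python
# Minimal sketch of scheduler-agnostic multi-GPU setup with PyTorch
# DistributedDataParallel. Assumes a launcher (torchrun, or a SLURM /
# Azure Batch job script wrapping it) has set RANK, LOCAL_RANK,
# WORLD_SIZE, MASTER_ADDR, and MASTER_PORT in the environment.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp(model: torch.nn.Module) -> DDP:
    # NCCL rides the low-latency, high-bandwidth interconnect between nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return DDP(model.cuda(local_rank), device_ids=[local_rank])

# Launch example (single 8-GPU node; raise --nnodes for multi-node):
#   torchrun --nproc_per_node=8 train.py
```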

Dr. Lukasz Miroslaw, Senior HPC Specialist, Microsoft: AI infrastructure is not only GPU-based clusters but also a low-latency, high-bandwidth interconnect between the nodes and high-performance storage. The storage requirement is often the limiting factor for large-scale distributed training, as the amount of data used for training in autonomous driving projects can grow to petabytes. The challenge is to design an AI platform that meets strict requirements for storage throughput, capacity, support for multiple protocols, and scalability; the back-of-the-envelope sketch below shows why.
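The arithmetic below is a rough, illustrative estimate of the aggregate read throughput a storage tier must sustain during distributed training; the dataset size and epoch time are assumed figures, not measurements from any specific project.

```python
# Back-of-the-envelope estimate of the aggregate storage throughput a
# distributed training job needs when each epoch streams the full
# dataset from storage (i.e., it does not fit in local cache).
def required_throughput_gbs(dataset_tb: float, epoch_minutes: float) -> float:
    """GB/s the storage tier must sustain, given dataset size and epoch time."""
    dataset_gb = dataset_tb * 1000
    return dataset_gb / (epoch_minutes * 60)

# Example (assumed figures): a 2 PB autonomous-driving dataset read once
# per 12-hour epoch needs about 2,000,000 GB / 43,200 s ~ 46 GB/s in
# aggregate, far beyond a single storage node; hence parallel file systems.
print(f"{required_throughput_gbs(2000, 12 * 60):.1f} GB/s")
```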

What are the most frequently asked questions about AI infrastructure?

John Lee: “Which platform should I use for my AI project or workload?” There is no single magic product or platform that is right for every AI project. Customers usually have a good understanding of what answers they are looking for but aren’t sure which AI products or platforms will get them that answer in the fastest, most economical, and most scalable way. A cloud partner with a wide portfolio of AI products, solutions, and expertise can help find the right solution for specific AI needs.

Uttara Kumar, Senior Product Marketing Manager, NVIDIA: “How do I select the right GPU for our AI workloads?” Customers want the flexibility to provision right-sized GPU acceleration for different workloads to optimize cloud costs: fractional GPU, single GPU, multiple GPUs, all the way up to multiple GPUs across multi-node clusters. Many also ask, “How do we make the most of GPU instances and virtual machines and leverage them within our applications and solutions?” Performance-optimized software is key to doing that.

Sheila Mueller: “How do I leverage the cloud for AI and HPC while ensuring data security and governance?” Customers want to automate the deployment of these solutions, often across multiple research labs with specific simulations. They want a secure, scalable platform that controls data access while still delivering insight. Cost management is also a focus in these discussions.

Kent Altena: “How best should we implement this infrastructure to run our GPU workloads?” Customers know what they need to run and have built the models, but they also need to understand the final mile. The answer is not always a straightforward, one-size-fits-all one. It requires understanding their models, what they are attempting to solve, and what their inputs, outputs, and workflow look like.

What have you learned from customers about their AI infrastructure needs?

John Lee: The majority of customers want to leverage the power of AI but are struggling to put an actionable plan in place to do so. They worry about what their competition is doing and whether they are falling behind but, at the same time, are not sure what first steps to take on their journey to integrate AI into their business.

Annamalai Chockalingam: Customers are looking for AI solutions to improve operational efficiency and deliver innovative solutions to their end customers. Easy-to-use, performant, platform-agnostic, and cost-effective solutions across the compute stack are incredibly desirable to customers.

Gabriel Sallah: All customers are looking to reduce the cost of training an ML model. Thanks to the flexibility of cloud resources, customers can select the right GPU, storage I/O, and memory configuration for a given training model.

Gabrielle Davelaar: Costs are critical. With the current economic uncertainty, companies need to do more with less, and they want their AI training to be more efficient and effective. Something many people still don’t realize is that training and inference costs can be optimized through the software layer, as the example below shows.
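One concrete software-layer optimization is automatic mixed precision, which cuts memory traffic and engages GPU Tensor Cores so the same hardware trains faster, and therefore cheaper. The sketch below uses PyTorch's torch.cuda.amp API; the model, optimizer, and tensor shapes are placeholder assumptions for illustration.

```python
# Illustrative sketch: mixed-precision training with PyTorch AMP as a
# software-layer cost optimization. Model and optimizer are placeholders.
import torch
from torch.cuda.amp import autocast, GradScaler

model = torch.nn.Linear(512, 10).cuda()       # placeholder model
optimizer = torch.optim.AdamW(model.parameters())
loss_fn = torch.nn.CrossEntropyLoss()
scaler = GradScaler()                         # rescales grads for fp16 safety

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    optimizer.zero_grad(set_to_none=True)
    with autocast():                          # runs ops in fp16 where safe
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# Usage (illustrative): one step on random data.
loss = train_step(torch.randn(64, 512).cuda(),
                  torch.randint(0, 10, (64,)).cuda())
```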

What advice would you give to businesses looking to deploy AI or speed innovation?

Uttara Kumar: Invest in a platform that is performant, versatile, scalable, and can support the end-to-end workflow—start to finish—from importing and preparing data sets for training, to deploying a trained network as an AI-powered service using inference.

John Lee: Not every AI solution is the same. AI-optimized infrastructure matters, so be sure to understand the breadth of products and solutions available in the marketplace. And just as importantly, make sure you engage with a partner that has the expertise to help navigate the complex menu of possible solutions that best match what you need.

Sooyoung Moon, Senior HPC + AI Specialist, Microsoft: No amount of investment can guarantee success without thorough early-stage planning. Reliable and scalable infrastructure for continuous growth is critical.

Kent Altena: Understand your workflow first. What do you want to solve? Is it primarily a calculation-driven solution, or is it built upon a data graph-driven workload? Having that in mind will go a long way toward determining the optimal approach to start down.

Gabriel Sallah: Consider the dependencies across the various teams responsible for creating and using the platform. Create an enterprise-wide architecture with common toolsets and services to avoid duplication of data, compute, monitoring, and management.

Sheila Mueller: Involve stakeholders from IT and Lines of Business to ensure all parties agree to the business benefits, technical benefits, and assumptions made as part of the business case.

Learn more about Azure and NVIDIA

Source: Azure Blog Feed
