NVIDIA and Microsoft offer AI foundry service

Author: EIS | Release Date: Nov 23, 2023

NVIDIA has introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements — a collection of NVIDIA AI Foundation Models, NVIDIA NeMo™ framework and tools, and NVIDIA DGX™ Cloud AI supercomputing services — that give enterprises an end-to-end solution for creating custom generative AI models.
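
As a rough illustration of how those three pieces fit together, the sketch below outlines the workflow in plain Python. Every name in it (the dataclass, the function, the model and data identifiers) is a placeholder made up for this example, not an NVIDIA or Azure API.

```python
# Conceptual outline only: choose a foundation model, customize it with
# enterprise data (the NeMo step), train on rented AI supercomputing
# (the DGX Cloud step), and hand back a deployable artifact.
# All identifiers below are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class FoundryJob:
    base_model: str       # a foundation model picked from a catalog
    training_data: str    # URI to the enterprise's proprietary data
    compute_target: str   # e.g. a DGX Cloud cluster running on Azure


def run_foundry_job(job: FoundryJob) -> str:
    print(f"1. Base model selected: {job.base_model}")
    print(f"2. Curating data from:  {job.training_data}")
    print(f"3. Fine-tuning on:      {job.compute_target}")
    return f"{job.base_model}-custom"  # deployable model artifact


if __name__ == "__main__":
    artifact = run_foundry_job(
        FoundryJob("nemotron-3-8b", "abfss://datasets/support-tickets", "dgx-cloud-azure")
    )
    print("Ready to deploy:", artifact)
```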

Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

SAP SE, Amdocs and Getty Images are among the pioneers building custom models using the service.

“Enterprises need custom models to perform specialized skills trained on the proprietary DNA of their company — their data,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s AI foundry service combines our generative AI model technologies, LLM training expertise and giant-scale AI factory. We built this in Microsoft Azure so enterprises worldwide can connect their custom model with Microsoft’s world-leading cloud services.”

“Our partnership with NVIDIA spans every layer of the Copilot stack — from silicon to software — as we innovate together for this new age of AI,” said Satya Nadella, chairman and CEO of Microsoft. “With NVIDIA’s generative AI foundry service on Microsoft Azure, we’re providing new capabilities for enterprises and startups to build and deploy AI applications on our cloud.”

NVIDIA’s AI foundry service can be used to customize models for generative AI-powered applications across industries, including enterprise software, telecommunications and media.

Once ready to deploy, enterprises can use a technique called retrieval-augmented generation (RAG) to connect their models with their enterprise data and access new insights.
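
As a concrete (if toy) illustration of the RAG pattern: retrieve the enterprise passages most relevant to a question, then hand them to the model as grounding context. The keyword-overlap retrieval and the generate() stub below are placeholders for a real embedding model and the customized LLM; none of this is a specific NVIDIA or Azure API.

```python
# Minimal RAG sketch: retrieval via toy keyword overlap, generation stubbed out.

def embed(text: str) -> set[str]:
    # Stand-in for a real embedding model: a bag of lowercase words.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap standing in for cosine similarity between embeddings.
    return len(a & b) / max(len(a | b), 1)

def generate(prompt: str) -> str:
    # Placeholder for a call to the customized LLM.
    return f"[model answer grounded in context]\n{prompt}"

def rag_answer(question: str, documents: list[str], top_k: int = 2) -> str:
    """Retrieve the documents closest to the question and pass them to the
    model as grounding context; this is the core RAG pattern."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: similarity(embed(d), q), reverse=True)
    context = "\n".join(ranked[:top_k])
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

print(rag_answer(
    "What is our refund policy?",
    ["Refunds are issued within 30 days of purchase.",
     "Standard shipping takes 5 to 7 business days."],
))
```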

As the first customer of NVIDIA DGX Cloud on Microsoft Azure, SAP plans to use the service, with an optimized RAG workflow on NVIDIA DGX Cloud and NVIDIA AI Enterprise software running on Azure, to help customize and deploy Joule®, its new natural language generative AI copilot.

Amdocs, a leading provider of software and services to communications and media companies, is optimizing models for the Amdocs amAIz framework to speed adoption of generative AI applications and services for telcos globally.

Customers using the NVIDIA AI foundry service can choose from several NVIDIA AI Foundation Models, including a new family of NVIDIA Nemotron-3 8B models hosted in the Azure AI model catalog.

Developers can also access the Nemotron-3 8B models on the NVIDIA NGC™ catalog, as well as community models such as Meta’s Llama 2 models optimized for NVIDIA accelerated computing, which are also coming soon to the Azure AI model catalog.

With 8 billion parameters, the Nemotron-3 8B family includes versions tuned for different use cases and offers multilingual capabilities for building custom enterprise generative AI applications.
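
For illustration, here is roughly what calling one of these hosted models from an application could look like; the endpoint URL, key, and request/response schema below are assumptions made for the example, not a documented Azure or NVIDIA contract.

```python
# Hypothetical request to a hosted Nemotron-3 8B endpoint; the URL, key and
# payload fields are placeholders for this sketch.

import json
import urllib.request

ENDPOINT = "https://<your-deployment>.inference.ml.azure.com/score"  # placeholder
API_KEY = "<your-key>"                                               # placeholder

payload = {
    "prompt": "Summarize last quarter's support tickets in three bullet points.",
    "max_tokens": 256,
    "temperature": 0.2,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```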

NVIDIA DGX Cloud AI supercomputing is available today on Azure Marketplace. It features instances customers can rent, scaling to thousands of NVIDIA Tensor Core GPUs, and comes with NVIDIA AI Enterprise software, including NeMo, to speed LLM customization.

The addition of DGX Cloud on the Azure Marketplace enables Azure customers to use their existing Microsoft Azure Consumption Commitment credits to speed model development with NVIDIA AI supercomputing and software.

NVIDIA AI Enterprise software is now integrated into Azure Machine Learning, adding NVIDIA’s platform of secure, stable and supported AI and data science software. This brings NeMo and NVIDIA Triton Inference Server™ to Azure’s enterprise-grade AI service.
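
For teams serving a customized model behind Triton Inference Server, a minimal client request with the open-source tritonclient package looks roughly like the sketch below; the model name and tensor names ("my_model", "INPUT0", "OUTPUT0") depend on your deployed model's configuration and are placeholders here.

```python
# Minimal Triton HTTP client call; model and tensor names are placeholders
# that must match your deployed model's config.

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the input tensor expected by the (placeholder) model.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Request the named output tensor and run inference.
out = httpclient.InferRequestedOutput("OUTPUT0")
result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])

print(result.as_numpy("OUTPUT0"))
```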

NVIDIA AI Enterprise is also available on Azure Marketplace, providing businesses worldwide with broad options for production-ready AI development and deployment of custom generative AI applications.