
Nvidia Places Bet on AI Enterprise Software On-Premises

Nvidia has introduced enterprise software for AI, betting that more artificial intelligence workloads will be headed to on-premises data centers.

The public cloud has served as a major shortcut for enterprise AI projects in recent years, particularly during the pandemic. Got test data? Need to create a model? Everything you need can be found in a public cloud such as AWS, Google Cloud, or Microsoft Azure. But look out, cloud fans. There may be a revival of on-premises computing for AI coming to an enterprise near you.

Saying that the democratization of AI is necessary for widespread adoption, GPU chip giant Nvidia has rolled out software called Nvidia AI Enterprise into general availability. The company has also built on its partnerships with VMware (vSphere) and Domino Data Lab to create a stack of AI-specific software that lets enterprise organizations run their AI deployments on industry-standard servers in on-premises data centers. The announcement is essentially an acknowledgment that the biggest AI hardware and infrastructure growth over the next several years will be on-premises.

That’s backed up by a recent market research report from analyst firm Omdia. The cloud currently accounts for 88% of cloud and data center AI processor revenue, with on-premises data centers responsible for just 12%, but on-premises growth is expected to pick up the pace in the years ahead, according to Jonathan Cassell, principal analyst for advanced computing at Omdia.

“Up until now it’s been easier for companies to deploy AI projects in the cloud,” Cassell says. “You can get it up and running quickly. But the on-premises data center isn’t going away.”

Omdia forecasts that on-premises data center AI processor revenue will expand at a compound annual growth rate (CAGR) of 78% from 2020 through 2026, more than double the 36% rate expected for the cloud sector. Certainly, the small initial base of on-premises implementations accounts for some of that high growth rate.

Other driving factors include the need for certain industries to move forward with AI projects while keeping that work on site. Industries such as finance and healthcare may face strict regulatory environments that make it preferable to keep data and AI processing on-premises. In other cases, privacy and security concerns may push organizations toward an on-premises AI deployment. For some industries, data latency could be a factor in the decision to keep AI workloads in-house: the cloud is too far removed from where the data is generated, so it's better to do the AI processing at the edge, where the data lives.

The Nvidia announcement provides an easier path for organizations that want to deploy their AI on-premises, Cassell says.

“There’s a lot of challenges to implementing AI in a data center environment,” he says. “It requires a fair degree of expertise. Nvidia is trying to make the whole process simpler for enterprise customers … the company wants to bring the whole AI deployment process into the realm of standard virtualization that you see in data centers.”

Manuvir Das is the head of enterprise computing at Nvidia. In a briefing about the company’s enterprise AI software, he said it’s aimed at two personas: data scientists, who get all the tools and frameworks they expect for AI; and IT administrators, who will be working in the familiar VMware environment. A data science SDK called RAPIDS optimizes frameworks including TensorFlow and PyTorch for Nvidia GPUs so they can run faster.

The partnership with Domino Data Lab, which provides an MLOps platform, will offer the tools needed for an end-to-end AI workflow, from data acquisition to the deployment and production of models.

This includes governance and reporting; model operations; reproducibility and collaboration; and self-serve, scalable infrastructure. Domino Data Lab provides a layer between the Nvidia software and the many other tools, packages, languages, data sources, and systems an enterprise may already have in place, including Python, R, SAS, Snowflake, SQL, Jupyter, DataRobot, Spark, and Amazon SageMaker.

But Nvidia is a hardware company, right? Nvidia's specialty and foundation have been graphics processing units -- co-processors originally developed to accelerate graphics rendering that are now also used for the intensive compute required by artificial intelligence and similar workloads. Wouldn't developing software for AI be just a side project for a company like that, a way to help it sell more GPUs? Not according to Das.

“This is not a side project at all,” Das says. “We believe this is a significant business for Nvidia going forward…We think there is a significant opportunity here that is separate from the hardware.”

