Latest version of Nvidia AI Enterprise expands support for data science pipelines and low-code training
Nvidia Corp. today rolled out a major update to its AI Enterprise software suite, with version 2.1 adding support for key tools and frameworks that enterprises can use to run AI and machine learning workloads.
Launched in August last year, Nvidia AI Enterprise is an end-to-end AI software suite that bundles various AI and machine learning tools optimized to run on graphics processing units and other hardware from Nvidia.
Among the highlights of today’s release is support for advanced data science use cases, Nvidia said, via the latest release of Nvidia Rapids, a suite of open-source software libraries and application programming interfaces for running data science pipelines entirely on GPUs. Nvidia said Rapids can reduce AI model training times from days to minutes. The latest version of the suite improves support for data workflows with the addition of new models, techniques and data processing capabilities.
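Rapids’ DataFrame library, cuDF, mirrors much of the pandas API, which is what lets existing pipelines move to the GPU with little rewriting. As an illustration (the data and column names below are made up, not from Nvidia’s release), a pipeline like this in plain pandas can typically be run on GPU by swapping the import for `import cudf as pd` on a RAPIDS-enabled machine:

```python
# Illustrative data science pipeline in pandas. On a system with RAPIDS
# installed, swapping the import for `import cudf as pd` runs the same
# filter/group/aggregate steps on the GPU.
import pandas as pd

df = pd.DataFrame({
    "customer": ["a", "b", "a", "c", "b"],
    "amount": [10.0, 25.0, 5.0, 40.0, 15.0],
})

# Typical pipeline steps: filter rows, then group and aggregate.
large = df[df["amount"] >= 10.0]
totals = large.groupby("customer")["amount"].sum()
print(totals.to_dict())
```

The pandas-compatible surface is the point: the speedup comes from where the code executes, not from a new programming model.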
Nvidia AI Enterprise 2.1 also supports the latest version of the Nvidia TAO Toolkit, a low-code, no-code framework for fine-tuning pre-trained AI and machine learning models with custom data to produce more accurate computer vision, speech and language understanding models. TAO Toolkit version 22.05 adds new features such as REST API integration, import of pre-trained weights, TensorBoard integration and new pre-trained models.
To make AI more accessible in hybrid and multicloud environments, Nvidia said the latest version of AI Enterprise adds support for Red Hat OpenShift running in public clouds, adding to its support of OpenShift on bare metal and VMware vSphere deployments. AI Enterprise 2.1 further adds support for the new Microsoft Azure NVads A10 v5 series virtual machines.
These are the first Nvidia virtual GPU instances offered by any public cloud, enabling more affordable fractional GPU sharing, the company explained. Customers can choose flexible GPU sizes ranging from one-sixth of an A10 GPU up to two full A10 GPUs.
A final update concerns Domino Data Lab Inc., whose enterprise MLOps platform has now been certified for AI Enterprise. Nvidia said the certification helps mitigate deployment risks and ensures the reliability and high performance of Domino’s MLOps platform running on AI Enterprise. By using the two platforms together, enterprises can benefit from workload orchestration, self-service infrastructure and increased collaboration, as well as cost-effective scaling on mainstream and virtualized accelerated servers, Nvidia said.
For companies interested in trying out the latest version of AI Enterprise, Nvidia said it’s offering new LaunchPad labs for them to play around with. LaunchPad is a service that provides immediate, short-term access to AI Enterprise in a private, accelerated computing environment with hands-on labs that customers can use to experiment with the platform. New labs include multi-node training for image classification on VMware vSphere with Tanzu, the ability to deploy an XGBoost fraud detection model using Nvidia Triton and more.
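For context on the XGBoost lab: Triton can serve gradient-boosted tree models such as XGBoost through its Forest Inference Library (FIL) backend, where the model file sits in a model repository alongside a `config.pbtxt`. A minimal sketch of what such a config might look like for a fraud detection model — the model name, feature count and file format here are assumptions for illustration, not details from the lab:

```protobuf
# config.pbtxt for a hypothetical XGBoost fraud model served
# via Triton's FIL backend (names and dims are illustrative).
name: "fraud_xgboost"
backend: "fil"
max_batch_size: 8192
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 30 ]   # number of transaction features (assumed)
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1 ]    # model score, e.g. fraud probability
  }
]
parameters [
  {
    key: "model_type"
    value: { string_value: "xgboost_json" }
  }
]
```

With a config along these lines, the same Triton server endpoints used for deep learning models also serve the tree model, which is what makes the fraud-detection lab a natural fit for the platform.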