With the acquisition of Run:ai, Nvidia aims to manage AI K8s

Nvidia on Wednesday announced the acquisition of AI-centric Kubernetes orchestration provider Run:ai in an effort to help bolster the efficiency of computing clusters built on GPUs.

Details of the deal weren’t disclosed, but it is reportedly valued at roughly $700 million. The Tel Aviv-based startup has apparently raised $118 million across four funding rounds since it was founded in 2018.

Run:ai’s platform provides a central user interface and control plane for working with a variety of popular Kubernetes variants. This makes it a bit like Red Hat’s OpenShift or SUSE’s Rancher, and it features many of the same tools for managing things like namespaces, user profiles, and resource allocations.

The key difference is that Run:ai’s platform is designed to integrate with third-party AI tools and frameworks, and to handle GPU-accelerated container environments. Its software portfolio includes elements like workload scheduling and accelerator partitioning, the latter of which allows multiple workloads to share a single GPU.
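
For a sense of the baseline Run:ai builds on: stock Kubernetes only hands out GPUs in whole units, via the nvidia.com/gpu resource exposed by Nvidia's device plugin. The sketch below, using the official Kubernetes Python client, shows that whole-GPU request; the fractional-sharing annotation is purely hypothetical, included to illustrate the kind of hint a partitioning scheduler might consume, and is not Run:ai's actual API.

```python
# Minimal sketch: submitting a GPU workload to Kubernetes with the official
# Python client (pip install kubernetes; assumes a working kubeconfig).
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="llm-inference",
        # Hypothetical annotation illustrating a fractional-GPU hint;
        # Run:ai's real configuration keys may differ.
        annotations={"example.com/gpu-fraction": "0.5"},
    ),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="inference",
                image="nvcr.io/nvidia/pytorch:24.03-py3",
                resources=client.V1ResourceRequirements(
                    # Stock Kubernetes only understands whole GPUs.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```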

According to Nvidia, Run:ai’s platform already supports its DGX compute platforms, including its SuperPOD configurations, the Base Command cluster management system, the NGC container library, and its AI Enterprise software suite.

Where AI workloads are concerned, Kubernetes claims a number of advantages over bare-metal deployments, as the environment can be configured to handle scaling across multiple, potentially geographically distributed, resources.
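
As a rough illustration of the scaling part, the sketch below uses the same Python client to bump the replica count of a hypothetical Deployment named inference; the Kubernetes scheduler then places the extra pods on whichever nodes have free capacity, wherever those nodes happen to sit.

```python
# Minimal sketch: scaling a hypothetical 'inference' Deployment to 8 replicas
# with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()

client.AppsV1Api().patch_namespaced_deployment_scale(
    name="inference",          # assumes this Deployment already exists
    namespace="default",
    body={"spec": {"replicas": 8}},
)
```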

For now, existing Run:ai customers needn’t worry about Nvidia imposing major changes to the platform. In a statement, Nvidia said it would continue to offer Run:ai’s products under the same business model for the immediate future, whatever that may mean.

Meanwhile, those subscribed to Nvidia’s DGX Cloud will get access to Run:ai’s feature set for their AI workloads, including large language model (LLM) deployments.

The announcement comes just over a month after the GPU giant unveiled a new container platform for building AI models, called Nvidia Inference Microservices (NIM).

NIMs are essentially pre-configured and optimized container images containing the model, whether open source or proprietary, with all the dependencies necessary to get it running.

Like most containers, NIMs can be deployed across a variety of runtimes including CUDA-accelerated Kubernetes nodes.
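
Outside Kubernetes, running one of these images is ordinary container work. The sketch below uses the Docker SDK for Python to start a GPU-backed container on a local Docker runtime; the image name is a hypothetical stand-in for whatever NIM images Nvidia publishes in its NGC registry.

```python
# Minimal sketch: launching a GPU-backed container with the Docker SDK for
# Python (pip install docker; assumes the NVIDIA Container Toolkit is set up).
import docker

docker_client = docker.from_env()

container = docker_client.containers.run(
    "nvcr.io/nim/example-llm:latest",   # hypothetical image name
    detach=True,
    ports={"8000/tcp": 8000},           # expose the service locally
    device_requests=[
        # Request all available GPUs from the runtime.
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
)
print(container.id)
```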

The idea behind turning LLMs and other AI models into microservices is that they can be networked together and used to build more complex and feature-rich AI applications than would otherwise be possible without training a dedicated model yourself, or at least that’s how Nvidia envisions folks using them.
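
In practice, "networked together" mostly means plain HTTP between containers. The sketch below assumes a model microservice listening on localhost port 8000 and exposing an OpenAI-style chat completions route; the endpoint path, port, and model name are assumptions for illustration, not Nvidia's documented NIM interface.

```python
# Minimal sketch: one service calling a model microservice over HTTP.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # assumed OpenAI-style route
    json={
        "model": "example-llm",                    # hypothetical model name
        "messages": [
            {"role": "user", "content": "Summarise GPU sharing in one sentence."}
        ],
    },
    timeout=60,
)
print(resp.json())
```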

With the acquisition of Run:ai, Nvidia now has a Kubernetes orchestration layer for managing the deployment of these NIMs across its GPU infrastructure. ®
