MLOps – NVIDIA Technical Blog
17 MIN READ · Mar 23, 2026
Deploying Disaggregated LLM Inference Workloads on Kubernetes
As large language model (LLM) inference workloads grow in complexity, a single monolithic serving process starts to hit its…
The operational payoff of running Slurm on Kubernetes comes from the ecosystem. Rather than building and maintaining separate toolchains for GPU management, monitoring, networking, and node lifecycle, you can use the Kubernetes tooling that already exists for these problems. Platform teams manage clusters with declarative YAML, Helm deployments, rolling updates, and Prometheus or Grafana for observability.
Running Large-Scale GPU Workloads on Kubernetes with Slurm | NVIDIA Technical Blog
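To make the "declarative YAML plus Prometheus" point concrete, here is a minimal sketch of a Prometheus Operator ServiceMonitor that scrapes GPU metrics from a DCGM exporter Service. The label selector, namespace, and port name are assumptions about how the exporter might be deployed, not details taken from the article.

```yaml
# Hedged sketch: scrape GPU telemetry from a DCGM exporter with the Prometheus Operator.
# The selector labels, namespace, and port name are assumptions about a typical
# exporter deployment; adjust them to match your cluster.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gpu-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dcgm-exporter   # assumed label on the exporter Service
  namespaceSelector:
    matchNames:
      - gpu-operator                          # assumed namespace for GPU tooling
  endpoints:
    - port: metrics                           # assumed port name exposing /metrics
      interval: 30s
```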
NVSentinel is installed in each Kubernetes cluster it monitors. Once deployed, NVSentinel continuously watches nodes for errors, analyzes events, and takes automated actions such as quarantining, draining, labeling, or triggering external remediation workflows. Specific NVSentinel features include continuous monitoring, data aggregation and analysis, and more, as detailed in the full post.
Automate Kubernetes AI Cluster Health with NVSentinel | NVIDIA Technical Blog
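As a rough illustration of what "quarantining" a node amounts to at the Kubernetes level, the fragment below shows a Node that has been cordoned, tainted with NoSchedule, and labeled for follow-up. The taint key and label names are invented for illustration; they are not NVSentinel's actual keys.

```yaml
# Hedged sketch of a quarantined node: cordoning plus a NoSchedule taint keeps new
# pods off the node while it is investigated, and a label records its health state.
# The taint key and label key/value here are illustrative, not NVSentinel's real ones.
apiVersion: v1
kind: Node
metadata:
  name: gpu-node-17
  labels:
    example.com/health: quarantined     # hypothetical label marking the node unhealthy
spec:
  unschedulable: true                   # equivalent to `kubectl cordon`
  taints:
    - key: example.com/gpu-fault        # hypothetical taint key
      effect: NoSchedule
```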
Slinky slurm-operator represents each Slurm component (slurmctld for scheduling, slurmdbd for accounting, slurmd for compute workers, slurmrestd for API access) as a Kubernetes Custom Resource Definition (CRD). A Slurm cluster is defined using Custom Resources, and Slinky creates containerized Slurm daemons running in their own pods, configured to belong to their respective cluster. Slinky ensures high availability (HA) of the Slurm control plane (slurmctld) through pod regeneration, with no need for Slurm's native HA mechanism. Configuration changes propagate automatically: Kubernetes synchronizes…
Running Large-Scale GPU Workloads on Kubernetes with Slurm | NVIDIA Technical Blog
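To illustrate the Custom Resource idea, here is a rough sketch of what a declarative Slurm cluster definition could look like. The apiVersion, kind, and field names are placeholders made up for this sketch; they are not Slinky's actual CRD schema.

```yaml
# Hedged sketch only: apiVersion, kind, and field names are placeholders to show the
# Custom Resource pattern, NOT Slinky's real schema.
apiVersion: slinky.example.com/v1alpha1
kind: SlurmCluster
metadata:
  name: hpc-cluster
  namespace: slurm
spec:
  controller:             # would become a slurmctld pod managed by the operator
    replicas: 1
  accounting:             # slurmdbd
    enabled: true
  restApi:                # slurmrestd
    enabled: true
  computeNodeSets:        # slurmd workers, one pod per Slurm compute node
    - name: gpu-workers
      replicas: 8
      gpusPerNode: 8
```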
Before diving into Kubernetes manifests, it helps to understand the two inference deployment modes for LLMs: in aggregated serving, a single process (or tightly coupled group of processes) handles the entire inference lifecycle from input to output, while disaggregated serving splits the pipeline into distinct stages such as prefill, decode, and routing, each running as independent services (see Figure 1 in the post).
Deploying Disaggregated LLM Inference Workloads on Kubernetes | NVIDIA Technical Blog
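To ground the distinction, here is a minimal sketch of how the disaggregated mode might map onto Kubernetes objects: separate Deployments for the prefill and decode stages so each can be scaled and scheduled independently. Image names, labels, replica counts, and GPU requests are illustrative assumptions, not the manifests from the article.

```yaml
# Hedged sketch of disaggregated serving: prefill and decode run as separate
# Deployments so each stage scales on its own. Images, labels, and resource
# counts below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-prefill
spec:
  replicas: 2
  selector:
    matchLabels: { app: llm, stage: prefill }
  template:
    metadata:
      labels: { app: llm, stage: prefill }
    spec:
      containers:
        - name: prefill
          image: example.com/llm-prefill:latest   # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 4
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-decode
spec:
  replicas: 4                                     # decode often scales separately from prefill
  selector:
    matchLabels: { app: llm, stage: decode }
  template:
    metadata:
      labels: { app: llm, stage: decode }
    spec:
      containers:
        - name: decode
          image: example.com/llm-decode:latest    # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 2
```

A routing layer (for example, a gateway Service that directs each request's prefill and decode phases to the right pool) would sit in front of these Deployments; that piece is omitted from the sketch.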
…can move faster from prototyping to production. For self-managed deployments, platforms like Canonical (Ubuntu, Kubernetes, and MLOps tooling) enable running Nemotron models in private cloud, on-prem, or hybrid environments with…