# Node Selectors
Node Selectors allow you to assign Pods to specific nodes within a cluster. This is especially useful in environments where different workloads require different hardware configurations, such as CPU-intensive or GPU-intensive tasks. By using node selectors, you can optimize resource utilization and ensure that your applications run on the most suitable hardware.
You can learn more about Node Selectors in the Kubernetes documentation.
## Managing Node Selectors
To select a particular GPU type, use the selector `gpu.nvidia.com/class: <gpu-selector>` in your resource spec.
| Selector | Type | Use |
|---|---|---|
| `H100SXM-80` | H100 SXM 80GB | Training complex deep learning models and high-performance inference tasks |
| `H100-80` | H100 PCIe 80GB | High-performance inference/training |
| `L40S` | L40S PCIe 48GB | High-performance graphics/inference/training GPU |
| `L4` | L4 PCIe 24GB | Lower-cost graphics/inference/training GPU |
## Adding Node Selectors
To add a node selector to your pod configuration, include the `nodeSelector` field in your pod's specification. Here's how to specify node selectors for both GPU and CPU nodes:
### Deploying to GPU-Enabled Nodes
To deploy a pod that requires a GPU, add a node selector to your pod's specification to ensure it runs on a node equipped with a GPU.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:10.2-base
      resources:
        limits:
          nvidia.com/gpu: 1 # Requesting 1 NVIDIA GPU
  nodeSelector:
    gpu.nvidia.com/class: H100-80 # Selector for the specific GPU class
```
In this example, the pod `gpu-pod` is configured to run on nodes labeled with `gpu.nvidia.com/class: H100-80`. The container `cuda-container` uses an NVIDIA CUDA image and requires a GPU to operate.
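The same selector also works in higher-level workload objects such as Deployments, where it belongs in the pod template's spec. The sketch below follows the Pod example above; the Deployment name, labels, and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cuda-deployment # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cuda # illustrative label
  template:
    metadata:
      labels:
        app: cuda
    spec:
      containers:
        - name: cuda-container
          image: nvidia/cuda:10.2-base
          resources:
            limits:
              nvidia.com/gpu: 1
      nodeSelector:
        gpu.nvidia.com/class: H100-80 # same selector as the Pod example
```

Each replica created by this Deployment is scheduled only onto nodes carrying the `gpu.nvidia.com/class: H100-80` label.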
### Deploying to CPU-Only Nodes
If you need to deploy a pod that does not require a GPU, simply omit the GPU resource limits and the GPU node selector.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pod
spec:
  containers:
    - name: web-server
      image: nginx
```
This configuration ensures that `cpu-pod`, which runs a simple NGINX web server, can be scheduled on any node with the capacity to run this workload.
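If you want to make the scheduler's capacity decision explicit, you can also set CPU and memory requests on the container. This is a standard Kubernetes pattern rather than anything specific to node selectors, and the values below are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pod
spec:
  containers:
    - name: web-server
      image: nginx
      resources:
        requests:
          cpu: 500m     # illustrative values; size to your workload
          memory: 256Mi
```

With requests set, the scheduler only places the pod on a node with at least that much unreserved CPU and memory.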