# Pod Compatibility for Joulie
Joulie uses a single pod annotation to express workload placement intent:

`joulie.io/workload-class: performance | standard`
The scheduler extender reads this annotation and steers pods accordingly. No node affinity rules are needed.
## Workload classes

| Class | Behavior |
|---|---|
| `performance` | Must run on full-power nodes. The extender hard-rejects eco nodes. |
| `standard` | Default. Can run on any node. Adaptive scoring steers toward eco nodes when performance nodes are congested. |
If no annotation is present and no WorkloadProfile matches the pod, the extender treats it as standard.
### Performance pod

Add the `joulie.io/workload-class: performance` annotation. The scheduler extender will reject eco nodes for this pod.
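A minimal sketch of such a pod (the name, image, and resource values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-critical-api        # hypothetical name
  annotations:
    joulie.io/workload-class: performance   # hard-rejects eco nodes
spec:
  containers:
    - name: api
      image: example.com/api:latest # placeholder image
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
```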
### Standard pod (default)
No annotation is required. The extender scores performance nodes higher but does not reject eco nodes.
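A plain pod with no Joulie annotation is treated as `standard` (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend              # hypothetical name; no Joulie annotation needed
spec:
  containers:
    - name: web
      image: example.com/web:latest   # placeholder image
```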
You can also be explicit:
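The explicit form is just the same annotation with the `standard` value:

```yaml
metadata:
  annotations:
    joulie.io/workload-class: standard   # same behavior as omitting the annotation
```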
### Standard pod (batch / non-critical)
Standard pods can run on any node. The scheduler uses adaptive scoring to steer them toward eco nodes when performance nodes are congested.
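For batch work this is typically set on the pod template of a Job. A sketch, with hypothetical names and a placeholder image:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report            # hypothetical name
spec:
  template:
    metadata:
      annotations:
        joulie.io/workload-class: standard   # eligible for eco nodes
    spec:
      restartPolicy: Never
      containers:
        - name: report
          image: example.com/report:latest   # placeholder image
```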
## GPU resource requests

GPU scheduling resources (`nvidia.com/gpu`, `amd.com/gpu`) are independent from Joulie workload classes.

- Request GPU resources as usual in pod/container resources.
- Set `joulie.io/workload-class` to express your placement intent.
- Joulie GPU capping is node-level (not per-container GPU slicing).
Example: a performance GPU inference pod:
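A sketch of a performance-class pod requesting one GPU (name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference             # hypothetical name
  annotations:
    joulie.io/workload-class: performance   # keep off eco nodes
spec:
  containers:
    - name: inference
      image: example.com/inference:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1       # standard device-plugin resource request
```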
## Sensitivity annotations
For finer control, add sensitivity annotations so the extender can prefer nodes with more headroom:
| Annotation | Values | Effect |
|---|---|---|
| `joulie.io/workload-class` | `performance`, `standard` | Controls eco/performance placement |
| `joulie.io/cpu-sensitivity` | `high`, `medium`, `low` | Scales penalty on capped CPU nodes |
| `joulie.io/gpu-sensitivity` | `high`, `medium`, `low` | Scales penalty on capped GPU nodes |
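For example, a standard pod that is CPU-latency sensitive but indifferent to GPU capping could combine the annotations like this:

```yaml
metadata:
  annotations:
    joulie.io/workload-class: standard
    joulie.io/cpu-sensitivity: high   # penalize CPU-capped nodes heavily
    joulie.io/gpu-sensitivity: low    # GPU capping barely affects scoring
```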
All annotations are optional. If omitted and no WorkloadProfile matches the pod, the extender scores neutrally.
## WorkloadProfile-based scheduling
For teams that prefer not to annotate individual pods, create a WorkloadProfile with a podSelector matching your workload’s labels. The extender will use the profile’s fields to drive filter and score logic automatically.
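A rough sketch of such a profile; only `podSelector` is taken from this page, so the `apiVersion` and the `workloadClass` field name are assumptions — check the WorkloadProfile Guide for the actual schema:

```yaml
apiVersion: joulie.io/v1alpha1    # API version assumed
kind: WorkloadProfile
metadata:
  name: analytics-batch           # hypothetical name
spec:
  podSelector:
    matchLabels:
      team: analytics             # matches pods labeled team=analytics
  workloadClass: standard         # field name assumed
```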
See WorkloadProfile Guide for details.