# Pod Compatibility for Joulie
Joulie uses a single pod annotation to express workload placement intent:
`joulie.io/workload-class: performance | standard`
The scheduler extender reads this annotation and steers pods accordingly. No node affinity rules are needed.
## Workload classes
| Class | Behavior |
|---|---|
| `performance` | Must run on full-power nodes. The extender hard-rejects eco nodes. |
| `standard` | Default. Can run on any node. Adaptive scoring steers toward eco nodes when performance nodes are congested. |
If no annotation is present, the extender treats the pod as `standard`.
### Performance pod
Add the `joulie.io/workload-class: performance` annotation. The scheduler extender will reject eco nodes for this pod.
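A minimal manifest might look like the following sketch (the pod name, container name, and image are illustrative, not part of Joulie):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: latency-critical-api        # illustrative name
  annotations:
    joulie.io/workload-class: performance   # hard-rejects eco nodes
spec:
  containers:
    - name: api
      image: example.com/api:latest # illustrative image
```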
### Standard pod (default)
No annotation is required. The extender scores performance nodes higher but does not reject eco nodes.
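A pod with no Joulie annotation at all is scheduled as `standard`; a minimal sketch (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                # illustrative name; no Joulie annotation needed
spec:
  containers:
    - name: worker
      image: example.com/worker:latest  # illustrative image
```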
You can also be explicit:
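For instance, the same default intent written out explicitly in the pod metadata:

```yaml
metadata:
  annotations:
    joulie.io/workload-class: standard
```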
## GPU resource requests
GPU scheduling resources (`nvidia.com/gpu`, `amd.com/gpu`) are independent from Joulie workload classes.
- Request GPU resources as usual in pod/container resources.
- Set `joulie.io/workload-class` to express your placement intent.
- Joulie GPU capping is node-level (not per-container GPU slicing).
Example: a performance GPU inference pod:
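A sketch of such a pod, assuming a single NVIDIA GPU (pod name, container name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference               # illustrative name
  annotations:
    joulie.io/workload-class: performance   # placement intent for Joulie
spec:
  containers:
    - name: inference
      image: example.com/inference:latest   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1         # ordinary extended-resource request, independent of Joulie
```

The GPU request is handled by the device plugin and kube-scheduler as usual; the annotation only influences which node class the extender selects.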