# Joulie Agent
The agent is Joulie’s node-side enforcement component.
It consumes desired state and applies node-local controls through configured backends.
## Responsibilities
At each reconcile tick, the agent:
- identifies its node scope (single node in daemonset mode, sharded set in pool mode),
- reads the desired target (`NodePowerProfile`) for each owned node,
- reads telemetry/control routing (`TelemetryProfile`),
- applies controls (host or HTTP),
- exports metrics and status.
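The per-tick flow can be sketched as below. Everything here (`FakeAgent`, `reconcile_tick`, the field names) is a hypothetical stand-in for the agent's actual API, shown only to make the tick shape concrete:

```python
from dataclasses import dataclass, field

# Illustrative sketch of one reconcile tick; the class and field names are
# assumptions for this example, not the agent's real API.

@dataclass
class FakeAgent:
    nodes: list              # single node (daemonset) or a shard (pool)
    targets: dict            # node -> desired cap in watts (NodePowerProfile)
    applied: dict = field(default_factory=dict)

    def owned_nodes(self):
        return self.nodes

    def apply_controls(self, node, cap):
        # the real agent would act via host interfaces or HTTP here
        self.applied[node] = cap
        return "applied"

def reconcile_tick(agent):
    """One tick: read desired state per owned node, enforce, report status."""
    statuses = {}
    for node in agent.owned_nodes():
        cap = agent.targets[node]
        statuses[node] = agent.apply_controls(node, cap)
    return statuses
```

In pool mode the same loop simply iterates over a larger `owned_nodes()` set.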
## Inputs and outputs
Inputs:
- `NodePowerProfile` for the desired profile/cap
- `TelemetryProfile` for source/control backend selection
- node capability hints (for example NFD labels)
Outputs:
- control actions on host interfaces or simulator HTTP
- status updates (`TelemetryProfile.status.control`)
- Prometheus metrics (`/metrics`)
## Runtime modes
- `daemonset`:
  - one pod per selected real node
  - intended for real-hardware enforcement
- `pool`:
  - one pod controls multiple logical nodes (sharded)
  - intended for KWOK/simulator scale runs
Both modes should use the same managed-node selector contract (`joulie.io/managed=true`).
In practice, pool mode enforces this through `POOL_NODE_SELECTOR`; DaemonSet mode requires an explicit `nodeSelector` if you want strict alignment.
Detailed deployment/runtime configuration is documented in:
## Enforcement behavior
The agent does not choose cluster policy. It enforces operator intent and reports what happened: `applied`, `blocked`, or `error`.
This separation keeps policy logic centralized in the operator and actuator logic localized in the agent.
## CPU enforcement algorithm (current)
The agent enforces CPU power intent with this backend order:
1. Try a RAPL package cap first (`rapl.set_power_cap_watts` via the HTTP control backend, or host RAPL files).
2. If RAPL is unavailable or fails, switch to the DVFS fallback controller.
3. If RAPL becomes available again later, restore the DVFS throttle and return to RAPL mode.
Backend selection is visible in the metric `joulie_backend_mode{mode=none|rapl|dvfs}`.
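A minimal sketch of this fallback order, assuming a `try_rapl_cap` callable that stands in for the real actuation (host RAPL files or the HTTP `rapl.set_power_cap_watts` control); the `state` dict layout is illustrative, not the agent's actual data structure:

```python
# RAPL-first backend selection with DVFS fallback (illustrative sketch).
def select_backend(try_rapl_cap, cap_watts, state):
    """Return the active backend mode, 'rapl' or 'dvfs'.
    state['mode'] mirrors what joulie_backend_mode would export."""
    if try_rapl_cap(cap_watts):
        if state.get("mode") == "dvfs":
            state["throttle_pct"] = 0   # restore DVFS throttle on RAPL recovery
        state["mode"] = "rapl"
    else:
        state["mode"] = "dvfs"          # fall back to the DVFS controller
    return state["mode"]
```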
### DVFS fallback control loop
When DVFS fallback is active, each reconcile tick:
1. Read observed package power (from RAPL energy deltas in host mode, or the HTTP telemetry source).
2. Apply EMA smoothing: `ema = alpha * observed + (1 - alpha) * ema`.
3. Compute hysteresis thresholds around the desired cap: `upper = cap + DVFS_HIGH_MARGIN_W`, `lower = cap - DVFS_LOW_MARGIN_W`.
4. Update consecutive counters: increment `aboveCount` if `ema > upper`, increment `belowCount` if `ema < lower`.
5. Act only when a counter reaches `DVFS_TRIP_COUNT`:
   - above trip -> increase throttle by `DVFS_STEP_PCT`,
   - below trip -> decrease throttle by `DVFS_STEP_PCT`.
6. Enforce cooldown: no new action before `DVFS_COOLDOWN` has elapsed since the last action.
This gives both hysteresis and temporal damping, preventing oscillation.
Main tunables:
- `DVFS_EMA_ALPHA`
- `DVFS_HIGH_MARGIN_W`
- `DVFS_LOW_MARGIN_W`
- `DVFS_TRIP_COUNT`
- `DVFS_COOLDOWN`
- `DVFS_STEP_PCT`
- `DVFS_MIN_FREQ_KHZ`
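The loop can be sketched as a pure function over a state dict. The parameter names mirror the tunables above, but the defaults, the `state` layout, and the exact ordering of checks are illustrative assumptions, not the agent's implementation:

```python
# One DVFS fallback tick: EMA smoothing + hysteresis band + trip counters
# + cooldown (illustrative sketch).
def dvfs_tick(observed_w, cap_w, state, now_s,
              alpha=0.3, high_margin_w=5.0, low_margin_w=5.0,
              trip_count=3, cooldown_s=10.0, step_pct=10):
    # EMA smoothing of observed package power
    ema = state.get("ema")
    ema = observed_w if ema is None else alpha * observed_w + (1 - alpha) * ema
    state["ema"] = ema

    # Hysteresis thresholds around the desired cap
    upper, lower = cap_w + high_margin_w, cap_w - low_margin_w
    state["above"] = state.get("above", 0) + 1 if ema > upper else 0
    state["below"] = state.get("below", 0) + 1 if ema < lower else 0

    # Temporal damping: no new action during the cooldown window
    if now_s - state.get("last_action_s", float("-inf")) < cooldown_s:
        return None
    if state["above"] >= trip_count:
        state["throttle_pct"] = min(100, state.get("throttle_pct", 0) + step_pct)
        state["above"], state["last_action_s"] = 0, now_s
        return "throttle_up"
    if state["below"] >= trip_count:
        state["throttle_pct"] = max(0, state.get("throttle_pct", 0) - step_pct)
        state["below"], state["last_action_s"] = 0, now_s
        return "throttle_down"
    return None
```

Requiring `trip_count` consecutive out-of-band readings *and* a cooldown between actions is what prevents the controller from oscillating on noisy power samples.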
### DVFS actuation details
- Host mode:
  - write cpufreq `scaling_max_freq` files,
  - a fraction of CPUs is throttled according to `throttlePct`.
- HTTP mode:
  - send `dvfs.set_throttle_pct` to the simulator/backend endpoint.
Throttle state and actions are exported in:
- `joulie_dvfs_throttle_pct`
- `joulie_dvfs_above_trip_count`
- `joulie_dvfs_below_trip_count`
- `joulie_dvfs_actions_total{action=throttle_up|throttle_down}`
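Host-mode actuation might look like the following sketch. The "first N% of CPUs" selection policy and the `write_file` indirection are assumptions for illustration; only the `scaling_max_freq` sysfs path is standard cpufreq:

```python
# Throttle a fraction of CPUs by capping cpufreq scaling_max_freq
# (illustrative sketch; the real CPU-selection policy may differ).
def plan_cpufreq_writes(cpus, throttle_pct, max_khz, min_khz):
    """Return {cpu: freq_khz}: the first throttle_pct% of CPUs are capped
    to min_khz (cf. DVFS_MIN_FREQ_KHZ), the rest restored to max_khz."""
    n_throttled = round(len(cpus) * throttle_pct / 100)
    return {cpu: (min_khz if i < n_throttled else max_khz)
            for i, cpu in enumerate(cpus)}

def apply_cpufreq(plan, write_file):
    for cpu, khz in plan.items():
        # standard cpufreq sysfs attribute per CPU
        write_file(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq",
                   str(khz))
```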
## Performance -> eco transition and safeguards
This transition is safety-critical and is split between operator policy logic and agent enforcement.
### Who does what
- Operator:
  - decides whether a node is allowed to downgrade from performance to eco,
  - runs safeguard checks,
  - keeps or changes the published desired state accordingly.
- Agent:
  - enforces whatever desired state is currently published for that node,
  - does not bypass safeguards on its own.
### Safeguard goal
Prevent a node from dropping to eco while it still runs workloads that require performance supply.
### Step-by-step transition flow
1. Policy plans `performance -> eco` for node `N`.
2. The operator evaluates the safeguard on `N`:
   - classify active pods from scheduling constraints (`joulie.io/power-profile` affinity/selector),
   - detect whether performance-constrained pods are still running on `N`.
3. If performance-constrained pods are present:
   - the transition is deferred (internal operator FSM drain/defer phase),
   - the operator keeps the node target/supply effectively performance-facing.
4. The agent reconciles:
   - sees the performance-facing desired target,
   - keeps enforcing the performance cap/control backend behavior.
5. On later reconcile ticks, the operator re-checks the safeguard.
6. When no blocking performance-constrained pods remain:
   - the operator commits the eco target for `N`,
   - the agent enforces the eco cap/control on the next reconcile.
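The operator-side safeguard check can be sketched as follows. The simplified pod shape and helper names are hypothetical; a real check would also inspect node-affinity terms, not just `nodeSelector`:

```python
# Count pods on a node whose scheduling constraints require the
# performance profile (illustrative sketch of the safeguard check).
PROFILE_KEY = "joulie.io/power-profile"

def requires_performance(pod):
    # strict performance placement via nodeSelector
    # (affinity terms would be classified the same way)
    return pod.get("node_selector", {}).get(PROFILE_KEY) == "performance"

def safeguard_allows_eco(pods_on_node):
    """Return (allowed, blocking_count): eco is allowed only when no
    performance-constrained pods remain on the node."""
    blocking = [p for p in pods_on_node if requires_performance(p)]
    return len(blocking) == 0, len(blocking)
```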
### Transition FSM (with conditions)
```mermaid
stateDiagram-v2
    [*] --> ActivePerformance
    ActivePerformance --> DrainingPerformance: defer
    ActivePerformance --> ActiveEco: allow
    DrainingPerformance --> DrainingPerformance: still blocked
    DrainingPerformance --> ActiveEco: unblocked
    DrainingPerformance --> ActivePerformance: re-plan perf
    ActiveEco --> ActivePerformance: plan perf
```

Interpretation:

- `DrainingPerformance` is the operator transition state.
- In `DrainingPerformance`, the agent keeps enforcing the performance-facing target.
- In `DrainingPerformance`, the operator sets a temporary node label: `joulie.io/power-profile=draining-performance`.
- This temporary label is intentional: it is neither `performance` nor `eco`, so it prevents new strict `performance` and strict `eco` pods from matching that node during the transition.
- Goal: let blocking performance-sensitive pods drain without admitting new strict placements that would prolong or break the transition.
- Transition to `ActiveEco` only occurs when the safeguard condition becomes true (perf-constrained pods == 0).
Transition conditions:
- `defer`: policy plans eco and the node still has performance-constrained pods (count > 0).
- `allow`: policy plans eco and the node has no performance-constrained pods (count == 0).
- `still blocked`: the periodic re-check still finds blocking pods (count > 0).
- `unblocked`: the periodic re-check finds none (count == 0), so eco can be committed.
- `re-plan perf` / `plan perf`: a policy decision requires performance supply.
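Under these conditions the FSM reduces to a small transition table. This is an illustrative pure-function sketch (state and event names follow the diagram; the code is not the operator's implementation):

```python
# Transition FSM for performance -> eco, as a table plus an event deriver
# (illustrative sketch).
TRANSITIONS = {
    ("ActivePerformance", "defer"): "DrainingPerformance",
    ("ActivePerformance", "allow"): "ActiveEco",
    ("DrainingPerformance", "still blocked"): "DrainingPerformance",
    ("DrainingPerformance", "unblocked"): "ActiveEco",
    ("DrainingPerformance", "re-plan perf"): "ActivePerformance",
    ("ActiveEco", "plan perf"): "ActivePerformance",
}

def step(state, plans_eco, perf_pod_count):
    """Derive the event from the policy plan + safeguard count, then move."""
    if state == "ActivePerformance":
        event = ("defer" if perf_pod_count > 0 else "allow") if plans_eco else None
    elif state == "DrainingPerformance":
        event = "re-plan perf" if not plans_eco else (
            "still blocked" if perf_pod_count > 0 else "unblocked")
    else:  # ActiveEco
        event = "plan perf" if not plans_eco else None
    return TRANSITIONS.get((state, event), state)
```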
### Why this matters
- avoids violating workload placement/intent guarantees mid-flight,
- avoids abrupt performance loss for pods that explicitly require performance nodes,
- keeps transition behavior deterministic and auditable via operator/agent metrics and logs.
Current behavior is defer-until-safe (no forced eviction in this path).
For policy-side details, see:
## GPU path and DCGM (future)
Current implementation detects GPU vendor hints (NFD labels) and logs capabilities, but does not apply GPU caps yet.
Planned extension:
- add GPU control backend(s) (for example NVML/DCGM path),
- keep the same desired-state/enforcement contract style (`applied|blocked|error`),
- expose GPU control/telemetry metrics similarly to CPU.