Runtime
Runtime config knobs for debug output, snapshot caching, chunking, and telemetry warnings.
Runtime settings control how CrossWatch behaves during runs.
Most users should leave defaults alone.
The only knobs most setups need
- runtime.debug: show extra detail in logs.
- runtime.snapshot_ttl_sec: keep this at 0 if you suspect staleness.
- runtime.apply_chunk_size: smaller chunks reduce blast radius.
- runtime.apply_chunk_pause_ms: add a pause if you hit rate limits.
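For orientation, here is what those four knobs look like together. The values are illustrative, not recommendations, shown as the Python dict the orchestrator reads:

```python
# Illustrative values only -- not recommendations.
# This is the shape the orchestrator reads from config["runtime"].
config = {
    "runtime": {
        "debug": True,               # extra detail in logs
        "snapshot_ttl_sec": 0,       # 0 = always rebuild snapshots
        "apply_chunk_size": 50,      # split writes into batches of 50
        "apply_chunk_pause_ms": 250, # pause 250 ms between batches
    }
}
```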
Safe defaults
- Snapshot TTL: 0
- Chunking: off, unless a provider is picky
Related:
- Caching behavior: Caching layers
- Chunk sizing: Chunking
This doc covers the “runtime” config that controls how the orchestrator behaves at run time: debug output, snapshot caching, suspect snapshot guards, apply chunking, telemetry warnings, and how the ctx object is built.
Implementation notes
Primary code: orchestrator/facade.py (class Orchestrator)
Also relevant: orchestrator/_chunking.py, orchestrator/_telemetry.py, orchestrator/_pairs.py
Where runtime config is read
The orchestrator reads runtime options from:
config["runtime"](primary)and telemetry thresholds from:
config["telemetry"], orconfig["runtime"]["telemetry"](fallback)
All parsing happens in Orchestrator.__post_init__().
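A minimal sketch of that lookup order; the helper name `_read_runtime_config` is hypothetical, but the fallback chain (config["telemetry"] first, then config["runtime"]["telemetry"]) is the one described above:

```python
def _read_runtime_config(config: dict) -> tuple[dict, dict]:
    """Hypothetical helper mirroring the parsing order in __post_init__."""
    runtime = config.get("runtime") or {}
    # Telemetry thresholds: top-level "telemetry" wins; runtime.telemetry is the fallback.
    telemetry = config.get("telemetry") or runtime.get("telemetry") or {}
    return runtime, telemetry
```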
runtime options (actual keys in code)

runtime.debug (bool)

Default: false
Enables ctx.dbg(...) output. Debug output is emitted as either:

- a debug structured event (when fields are supplied), or
- a plain [DEBUG] ... line.
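A sketch of how that gating could look. The emitter signatures and the factory function are assumptions, not the actual facade code; only the two output shapes come from the description above:

```python
def make_dbg(debug_enabled: bool, emit, emit_info):
    """Return a dbg(...) callable gated by runtime.debug (sketch, not the real code)."""
    def dbg(msg: str, **fields):
        if not debug_enabled:
            return  # runtime.debug is false: debug output is suppressed entirely
        if fields:
            emit("debug", msg=msg, **fields)   # structured `debug` event
        else:
            emit_info(f"[DEBUG] {msg}")        # plain [DEBUG] ... line
    return dbg
```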
runtime.snapshot_ttl_sec (int)

Default: 0 (disabled)
If > 0, snapshots built via _snapshots.build_snapshots_for_feature(...) can be reused from ctx.snap_cache within the same run (until the TTL expires). This only affects within-run caching; there is no cross-run snapshot cache.
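A minimal sketch of within-run TTL caching. The (timestamp, snapshot) layout of ctx.snap_cache is an assumption; only the "reuse within TTL, rebuild otherwise" behavior comes from the description above:

```python
import time

def cached_snapshot(ctx, key: str, build):
    """Reuse a snapshot from ctx.snap_cache while its TTL holds (sketch only)."""
    ttl = getattr(ctx, "snap_ttl_sec", 0)
    if ttl > 0 and key in ctx.snap_cache:
        ts, snap = ctx.snap_cache[key]       # assumed (timestamp, snapshot) layout
        if time.monotonic() - ts < ttl:
            return snap                      # still fresh: reuse within this run
    snap = build()                           # e.g. build_snapshots_for_feature(...)
    ctx.snap_cache[key] = (time.monotonic(), snap)
    return snap
```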
runtime.suspect_min_prev (int)

Default: 20

Minimum baseline size required before the drop guard will even consider a snapshot “suspect”.
runtime.suspect_shrink_ratio (float)

Default: 0.10
Used in two places (see the sketch after this list):

- Drop guard (snapshot coercion): “did current snapshot shrink too much?”
- Mass delete protection: “are planned removals too large?”
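Roughly, both checks reduce to the same comparison. This sketch shows the arithmetic under the defaults above (suspect_min_prev=20, suspect_shrink_ratio=0.10); it illustrates the described behavior and is not the guard's actual code:

```python
def shrank_too_much(prev_count: int, cur_count: int,
                    min_prev: int = 20, ratio: float = 0.10) -> bool:
    """True when a snapshot (or planned delete set) lost more than `ratio` of items.

    Illustration of the described behavior, not the guard's actual code.
    """
    if prev_count < min_prev:
        return False  # baseline too small: the guard does not apply
    return (prev_count - cur_count) / prev_count > ratio

# Example: a baseline of 100 items shrinking to 85 trips the 10% guard.
assert shrank_too_much(100, 85) is True
assert shrank_too_much(10, 0) is False  # below suspect_min_prev
```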
runtime.suspect_debug (bool)

Default: true

When the drop guard triggers, this controls how noisy the debug reporting is.
runtime.apply_chunk_size (int)

Default: 0 (no chunking)

When > 0, write batches are split into chunks of this size before calling provider add/remove.
runtime.apply_chunk_pause_ms (int)

Default: 0
Optional pause between chunks (milliseconds).
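Put together, apply_chunk_size and apply_chunk_pause_ms behave roughly like this loop. It is a sketch of the described behavior, not the code in _chunking.py:

```python
import time
from typing import Callable, Sequence

def apply_in_chunks(items: Sequence, apply: Callable[[Sequence], None],
                    chunk_size: int = 0, pause_ms: int = 0) -> None:
    """Split a write batch into chunks, optionally pausing between them (sketch)."""
    if chunk_size <= 0:
        apply(items)  # chunking disabled: one call with the full batch
        return
    for start in range(0, len(items), chunk_size):
        apply(items[start:start + chunk_size])
        if pause_ms > 0 and start + chunk_size < len(items):
            time.sleep(pause_ms / 1000.0)  # rate-limit-friendly pause between chunks
```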
runtime.apply_chunk_size_by_provider (mapping)

Also accepted aliases (same behavior):

- apply_chunk_sizes_by_provider
- apply_chunk_sizes
Example:
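The provider names here are illustrative; the shape of the mapping is what matters:

```python
# Provider names are illustrative. Keys are matched case-insensitively
# (uppercased internally) and values must be positive ints.
runtime = {
    "apply_chunk_size": 100,  # base size
    "apply_chunk_size_by_provider": {
        "PLEX": 25,    # smaller batches for this provider
        "TRAKT": 200,  # bigger batches for this one
    },
}
```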
- Keys are uppercased internally.
- Values must be positive ints.
- When present, it overrides the base chunk size for that provider.
Resolution logic lives in orchestrator/_chunking.py (effective_chunk_size(ctx, provider_name)).
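A sketch of that resolution order, assuming the per-provider override wins over the base size. The body approximates effective_chunk_size from the description above; it is not a copy of it:

```python
def effective_chunk_size(ctx, provider_name: str) -> int:
    """Per-provider override first, then the base runtime.apply_chunk_size (sketch)."""
    overrides = getattr(ctx, "apply_chunk_size_by_provider", {}) or {}
    size = overrides.get(provider_name.upper())  # keys are uppercased internally
    if isinstance(size, int) and size > 0:
        return size
    return getattr(ctx, "apply_chunk_size", 0)   # 0 means "no chunking"
```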
Telemetry warning thresholds
The orchestrator builds a threshold map at init:
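The exact shape of that map is not shown here; as an assumption, it could be as simple as a per-provider "warn when remaining drops below N" mapping:

```python
# Hypothetical shape -- the real map is built in __post_init__ from
# config["telemetry"] (or config["runtime"]["telemetry"] as a fallback).
thresholds = {
    "PLEX": 100,   # warn when "rate remaining" drops below 100
    "TRAKT": 50,
}
```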
At the end of run_pairs(...), it calls:
- ctx.stats.record_summary(added=..., removed=...)
- ctx.emit_rate_warnings()
emit_rate_warnings() delegates to orchestrator/_telemetry.py:
maybe_emit_rate_warnings(stats, emit, thresholds)
This expects stats.http_overview(hours=24) to return per-provider “rate remaining” style info (depends on your Stats backend).
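As a sketch of the flow (not the actual _telemetry.py code), the delegation amounts to comparing each provider's remaining budget against its threshold and emitting a warning when it is low. The overview shape, event name, and payload here are all assumptions:

```python
def maybe_emit_rate_warnings(stats, emit, thresholds: dict) -> None:
    """Sketch: warn per provider whose 24h 'rate remaining' is below its threshold.

    Assumes stats.http_overview(hours=24) returns {provider: {"rate_remaining": int}};
    the event name and payload shape are illustrative.
    """
    overview = stats.http_overview(hours=24) or {}
    for provider, info in overview.items():
        remaining = info.get("rate_remaining")
        limit = thresholds.get(provider.upper())
        if remaining is not None and limit is not None and remaining < limit:
            emit("rate.warning", provider=provider,
                 remaining=remaining, threshold=limit)
```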
The execution context (ctx)

Orchestrator.context returns a SimpleNamespace that is passed into run_pairs(ctx).
Fields set today:
- config: full config dict (mutable copy)
- providers: loaded InventoryOps map
- emit: structured emitter (Emitter.emit)
- emit_info: raw line emitter (Emitter.info)
- dbg: debug emitter (gated by runtime.debug)
- debug: bool
- dry_run: bool
- conflict: conflict policy object (ConflictPolicy)
- state_store: StateStore instance
- stats: Stats instance (wrapper)
- emit_rate_warnings: bound method (calls telemetry warnings)
- tomb_prune: bound tombstone prune function
- only_feature: optional feature filter
- write_state_json: bool
- state_path: where to write state (default: state_store.state)
- snap_cache: dict used for within-run snapshot caching
- snap_ttl_sec: snapshot TTL seconds
- apply_chunk_size, apply_chunk_pause_ms, apply_chunk_size_by_provider
Pipelines treat ctx as a “bag of knobs” and avoid global variables.
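A trimmed sketch of what such a namespace looks like, showing only a few of the fields above with placeholder values; SimpleNamespace itself is what Orchestrator.context returns:

```python
from types import SimpleNamespace

# Trimmed sketch: a handful of the fields listed above, with placeholder values.
ctx = SimpleNamespace(
    config={"runtime": {"debug": False}},
    debug=False,
    dry_run=True,
    snap_cache={},        # within-run snapshot cache
    snap_ttl_sec=0,
    apply_chunk_size=0,
    apply_chunk_pause_ms=0,
    apply_chunk_size_by_provider={},
)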
Run-time flags passed through Orchestrator.run(...)

Orchestrator.run(...) supports:

- dry_run: forces no writes (providers still called for reads)
- only_feature: run only a single feature (if supported by the pair)
- write_state_json: controls whether state changes are persisted
- state_path: custom state file location
- progress: controls output destination:
  - a callable: progress(line) receives every emitted string
  - True: print to stdout if no callback exists
  - False: silence output
Unknown **kwargs are ignored (debug event: run.kwargs.ignored).
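A typical invocation might look like this. The constructor arguments are an assumption; the run(...) flags are the ones listed above:

```python
from orchestrator.facade import Orchestrator  # per "Implementation notes" above

lines: list[str] = []

def capture(line: str) -> None:
    lines.append(line)  # progress callable: receives every emitted string

# Constructor arguments are an assumption; the run(...) flags are documented above.
orc = Orchestrator(config={"runtime": {"debug": True}})
orc.run(
    dry_run=True,             # no writes; providers still called for reads
    only_feature="watchlist",
    write_state_json=False,
    progress=capture,         # or True for stdout, False to silence output
)
```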
End-of-run extras
After run_pairs(ctx) returns, Orchestrator.run() also:
- Persists a “wall”: _persist_state_wall(feature="watchlist") builds state["wall"] as a de-duplicated list of minimal baseline items across providers for that feature.
- Clears the UI hide file: state_store.clear_watchlist_hide()
- Emits metrics snapshots if available:
  - http:overview with window_hours=24 (from stats.http_overview(hours=24))
  - stats:overview (from stats.overview(state))
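As a sketch of the “wall” step, building it amounts to merging per-provider baselines and keeping one minimal entry per item. The dedup key and the fields kept are assumptions:

```python
def build_wall(baselines: dict[str, list[dict]]) -> list[dict]:
    """Merge per-provider baselines into one de-duplicated minimal list (sketch).

    The dedup key ("key") and the minimal fields kept are assumptions.
    """
    seen: dict[str, dict] = {}
    for provider, items in baselines.items():
        for item in items:
            key = item["key"]
            if key not in seen:
                seen[key] = {"key": key, "title": item.get("title")}
    return list(seen.values())
```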
Related pages
- Snapshot TTL and drop guard: Snapshots
- Suspect shrink ratio and delete guards: Guardrails
- Event format and naming: Eventing