Applier
Write engine that executes provider adds/removes, chunks batches, and normalizes results.
Applier is the part of CrossWatch that actually writes changes.
It takes a plan and calls provider APIs to add or remove items.
What you will notice
- Logs show `apply:add:*` and `apply:remove:*` events.
- Large runs may show `apply:*:progress` events (chunking).
- Failed writes show up as `unresolved` counts.
When to care
- High `unresolved` usually means the destination rejected items.
- If a provider is flaky, the applier may retry and then give up.
- Dry runs should show planning, but no real writes.
Related:
- Write paths: One-way sync, Two-way sync
- Why writes get suppressed: Guardrails
- Why “failed items” stop retrying: Unresolved
Applier is the orchestrator’s write engine.
It calls provider `add()` and `remove()`.
It retries failures.
It can chunk large batches.
It normalizes provider responses into one result shape.
It records unresolved items when writes cannot be confirmed.
Code: `cw_platform/orchestrator/_applier.py`
Used by: One-way sync and Two-way sync
What it does (and what it does not do)
✅ It does:
- call provider write methods with retries
- split large item lists into chunks
- normalize provider result shapes into one dict
- emit progress and summary events
- record unresolved items via `record_unresolved()`
❌ It does not:
- verify writes by re-reading the destination
- compute diffs or guardrails
- apply tombstones, blackbox, or phantom guard logic
- run writes concurrently (it is sequential by design)
Entry points
`apply_add(...)`
Emits:
- `apply:add:start` (attempted count)
- `apply:add:progress` (per chunk, when chunking)
- `apply:add:done` (counts + normalized result)
`apply_remove(...)`
Same signature. It calls `dst_ops.remove(...)`.
Emits:
- `apply:remove:start`
- `apply:remove:progress`
- `apply:remove:done`
Provider contract (what applier expects)
The applier calls provider write methods on an `InventoryOps` instance:
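A minimal sketch of that write surface, expressed as a typing Protocol. The method names come from this page; the exact signatures and payload types are assumptions and may differ from the real `InventoryOps` interface:

```python
from typing import Any, Protocol


class InventoryWriteOps(Protocol):
    """Hypothetical subset of InventoryOps that the applier writes through."""

    def add(self, items: list[dict[str, Any]]) -> dict[str, Any]:
        """Add items at the destination; returns a flexible result dict."""
        ...

    def remove(self, items: list[dict[str, Any]]) -> dict[str, Any]:
        """Remove items at the destination; returns a flexible result dict."""
        ...
```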
The provider return value is flexible.
Applier looks for these keys.
A) Success indicator
- `ok` (bool, default: `True`)

B) Confirmations
Preferred:
- `confirmed` (int)
- `confirmed_keys` (list[str]) (best case)

Fallbacks (used only when `confirmed` is missing):
- `count` (int)
- `added` (int) for adds
- `removed` (int) for removes

C) Unresolved
`unresolved` can be:
- a list of item dicts (preferred), or
- a number

D) Errors
- `errors` (int)
Other keys are preserved.
Only these keys affect normalization and reporting.
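For example, a provider add call might return something like this (purely illustrative values and key formats; only the keys above affect normalization):

```python
# Illustrative provider return value, not captured from a real provider.
result = {
    "ok": True,
    "confirmed": 2,                                    # preferred count
    "confirmed_keys": ["imdb:tt0000001", "tmdb:12345"],  # best case for reporting
    "unresolved": [                                    # rejected/unmatched items
        {"title": "Some Movie", "ids": {"slug": "some-movie"}},
    ],
    "errors": 0,
    "provider_note": "extra keys like this are preserved",
}
```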
Normalized result shape
Both add and remove return the same structure:
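A sketch of that structure, consistent with the rules below (values are illustrative; the exact shape in `_applier.py` may carry additional keys, e.g. the raw provider result):

```python
# Sketch of the normalized result dict; field names follow the rules below.
{
    "attempted": 10,   # len(items)
    "confirmed": 7,    # derived per the confirmation rules
    "count": 7,        # alias of confirmed
    "unresolved": 2,
    "errors": 1,
    "skipped": 0,      # max(0, attempted - confirmed - unresolved - errors)
}
```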
Rules:
- `attempted = len(items)`
- `count` is always `confirmed` (alias)
- `skipped = attempted - confirmed - unresolved - errors` (clamped to ≥ 0)
Confirmation rules (how `confirmed` is derived)
Normalization logic in `_normalize(...)`:
1. If the provider returns `confirmed`, use it.
2. Else if `confirmed_keys` exists, use `len(confirmed_keys)`.
3. Else if `ok` is true, use the first non-empty of `count`, `added`, `removed`.
4. Else, treat `confirmed` as `0`.
If a provider returns `ok=true` but no counts:
- `confirmed` becomes `0`
- everything looks “skipped”
For accurate reporting, return one of:
- `confirmed_keys`
- `confirmed`
- `count`
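A minimal sketch of that derivation (not the actual `_normalize(...)` source; the helper name and dict handling are assumptions):

```python
def derive_confirmed(raw: dict) -> int:
    """Apply the confirmation rules above to a raw provider result dict."""
    if "confirmed" in raw:
        return int(raw["confirmed"])
    if "confirmed_keys" in raw:
        return len(raw.get("confirmed_keys") or [])
    if raw.get("ok", True):
        # First non-empty fallback count, in this order.
        for key in ("count", "added", "removed"):
            if raw.get(key):
                return int(raw[key])
    return 0
```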
Unresolved handling
Unresolved items are how applier prevents infinite “retry every run” loops.
If the provider response contains a non-empty `unresolved` list:
- applier emits `apply:unresolved`
- applier attempts `record_unresolved(dst, feature, ...)`
Recording rules:
1) Provider returns unresolved items with IDs
If an unresolved entry is an item dict whose `ids` contain one of `imdb`, `tmdb`, `tvdb`, or `slug`, applier records those items (see the sketch after rule 3).
It tags them as `apply:add:provider_unresolved` or `apply:remove:provider_unresolved`.
2) Provider returns unresolved items but they are not mappable
If applier cannot derive stable ID tokens, it records nothing.
3) Fallback: “confirmed == 0 and we tried things”
If `confirmed == 0` and `attempted > 0`, applier records all attempted items.
This is intentionally pessimistic.
It tags them as `apply:add:fallback_unresolved` or `apply:remove:fallback_unresolved`.
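As an illustration of rule 1, an unresolved entry shaped like the following would be recorded because its `ids` carry a stable token (shape and values are illustrative):

```python
# Mappable unresolved entry: ids contain at least one of imdb/tmdb/tvdb/slug.
entry = {
    "title": "Some Movie",
    "type": "movie",
    "ids": {"tmdb": 12345, "imdb": "tt0000001"},
}
# An entry without usable ids (e.g. {"title": "Some Movie"}) falls under
# rule 2 and is not recorded.
```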
Files:
`record_unresolved()` writes `*.unresolved.pending.json` under `/config/.cw_state/`.
Related: Unresolved and State
Retry behavior
All provider calls are wrapped in `_retry(...)`:
- attempts: `3`
- base sleep: `0.5s`
- backoff: `0.5s`, `1.0s`, `2.0s`
It retries on any exception.
It does not retry on `ok=false` return values.
That policy belongs to the provider.
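A minimal sketch of that policy as a standalone helper (the real `_retry(...)` signature and its exact sleep schedule are not shown on this page, so treat the details as assumptions):

```python
import time


def retry_call(fn, attempts: int = 3, base_sleep: float = 0.5):
    """Retry fn() on any exception; sleeps double per attempt (0.5s, 1.0s, ...)."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()          # an ok=false *return value* is not retried here
        except Exception:
            if attempt == attempts:
                raise            # out of attempts: surface the last exception
            time.sleep(base_sleep * (2 ** (attempt - 1)))
```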
Chunking (large batches)
Chunking happens in `_apply_chunked(...)`.
Rules:
- If `chunk_size <= 0` or `total <= chunk_size`: one call.
- Else:
  - split into slices of `chunk_size`
  - call provider once per chunk
  - aggregate totals across chunks
  - append `confirmed_keys` across chunks
  - emit `{tag}:progress` after each chunk

Optional pause: if `chunk_pause_ms > 0`, it sleeps after each chunk.
Chunk size selection is orchestrator-owned.
The orchestrator usually calls `effective_chunk_size(ctx, provider)` from `cw_platform/orchestrator/_chunking.py` and uses `ctx.apply_chunk_pause_ms` as a global throttle.
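A minimal sketch of that chunk loop, assuming the normalized totals described above (the `call_provider` and `emit` callables and the aggregation details are assumptions, not the `_apply_chunked(...)` internals):

```python
import time


def apply_chunked(items, call_provider, emit, tag, chunk_size=0, chunk_pause_ms=0):
    """Call the provider once per chunk and aggregate totals across chunks."""
    total = len(items)
    if chunk_size <= 0 or total <= chunk_size:
        return call_provider(items)                      # single call, no chunking

    totals = {"attempted": 0, "confirmed": 0, "unresolved": 0,
              "errors": 0, "confirmed_keys": []}
    for start in range(0, total, chunk_size):
        chunk = items[start:start + chunk_size]
        result = call_provider(chunk)                    # one provider call per chunk
        for key in ("attempted", "confirmed", "unresolved", "errors"):
            totals[key] += int(result.get(key, 0))
        totals["confirmed_keys"] += list(result.get("confirmed_keys", []))
        emit(f"{tag}:progress", {"done": start + len(chunk), "total": total})
        if chunk_pause_ms > 0:
            time.sleep(chunk_pause_ms / 1000.0)          # optional throttle
    return totals
```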
Events emitted
Common payload keys:
`dst`, `feature`, `count`, `attempted`, `skipped`, `unresolved`, `errors`, `result`
Events:
- `apply:add:start`
- `apply:add:progress`
- `apply:add:done`
- `apply:remove:start`
- `apply:remove:progress`
- `apply:remove:done`
- `apply:unresolved`
Events go through the orchestrator Emitter.
They land in logs and the UI event stream.
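An `apply:add:done` event might carry a payload like this (key names from the list above; the destination name and numbers are illustrative):

```python
# Illustrative apply:add:done payload, not captured from a real run.
{
    "dst": "TRAKT",
    "feature": "watchlist",
    "attempted": 25,
    "count": 23,
    "skipped": 0,
    "unresolved": 2,
    "errors": 0,
    "result": {"ok": True, "confirmed": 23},
}
```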
Provider checklist
If you implement a provider, do this:
- Return `confirmed_keys` whenever possible.
- Otherwise return `confirmed` or `count`.
- Return `unresolved` as item dicts with stable `ids`.
- Avoid `ok=true` with no counts.
If you return `None`, applier treats it as `{}`.
That usually makes runs look like no-ops.
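Putting the checklist together: a well-behaved return versus the anti-pattern to avoid (values and key formats are illustrative):

```python
# Good: stable keys and mappable unresolved items -> accurate reporting.
good = {
    "ok": True,
    "confirmed_keys": ["imdb:tt0000001", "tmdb:12345"],
    "unresolved": [{"title": "Some Show", "ids": {"slug": "some-show"}}],
}

# Problematic: ok=true with no counts -> confirmed becomes 0 and the
# whole batch looks "skipped".
problematic = {"ok": True}
```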
Related pages
- Where diffs come from: Planner
- Where applies happen: One-way sync, Two-way sync
- Guardrails around deletes: Guardrails
- Write failure suppression: Blackbox, Unresolved