Limitations

Soft size guidance, provider plan limits, and request-throttling limits that affect CrossWatch behavior.

CrossWatch is not suitable for very large libraries. It is a UI-driven tool built around usability, including TMDb image enrichment and similar quality-of-life features. It is not designed as a bulk-sync engine for huge datasets.

Huge libraries require many API calls. Most trackers enforce limits in one way or another. Some use hard maximums. Others rely on fair-use policies. If you abuse their services, you can get banned. Yes, very large libraries are often treated as abuse.

This matters especially with TMDb enrichment. Large watchlists can hit the TMDb API hard because CrossWatch pulls extra metadata and images for the UI.
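A back-of-the-envelope estimate shows why. The per-item call count below is an assumption for illustration (one metadata lookup plus one image request per title), not CrossWatch's actual request pattern:

```python
# Rough estimate of TMDb request volume for a full enrichment pass.
# Both constants are assumptions, not CrossWatch defaults.
WATCHLIST_ITEMS = 2_500
CALLS_PER_ITEM = 2          # assumed: metadata lookup + image fetch
REQUESTS_PER_SECOND = 10    # assumed throttle

total_requests = WATCHLIST_ITEMS * CALLS_PER_ITEM
minutes = total_requests / REQUESTS_PER_SECOND / 60
print(f"{total_requests} requests, ~{minutes:.0f} min at {REQUESTS_PER_SECOND} req/s")
```

Even at the advised watchlist bound, a single enrichment pass is thousands of requests, which is why large libraries put real pressure on the TMDb API.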

Many trackers also do not want very large libraries synced this way. Some providers are more forgiving than others, but large-scale syncing is not something all of them handle well.

Size guidance

These are approximate limits. They depend on your hardware, provider behavior, and tracker tolerance.

Hardware is usually not the bottleneck.

On a normal machine or NAS, a library with about 50,000 to 100,000 items is usually fine. The tighter limits below are mostly about provider behavior, API pressure, and tracker tolerance.

CrossWatch advises these rough upper bounds for normal use:

  • Watchlist: ~2,500

  • History: ~10,000

  • Ratings: ~10,000

Lower counts are preferable. Higher counts will probably still work, but they are untested and come with the challenges described above. If your volumes fall within these ranges, you are probably good to go. If they do not, it is usually better not to use CrossWatch.


Prefer incremental windows for large backfills.

Ratings reads are not cached by design.
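One way to apply incremental windows is to split a backfill into fixed-size date ranges and sync them one run at a time. The `date_windows` helper below is a hypothetical sketch, not part of CrossWatch:

```python
from datetime import date, timedelta

def date_windows(start: date, end: date, days: int = 30):
    """Yield (window_start, window_end) pairs covering start..end in order."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur, nxt
        cur = nxt

# Example: backfill roughly a quarter of history in 30-day chunks.
windows = list(date_windows(date(2024, 1, 1), date(2024, 4, 15), days=30))
```

Running one window per sync keeps each run small and bounded, instead of replaying the whole history at once.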

Provider account limits

Some limits come from the provider, not CrossWatch itself.

If you have large watchlist volumes, it's better to disable TMDb enrichment.

Provider request limits

CrossWatch also throttles some upstream APIs to reduce 429 Too Many Requests errors.

Default request limits:

  • SIMKL: 10 GET/sec, 1 write/sec

  • MDBList: 10 GET/sec, 1 write/sec

  • Trakt: 3.33 GET/sec, 1 write/sec

See Provider rate limiting.

Practical advice

If your data is large:

  • Start with one pair and one feature

  • Use one-way first

  • Avoid large backfills in one run

  • Expect slower runs as datasets grow

If you see repeated slowdowns or unstable runs, reduce scope first.

Typical ways to do that:

  • Backfill a shorter date range

  • Sync one feature at a time

  • Limit libraries where supported
