# Caching
FederIQ has an optional result cache keyed by a content-addressed hash of the query SQL. Three backends ship in the default build:
- `memory` — in-process LRU with TTL (fastest; evicted on restart)
- `disk` — file-backed store sharded by hash prefix (durable; shareable across processes)
- `redis` — enabled with `--features cache-redis` (shared across nodes)
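To make "content-addressed hash of the query SQL" concrete, here is a minimal sketch of how such a key might be derived. The `cache_key` helper and its whitespace normalization are illustrative assumptions, not FederIQ's actual implementation, which may well use a cryptographic hash rather than `DefaultHasher`:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical cache key: hash the normalized SQL text so that
/// byte-identical queries map to the same cache entry.
fn cache_key(sql: &str) -> u64 {
    // Collapse runs of whitespace so trivially different spellings
    // of the same query share a key.
    let normalized = sql.split_whitespace().collect::<Vec<_>>().join(" ");
    let mut h = DefaultHasher::new();
    normalized.hash(&mut h);
    h.finish()
}
```

Because the key depends only on the SQL text, two queries that differ in formatting but not content hit the same entry, while any semantic change produces a new key.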
## From the CLI

```bash
federiq query --cache disk --cache-dir ./.cache "SELECT COUNT(*) FROM events"
federiq cache status --backend disk --dir ./.cache
federiq cache clear --backend disk --dir ./.cache
```
## From Rust

```rust
use federiq_core::{DiskCache, Engine, FreqPredictor};
use std::sync::Arc;

let engine = Engine::new()?
    .with_cache(Arc::new(DiskCache::new("./.cache")?))
    .with_predictor(Arc::new(FreqPredictor::new(10)));
engine.attach_all(&catalog.sources)?;
let rows = engine.query_cached("SELECT COUNT(*) FROM events")?;
```
## Predictors

`Predictor::observe(sql, hit)` is called on every cache access. The
default `FreqPredictor` counts queries; `predict()` returns the top-k
most-frequent queries for background warmup.
```rust
engine.warmup()?; // runs predictor.predict() and caches the results
```
A smarter ML-based predictor is on the roadmap; implement the
`Predictor` trait to plug in your own.
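A custom predictor can be sketched as follows. The trait definition here is a local stand-in inferred from the `observe`/`predict` behavior described above; the real signatures in `federiq_core` may differ. `TopK` is a hypothetical name that mirrors what `FreqPredictor` is described as doing:

```rust
use std::collections::HashMap;

/// Stand-in for the Predictor trait (assumed shape, not federiq_core's).
trait Predictor {
    fn observe(&mut self, sql: &str, hit: bool);
    fn predict(&self) -> Vec<String>;
}

/// Frequency counter in the spirit of FreqPredictor: remember how often
/// each query is seen and return the k most frequent for warmup.
struct TopK {
    k: usize,
    counts: HashMap<String, u64>,
}

impl Predictor for TopK {
    fn observe(&mut self, sql: &str, _hit: bool) {
        // Count every access, hit or miss.
        *self.counts.entry(sql.to_string()).or_insert(0) += 1;
    }

    fn predict(&self) -> Vec<String> {
        // Sort by descending count and keep the top k.
        let mut entries: Vec<_> = self.counts.iter().collect();
        entries.sort_by(|a, b| b.1.cmp(a.1));
        entries
            .into_iter()
            .take(self.k)
            .map(|(sql, _)| sql.clone())
            .collect()
    }
}
```

A predictor like this would then be handed to the engine via `with_predictor`, the same way `FreqPredictor` is above.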
## What's not cached yet

- Queries whose results depend on the calling role/region
  (planned for v0.5.1, when `PolicyContext` is plumbed through `query_cached`).
- Negative results (errors are not cached).
- Schema introspection.