Cache
Sloppy ships first-party caches for normal backend app work:

- Cache.memory(...) for bounded per-process storage.
- Cache.sqlite(...), Cache.postgres(...), and Cache.sqlServer(...) for provider-backed distributed storage.
- Cache.redis(...) for Redis-backed distributed storage.
- Cache.hybrid(...) for memory-fronted distributed cache access.
- Route-level .outputCache(...) for server-side response caching.
- .cacheHeaders(...) and Results.*(...).cacheControl(...) for client/proxy cache headers.
```typescript
import { Cache, Results, Schema, Sloppy } from "sloppy";

const UserDto = Schema.object({
  id: Schema.integer(),
  name: Schema.string(),
});

const app = Sloppy.create();
const cache = Cache.memory("main", { maxEntries: 10000, ttlMs: 10000 });
app.services.addCache(cache);

app.get("/users/{id}", async (ctx) => {
  const user = await cache.getOrCreate(`users:${ctx.route.id}`, {
    ttlMs: 30000,
    tags: [`user:${ctx.route.id}`, "users"],
    schema: UserDto,
  }, async () => {
    return await ctx.services.get("users").findById(ctx.route.id);
  });
  return user === null ? Results.notFound() : Results.json(user);
});
```

Memory Cache
Cache.memory(nameOrOptions?, options?) stores JSON-serializable values in the current process. It is bounded by maxEntries, which defaults to a finite size; when the cache is full, it evicts expired entries first and then evicts least-recently-used entries.
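That eviction order can be sketched with a plain Map standing in for the bounded store. setBounded and Entry below are illustrative names under stated assumptions, not Sloppy's internals:

```typescript
// Eviction-order sketch: expired entries are reclaimed before any live
// entry is evicted. A Map preserves insertion order; a full LRU would
// also re-insert entries on read, which this sketch omits.
type Entry = { value: string; expiresAt: number };
const store = new Map<string, Entry>();

function setBounded(key: string, entry: Entry, maxEntries: number, now: number): void {
  store.delete(key);
  if (store.size >= maxEntries) {
    // 1) drop expired entries first
    for (const [k, e] of [...store]) {
      if (e.expiresAt <= now) store.delete(k);
    }
    // 2) still full: evict the least-recently-used (oldest in Map order)
    while (store.size >= maxEntries) {
      const oldest = store.keys().next().value as string;
      store.delete(oldest);
    }
  }
  store.set(key, entry);
}

setBounded("a", { value: "1", expiresAt: 100 }, 2, 0);
setBounded("b", { value: "2", expiresAt: 5 }, 2, 0);   // "b" expires early
setBounded("c", { value: "3", expiresAt: 100 }, 2, 10); // full: expired "b" goes first, "a" survives
```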
```typescript
const cache = Cache.memory("main", {
  maxEntries: 10000,
  ttlMs: 60000,
});

await cache.set("users:42", { id: 42 }, {
  tags: ["users", "user:42"],
  slidingExpirationMs: 10000,
});
```

Memory cache values are serialized and parsed on write/read, so callers do not receive the original mutable object reference.
Distributed Cache
Provider-backed caches require a real sloppy/data connection. They do not fall back to memory when a provider is unavailable.
```typescript
import { Cache, data } from "sloppy";

const db = data.sqlite.open({
  database: ":memory:",
  capability: "data.main",
});

const cache = Cache.sqlite(db, {
  name: "main",
  namespace: "app",
  table: "sloppy_cache_entries",
  ttlMs: 60000,
});
```

SQLite, PostgreSQL, and SQL Server use provider-specific SQL and store entries by namespace and cache_key. clear() deletes only the cache's own namespace. Use clear({ dangerouslyClearAll: true }) only when the whole cache table is intentionally owned by that cache.
Redis Cache
Cache.redis(...) is Sloppy's first Redis-backed cache provider. It builds on the first-party Redis client and never falls back to memory.
```typescript
import { Cache, Redis } from "sloppy";

await using redis = Redis.client("main", {
  url: "redis://127.0.0.1:6379/0",
});

await using cache = Cache.redis(redis, {
  name: "default",
  ttlMs: 60000,
  prefix: "myapp:",
});

await cache.set("user:1", { id: 1, name: "Ada" }, { tags: ["users"] });
const value = await cache.get("user:1");
```

The provider owns its Redis payload format and stores JSON values as bounded Sloppy value payloads. It does not provide an in-memory fallback.
Creation
```typescript
const cache = Cache.redis("default", {
  url: "redis://127.0.0.1:6379/0",
  ttlMs: 60_000,
  maxValueBytes: 1024 * 1024,
});
```

Cache.redis(name, options) creates and owns a Redis client. Cache.redis(redis, options) uses an existing client and does not dispose it unless disposeClient or ownsClient is set.
| Option | Default | Notes |
|---|---|---|
| name | client/cache name | Used in Redis keys and diagnostics. |
| client | created from Redis options | Existing Redis.client(...) instance. |
| url | required when no client is supplied | Passed to Redis.client(...). |
| ttlMs | 60000 | Default entry lifetime. |
| prefix | sloppy:cache: | Prefix for all cache keys, tag sets, and namespace sets. |
| maxValueBytes | 1048576 | Maximum encoded value size. |
Redis cache stores JSON values using a cache-owned J: payload format. Tagged and untagged entries use the same payload format. Tag sets store hashed Redis entry keys plus reverse entry-to-tag metadata so remove(...), invalidateTag(...), and clear() do not leave active cross-tag memberships behind. Expired entries may leave inert metadata until a later tag invalidation or clear removes it.
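A sketch of that tag bookkeeping, with in-process Sets standing in for Redis sets. setWithTags and invalidateTag are hypothetical helper names, not the provider's API or payload format:

```typescript
// Forward tag-to-keys sets plus reverse key-to-tags metadata.
const entries = new Map<string, string>();
const tagToKeys = new Map<string, Set<string>>();
const keyToTags = new Map<string, Set<string>>();

function setWithTags(key: string, value: string, tags: string[]): void {
  entries.set(key, value);
  keyToTags.set(key, new Set(tags));
  for (const tag of tags) {
    let members = tagToKeys.get(tag);
    if (!members) {
      members = new Set();
      tagToKeys.set(tag, members);
    }
    members.add(key);
  }
}

function invalidateTag(tag: string): void {
  for (const key of [...(tagToKeys.get(tag) ?? [])]) {
    entries.delete(key);
    // The reverse metadata lets invalidation drop the key from every
    // *other* tag set too, leaving no stale cross-tag membership behind.
    for (const other of keyToTags.get(key) ?? []) {
      tagToKeys.get(other)?.delete(key);
    }
    keyToTags.delete(key);
  }
  tagToKeys.delete(tag);
}

setWithTags("user:1", '{"id":1}', ["users", "user:1"]);
invalidateTag("users");
const gone = !entries.has("user:1");
const crossTagClean = (tagToKeys.get("user:1")?.size ?? 0) === 0;
```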
getOrCreate coalesces concurrent misses in the current process. It is not a distributed lock.
Hybrid Cache
Hybrid cache checks memory first, falls back to the distributed cache, and populates memory on distributed hits.
```typescript
const cache = Cache.hybrid("main", {
  memory: Cache.memory({ maxEntries: 10000, ttlMs: 10000 }),
  distributed: Cache.postgres(db, { ttlMs: 60000 }),
});
```

set, remove, invalidateTag, invalidateTags, clear, and cleanup apply to both layers. Distributed read failures throw by default; set failOpenOnDistributedRead: true only when a memory miss may safely become a cache miss.
Cache-Aside
getOrCreate(key, options, factory) prevents local stampedes by coalescing concurrent calls for the same key and namespace.
```typescript
const value = await cache.getOrCreate("dashboard:u_1", {
  ttlMs: 10000,
  tags: ["dashboard", "user:u_1"],
}, async (signal) => {
  return await loadDashboard(signal);
});
```

Factory failures are not cached. Schema failures are not cached. null is cached by default; pass cacheNull: false to return null without storing it.
Output Cache
Output cache stores safe server-side response descriptors for GET and HEAD routes.
```typescript
app.get("/products", listProducts)
  .outputCache({
    ttlMs: 30000,
    varyByQuery: ["category", "page"],
    varyByHeader: ["accept-language"],
    tags: ["products"],
  });
```

The output cache key uses the route pattern, selected query keys, selected headers, selected route params, and optional auth partition. It does not use raw full URLs, Authorization, or Cookie values.
Authenticated routes are not cached unless they vary by user:
```typescript
app.get("/me/dashboard", dashboard)
  .requiresAuth()
  .outputCache({
    ttlMs: 10000,
    varyByUser: true,
    tags: (ctx) => [`user:${ctx.user.sub}:dashboard`],
  });
```

Shared authenticated caching is opt-in. Use it only when every user in the selected role or claim partition is allowed to see exactly the same response.
```typescript
app.get("/admin/summary", adminSummary)
  .requiresAuth({ roles: ["admin"] })
  .outputCache({
    ttlMs: 5000,
    varyByRole: true,
    allowSharedAuthenticated: true,
  });
```

Output cache bypasses unsafe responses:

- methods other than GET/HEAD
- authenticated routes without varyByUser: true, unless shared authenticated caching is explicit
- responses with Set-Cookie, unless allowSetCookie: true
- status codes outside the configured allowlist
- unsupported descriptor kinds, including streams, files, redirects, and custom/native descriptors
- descriptors with functions, symbols, or non-JSON body values
- response bodies larger than maxBodyBytes
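Folded into one predicate, the bypass rules might look like this sketch (shouldBypass and the Descriptor shape are hypothetical, not Sloppy's internal types; descriptor-kind and body-value checks are omitted for brevity):

```typescript
type Descriptor = {
  method: string;
  authenticated: boolean;
  variesByUser: boolean;
  sharedAuthenticatedAllowed: boolean;
  setCookie: boolean;
  allowSetCookie: boolean;
  status: number;
  bodyBytes: number;
};

function shouldBypass(d: Descriptor, allowedStatuses: Set<number>, maxBodyBytes: number): boolean {
  if (d.method !== "GET" && d.method !== "HEAD") return true;           // unsafe method
  if (d.authenticated && !d.variesByUser && !d.sharedAuthenticatedAllowed) return true;
  if (d.setCookie && !d.allowSetCookie) return true;                    // Set-Cookie present
  if (!allowedStatuses.has(d.status)) return true;                      // status outside allowlist
  if (d.bodyBytes > maxBodyBytes) return true;                          // body too large
  return false;
}

const ok: Descriptor = {
  method: "GET", authenticated: false, variesByUser: false,
  sharedAuthenticatedAllowed: false, setCookie: false, allowSetCookie: false,
  status: 200, bodyBytes: 512,
};
const bypassed = shouldBypass({ ...ok, method: "POST" }, new Set([200]), 1024);
const cached = !shouldBypass(ok, new Set([200]), 1024);
```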
Cache Headers
Response cache headers tell clients and proxies what to do. They do not store a server-side response in Sloppy.
```typescript
app.get("/assets/config.json", () => Results.json({ version: 1 }))
  .cacheHeaders({
    cacheControl: "public, max-age=60",
    vary: ["Accept-Encoding"],
    etag: true,
  });

// Per-result header, e.g. inside a handler:
return Results.json(profile).cacheControl("private, max-age=30");
```

Use output cache for server-side reuse. Use cache headers for HTTP client/proxy policy.
Observability
Registered caches emit low-cardinality app metrics when they are resolved through app.services.addCache(...):
- cache.gets.total
- cache.hits.total
- cache.misses.total
- cache.sets.total
- cache.removes.total
- cache.evictions.total
- cache.expired.total
- cache.tag_invalidations.total
- cache.get_or_create.factory.total
- cache.stampede.waiters.total
Output cache emits output_cache.requests.total. Labels use cache names, backend kinds, route patterns, outcomes, status classes, and bypass reasons. Raw keys, raw URLs, user IDs, cookies, authorization headers, and values are not labels.
Health.cache(cache) reports a cache as unhealthy after disposal and includes safe stats.