---
title: Cross-Request LRU Caching
impact: HIGH
impactDescription: caches across requests
tags: server, cache, lru, cross-request
---

# Cross-Request LRU Caching

`React.cache()` only memoizes within a single request. For data shared across sequential requests (the user clicks button A, then button B), use an LRU cache.
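
For contrast, a minimal sketch of the per-request behavior (the `fetchUserFromDb` helper is hypothetical):

```typescript
import { cache } from 'react'

type User = { id: string; name: string }

// Hypothetical stand-in for your real data-access call.
declare function fetchUserFromDb(id: string): Promise<User>

// Memoized per request: repeated calls with the same id during one
// request share a single result; the memo is discarded afterwards,
// so the next request queries the DB again.
export const getUserOnce = cache(async (id: string) => fetchUserFromDb(id))
```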

Implementation:

```typescript
import { LRUCache } from 'lru-cache'

// Assumed: a Prisma-style database client; adjust the import to your setup.
import { db } from '@/lib/db'

const cache = new LRUCache<string, any>({
  max: 1000,            // cap memory use at 1000 entries
  ttl: 5 * 60 * 1000    // evict entries after 5 minutes
})

export async function getUser(id: string) {
  const cached = cache.get(id)
  if (cached) return cached

  const user = await db.user.findUnique({ where: { id } })
  // lru-cache rejects null/undefined values, so only cache found users
  if (user) cache.set(id, user)
  return user
}

// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```

Use when sequential user actions hit multiple endpoints needing the same data within seconds.
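
For example (the route path and import path below are assumptions), any endpoint calling the same `getUser` reuses the warm cache, so a second endpoint hit moments later skips the DB:

```typescript
// app/api/profile/route.ts — an illustrative Next.js route handler.
// '@/lib/get-user' is assumed to export the cached getUser() above.
import { getUser } from '@/lib/get-user'

export async function GET(request: Request) {
  const id = new URL(request.url).searchParams.get('id') ?? ''
  // The first endpoint to ask for this id pays the DB query; any other
  // endpoint calling getUser(id) within the TTL gets the cached value.
  return Response.json(await getUser(id))
}
```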

With Vercel's Fluid Compute: LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.

In traditional serverless: Each invocation runs in isolation, so consider Redis for cross-process caching.
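
A minimal cross-process variant of `getUser`, assuming ioredis, a `REDIS_URL` connection string, and the same hypothetical `db` client as above (key prefix and TTL are illustrative):

```typescript
import Redis from 'ioredis'
import { db } from '@/lib/db'  // same assumed Prisma-style client as above

const redis = new Redis(process.env.REDIS_URL!)

export async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`)
  if (cached) return JSON.parse(cached)

  const user = await db.user.findUnique({ where: { id } })
  if (user) {
    // 'EX', 300 sets a 5-minute expiry, matching the LRU cache's TTL
    await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 300)
  }
  return user
}
```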

Reference: https://github.com/isaacs/node-lru-cache