Redis is one of those tools that gets added to a stack reflexively, often before there's a problem it actually solves. This guide is about when Redis genuinely earns its place, when you can skip it, and what alternatives are worth a look in 2026.
When Redis is the right tool
Three legitimate use cases, in roughly the order you'll encounter them:
1. Session storage
If you run a web app behind multiple replicas, the in-memory session store stops being viable — a user landing on replica B doesn't have a session that was created on replica A. Redis is excellent at this: low-latency reads and writes for tiny payloads, with optional TTLs.
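A sketch of the pattern with ioredis (the client choice, the key naming, and the 24-hour TTL are assumptions; any Redis client works the same way):

```ts
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Store a session with a 24-hour TTL; Redis expires it for us.
async function saveSession(sessionId: string, data: object): Promise<void> {
  await redis.set(`session:${sessionId}`, JSON.stringify(data), "EX", 60 * 60 * 24);
}

// Any replica can read the session, regardless of which replica wrote it.
async function loadSession(sessionId: string): Promise<object | null> {
  const raw = await redis.get(`session:${sessionId}`);
  return raw ? JSON.parse(raw) : null;
}
```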
2. Application cache
Caching expensive computations or database lookups. The pattern: try `cache.get(key)`; on a miss, compute the value, `cache.set(key, value, ttl)`, and return it. Redis is purpose-built for this.
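Wrapped in a helper, the pattern looks like this (a sketch, again assuming ioredis; the function name and generic are illustrative):

```ts
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Cache-aside: return the cached value, or compute, store with a TTL, and return.
async function getOrCompute<T>(
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached) as T;

  const value = await compute();
  await redis.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}

// Usage: cache a slow computation for five minutes.
const report = await getOrCompute("report:daily", 300, async () => {
  return { generatedAt: Date.now() }; // stand-in for the expensive query
});
```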
3. Job queues
BullMQ, Sidekiq, RQ — all use Redis as their backing store. The combination of pub/sub, sorted sets, and atomic ops makes Redis the easiest queue to operate at small-to-medium scale.
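As a sketch of how little code a queue takes, here's a minimal BullMQ producer and worker; the queue name, payload, and local connection are made up for illustration:

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer: enqueue a job. BullMQ persists it in Redis data structures.
const emails = new Queue("emails", { connection });
await emails.add("welcome", { to: "user@example.com" });

// Consumer: a worker claims jobs and processes them.
new Worker(
  "emails",
  async (job) => {
    console.log(`sending ${job.name} to ${job.data.to}`);
  },
  { connection },
);
```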
When you don't need Redis
Skip Redis if any of these are true:
- Single-replica deploy. Use the in-process cache. It's faster and there's nothing to operate.
- You only cache things that change daily. Postgres + a `last_computed_at` column is fine.
- Your "queue" is < 10 jobs/second and idempotent. A `pending_jobs` Postgres table with a `SELECT FOR UPDATE SKIP LOCKED` query handles it without a second piece of infrastructure (there's a sketch of this in the next section).
The cost of an extra component is real: another thing to monitor, another thing to back up (or not), another thing to upgrade, another set of credentials.
The Postgres-as-cache pattern
For products in their first year, a single Postgres handling sessions, cache, and queues is cheaper, simpler, and faster to ship than Postgres + Redis. The tables:
```sql
-- Sessions
CREATE UNLOGGED TABLE sessions (
  id         text PRIMARY KEY,
  user_id    uuid,
  data       jsonb,
  expires_at timestamptz
);

-- Cache
CREATE UNLOGGED TABLE cache (
  key        text PRIMARY KEY,
  value      jsonb,
  expires_at timestamptz
);

-- Queue
CREATE TABLE jobs (
  id           bigserial PRIMARY KEY,
  payload      jsonb,
  available_at timestamptz NOT NULL DEFAULT now(),
  picked_up_by text,
  picked_up_at timestamptz
);
```
UNLOGGED tables skip WAL writes, which puts them closer to in-memory performance, at the cost of being truncated if the database crashes. For sessions and cache, that's the right tradeoff; the jobs table stays logged because queued work shouldn't vanish.
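The query half of the pattern, as a sketch with node-postgres (`pg`); the table and column names come from the schema above, while the helper names and TTL mechanics are illustrative assumptions:

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the PG* env vars

// Cache: read with an expiry check, and upsert with a TTL.
async function cacheGet(key: string): Promise<unknown | null> {
  const { rows } = await pool.query(
    "SELECT value FROM cache WHERE key = $1 AND expires_at > now()",
    [key],
  );
  return rows[0]?.value ?? null;
}

async function cacheSet(key: string, value: unknown, ttlSeconds: number): Promise<void> {
  await pool.query(
    `INSERT INTO cache (key, value, expires_at)
     VALUES ($1, $2, now() + make_interval(secs => $3))
     ON CONFLICT (key) DO UPDATE
       SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at`,
    [key, JSON.stringify(value), ttlSeconds],
  );
}

// Queue: claim one job atomically. SKIP LOCKED lets concurrent workers
// grab different rows without blocking each other.
async function claimJob(workerId: string) {
  const { rows } = await pool.query(
    `UPDATE jobs
        SET picked_up_by = $1, picked_up_at = now()
      WHERE id = (
        SELECT id FROM jobs
         WHERE picked_up_at IS NULL AND available_at <= now()
         ORDER BY id
         FOR UPDATE SKIP LOCKED
         LIMIT 1
      )
      RETURNING id, payload`,
    [workerId],
  );
  return rows[0] ?? null; // null means the queue is empty
}
```

Postgres has no built-in key expiry, so a periodic `DELETE FROM cache WHERE expires_at < now()` (cron, pg_cron, or a loop in the worker) stands in for Redis TTLs.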
You can move to Redis later if any of these become hotspots; you might never need to.
Dragonfly, KeyDB, and the modern alternatives
Redis was BSD-licensed for years, but since 2024 the licensing has been in flux (source-available RSAL/SSPL terms, later an AGPL option). Several drop-in compatible alternatives are worth knowing about:
- Dragonfly. A multithreaded, from-scratch reimplementation, usually faster than Redis at high throughput. Available as a marketplace add-on on most modern PaaS, including Launchverse.
- KeyDB. Multithreaded fork of Redis, often faster for cache-heavy workloads.
- Valkey. Linux Foundation fork of Redis 7.x; explicit BSD licence.
All three speak the Redis wire protocol, so your existing client libraries work without changes. If you'd otherwise reach for Redis on a PaaS, you can substitute any of these and decide later.
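If it helps to see how little changes, here's a sketch with ioredis; the hostname is made up, and the only Redis-specific thing in it is the URL:

```ts
import Redis from "ioredis";

// Point the same client at Dragonfly, KeyDB, or Valkey instead of Redis.
// Commands and pipelines behave identically because all of them speak
// RESP, the Redis wire protocol.
const cache = new Redis("redis://dragonfly.internal:6379");
await cache.set("hello", "world", "EX", 60);
```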
When to scale Redis up
Symptoms that it's time to take Redis seriously:
- Eviction count is non-zero in production. You're losing cache entries: either raise `maxmemory` or shorten TTLs. (A quick check follows this list.)
- Single-thread CPU is pegged. Redis is single-threaded for command execution; you've outgrown a single instance. Consider Dragonfly or sharding.
- Pub/sub is dropping messages. Redis pub/sub is fire-and-forget. If you can't afford to drop, switch to Streams or move to a real broker (NATS, RabbitMQ).
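A sketch of the first check, assuming ioredis: `INFO stats` includes a running `evicted_keys` counter, which should stay at zero on a healthy cache.

```ts
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// INFO stats returns a text blob; evicted_keys counts entries Redis has
// thrown away under memory pressure since the last restart.
async function evictedKeys(): Promise<number> {
  const stats = await redis.info("stats");
  const match = stats.match(/evicted_keys:(\d+)/);
  return match ? Number(match[1]) : 0;
}

if ((await evictedKeys()) > 0) {
  console.warn("Redis is evicting: raise maxmemory or shorten TTLs");
}
```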
What we run
Most Launchverse projects we observe in the wild start without Redis and add it 6–12 months in, when session sharing across replicas or queue throughput becomes the actual bottleneck. The marketplace makes it a one-click add-on at that point; the same project is running against Redis by tea time.