Tech Products intermediate 3 min read May 6, 2026 · Updated May 8, 2026

Valkey: Redis fork that hit 1.2M RPS

“The same server that tops out at 360K requests/second on Redis 7.2 hits 1.19M on Valkey 8.0 — and the swap takes an afternoon, not a rewrite.”

Source · github.com

“"Valkey is not behaving like most forks and declining in interest, commits and project traction. Instead, it seems to have found a level of sustainable development velocity that shows no signs of stagnation, one enabled by a relatively diverse set of project backers." — Stephen ...”

You know that feeling when a core piece of your infrastructure changes its license overnight? In March 2024, Redis Inc. switched Redis from the permissive BSD-3-Clause license to dual SSPL/RSALv2 licensing — cloud providers and SaaS companies could no longer offer Redis-compatible services without paying licensing fees or open-sourcing their entire stack. At the same time, Redis 7.2's single-threaded architecture leaves CPU cores idle under heavy I/O: on a 16-vCPU machine you get the throughput of one thread while the other 15 sit unused. Valkey addresses both: a permissive BSD-3-Clause license that cannot be revoked, and an I/O threading model that reaches 1.19M RPS on the same hardware where Redis 7.2 tops out at 360K.

redis · key-value-store · caching · database · open-source · linux-foundation · distributed-systems

Valkey keeps Redis's core design: every command runs on a single thread so operations are atomic with no locks or race conditions. On top of that, Valkey 8.0 added a pool of async I/O threads (configured with --io-threads N) that handle all network reads and writes in parallel. Think of it like a restaurant where one chef still controls the kitchen — guaranteeing no two orders conflict — but now 8 runners carry food to tables simultaneously instead of one runner doing all the trips. You point your existing Redis client at Valkey's port and nothing else changes — same commands, same response format, same config file syntax.
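The split between one execution thread and a pool of I/O runners can be sketched in plain Python. This is a toy model of the pattern described above, not Valkey's actual C implementation: a single "chef" thread applies every command to the store in order (so no two commands race), while a pool of runner threads drains replies in parallel.

```python
import queue
import threading

store = {}
inbox = queue.Queue()   # parsed commands -> the single execution thread
outbox = queue.Queue()  # replies -> the pool of I/O "runner" threads

def execution_thread():
    # The one chef: applies commands to the store one at a time,
    # so operations are atomic without locks on the data itself.
    while True:
        cmd = inbox.get()
        if cmd is None:          # shutdown sentinel
            break
        op, key, *rest = cmd
        if op == "SET":
            store[key] = rest[0]
            outbox.put((key, "OK"))
        elif op == "GET":
            outbox.put((key, store.get(key)))

def io_runner(results):
    # In Valkey these threads do socket reads/writes in parallel;
    # here they just drain replies into a shared list.
    while True:
        item = outbox.get()
        if item is None:         # shutdown sentinel
            break
        results.append(item)

results = []
chef = threading.Thread(target=execution_thread)
runners = [threading.Thread(target=io_runner, args=(results,)) for _ in range(4)]
chef.start()
for r in runners:
    r.start()

inbox.put(("SET", "greeting", "hello"))
inbox.put(("GET", "greeting"))
inbox.put(None)
chef.join()
for _ in runners:
    outbox.put(None)
for r in runners:
    r.join()

print(store["greeting"])  # hello
```

The design point the sketch makes concrete: parallelism lives entirely at the I/O edge, so you get multi-core network throughput without giving up single-threaded command semantics.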

01 Async I/O threading (--io-threads N) — on an AWS c8g.2xlarge with Graviton4, Valkey 8.1 reaches 999.8K RPS for SET and 947.1K for GET versus Redis 8.0's 729.4K and 821.4K; p99 GET latency drops from 0.95 ms to 0.28 ms
02 100% Redis 7.2 API compatibility — your existing redis-py, Jedis, ioredis, or any other Redis 7.2 client works without modification; change the hostname and you are done
03 BSD-3-Clause license under Linux Foundation governance — no SSPL, no RSALv2, no relicensing possible; 47 organizations including Amazon and Google commit engineers, not just money
04 Hash field expiration (9.0) — set a TTL on individual fields inside a hash without deleting the entire key, a long-standing feature request that BSD-licensed Redis never shipped
05 Dual-channel replication — separates the replication stream from the data stream during full sync, cutting cluster join and failover sync time by up to 50%
06 Multi-DB in cluster mode (9.0) — use SELECT to switch logical databases inside a cluster, a constraint Redis cluster mode never removed
07 Public performance dashboard (valkey.io/performance/) — every commit on the unstable branch is benchmarked on AWS c8g.metal-48xl with NUMA isolation, so regressions are caught before they ship
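Per-field expiration (item 04) is easiest to see in miniature. The class below is a toy Python model of the semantics — a TTL attached to one hash field, reaped lazily on read, while sibling fields survive — not Valkey's implementation, and the method names merely echo the HSET/HEXPIRE/HGET command family.

```python
import time

class HashWithFieldTTL:
    """Toy sketch of per-field expiration on a hash; illustrative only."""

    def __init__(self):
        self._fields = {}   # field -> value
        self._expires = {}  # field -> absolute deadline (monotonic seconds)

    def hset(self, field, value):
        self._fields[field] = value

    def hexpire(self, field, ttl_seconds):
        # Attach a TTL to one field; the rest of the hash is unaffected.
        if field in self._fields:
            self._expires[field] = time.monotonic() + ttl_seconds
            return True
        return False

    def hget(self, field):
        deadline = self._expires.get(field)
        if deadline is not None and time.monotonic() >= deadline:
            # Lazily reap the expired field on access.
            del self._fields[field]
            del self._expires[field]
        return self._fields.get(field)

h = HashWithFieldTTL()
h.hset("session", "abc123")
h.hset("user", "alice")
h.hexpire("session", 0.05)   # only this field gets a TTL
time.sleep(0.1)
print(h.hget("session"))     # None  — expired field reaped
print(h.hget("user"))        # alice — sibling field untouched
```

Before this, the common workaround was one top-level key per field so each could carry its own TTL, which multiplies key count and loses the hash's grouping.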
Who it’s for

If you run Redis 7.2 in production and want out of the SSPL licensing conversation, Valkey is a direct path — swap the binary, enable I/O threading, and the migration is done in hours. If you are hitting 200–400K RPS on a multi-core instance while cores sit idle, adding --io-threads 8 gives you the equivalent of 3× the hardware at zero cost. This is NOT the right move if you have already upgraded to Redis 7.4+ — the dump file format changed, and migrating back requires third-party tooling that adds risk.
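Concretely, the swap described above comes down to one config directive. A minimal fragment, assuming the `io-threads` directive name carries over from the Redis 7.x config format (which Valkey keeps) — verify the exact name and default against the self-documented config file shipped with your Valkey version:

```
# valkey.conf — enable the async I/O thread pool
# (directive name assumed from the Redis 7.x format; check your version's
# bundled config file before relying on it)
io-threads 8
```

The same directive can be passed at startup as a command-line flag (`--io-threads 8`), which is the form the list above uses.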

Worth exploring

Yes, if you are on Redis 7.2.x — this is a straightforward swap with verified production adoption at AWS scale. The 37% throughput gain and 60%+ p99 latency improvement versus Redis 8.0 come from an independent Momento benchmark, not Valkey's own numbers. Hold off if you depend on Redis 7.4+ dump files or if your workload runs CPU-intensive sorted-set commands, where DragonflyDB's sharded model outperforms Valkey by 29× at 48 vCPUs per DragonflyDB's own benchmark.
