👤 User opens a new issue on GitHub
A cache isolation bug is reported with steps to reproduce
vllm-project/semantic-router / Issues / New
bug security P1
Issue #1448
Semantic Cache Cross-User Data Leak

When the semantic cache is enabled, User A's cached responses are returned to User B whenever their queries are semantically similar. Cache keys have no user_id partitioning.

Step 1: Alice
Step 2: Bob
🔴 BUG CONFIRMED
Bob received Alice's data ($12,847.53)
Root cause: cache keys use only (model, query_embedding); there is no user_id partition
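The root cause can be illustrated with a minimal sketch of such a cache. The class and field names below are made up for illustration, not the actual semantic-router implementation:

```python
import numpy as np

class SemanticCache:
    """Toy sketch of the buggy cache: entries are keyed by
    (model, query embedding) only, so any user can hit any entry."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (model, embedding, response)

    def store(self, model, embedding, response):
        self.entries.append((model, embedding, response))

    def lookup(self, model, embedding):
        for m, emb, response in self.entries:
            if m != model:
                continue
            sim = float(np.dot(emb, embedding) /
                        (np.linalg.norm(emb) * np.linalg.norm(embedding)))
            if sim >= self.threshold:
                # BUG: no check that the entry belongs to the requesting user
                return response
        return None

cache = SemanticCache()
# Alice's personalized response gets cached
cache.store("qwen2.5:14b", np.array([1.0, 0.1]), "Your balance is $12,847.53")
# Bob's semantically similar query hits Alice's entry and leaks her data
leak = cache.lookup("qwen2.5:14b", np.array([0.95, 0.2]))
```

Because the lookup is purely (model, similarity), any user whose query embedding lands above the threshold receives whatever was cached, regardless of who cached it.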
🔬 OpinAI — Automated Bug Reproduction
Status: 🔴 Bug Confirmed
Evidence: Cache hit (similarity=0.92) returned User A's response to User B without user_id isolation
Environment: VSR v0.2 Athena, Ollama qwen2.5:14b, RTX 4090
Reproduction: 2/2 steps completed

Added to regression suite — will re-validate daily.
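The similarity score in the evidence above is a cosine similarity between query embeddings. A quick illustration with made-up vectors (real embeddings are high-dimensional):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for two semantically similar queries,
# e.g. "what's my account balance" vs. "show my current balance"
alice_q = np.array([0.8, 0.5, 0.3])
bob_q = np.array([0.7, 0.6, 0.2])

sim = cosine_similarity(alice_q, bob_q)
# Any score at or above the cache's hit threshold triggers reuse
# of the cached response
```

Distinct users asking about their own accounts naturally produce near-identical queries, which is exactly why similarity alone is an unsafe cache key for personalized responses.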
Day 1: Confirmed
Day 5: Still present
Day 12: Fix PR #1538 merged
Day 13: Bug FIXED; cache now partitions by user_id
🔄 Fix Validation — PR #1538 merged, re-running reproduction...
Step 1: Alice (same test)
Step 2: Bob (same test)
✅ BUG FIXED — Cache correctly skips personalized responses
Bob gets his own data ($3,291.07). Alice's data stays private.
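A user_id-partitioned cache can be sketched as follows; names are illustrative, not the actual code from PR #1538:

```python
import numpy as np

class PartitionedSemanticCache:
    """Toy sketch of the post-fix behavior: entries are partitioned
    by user_id, so one user's cached responses never hit for another."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = {}  # user_id -> list of (model, embedding, response)

    def store(self, user_id, model, embedding, response):
        self.entries.setdefault(user_id, []).append((model, embedding, response))

    def lookup(self, user_id, model, embedding):
        # Only this user's partition is ever searched
        for m, emb, response in self.entries.get(user_id, []):
            if m != model:
                continue
            sim = float(np.dot(emb, embedding) /
                        (np.linalg.norm(emb) * np.linalg.norm(embedding)))
            if sim >= self.threshold:
                return response
        return None

cache = PartitionedSemanticCache()
cache.store("alice", "qwen2.5:14b", np.array([1.0, 0.1]),
            "Your balance is $12,847.53")
# Bob's semantically similar query misses: Alice's partition is invisible to him
assert cache.lookup("bob", "qwen2.5:14b", np.array([0.95, 0.2])) is None
# Alice still gets her own cache hit
assert cache.lookup("alice", "qwen2.5:14b",
                    np.array([1.0, 0.1])) == "Your balance is $12,847.53"
```

Partitioning preserves the cache's latency benefit within a user's own session while making cross-user hits structurally impossible.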