A cache isolation bug is reported with steps to reproduce
vllm-project/semantic-router/ Issues / New
bug · security · P1
🤖 OpinAI detects the issue and analyzes feasibility
Classifying the bug, checking infrastructure requirements, planning reproduction
Issue #1448
Semantic Cache Cross-User Data Leak
With the semantic cache enabled, User A's cached responses are returned to User B whenever their queries are semantically similar. Cache keys have no user_id partitioning.
bug · security · P1
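The failure mode in the issue can be sketched with a toy semantic cache. All names, thresholds, and embeddings below are illustrative, not VSR internals; the point is that the lookup condition never consults a user identifier.

```python
def similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class SemanticCache:
    """Toy semantic cache reproducing the reported flaw."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (model, embedding, response)

    def get(self, model, embedding):
        for m, emb, resp in self.entries:
            # BUG: the match condition checks model and similarity only,
            # so any user's query can hit any other user's entry.
            if m == model and similarity(emb, embedding) >= self.threshold:
                return resp
        return None

    def put(self, model, embedding, response):
        self.entries.append((model, embedding, response))

cache = SemanticCache()
# Alice's query is answered and cached
cache.put("qwen2.5:14b", [0.6, 0.8], "Your balance is $12,847.53")
# Bob's semantically similar query hits Alice's entry
leaked = cache.get("qwen2.5:14b", [0.58, 0.81])
```

Here `leaked` comes back as Alice's response even though Bob issued the query, which is exactly the cross-user leak described above.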
🤖 OpinAI provisions the test environment
Provisioning test environment on OpenShift with GPU scheduling
🤖 OpinAI reproduces the bug with real infrastructure
Sending requests as two different users to trigger the cache leak
Step 1: Alice sends a query; her response is cached
Step 2: Bob sends a semantically similar query
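The two reproduction steps amount to a pair of OpenAI-compatible chat requests that differ only in the end user. The field names below follow the OpenAI chat completions schema (including its optional `user` field); the query text is illustrative.

```python
import json

def chat_request(user, content):
    """Build an OpenAI-style chat completion payload for one end user."""
    return {
        "model": "qwen2.5:14b",
        "user": user,  # end-user identifier the cache should respect
        "messages": [{"role": "user", "content": content}],
    }

# Step 1: Alice's query populates the semantic cache
step1 = chat_request("alice", "What is my current account balance?")
# Step 2: Bob's near-paraphrase should NOT hit Alice's cache entry
step2 = chat_request("bob", "What's my account balance right now?")

print(json.dumps(step1, indent=2))
```

With the bug present, posting the second request returns Alice's cached response even though the `user` field differs.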
🔴 BUG CONFIRMED
Bob received Alice's data ($12,847.53)
Root cause: cache key is (model, query_embedding) with no user_id partition
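The fix implied by that root cause is to make the user identifier part of the cache partition, so similarity matching only ever runs over the caller's own entries. A minimal sketch, assuming an illustrative API rather than VSR's actual code:

```python
from collections import defaultdict

class PartitionedSemanticCache:
    """Semantic cache whose entries are partitioned per (user_id, model).

    Illustrative fix: similarity matching is unchanged, but a lookup can
    only see entries written under the same (user_id, model) partition.
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.partitions = defaultdict(list)  # (user_id, model) -> [(emb, resp)]

    @staticmethod
    def _similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    def get(self, user_id, model, embedding):
        for emb, resp in self.partitions[(user_id, model)]:
            if self._similarity(emb, embedding) >= self.threshold:
                return resp
        return None

    def put(self, user_id, model, embedding, response):
        self.partitions[(user_id, model)].append((embedding, response))

cache = PartitionedSemanticCache()
cache.put("alice", "qwen2.5:14b", [0.6, 0.8], "Your balance is $12,847.53")
# Bob's similar query no longer reaches Alice's partition
assert cache.get("bob", "qwen2.5:14b", [0.58, 0.81]) is None
# Alice still gets her own cache hit
assert cache.get("alice", "qwen2.5:14b", [0.58, 0.81]) == "Your balance is $12,847.53"
```

Partitioning by key (rather than filtering results after a global similarity search) also keeps one user's entries from influencing another user's hit rate.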
🤖 OpinAI reports evidence and tracks the fix
Posting structured evidence to GitHub, tracking regression, and validating the fix
🔬 OpinAI — Automated Bug Reproduction
Status: 🔴 Bug Confirmed
Evidence: Cache hit (similarity=0.92) returned User A's response to User B without user_id isolation
Environment: VSR v0.2 Athena, Ollama qwen2.5:14b, RTX 4090
Reproduction: 2/2 steps completed
Added to regression suite — will re-validate daily.