
CVE-2025-62164

Severity: HIGH | CVSS 3.1: 8.8 | EPSS: 0.13%
Updated Dec 04, 2025
vLLM

CVSS: 8.8 (HIGH)
Affected Versions: 0.10.2 (inclusive) to 0.11.1 (exclusive)
Fixed In: 0.11.1
Type: CWE-20 (Improper Input Validation), CWE-787 (Out-of-bounds Write), CWE-502 (Deserialization of Untrusted Data), CWE-123 (Write-what-where Condition)
Vendor: vLLM
Public PoC: No

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability exists in the Completions API endpoint that could lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation.

Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM.
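The integrity checks in question normally verify that every index in a sparse COO tensor lies inside the tensor's declared shape before to_dense() writes values into the dense buffer. As an illustration only (this is not vLLM's or PyTorch's actual code), a bounds check of this kind can be sketched in pure Python:

```python
def validate_coo_indices(indices, shape):
    """Reject sparse COO indices outside the declared dense shape.

    indices: one (dim0, dim1, ...) tuple per stored non-zero value.
    shape:   the tensor's declared dense shape.

    Without a check like this, densification writes each value at an
    offset computed from attacker-controlled indices, which becomes an
    out-of-bounds write when any index exceeds its dimension bound.
    """
    for idx in indices:
        if len(idx) != len(shape):
            raise ValueError(
                f"index rank {len(idx)} does not match tensor rank {len(shape)}"
            )
        for dim, (i, bound) in enumerate(zip(idx, shape)):
            if not 0 <= i < bound:
                raise ValueError(
                    f"index {i} out of bounds for dimension {dim} (size {bound})"
                )
    return True

# A crafted tensor declaring shape (2, 2) but indexing row 1000 is
# rejected here, instead of corrupting memory during densification.
```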

This issue has been patched in version 0.11.1.
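Beyond upgrading to 0.11.1, a deployment-side hardening option may be to re-enable PyTorch's sparse tensor invariant checking globally via the torch.sparse.check_sparse_tensor_invariants API (available in recent PyTorch releases). A hedged sketch, with the import guarded so it stays illustrative where torch is not installed; verify the behavior against your PyTorch version:

```python
# Sketch: turn sparse-tensor invariant checks back on before serving.
# check_sparse_tensor_invariants validates sparse indices against the
# declared shape at construction time, so a crafted tensor raises an
# error instead of writing out of bounds during to_dense().
try:
    import torch

    torch.sparse.check_sparse_tensor_invariants.enable()
    CHECKS_ON = torch.sparse.check_sparse_tensor_invariants.is_enabled()
except ImportError:
    CHECKS_ON = None  # torch not available in this environment
```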

Attack Parameters

Attack Vector: Network (can be exploited remotely)
Attack Complexity: Low (easy to exploit)
Privileges Required: Low (basic privileges needed)
User Interaction: None (no user interaction needed)

Impact Assessment

Confidentiality: High (complete data leak)
Integrity: High (complete data modification)
Availability: High (complete denial of service)

CVSS Vector v3.1: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

Vulnerable Products (3)

vLLM (cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*): from 0.10.2 (including) up to 0.11.1 (excluding)
vLLM (cpe:2.3:a:vllm:vllm:0.11.1:rc0:*:*:*:*:*:*): version 0.11.1-rc0
vLLM (cpe:2.3:a:vllm:vllm:0.11.1:rc1:*:*:*:*:*:*): version 0.11.1-rc1