9.8 CRITICAL
- CVSS version: 3.1
- Attack vector (AV): NETWORK
- Attack complexity (AC): LOW
- Privileges required (PR): NONE
- User interaction (UI): NONE
- Scope (S): UNCHANGED
- Confidentiality impact (C): HIGH
- Integrity impact (I): HIGH
- Availability impact (A): HIGH
vLLM leaks a heap address when PIL throws an error
vLLM is an inference and serving engine for large language models (LLMs). From version 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error. vLLM returns this error message to the client, leaking a heap address. With this leak, an attacker can reduce the ASLR search space from roughly 4 billion guesses to about 8. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder used by OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
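The leak mechanism can be illustrated without Pillow itself: Pillow's `UnidentifiedImageError` message interpolates the repr of the file object (e.g. `cannot identify image file <_io.BytesIO object at 0x7f...>`), and the repr of an in-memory buffer contains its heap address. A minimal stdlib-only sketch of the principle; the exact message format is an assumption based on Pillow's behavior, not vLLM's actual code path:

```python
import io
import re

# An in-memory buffer like the one the server hands to PIL. Its repr()
# embeds the object's heap address, e.g. "<_io.BytesIO object at 0x7f...>".
buf = io.BytesIO(b"\x00\x01 not a valid image")

# Pillow builds UnidentifiedImageError from repr(fp); this string mimics
# that message format (an assumption for illustration).
error_message = f"cannot identify image file {buf!r}"

# If the server echoes the exception text to the client, the client can
# parse the heap address back out of it:
match = re.search(r"0x[0-9a-f]+", error_message)
leaked_address = int(match.group(0), 16) if match else None
```

The fix in 0.14.1 amounts to not returning the raw exception text, so no object repr (and thus no address) reaches the client.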
References
- https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv (x_refsource_CONFIRM)
- https://github.com/vllm-project/vllm/pull/31987 (x_refsource_MISC)
- https://github.com/vllm-project/vllm/pull/32319 (x_refsource_MISC)
- https://github.com/vllm-project/vllm/releases/tag/v0.14.1 (x_refsource_MISC)
Affected products
- >= 0.8.3, < 0.14.1
Matching in nixpkgs
pkgs.vllm
High-throughput and memory-efficient inference and serving engine for LLMs
pkgs.pkgsRocm.vllm
High-throughput and memory-efficient inference and serving engine for LLMs
pkgs.python312Packages.vllm
High-throughput and memory-efficient inference and serving engine for LLMs
pkgs.python313Packages.vllm
High-throughput and memory-efficient inference and serving engine for LLMs
pkgs.pkgsRocm.python3Packages.vllm
High-throughput and memory-efficient inference and serving engine for LLMs
Package maintainers
- @happysalada Raphael Megzari <raphael@megzari.com>
- @CertainLach Yaroslav Bolyukin <iam@lach.pw>
- @daniel-fahey Daniel Fahey <daniel.fahey+nixpkgs@pm.me>