Nixpkgs security tracker

Suggestions search

With package: pkgsRocm.vllm

Found 9 matching suggestions

Untriaged
CVE-2026-34755
6.5 MEDIUM
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): NONE
  • Integrity impact (I): NONE
  • Availability impact (A): HIGH
created 1 week, 4 days ago
vLLM Affected by Denial of Service via Unbounded Frame Count in video/jpeg Base64 Processing

vLLM is an inference and serving engine for large language models (LLMs). From 0.7.0 to before 0.19.0, the VideoMediaIO.load_base64() method in vllm/multimodal/media/video.py splits video/jpeg data URLs by comma to extract individual JPEG frames but does not enforce a frame count limit. The num_frames parameter (default: 32), which is enforced on the load_bytes() code path, is completely bypassed in the video/jpeg base64 path. An attacker can send a single API request containing thousands of comma-separated base64-encoded JPEG frames, causing the server to decode all frames into memory and crash with an out-of-memory (OOM) error. This vulnerability is fixed in 0.19.0.
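
A minimal sketch of the vulnerable pattern and the missing cap; the helper below is hypothetical (only VideoMediaIO.load_base64() and the num_frames default of 32 come from the advisory):

```python
import base64

MAX_FRAMES = 32  # mirrors the num_frames default enforced on the load_bytes() path

def load_base64_frames(data: str, max_frames: int = MAX_FRAMES) -> list[bytes]:
    """Decode comma-separated base64 JPEG frames with an explicit cap.

    Without the cap, a single request carrying thousands of frames is decoded
    entirely into memory, which is the OOM condition described above.
    """
    chunks = data.split(",")
    if len(chunks) > max_frames:
        raise ValueError(f"too many frames: {len(chunks)} (limit {max_frames})")
    return [base64.b64decode(chunk) for chunk in chunks]
```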

Affected products

vllm
  • >= 0.7.0, < 0.19.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-34756
6.5 MEDIUM
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): NONE
  • Integrity impact (I): NONE
  • Availability impact (A): HIGH
created 1 week, 4 days ago
vLLM Affected by Unauthenticated OOM Denial of Service via Unbounded `n` Parameter in OpenAI API Server

vLLM is an inference and serving engine for large language models (LLMs). From 0.1.0 to before 0.19.0, a denial-of-service vulnerability exists in the vLLM OpenAI-compatible API server. Due to missing upper-bound validation on the n parameter in the ChatCompletionRequest and CompletionRequest Pydantic models, an unauthenticated attacker can send a single HTTP request with an astronomically large n value. Processing it completely blocks the Python asyncio event loop and causes an immediate out-of-memory crash by allocating millions of request object copies on the heap before the request even reaches the scheduling queue. This vulnerability is fixed in 0.19.0.
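
A sketch of the schema-level fix; the model below is hypothetical (only the field name n and the Pydantic models are from the advisory), and the limit of 128 is an arbitrary example value:

```python
from pydantic import BaseModel, Field

class CompletionRequest(BaseModel):
    prompt: str
    n: int = Field(default=1, ge=1, le=128)  # the upper bound is the fix

# CompletionRequest(prompt="hi", n=10**9) now raises a ValidationError at the
# schema layer, before any per-choice request objects are allocated.
```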

Affected products

vllm
  • >= 0.1.0, < 0.19.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-34753
5.4 MEDIUM
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): LOW
  • Integrity impact (I): NONE
  • Availability impact (A): LOW
created 1 week, 4 days ago
vLLM affected by Server-Side Request Forgery (SSRF) in `download_bytes_from_url`

vLLM is an inference and serving engine for large language models (LLMs). From 0.16.0 to before 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, without any URL validation or domain restrictions. This can be used to target internal services (e.g. cloud metadata endpoints or internal HTTP APIs) reachable from the vLLM host. This vulnerability is fixed in 0.19.0.
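
One common mitigation pattern, sketched here as a hypothetical validator (not the upstream patch): resolve the URL's host and refuse loopback, private, and link-local ranges before fetching, so batch-input URLs cannot reach cloud metadata endpoints or internal HTTP APIs from the vLLM host.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def check_url_is_external(url: str) -> None:
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError(f"disallowed scheme: {parts.scheme!r}")
    if parts.hostname is None:
        raise ValueError("URL has no host")
    # Resolve every address the host maps to and reject internal ranges.
    for info in socket.getaddrinfo(parts.hostname, None):
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            raise ValueError(f"host resolves to internal address {addr}")
```

A complete fix also has to pin the resolved address for the actual request, since a DNS answer can change between the check and the fetch.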

Affected products

vllm
  • >= 0.16.0, < 0.19.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-34760
5.9 MEDIUM
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): HIGH
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): NONE
  • Integrity impact (I): HIGH
  • Availability impact (A): LOW
created 2 weeks, 1 day ago
vLLM: Downmix Implementation Differences as Attack Vectors Against Audio AI Models

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before version 0.18.0, vLLM's audio loading relies on Librosa, which defaults to numpy.mean for mono downmixing (to_mono), while the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy results in an inconsistency between the audio heard by humans (e.g., through headphones or regular speakers) and the audio processed by AI model stacks that load audio via Librosa, such as vLLM and Transformers. This issue has been patched in version 0.18.0.
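
A sketch of the discrepancy, not Librosa internals: librosa.to_mono averages channels with numpy.mean, while BS.775-style fold-down attenuates centre and surround channels by about -3 dB (1/sqrt(2)). The final mono averaging step below is our own illustrative choice, not part of the standard.

```python
import numpy as np

def mean_downmix(channels: np.ndarray) -> np.ndarray:
    return channels.mean(axis=0)  # librosa.to_mono-style plain average

def weighted_downmix(channels: np.ndarray) -> np.ndarray:
    l, r, c, ls, rs = channels  # assume a 5-channel L/R/C/Ls/Rs layout
    k = 1 / np.sqrt(2)
    lo, ro = l + k * c + k * ls, r + k * c + k * rs
    return (lo + ro) / 2

x = np.random.default_rng(0).normal(size=(5, 16))  # toy 5-channel clip
print(np.abs(mean_downmix(x) - weighted_downmix(x)).max())  # nonzero gap
```

Because the two results differ, a sample crafted to sound benign to a human listener can carry different content into the model's feature pipeline.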

Affected products

vllm
  • >= 0.5.5, < 0.18.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-27893
8.8 HIGH
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): NONE
  • User interaction (UI): REQUIRED
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): HIGH
  • Integrity impact (I): HIGH
  • Availability impact (A): HIGH
created 3 weeks, 1 day ago
vLLM's hardcoded trust_remote_code=True in NemotronVL and KimiK25 bypasses user security opt-out

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
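
A sketch of the bug pattern and its fix; the function below is a hypothetical sub-component loader, not the actual vLLM code:

```python
from transformers import AutoConfig

# Vulnerable pattern: hardcoding the flag silently overrides the operator's
# explicit --trust-remote-code=False opt-out:
#
#   cfg = AutoConfig.from_pretrained(repo, trust_remote_code=True)
#
# Fixed pattern: thread the user's setting through instead of hardcoding it.
def load_subcomponent_config(repo: str, trust_remote_code: bool):
    return AutoConfig.from_pretrained(repo, trust_remote_code=trust_remote_code)
```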

Affected products

vllm
  • >= 0.10.1, < 0.18.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-25960
7.1 HIGH
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): HIGH
  • Integrity impact (I): NONE
  • Availability impact (A): LOW
created 1 month, 1 week ago
SSRF Protection Bypass in vLLM

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779 added in 0.15.1 can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp for making the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
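
A sketch of the underlying fix idea, with illustrative host checks (not vLLM's code): parse the URL once with yarl, the same parser aiohttp uses, and validate that object, instead of validating a urllib3 parse and then letting aiohttp re-parse the raw string under different rules.

```python
import ipaddress

import aiohttp
from yarl import URL

async def fetch_validated(url_text: str) -> bytes:
    url = URL(url_text)  # parse once, with the client's own parser
    if url.scheme not in ("http", "https") or url.host is None:
        raise ValueError("invalid URL")
    try:
        addr = ipaddress.ip_address(url.host)
    except ValueError:
        addr = None  # hostname, not an IP literal; a real check must resolve it too
    if addr is not None and (addr.is_private or addr.is_loopback):
        raise ValueError("internal address blocked")
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:  # pass the validated URL object as-is
            return await resp.read()
```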

Affected products

vllm
  • >= 0.15.1, < 0.17.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-22778
9.8 CRITICAL
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): NONE
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): HIGH
  • Integrity impact (I): HIGH
  • Availability impact (A): HIGH
created 2 months, 2 weeks ago
vLLM leaks a heap address when PIL throws an error

vLLM is an inference and serving engine for large language models (LLMs). From 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error to the client, leaking a heap address. With this leak, an attacker reduces the ASLR search space from about 4 billion guesses to roughly 8. The leak can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
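
A sketch of the mitigation pattern, not vLLM's patch: PIL errors such as "cannot identify image file <_io.BytesIO object at 0x7f...>" embed the repr of the input object, and that repr contains a heap address. The fix pattern is to log the full exception server-side while returning only a generic message.

```python
import io
import logging

from PIL import Image

logger = logging.getLogger(__name__)

def decode_image(data: bytes) -> Image.Image:
    try:
        return Image.open(io.BytesIO(data))
    except Exception:
        logger.exception("image decode failed")  # full detail stays in server logs
        raise ValueError("invalid image input") from None  # nothing leaks to the client
```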

Affected products

vllm
  • >= 0.8.3, < 0.14.1

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

pkgs.pkgsRocm.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-24779
7.1 HIGH
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): LOW
  • User interaction (UI): NONE
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): HIGH
  • Integrity impact (I): NONE
  • Availability impact (A): LOW
created 2 months, 2 weeks ago
vLLM vulnerable to Server-Side Request Forgery (SSRF) in `MediaConnector`

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.14.1, a Server-Side Request Forgery (SSRF) vulnerability exists in the `MediaConnector` class within the vLLM project's multimodal feature set. The load_from_url and load_from_url_async methods obtain and process media from user-provided URLs, but they use different Python parsing libraries to restrict the target host and to fetch the media. These parsing libraries have different interpretations of backslashes, which allows the hostname restriction to be bypassed. An attacker can therefore coerce the vLLM server into making arbitrary requests to internal network resources. The vulnerability is particularly critical in containerized environments like `llm-d`, where a compromised vLLM pod could be used to scan the internal network, interact with other pods, and potentially cause denial of service or access sensitive data. For example, an attacker could make the vLLM pod send malicious requests to an internal `llm-d` management endpoint, leading to system instability by falsely reporting metrics like the KV cache state. Version 0.14.1 contains a patch for the issue.
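
The payload below is illustrative, not the reported exploit: it shows how an RFC 3986 parser like urllib can read a different host out of a backslash URL than a WHATWG-style parser, which treats "\" like "/".

```python
from urllib.parse import urlsplit

url = "http://allowed.example\\@evil.example/data"
# urllib keeps the backslash inside the authority and takes the host after
# the "@", so the check below sees evil.example; a WHATWG-style parser would
# end the authority at the backslash and see allowed.example instead.
print(urlsplit(url).hostname)  # evil.example

# A blunt but effective guard used by several projects: refuse the ambiguity
# outright before any parsing.
def reject_ambiguous(url: str) -> str:
    if "\\" in url:
        raise ValueError("backslashes in URLs are not allowed")
    return url
```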

Affected products

vllm
  • < 0.14.1

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

pkgs.pkgsRocm.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers

Untriaged
CVE-2026-22807
8.8 HIGH
  • CVSS version: 3.1
  • Attack vector (AV): NETWORK
  • Attack complexity (AC): LOW
  • Privileges required (PR): NONE
  • User interaction (UI): REQUIRED
  • Scope (S): UNCHANGED
  • Confidentiality impact (C): HIGH
  • Integrity impact (I): HIGH
  • Availability impact (A): HIGH
created 2 months, 3 weeks ago
vLLM affected by RCE via auto_map dynamic module loading during model initialization

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
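
A sketch of the gating that was missing; the helper below is hypothetical, not vLLM's code (only the `auto_map` key and the `trust_remote_code` flag come from the advisory). An `auto_map` entry in a Hugging Face config.json points at Python files shipped in the model repo, so honoring it must require an explicit opt-in.

```python
import json
from pathlib import Path

def check_auto_map(model_dir: str, trust_remote_code: bool) -> None:
    config = json.loads((Path(model_dir) / "config.json").read_text())
    if "auto_map" in config and not trust_remote_code:
        raise RuntimeError(
            "model config requests dynamic code via auto_map; "
            "pass trust_remote_code=True only for repos you trust"
        )
```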

Affected products

vllm
  • >= 0.10.1, < 0.14.0

Matching in nixpkgs

pkgs.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

pkgs.pkgsRocm.vllm

High-throughput and memory-efficient inference and serving engine for LLMs

Package maintainers