[Doc] Dockerfile instructions for optional dependencies and dev transformers (#13699)

Cyrus Leung 2025-02-22 22:04:31 +08:00 committed by GitHub
parent c904fdddf6
commit 8354f6640c
1 changed file with 30 additions and 0 deletions


@@ -27,6 +27,36 @@ container to access the host's shared memory. vLLM uses PyTorch, which uses shared
memory to share data between processes under the hood, particularly for tensor parallel inference.
:::
:::{note}
Optional dependencies are not included to avoid licensing issues (e.g. <gh-issue:8030>).
If you need to use those dependencies (and have accepted the license terms),
create a custom Dockerfile on top of the base image with an extra layer that installs them:
```Dockerfile
FROM vllm/vllm-openai:v0.7.3
# e.g. install the `audio` and `video` optional dependencies
# NOTE: Make sure the version of vLLM matches the base image!
RUN uv pip install --system vllm[audio,video]==0.7.3
```
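As a rough sketch of how this might be used (the image tag, model name, and run flags below are illustrative, not prescribed by this guide), build the image from the directory containing the Dockerfile and run it the same way as the base image:
```console
# Build the custom image from the directory containing the Dockerfile above
$ docker build -t vllm-openai-custom .
# Run it like the base image; the model name is a placeholder
$ docker run --gpus all -p 8000:8000 --ipc=host vllm-openai-custom \
    --model <your-audio-or-video-model>
```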
:::
:::{tip}
Some new models may only be available on the main branch of [HF Transformers](https://github.com/huggingface/transformers).
To use the development version of `transformers`, create a custom Dockerfile on top of the base image
with an extra layer that installs it from source:
```Dockerfile
FROM vllm/vllm-openai:latest
RUN uv pip install --system git+https://github.com/huggingface/transformers.git
```
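To double-check that the development build was actually installed, one option (an illustrative sketch, assuming the image was tagged `vllm-openai-dev`) is to override the image's entrypoint and print the installed `transformers` version:
```console
# Build the image with transformers installed from source
$ docker build -t vllm-openai-dev .
# Print the installed transformers version, overriding the default server entrypoint
$ docker run --rm --entrypoint python3 vllm-openai-dev \
    -c "import transformers; print(transformers.__version__)"
```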
:::
(deployment-docker-build-image-from-source)=
## Building vLLM's Docker Image from Source