mirror of https://github.com/vllm-project/vllm
[Doc] Dockerfile instructions for optional dependencies and dev transformers (#13699)
commit 8354f6640c (parent c904fdddf6)

@@ -27,6 +27,36 @@ container to access the host's shared memory. vLLM uses PyTorch, which uses shared
memory to share data between processes under the hood, particularly for tensor parallel inference.
:::
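For reference, here is a minimal sketch of the two usual ways to give the container enough shared memory; the port, shared-memory size, and model name below are placeholders, not recommendations:

```console
# Option 1: share the host's IPC namespace
$ docker run --runtime nvidia --gpus all --ipc=host \
    -p 8000:8000 vllm/vllm-openai:latest --model <your_model>

# Option 2: keep the container's IPC namespace but enlarge /dev/shm
$ docker run --runtime nvidia --gpus all --shm-size=10g \
    -p 8000:8000 vllm/vllm-openai:latest --model <your_model>
```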
:::{note}
Optional dependencies are not included in order to avoid licensing issues (e.g. <gh-issue:8030>).

If you need to use those dependencies (having accepted the license terms),
create a custom Dockerfile on top of the base image with an extra layer that installs them:

```Dockerfile
FROM vllm/vllm-openai:v0.7.3

# e.g. install the `audio` and `video` optional dependencies
# NOTE: Make sure the version of vLLM matches the base image!
RUN uv pip install --system vllm[audio,video]==0.7.3
```
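As a usage sketch (the image tag `vllm-openai-extras` and the model placeholder are arbitrary example names, not names used elsewhere in the docs), build the custom image and run it with the same arguments as the base image:

```console
# Build the custom image from the Dockerfile above
$ docker build -t vllm-openai-extras .

# Run it exactly like the base image
$ docker run --runtime nvidia --gpus all \
    -p 8000:8000 --ipc=host \
    vllm-openai-extras --model <your_model>
```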
:::

:::{tip}
Some new models may only be available on the main branch of [HF Transformers](https://github.com/huggingface/transformers).

To use the development version of `transformers`, create a custom Dockerfile on top of the base image
with an extra layer that installs their code from source:

```Dockerfile
FROM vllm/vllm-openai:latest

RUN uv pip install --system git+https://github.com/huggingface/transformers.git
```
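As a quick sanity check (the image tag `vllm-openai-dev-transformers` is an arbitrary example name), you can build the image and print the `transformers` version it contains; a build from `main` typically reports a `.dev0` pre-release:

```console
# Build the custom image from the Dockerfile above
$ docker build -t vllm-openai-dev-transformers .

# Override the server entrypoint to print the installed transformers version
$ docker run --rm --entrypoint python3 vllm-openai-dev-transformers \
    -c "import transformers; print(transformers.__version__)"
```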
:::

(deployment-docker-build-image-from-source)=

## Building vLLM's Docker Image from Source