Mirror of https://github.com/vllm-project/vllm (commit 7879f24dcc)
# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
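Once downloaded, the file is a JSON list of conversations that the benchmark scripts turn into prompts. The sketch below shows how such a file can be inspected; the record schema (a list of objects with an `id` and a `conversations` array of `{"from", "value"}` turns) is an assumption based on the ShareGPT_V3 dataset layout, and the filtering step is illustrative, not the exact logic used by the benchmark scripts.

```python
import json

# Minimal sketch: inspect a ShareGPT-style JSON file.
# The schema below is an assumption about the dataset layout,
# not taken from the vLLM benchmark code itself.
sample = [
    {
        "id": "abc123",
        "conversations": [
            {"from": "human", "value": "What is vLLM?"},
            {"from": "gpt", "value": "vLLM is a fast LLM serving engine."},
        ],
    }
]

# In practice you would load the downloaded file instead:
# with open("ShareGPT_V3_unfiltered_cleaned_split.json") as f:
#     sample = json.load(f)

# Keep only entries whose first turn comes from a human -- a common
# preprocessing step before building benchmark prompts.
prompts = [
    record["conversations"][0]["value"]
    for record in sample
    if record["conversations"] and record["conversations"][0]["from"] == "human"
]
print(prompts)
```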