# Benchmarking vLLM

## Downloading the ShareGPT dataset

You can download the dataset by running:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```
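The download is a single JSON file containing a list of conversation records. As a sanity check before benchmarking, you can inspect it with a short script; the sketch below assumes the ShareGPT V3 record shape (an `id` plus a `conversations` list of `{"from", "value"}` turns) and uses a tiny in-memory sample in that shape for illustration:

```python
import json

def summarize_sharegpt(path):
    """Return (number of conversations, total turns) for a ShareGPT-style JSON file.

    Assumes each record is a dict with a "conversations" list of
    {"from": ..., "value": ...} turns, as in ShareGPT V3.
    """
    with open(path) as f:
        data = json.load(f)
    n_convs = len(data)
    n_turns = sum(len(rec.get("conversations", [])) for rec in data)
    return n_convs, n_turns

# Tiny hypothetical sample in the same shape, for illustration:
sample = [
    {
        "id": "example-1",
        "conversations": [
            {"from": "human", "value": "Hello"},
            {"from": "gpt", "value": "Hi there!"},
        ],
    },
]
print(len(sample), sum(len(r["conversations"]) for r in sample))  # → 1 2
```

A quick check like this catches a truncated or partial download before you spend time running the serving benchmark against it.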