vllm/benchmarks
youkaichao 8fe8386591
[Kernel] change benchmark script so that result can be directly used; tune moe kernel in A100/H100 with tp=2,4,8 (#3389)
2024-03-14 08:11:48 +00:00
..
kernels [Kernel] change benchmark script so that result can be directly used; tune moe kernel in A100/H100 with tp=2,4,8 (#3389) 2024-03-14 08:11:48 +00:00
README.md Change the name to vLLM (#150) 2023-06-17 03:07:40 -07:00
backend_request_func.py Fix the openai benchmarking requests to work with latest OpenAI apis (#2992) 2024-03-04 15:51:56 -08:00
benchmark_latency.py Make it easy to profile workers with nsight (#3162) 2024-03-03 16:19:13 -08:00
benchmark_prefix_caching.py [Minor Fix] Remove unused code in benchmark_prefix_caching.py (#3171) 2024-03-03 22:48:27 -08:00
benchmark_serving.py [Minor Fix] Fix comments in benchmark_serving (#3252) 2024-03-07 22:22:59 -08:00
benchmark_throughput.py enable --gpu-memory-utilization in benchmark_throughput.py (#3175) 2024-03-04 10:37:58 -08:00
launch_tgi_server.sh Serving Benchmark Refactoring (#2433) 2024-02-12 22:53:00 -08:00

README.md

Benchmarking vLLM

Downloading the ShareGPT dataset

You can download the dataset by running:

wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json