# vllm/tests

Latest commit: [V1] VLM - Run the mm_mapper preprocessor in the frontend process (#10640)
Alexander Matveev · 3bc94cab69 · 2024-12-03 10:33:10 +00:00
Signed-off-by: Roger Wang <ywang@roblox.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Co-authored-by: Roger Wang <ywang@roblox.com>
| Name | Last commit | Date |
|------|-------------|------|
| `async_engine/` | [MISC] Consolidate cleanup() and refactor offline_inference_with_prefix.py (#9510) | 2024-10-18 14:30:55 -07:00 |
| `basic_correctness/` | [core] gemma2 full context length support (#10584) | 2024-11-22 20:13:54 -08:00 |
| `compile/` | [torch.compile] remove compilation_context and simplify code (#10838) | 2024-12-03 06:19:02 +00:00 |
| `core/` | [Bugfix] Fix for Spec model TP + Chunked Prefill (#10232) | 2024-11-26 09:11:16 -08:00 |
| `data/` | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| `distributed/` | [core][distributed] add pynccl broadcast (#10843) | 2024-12-03 04:53:23 +00:00 |
| `encoder_decoder/` | [Encoder Decoder] Update Mllama to run with both FlashAttention and XFormers (#9982) | 2024-11-12 10:53:57 -08:00 |
| `engine/` | [Bug][CLI] Allow users to disable prefix caching explicitly (#10724) | 2024-11-27 23:59:28 -08:00 |
| `entrypoints/` | [Core][Performance] Add XGrammar support for guided decoding and set it as default (#10785) | 2024-12-03 15:17:00 +08:00 |
| `fp8_kv/` | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00 |
| `kernels/` | [Kernel] Use `out` arg in flash_attn_varlen_func (#10811) | 2024-12-01 17:55:39 -08:00 |
| `kv_transfer/` | [Core] Implement disagg prefill by StatelessProcessGroup (#10502) | 2024-12-01 19:01:00 -06:00 |
| `lora/` | [Misc][LoRA] Move the implementation of lora bias to punica.py (#10829) | 2024-12-02 17:53:36 +00:00 |
| `metrics/` | [Frontend] Add max_tokens prometheus metric (#9881) | 2024-11-04 22:53:24 +00:00 |
| `model_executor/` | [Core][Performance] Add XGrammar support for guided decoding and set it as default (#10785) | 2024-12-03 15:17:00 +08:00 |
| `models/` | [torch.compile] remove compilation_context and simplify code (#10838) | 2024-12-03 06:19:02 +00:00 |
| `mq_llm_engine/` | [Bugfix][core] replace heartbeat with pid check (#9818) | 2024-10-30 09:34:07 -07:00 |
| `multi_step/` | [Core] Deprecating block manager v1 and make block manager v2 default (#8704) | 2024-10-17 11:38:15 -05:00 |
| `multimodal/` | [2/N] handling placeholders in merged multi-modal processor (#10485) | 2024-11-22 21:25:09 -08:00 |
| `plugins/vllm_add_dummy_model/` | [Model] Replace embedding models with pooling adapter (#10769) | 2024-12-01 08:02:54 +08:00 |
| `prefix_caching/` | Prefix Cache Aware Scheduling [1/n] (#10128) | 2024-11-22 21:15:55 -08:00 |
| `prompt_adapter/` | [CORE] Adding support for insertion of soft-tuned prompts (#4645) | 2024-07-09 13:26:36 -07:00 |
| `prompts/` | [BugFix] Fix input positions for long context with sliding window (#2088) | 2023-12-13 12:28:13 -08:00 |
| `quantization/` | [torch.compile] limit inductor threads and lazy import quant (#10482) | 2024-11-20 18:36:33 -08:00 |
| `samplers/` | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| `spec_decode/` | [Bugfix][SpecDecode] apply sampling parameters to target probabilities for consistency in rejection sampling. (#10198) | 2024-11-27 05:07:30 +00:00 |
| `tensorizer_loader/` | [Misc] Fix import error in tensorizer tests and cleanup some code (#10349) | 2024-11-15 09:34:17 +00:00 |
| `tokenization/` | [CI/Build] drop support for Python 3.8 EOL (#8464) | 2024-11-06 07:11:55 +00:00 |
| `tool_use/` | [Frontend] Pythonic tool parser (#9859) | 2024-11-14 04:14:34 +00:00 |
| `tpu/` | [9/N] torch.compile LLM usage (#10552) | 2024-11-21 19:13:31 -08:00 |
| `tracing/` | [BugFix] Prevent exporting duplicate OpenTelemetry spans (#9017) | 2024-10-22 11:11:53 -07:00 |
| `v1/` | [V1] VLM - Run the mm_mapper preprocessor in the frontend process (#10640) | 2024-12-03 10:33:10 +00:00 |
| `vllm_test_utils/` | [ci] fix slow tests (#10698) | 2024-11-27 09:26:14 -08:00 |
| `weight_loading/` | [Model][Quantization] HQQ support through Marlin kernel expansion (#9766) | 2024-11-19 13:31:12 -08:00 |
| `worker/` | [torch.compile] remove compilation_context and simplify code (#10838) | 2024-12-03 06:19:02 +00:00 |
| `__init__.py` | [Small] Formatter only checks lints in changed files (#1528) | 2023-10-31 15:39:38 -07:00 |
| `conftest.py` | [Model]: add some tests for aria model (#10770) | 2024-12-02 05:36:36 +00:00 |
| `test_cache_block_hashing.py` | [Core] Make encoder-decoder inputs a nested structure to be more composable (#9604) | 2024-11-05 10:07:31 +08:00 |
| `test_config.py` | [Model] Replace embedding models with pooling adapter (#10769) | 2024-12-01 08:02:54 +08:00 |
| `test_embedded_commit.py` | [CI/Build] use setuptools-scm to set `__version__` (#4738) | 2024-09-23 09:44:26 -07:00 |
| `test_inputs.py` | [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (#9131) | 2024-10-08 14:12:56 +00:00 |
| `test_lazy_torch_compile.py` | [ci] fix slow tests (#10698) | 2024-11-27 09:26:14 -08:00 |
| `test_logger.py` | Rename vllm.logging to vllm.logging_utils (#10134) | 2024-11-08 20:53:24 +00:00 |
| `test_logits_processor.py` | [Core] Factor out common code in `SequenceData` and `Sequence` (#8675) | 2024-09-21 02:30:39 +00:00 |
| `test_regression.py` | Bugfix: fix broken of download models from modelscope (#5233) | 2024-06-06 09:28:10 -07:00 |
| `test_sampling_params.py` | [Bugfix] fix crash if max_tokens=None (#2570) | 2024-01-23 22:38:55 -08:00 |
| `test_scalartype.py` | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| `test_sequence.py` | [Core] Factor out common code in `SequenceData` and `Sequence` (#8675) | 2024-09-21 02:30:39 +00:00 |
| `test_sharded_state_loader.py` | [CI/Build] Replaced some models on tests for smaller ones (#9570) | 2024-10-22 04:52:14 +00:00 |
| `test_utils.py` | [Bugfix] Fix load config when using bools (#9533) | 2024-10-27 13:46:41 -04:00 |
| `utils.py` | Adds method to read the pooling types from model's files (#9506) | 2024-11-07 08:42:40 +00:00 |