| Name | Last commit message | Last commit date |
|---|---|---|
| attention | [Kernel][Attention] Separate `Attention.kv_scale` into `k_scale` and `v_scale` (#6081) | 2024-07-16 15:31:32 -07:00 |
| cpu | [Kernel][Attention] Separate `Attention.kv_scale` into `k_scale` and `v_scale` (#6081) | 2024-07-16 15:31:32 -07:00 |
| moe | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| prepare_inputs | [Core] draft_model_runner: Implement prepare_inputs on GPU for advance_step (#6338) | 2024-07-17 14:30:28 -07:00 |
| punica | [Kernel] Add punica dimensions for Granite 3b and 8b (#5930) | 2024-06-29 10:48:25 +08:00 |
| quantization | [Bugfix][Kernel] Promote another index to int64_t (#6838) | 2024-07-26 18:41:04 +00:00 |
| activation_kernels.cu | [Model] Port over CLIPVisionModel for VLMs (#5591) | 2024-06-20 11:52:09 +00:00 |
| cache.h | Add fp8 support to `reshape_and_cache_flash` (#6667) | 2024-07-24 18:36:52 +00:00 |
| cache_kernels.cu | Add fp8 support to `reshape_and_cache_flash` (#6667) | 2024-07-24 18:36:52 +00:00 |
| cuda_compat.h | [Kernel][ROCm][AMD] enable fused topk_softmax kernel for moe layer (#4927) | 2024-06-02 14:13:26 -07:00 |
| cuda_utils.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| cuda_utils_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| custom_all_reduce.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| custom_all_reduce.cuh | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| custom_all_reduce_test.cu | [CI/Build] Enforce style for C++ and CUDA code with `clang-format` (#4722) | 2024-05-22 07:18:41 +00:00 |
| dispatch_utils.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| layernorm_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| moe_align_block_size_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| ops.h | [Kernel][Core] Add AWQ support to the Marlin kernel (#6612) | 2024-07-21 19:41:42 -04:00 |
| pos_encoding_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| reduction_utils.cuh | [Kernel] Dynamic Per-Token Activation Quantization (#5037) | 2024-06-07 09:36:26 -07:00 |
| registration.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| torch_bindings.cpp | Add fp8 support to `reshape_and_cache_flash` (#6667) | 2024-07-24 18:36:52 +00:00 |