| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| attention | [AMD][CI/Build] Disambiguation of the function call for ROCm 6.2 headers compatibility (#7477) | 2024-08-21 16:47:36 -07:00 |
| core | [Bugfix] Fix support for dimension like integers and ScalarType (#9299) | 2024-10-17 19:08:34 +00:00 |
| cpu | [Hardware][CPU] compressed-tensor INT8 W8A8 AZP support (#9344) | 2024-10-17 12:21:04 -04:00 |
| cutlass_extensions | [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701) | 2024-09-23 13:46:26 -04:00 |
| mamba | [Kernel][Model] Improve continuous batching for Jamba and Mamba (#9189) | 2024-10-16 12:12:43 -04:00 |
| moe | [Performance][Kernel] Fused_moe Performance Improvement (#9384) | 2024-10-24 15:37:52 -07:00 |
| prepare_inputs | [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (#8645) | 2024-10-02 19:44:39 +00:00 |
| quantization | [Bugfix] Fix spurious "No compiled cutlass_scaled_mm ..." for W8A8 on Turing (#9487) | 2024-10-22 15:41:13 -07:00 |
| rocm | [Kernel][Amd] Add fp8 kv cache support for rocm custom paged attention (#8577) | 2024-09-19 17:37:57 +00:00 |
| activation_kernels.cu | [Kernel] add kernel for FATReLU (#9610) | 2024-10-24 16:18:27 +08:00 |
| cache.h | Add fp8 support to `reshape_and_cache_flash` (#6667) | 2024-07-24 18:36:52 +00:00 |
| cache_kernels.cu | Add fp8 support to `reshape_and_cache_flash` (#6667) | 2024-07-24 18:36:52 +00:00 |
| cuda_compat.h | [Kernel][ROCm][AMD] enable fused topk_softmax kernel for moe layer (#4927) | 2024-06-02 14:13:26 -07:00 |
| cuda_utils.h | [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174) | 2024-08-20 07:09:33 -06:00 |
| cuda_utils_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| custom_all_reduce.cu | [torch.compile] register allreduce operations as custom ops (#8526) | 2024-09-16 22:57:57 -07:00 |
| custom_all_reduce.cuh | [Bugfix][Kernel] Implement acquire/release polyfill for Pascal (#8776) | 2024-09-24 21:26:33 -07:00 |
| custom_all_reduce_test.cu | [Bugfix][Kernel] Implement acquire/release polyfill for Pascal (#8776) | 2024-09-24 21:26:33 -07:00 |
| dispatch_utils.h | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| layernorm_kernels.cu | [Kernel] Replaced `blockReduce[...]` functions with `cub::BlockReduce` (#7233) | 2024-08-21 20:18:00 -04:00 |
| ops.h | [core] cudagraph output with tensor weak reference (#9724) | 2024-10-27 00:19:28 -07:00 |
| permute_cols.cu | [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701) | 2024-09-23 13:46:26 -04:00 |
| pos_encoding_kernels.cu | [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (#5047) | 2024-06-09 16:23:30 -04:00 |
| torch_bindings.cpp | [core] cudagraph output with tensor weak reference (#9724) | 2024-10-27 00:19:28 -07:00 |