# CacheFlow

## Build from source

```bash
pip install -r requirements.txt
pip install -e .  # This may take several minutes.
```

## Test simple server

```bash
ray start --head
python simple_server.py
```

The detailed arguments for `simple_server.py` can be listed with:

```bash
python simple_server.py --help
```

## FastAPI server

To start the server:

```bash
ray start --head
python -m cacheflow.http_frontend.fastapi_frontend
```

To test the server:

```bash
python -m cacheflow.http_frontend.test_cli_client
```
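
Alternatively, a prompt can be sent from Python. The sketch below assumes the frontend exposes a JSON `POST /generate` endpoint on `localhost:8001`; the endpoint path, port, and field names are assumptions for illustration, not the confirmed API, so check `cacheflow/http_frontend/fastapi_frontend.py` for the actual interface:

```python
# Minimal client sketch. The endpoint path ("/generate"), port (8001), and
# JSON fields ("prompt", "max_num_steps") are assumptions; see
# cacheflow/http_frontend/fastapi_frontend.py for the real interface.
import requests

response = requests.post(
    "http://localhost:8001/generate",
    json={"prompt": "The capital of France is", "max_num_steps": 32},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```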

## Gradio web server

Install the following additional dependencies:

```bash
pip install gradio
```

Start the server:

```bash
python -m cacheflow.http_frontend.fastapi_frontend
# In another terminal
python -m cacheflow.http_frontend.gradio_webserver
```
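
For reference, the Gradio frontend is essentially a thin UI that forwards each prompt to the FastAPI backend. Below is a self-contained sketch of that pattern, using the same assumed endpoint and fields as above; see `cacheflow/http_frontend/gradio_webserver.py` for the actual implementation:

```python
# Sketch of a Gradio frontend that forwards prompts to the FastAPI backend.
# The URL and JSON fields are assumptions, not the confirmed API; see
# cacheflow/http_frontend/gradio_webserver.py for the real code.
import gradio as gr
import requests

def generate(prompt: str) -> str:
    # Forward the prompt to the (assumed) /generate endpoint.
    resp = requests.post(
        "http://localhost:8001/generate",
        json={"prompt": prompt, "max_num_steps": 32},
        timeout=60,
    )
    resp.raise_for_status()
    return str(resp.json())

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()  # serves the web UI, by default at http://localhost:7860
```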

## Load LLaMA weights

Since the LLaMA weights are not fully public, they cannot be downloaded directly from Hugging Face. Instead, follow the steps below to load them.

1. Convert the LLaMA weights to Hugging Face format with the conversion script shipped in the `transformers` repository (a quick way to verify the converted weights is sketched after this list):

   ```bash
   python src/transformers/models/llama/convert_llama_weights_to_hf.py \
       --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
   ```

   Please make sure that `llama` is included in the output directory name.

2. For all the commands above, pass `--model /output/path/llama-7b` to load the converted model. For example:

   ```bash
   python simple_server.py --model /output/path/llama-7b
   python -m cacheflow.http_frontend.fastapi_frontend --model /output/path/llama-7b
   ```
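
Once converted, the directory can be sanity-checked by loading it with plain `transformers` before pointing CacheFlow at it. A minimal sketch, assuming an installed `transformers` version that includes LLaMA support:

```python
# Sanity-check sketch: load the converted weights with plain transformers.
# Assumes a transformers version that includes LLaMA support.
from transformers import AutoTokenizer, LlamaForCausalLM

path = "/output/path/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(path)
model = LlamaForCausalLM.from_pretrained(path)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```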