Benchmarks

Datasets

The benchmarks are available for the following datasets:

  • songs
  • wiki

Songs

songs is a subset of the songs.csv dataset.

It was generated with this command:

xsv sample --seed 42 1000000 songs.csv -o smol-songs.csv

Download the generated songs dataset.

Wiki

wiki is a subset of the wikipedia-articles.csv dataset.

It was generated with the following command:

xsv sample --seed 42 500000 wikipedia-articles.csv -o smol-wikipedia-articles.csv

Download the generated wiki dataset.

Run the benchmarks

On our private server

The Meili team hosts its own GitHub Actions runner to run the benchmarks on a dedicated bare-metal server.

To trigger the benchmark workflow:

  • Go to the Actions tab of this repository.
  • Select the Benchmarks workflow on the left.
  • Click on Run workflow in the blue banner.
  • Select the branch on which you want to run the benchmarks and select the dataset you want (default: songs).
  • Finally, click on Run workflow.
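The same dispatch can be scripted with the GitHub CLI instead of the web UI. This is only a sketch: the workflow file name (benchmarks.yml) and the input key (dataset_name) are assumptions and must match the actual workflow definition.

```shell
# Sketch only: the workflow file name and input key are assumptions.
branch="my-feature-branch"
dataset="songs"
# Printed rather than executed; remove the echo to dispatch for real
# (requires the GitHub CLI, gh, authenticated against this repository).
echo "gh workflow run benchmarks.yml --ref $branch -f dataset_name=$dataset"
```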

This GitHub workflow will run the benchmarks and push the critcmp report to a DigitalOcean Space (S3-compatible storage).

The name of the uploaded file is displayed in the workflow.
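The report names shown later in this README suggest a dataset_branch_commit.json pattern. If that holds, a file name can be split back into its parts with plain shell; the naming scheme itself is an inference from the examples, not documented behaviour.

```shell
# Split an uploaded report name into dataset, branch, and commit.
# The <dataset>_<branch>_<commit>.json naming is inferred from examples.
name="songs_geosearch_24ec456.json"
base=${name%.json}
dataset=${base%%_*}            # text before the first underscore
commit=${base##*_}             # text after the last underscore
branch=${base#"${dataset}_"}   # strip the dataset prefix...
branch=${branch%"_${commit}"}  # ...and the commit suffix
echo "$dataset / $branch / $commit"
```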

More about critcmp.

💡 To compare the just-uploaded benchmark with another one, check out the next section.

On your machine

To run all the benchmarks (~4h):

cargo bench

To run only the songs (~1h) or wiki (~3h) benchmark:

cargo bench --bench <dataset name>

By default, the datasets are downloaded and uncompressed automatically into the target directory.
If you don't want to re-download the datasets every time you change the code, you can specify a custom directory with the environment variable MILLI_BENCH_DATASETS_PATH:

mkdir ~/datasets
MILLI_BENCH_DATASETS_PATH=~/datasets cargo bench --bench songs # the two datasets are downloaded
touch build.rs
MILLI_BENCH_DATASETS_PATH=~/datasets cargo bench --bench songs # the code is compiled again but the datasets are not downloaded
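The caching behaviour above amounts to a simple existence check before downloading. A minimal sketch; the default directory and the file name here are illustrative assumptions, not the exact paths build.rs uses:

```shell
# Reuse a cached dataset instead of re-downloading it.
# Default path and file name are assumptions for illustration.
DATASETS=${MILLI_BENCH_DATASETS_PATH:-target/benches}
mkdir -p "$DATASETS"
if [ -f "$DATASETS/smol-songs.csv" ]; then
  echo "dataset cached, skipping download"
else
  echo "dataset missing, would download smol-songs.csv"
fi
```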

Comparison between benchmarks

The benchmark reports we push are generated with critcmp, so we also use critcmp to compare two benchmark runs.

We provide a script to download and display the comparison report.

Requirements:

List the available files in the DO Space:

./benchmarks/scripts/list.sh
songs_main_09a4321.json
songs_geosearch_24ec456.json

Run the comparison script:

./benchmarks/scripts/compare.sh songs_main_09a4321.json songs_geosearch_24ec456.json
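Under the hood, the comparison presumably comes down to pointing critcmp at the two report files once they are downloaded. A sketch under that assumption; the script's actual download step is not shown here:

```shell
# Sketch: compare two previously downloaded critcmp reports.
old="songs_main_09a4321.json"
new="songs_geosearch_24ec456.json"
# Printed rather than executed; remove the echo to run the real comparison
# (requires critcmp installed and both files present locally).
echo "critcmp $old $new"
```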