init artificial evaluation

FangnuoWu 2023-07-28 20:03:29 +08:00
commit 01355298cf
141 changed files with 7421 additions and 0 deletions

54
.gitignore vendored Normal file
View File

@ -0,0 +1,54 @@
*.o
*.elf
*.img
*.bin
tags
cscope.*
*.swp
!/**/firmware/*.bin
!/**/firmware/*.elf
# build
build/exec_log
build/gdb-port
.vscode
simulate.sh
debug.sh
.config
.*-config
musl-1.1.24/build
musl-1.1.24/obj
musl-1.1.24/lib
asm
kernel/include/arch/aarch64/arch/virt/asm-offsets.h
user/vmm/vm_img
user/vmm/vm_config
user/vmm/vm_result
run_*
exec_log
make-tag.sh
!scripts/build/
/ramdisk
/.chpm
__pycache__
infer_*_report.txt
experiment/log/*
user/demos/*
sosp23-exp.tar.gz
artificial_evaluation/logs/*
artificial_evaluation/*.jpg
artificial_evaluation/*.csv
artificial_evaluation/*/result/

1
README.md Executable file
View File

@ -0,0 +1 @@
# TreeSLS: A Whole-system Persistent Microkernel with Tree-structured State Checkpoint on NVM

162
artificial_eval.md Normal file
View File

@ -0,0 +1,162 @@
# SOSP 2023 Artifact Submission
We thank the artifact evaluators who have volunteered to do one of the toughest jobs out there!
## Requirements
Hardware
- Intel® Optane™ Persistent Memory (or use our QEMU mode for simulation)
Software
- docker: the OS is built inside a provided Docker container
- ipmitool: for interacting with the real machine (once the kernel is loaded)
- expect & python3: for scripts
## Building TreeSLS OS
> We currently provide pre-built kernel images, so you can skip this part.
To build everything from scratch, run `./defconfig x86_64` followed by `./quick-build.sh`.
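In other words (run from the repository root; the build happens inside the Docker container mentioned above):
```bash
./defconfig x86_64   # select the x86_64 configuration
./quick-build.sh     # build the kernel and user applications
```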
### Kernel Parameters
Different tests require different flags in `kernel/sls_config.cmake`, but each test comes with a `setup*.sh` script that sets these parameters and rebuilds the kernel **automatically**.
The meaning of each flag is given below:
1. Basic configuration
- SLS_RESTORE: if set, restore from the last checkpoint; otherwise start with an empty OS.
- SLS_EXT_SYNC: enable external synchrony.
- SLS_HYBRID_MEM: enable the `hybrid method` for checkpointing memory pages; otherwise fall back to the `CoW method` at runtime.
2. Report details
- SLS_REPORT_CKPT: report checkpoint information.
- SLS_REPORT_RESTORE: report restore information.
- SLS_REPORT_HYBRID: report information about the hybrid method.
3. Special tests
- SLS_SPECIAL_OMIT_PF: skip triggering checkpoint-related page faults.
- SLS_SPECIAL_OMIT_MEMCPY: skip copying checkpoint-related page-faulted pages.
- SLS_SPECIAL_OMIT_BENCHMARK: skip tracking benchmarks.
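Rather than editing `kernel/sls_config.cmake` by hand, each `setup*.sh` rewrites the relevant lines with `sed` and rebuilds, along these lines (excerpted from the setup scripts shipped in the test subdirs):
```bash
kconfig=kernel/sls_config.cmake
# replace the whole line containing a flag with the desired setting
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
./chbuild build
```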
### User App Parameters
You can also choose which applications to build by toggling the `ON`/`OFF` flags in `user/config.cmake` (e.g., change `ON` to `OFF` for applications you do not need).
```cmake
chcore_config(CHCORE_DEMOS_REDIS BOOL ON "Build redis?")
chcore_config(CHCORE_DEMOS_MEMCACHED BOOL ON "Build memcached?")
chcore_config(CHCORE_DEMOS_MEMCACHETEST BOOL ON "Build memcache test?")
chcore_config(CHCORE_DEMOS_SQLITE BOOL ON "Build SQLite3?")
chcore_config(CHCORE_DEMOS_LEVELDB BOOL ON "Build LevelDB?")
chcore_config(CHCORE_DEMOS_YCSB BOOL ON "Build YCSB-C?")
chcore_config(CHCORE_DEMOS_PHOENIX BOOL ON "Build Phoenix?")
chcore_config(CHCORE_DEMOS_ROCKSDB BOOL ON "Build RocksDB?")
```
## What to Know Before Testing!
The scripts for each test (`*.sh` files) are provided in **subdirs** of `artificial_evaluation`.
### Test Mode
You can use either `QEMU` or `IPMI` mode. Switch modes by setting the mode in both `artificial_evaluation/config.exp` and `artificial_evaluation/config.sh` (*you must modify both*). We **recommend** the `IPMI` mode, since QEMU's simulation of NVM differs considerably from real hardware.
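For example, switching to `IPMI` mode might look like the following sketch (the exact variable name is defined in those two files — `mode` below is an assumption; keep both files consistent):
```bash
# artificial_evaluation/config.sh (shell syntax) -- variable name assumed
mode=IPMI
# artificial_evaluation/config.exp uses Tcl syntax for the same choice,
# e.g.: set mode "IPMI"
```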
### More information about using the IPMI mode!
To run in `IPMI` mode, you should:
1. Build the OS image and load the `./build/chcore.iso` file onto the iDRAC platform.
![2-load-treesls](./load-treesls.png)
2. Boot the OS via its GRUB entry.
![1-boot-treesls](./boot-treesls.png)
3. Wait a minute, then interact with the OS through `ipmitool`, e.g. as shown below.
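A typical Serial-over-LAN session with `ipmitool` looks like this (replace the BMC address and credentials with your own):
```bash
# attach to the machine's serial console through the BMC
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate
```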
## Evaluating the Artifact
In most cases, the workflow of running each test is:
- **Currently not required**: use `setup*.sh` to set kernel flags and build the image.
- load the image (provided in the `images` dir) and boot it.
- use `test*.sh` to run the test and collect the data in `artificial_evaluation/logs`.
- use `table*.sh` or `fig*.sh` to parse the data and generate results in `artificial_evaluation/<subdir>/result`.
**NOTE**: *We recommend running all base-image tests together via `artificial_evaluation/test_base_all.sh`, and running the tests that need other setups separately!*
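For example (a sketch; it assumes the base image has already been loaded and booted):
```bash
cd artificial_evaluation
./test_base_all.sh   # runs every test that only needs the base image
```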
### 0. Functionality
We use QEMU mode to test functionality, i.e., whether our programs restart with the same working state they had at the time of the crash.
You should:
1. use `start.exp` to start the program. We test the ping-pong program by default; you can test whatever you like by replacing `send -- "test_crash_counter.bin & \r"` in the script.
2. while the program is running, press `Ctrl-A` then `X` to stop QEMU (crashing the program).
3. use `restore.exp` to restart from the latest checkpoint and check the output.
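The whole crash-and-recover flow then looks like this (run from the directory containing the two scripts; the NVM backing file path matches the one wiped by `start.exp`):
```bash
./start.exp     # wipes /tmp/nvm-file-$USER, boots QEMU, launches the test program
#  ...press Ctrl-A then X at any point to kill QEMU (the "crash")...
./restore.exp   # boots again without wiping the NVM file and resumes from the latest checkpoint
```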
### 1. Checkpoint/Restore Details (Table 2 & 3, Figure 9)
This test reports the checkpoint/restore details as well as other configurations like app size and object count.
#### 1.1 Checkpoint details
0. load `images/treesls-ckpt.iso`
1. use `test_ckpt_details.sh` to run each benchmark with the checkpoint log reported.
2. run `fig9.sh`.
#### 1.2 Restore details
0. load `images/treesls-restore.iso`
1. use `test_restore_details.sh` to run each benchmark with the restore log reported.
2. run `table3.sh`.
#### 1.3 Object count and size
0. use `images/treesls-mem-size.iso` (no need to load, size calculated in QEMU mode)
1. use `test_ckpt_size.exp` to calculate the memory size.
2. run `table2.sh`
### 2. Hybrid memory checkpoint method (Table 4 & Figure 10)
The information in Table 4 is collected together with Test 1 (the results are generated by `1-ckpt-restore-details/test-ckpt-details`). Simply run `table4.sh` to get the `table4.csv` file.
Figure 10 requires 4 different setups:
1. `+ckpt`: load `images/treesls-plusckpt.iso` and run `test_plusckpt.sh`.
2. `+pf`: load `images/treesls-pluspf.iso` and run `test_pluspf.sh`.
3. `+memcpy`: load `images/treesls-plusmemcpy.iso` and run `test_plusmemcpy.sh`.
4. `base and hybrid`: base can be tested with any setup (no checkpoint is taken), so we bundle it with the hybrid setup. Load `images/treesls-base.iso` and run `test_base_and_hybrid.sh` (recommended to run via `artificial_evaluation/test_base_all.sh`).
Finally, run `fig10.sh`.
### 3. External Synchrony Support (Figure 12)
1. load `images/treesls-base.iso` and run `test_base.sh` (recommended to run via `artificial_evaluation/test_base_all.sh`).
2. load `images/treesls-ext.iso` and run `test_ext_sync.sh`.
Finally, run `fig12.sh`.
> Note: The following tests (4, 5, 6) can all run with the base image. We recommend running them together via `artificial_evaluation/test_base_all.sh`!
### 4. Memcached (Figure 11)
1. load `images/treesls-base.iso` and run `test_memcached.sh`
2. run `fig11.sh`
### 5. Redis-YCSB (Figure 13)
1. load `images/treesls-base.iso` and run `test_ycsb.sh`
2. run linux tests
- run on the same machine using the scripts in the `linux-redis-ycsb` dir; please build the redis-server against musl-libc.
- run `./linux-redis-ycsb/test_ycsb.sh`
- copy the logs to `artificial_evaluation/logs/IPMI<orQEMU>/ycsb`
3. run `fig13.sh`
### 6. RocksDB-Prefix_dist (Figure 14)
1. load `images/treesls-base.iso` and run `test_rocksdb.sh`
2. run the RocksDB test provided by Aurora (https://github.com/rcslab/aurora-bench/tree/master); the script is given in `aurora-rocksdb/test_rockdb.sh`
3. run `fig14.sh`

View File

@ -0,0 +1,6 @@
#!/usr/bin/expect
source ../config.exp
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"

View File

@ -0,0 +1,18 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE ON)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK OFF)" $kconfig
cd $basedir
# ./chbuild clean
# ./chbuild defconfig x86_64
./chbuild build

View File

@ -0,0 +1,18 @@
#!/usr/bin/expect
source ../config.exp
set timeout 60
spawn rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
# You can run whatever you like here! The following is a recommendation
send -- "test_crash_counter.bin & \r"
send -- "checkpoint.bin -i %1 & \r"
# Print a reminder (in red, via the ANSI escape sequences from config.exp)
send_user "$red_color \n Use CTRL-A + X whenever you like to stop the program and restore! $white_color \n"
interact

View File

@ -0,0 +1,295 @@
import sys
import numpy as np
import matplotlib.pyplot as plt
import re
import os
import os.path as op
from break_down_config import *
import pandas as pd
SYS = 0
IPI = 1
MIGREATE = 2
OBJ = 3
CAP_GROUP = 4
THREAD = 5
CONNECTION = 6
NOTIFICATION = 7
IRQ = 8
PMO = 9
VMSPACE = 10
ALLOC = 11
TYPE_NR = 12
PFCOUNT = 0
DIRTY_PAGE = 1
TOT_CACHED_PAGE = 2
EXTRA_TYPE_NR = 3
vars = ['System_Vars', 'IPI', 'MIGREATE', 'OBJ', 'CAP_GROUP', 'THREAD', 'CONNECTION', 'NOTIFICATION', 'IRQ', 'PMO', 'VMSPACE', 'KVS', 'MEMCPY', 'ALLOC', 'THRACK_ACCESS', 'PTE_POLL']
labels = ['System Vars', 'IPI', 'Reset Page Table', 'K-V Store', 'Kernel Malloc', 'Data Copy']
# colors = ['orange', 'c', 'red', 'green', 'yellow', 'brown']
colors = ['grey', '#BCCCA3', '#0072BD', '#8682BD', '#D96A73', '#FABC55']
hatches = ['', '|||', '\\\\\\', '///', '++', '\\/\\/\\/']
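# Split the serial console log into per-checkpoint groups: a group starts after a
# line containing "==LOG" and ends at the "active list" summary line (inclusive).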
def parseFile(infile):
with open(infile, 'r') as f:
lines = f.readlines()
# split lines into groups
groups = []
current = []
counts = []
on = False
for line in lines:
if line.find("==LOG") >= 0:
on = True
continue
# if line.find("==END") >= 0:
if line.find("active list") >= 0:
on = False
current.append(line)
groups.append(current)
counts.append(len(current))
current = []
continue
if on:
current.append(line)
# assert not on and np.max(counts) == np.min(counts)
return groups
# handle one era
def find(lines, str):
for line in lines:
if line.find(str) >= 0:
res = int(line.replace('\r', '').replace('\n', '').split()[-1])
# print("find", str, res)
return res
print('Error: cannot find str "%s" in current group:\n' % str)
for line in lines:
print(line, end='')
assert False
def findMaxMigrateTime(lines):
time_list = []
for line in lines:
if line.find('migrate time') >= 0:
cpu_list = line.split()[2:]
if len(cpu_list) == 1:
return int(cpu_list[0])
for i in range(len(cpu_list)//2):
time_list.append(int(cpu_list[i*2+1]))
break
return max(time_list)
def findExtraInfo(lines):
extra = [0] * EXTRA_TYPE_NR
for line in lines:
if line.find('pf_count') >= 0:
# get pf_count
pf_count_start = line.find("pf_count=") + len("pf_count=")
pf_count_end = line.find(",", pf_count_start)
extra[PFCOUNT] = int(line[pf_count_start:pf_count_end])
# break
if line.find('active list') >= 0:
num_regex = r"\d+"
matches = re.findall(num_regex, line)
if len(matches) >= 4:
extra[DIRTY_PAGE] = int(matches[0])
extra[TOT_CACHED_PAGE] = int(matches[1])
# print(extra)
return extra
def findCounts(lines):
counts = [0] * 7
for line in lines:
if line.find("object count") >= 0:
line = line.replace('\r', '').replace('\n', '')
m = re.match("^object count ([0-9]+): ([0-9]+) ([0-9]+) *", line)
assert not m is None
d = [int(x) for x in m.groups()]
counts[d[0]] = d[1], d[2]
return counts
IGNORE_CNT = 1
MAX_CNT = 100 * 1000
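# The first IGNORE_CNT group(s) hold the initial full checkpoint and are
# accumulated separately in first_times; the remaining incremental checkpoints
# are averaged into times (groups that fail to parse are dropped from the average).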
def getTimeCount(groups):
# traverse each group to get times
times = [0] * TYPE_NR
first_times = [0] * TYPE_NR
totalExtra = np.zeros(EXTRA_TYPE_NR, dtype=np.float32)
totalCounts = np.zeros((7, 2), dtype=np.float32)
length = len(groups) - IGNORE_CNT
# ignore the first 5
for i, group in enumerate(groups):
if i < IGNORE_CNT:
try:
first_times[CAP_GROUP] += find(group, "object count 0")
first_times[THREAD] += find(group, "object count 1")
first_times[CONNECTION] += find(group, "object count 2")
first_times[NOTIFICATION] += find(group, "object count 3")
first_times[IRQ] += find(group, "object count 4")
first_times[PMO] += find(group, "object count 5")
first_times[VMSPACE] += find(group, "object count 6")
continue
except (ValueError, AssertionError) as e:
print(e)
print(group, ": error in this group, please check")
exit(0)
else:
try:
times[IPI] += find(group, "ipi time")
times[ALLOC] += find(group, "get second latest obj")
times[OBJ] += find(group, "ckpt object")
# times[KVS] += find(group, "kvs get time")
# times[KVS] += find(group, "kvs put time") + find(group, "kvs get time")
times[SYS] += find(group, "recycle cost") + find(group, "fmap cost")
# times[MEMCPY] += find(group, "memcpy time")
times[CAP_GROUP] += find(group, "object count 0")
times[THREAD] += find(group, "object count 1")
times[CONNECTION] += find(group, "object count 2")
times[NOTIFICATION] += find(group, "object count 3")
times[IRQ] += find(group, "object count 4")
times[PMO] += find(group, "object count 5")
times[VMSPACE] += find(group, "object count 6")
# times[PTE_POLL] += find(group, "pte pool time")
# times[THRACK_ACCESS] += find(group, "track access time")
times[MIGREATE] += findMaxMigrateTime(group)
extra = findExtraInfo(group)
totalExtra += np.array(extra)
counts = findCounts(group)
totalCounts += np.array(counts)
except (ValueError, AssertionError) as e:
print(e)
# exit(0)
length = length - 1
continue
# print(len(groups), np.array(times, dtype=np.float32))
times = np.array(times, dtype=np.float32) / length
first_times = np.array(first_times, dtype=np.float32) / IGNORE_CNT
totalCounts /= (length - 1)
totalExtra /= (length - 1)
return times, totalCounts, first_times, totalExtra
# traverse each era
def printinfo(infile):
allTimes = []
allCounts = []
# infile = sys.argv[1]
groups = parseFile(infile)
times, counts, first_times, extras = getTimeCount(groups)
print(counts, extras)
print("object time (ns)")
for i in range(CAP_GROUP, VMSPACE + 1):
if counts[i-CAP_GROUP][0] == 0:
continue
print(float(times[i])/counts[i-CAP_GROUP][0])
print("object first time (ns)")
for i in range(CAP_GROUP, VMSPACE + 1):
if counts[i-CAP_GROUP][0] == 0:
continue
print(float(first_times[i])/counts[i-CAP_GROUP][0])
times = times / 1000
# with open(outProportion, 'w+') as ofile:
for i in range(TYPE_NR):
print(vars[i], ", ", times[i])
# ofile.write("{}, {}\n".format(vars[i], times[i]))
if __name__ == '__main__':
# indir = "../ckpt-breakdown-backup/"
args = sys.argv
if len(args) < 3:
print("usage: python draw_fig.py [indir] [ckpt/extra/count]")
sys.exit(1)
indir = args[1]
arg2 = args[2]
c={0: 'C.G.', 1: 'Thread', 2: 'IPC', 3: 'Noti.', 4: 'IRQ', 5: 'PMO', 6: 'VMS'}
path = './result/'
if not os.path.exists(path):
os.mkdir(path)
if arg2 == 'ckpt':
incr_res = {}
full_res ={}
# extra = {}
for label, fname in workload_dict.items():
files = os.listdir(indir)
# get all files that start with fname
matches = [f for f in files if f.startswith(fname)]
__times = [0] * 7
__first_times = [0] * 7
for m in matches:
_groups = parseFile(indir + m)
_times, _counts, _first_times, _extra = getTimeCount(_groups)
for i in range(CAP_GROUP, VMSPACE + 1):
if _counts[i-CAP_GROUP][0] == 0:
continue
__times[i-CAP_GROUP] = float(_times[i])/_counts[i-CAP_GROUP][0]
__first_times[i-CAP_GROUP] = float(_first_times[i])/_counts[i-CAP_GROUP][0]
incr_res[label] = __times
full_res[label] = __first_times
# c={0: 'C.G.', 1: 'Thread', 2: 'IPC', 3: 'Noti.', 4: 'IRQ', 5: 'PMO', 6: 'VMS'}
# incr-th, full-th,
df = pd.DataFrame.from_dict(incr_res).transpose()
df = df.rename(columns=c)
print("Table3 (Incr):")
print(df)
df.to_csv("./result/table3-incur-colum.csv")
df = pd.DataFrame.from_dict(full_res).transpose()
df = df.rename(columns=c)
print("Table3 (Full):")
print(df)
df.to_csv("./result/table3-full-colum.csv")
elif arg2 == 'extra':
extra = {}
for label, fname in extra_workload_dict.items():
files = os.listdir(indir)
# get all files that start with fname
matches = [f for f in files if f.startswith(fname)]
for m in matches:
_groups = parseFile(indir + m)
_, _, _, _extra = getTimeCount(_groups)
extra[label] = _extra
# PFCOUNT = 0
# DIRTY_PAGE = 1
# TOT_CACHED_PAGE = 2
df = pd.DataFrame.from_dict(extra).transpose()
df = df.rename(columns={0: '# of runtime page faults', 1: '# of dirty cached pages', 2: '# of cached pages'})
print("Table4:")
print(df)
df.to_csv("./result/table4.csv")
elif arg2 == 'count':
count_res = {}
for label, fname in workload_dict.items():
files = os.listdir(indir)
# get all files that start with fname
matches = [f for f in files if f.startswith(fname)]
__counts = [0] * 7
for m in matches:
_groups = parseFile(indir + m)
_, _counts, _, _ = getTimeCount(_groups)
for i in range(7):
__counts[i] = round(_counts[i][0])
count_res[label] = __counts
# counts
df = pd.DataFrame.from_dict(count_res).transpose()
df = df.rename(columns=c)
print("Table2 (Object Count):")
print(df)
df.to_csv("./result/table2-counts.csv")

View File

@ -0,0 +1,36 @@
SYS = 0
IPI = 1
MIGREATE = 2
OBJ = 3
CAP_GROUP = 4
THREAD = 5
CONNECTION = 6
NOTIFICATION = 7
IRQ = 8
PMO = 9
VMSPACE = 10
ALLOC = 11
TYPE_NR = 12
labels = ['Global', 'IPI', 'Hybrid Copy', 'Cap Tree', 'Cap Group', 'Thread', 'Connection', 'Notification', 'IRQ', 'PMO', 'VMSpace', 'Alloc']
# colors = ['grey', '#BCCCA3', '#0072BD', '#8682BD', '#D96A73', '#FABC55', 'grey', '#BCCCA3', '#0072BD', '#8682BD', '#D96A73', '#FABC55']
hatches = ['', '|||', '\\\\\\', '///', '++', '\\/\\/\\/', '', '|||', '\\\\\\', '///', '++', '\\/\\/\\/']
# colors = ['#E64B35','#4DBBD6','#00A086','#3D5488']
workload_dict = {
'Default': 'default',
'SQLite': 'sqlite',
'LevelDB': 'leveldb',
'WordCount': 'word_count',
'KMeans':'kmeans',
'Redis': 'redis',
'Memcached': 'memcached',
}
extra_workload_dict = {
'PCA': 'pca',
'KMeans':'kmeans',
'Redis': 'redis',
'Memcached': 'memcached',
}

View File

@ -0,0 +1,124 @@
import sys
import numpy as np
import matplotlib.pyplot as plt
import re
import os
import os.path as op
from break_down_config import *
import pandas as pd
CAP_GROUP = 0
THREAD = 1
CONNECTION = 2
NOTIFICATION = 3
IRQ = 4
PMO = 5
VMSPACE = 6
TYPE_NR = 7
def parseFile(infile):
with open(infile, 'r') as f:
lines = f.readlines()
# split lines into groups
groups = []
current = []
counts = []
on = False
for line in lines:
if line.find("[CKPT WS] latest") >= 0:
on = True
continue
if line.find("tcnt:") >= 0:
on = False
current.append(line)
groups.append(current)
counts.append(len(current))
current = []
continue
if on:
current.append(line)
# assert not on and np.max(counts) == np.min(counts)
return groups
def find(lines, str):
for line in lines:
if line.find(str) >= 0:
res = int(line.replace('\r', '').replace('\n', '').split()[-1])
# print("find", str, res)
return res
print('Error: cannot find str "%s" in current group:\n' % str)
for line in lines:
print(line, end='')
assert False
def findCounts(lines):
counts = [0] * 7
for line in lines:
if line.find("object count") >= 0:
line = line.replace('\r', '').replace('\n', '')
m = re.match("^object count ([0-9]+): ([0-9]+), time: ([0-9]+) *", line)
assert not m is None
d = [int(x) for x in m.groups()]
counts[d[0]] = d[1], d[2]
return counts
def getTimeCount(groups):
times = [0] * 7
totalCounts = np.zeros((7, 2), dtype=np.float32)
# ignore the first 5
for i, group in enumerate(groups):
counts = findCounts(group)
totalCounts += np.array(counts)
times[CAP_GROUP] += find(group, "object count 0")
times[THREAD] += find(group, "object count 1")
times[CONNECTION] += find(group, "object count 2")
times[NOTIFICATION] += find(group, "object count 3")
times[IRQ] += find(group, "object count 4")
times[PMO] += find(group, "object count 5")
times[VMSPACE] += find(group, "object count 6")
return times, totalCounts
if __name__ == '__main__':
# "/restore-breakdown/"
indir = sys.argv[1]
# for root, dirs, files in os.walk(dir_path):
# for file_name in files:
# file_path = os.path.join(root, file_name)
# print(file_path)
# printinfo(file_path)
# incr_res = {}
restore_res ={}
for label, fname in workload_dict.items():
files = os.listdir(indir)
# print(files)
# get all files that start with fname
matches = [f for f in files if f.startswith(fname)]
for m in matches:
_groups = parseFile(indir + m)
_times, _counts = getTimeCount(_groups)
for i in range(CAP_GROUP, VMSPACE + 1):
if _counts[i-CAP_GROUP][0] == 0:
continue
_times[i] = float(_times[i])/_counts[i-CAP_GROUP][0]
restore_res[label] = _times
# # incr-th, full-th,
df = pd.DataFrame.from_dict(restore_res).transpose()
c={0: 'C.G.', 1: 'Thread', 2: 'IPC', 3: 'Noti.', 4: 'IRQ', 5: 'PMO', 6: 'VMS'}
df = df.rename(columns=c)
print("Table3 (Restore):")
print(df)
df.to_csv("./result/table3-restore-column.csv")
# df = pd.DataFrame.from_dict(full_res).transpose()
# # print(wls)
# # df.insert(0, threads, wls)
# df.to_csv("obj-detail-full.csv")

View File

@ -0,0 +1,116 @@
import sys
import numpy as np
import matplotlib.pyplot as plt
import re
import os
import os.path as op
from break_down import parseFile, getTimeCount
from break_down_config import *
import seaborn as sns
args = sys.argv
if len(args) < 3:
print("usage: python draw_fig.py [indir] [a/b]")
sys.exit(1)
arg1 = args[1]
arg2 = args[2]
print("drawing fig", arg2, "...")
indir = arg1
# indir = "../ckpt-breakdown/"
allTimes = []
workloads = []
for label, fname in workload_dict.items():
files = os.listdir(indir)
# get all files that start with fname
matches = [f for f in files if f.startswith(fname)]
workloads.append(label)
times = []
for m in matches:
print("parsing benchmark: ", label)
_groups = parseFile(indir + m)
if len(_groups) <= 2:
print("data in file ", indir + m, " is incomplete, please re-run it!")
print(_groups)
exit(0)
_times, _counts, _first_times, _ = getTimeCount(_groups)
times.append(_times)
avg_times = []
for i in range(len(times[0])):
total = 0
for j in range(len(times)):
total += times[j][i]
avg = total / len(times)
avg_times.append(avg)
allTimes.append(avg_times)
# Draw figures
x = np.arange(0, len(workloads))
# traverse each era
plt.rcdefaults()
plt.rcParams.update({'font.size': 22, 'figure.figsize': (8, 4)})
plt.figure()
plt.rc('xtick', labelsize=18)
plt.xticks(x, workloads, rotation=20)
plt.rc('ytick', labelsize=18)
# plt.xlabel("Workloads", fontsize=16)
plt.ylabel("Checkpoint Time (μs)", fontsize=22)
plt.tight_layout()
if str(arg2) == "a":
my_palette = sns.color_palette("RdGy",4)
colors = [my_palette[i] for i in range(len(my_palette))]
y = np.array(allTimes).T/1000.0
y[OBJ] = sum(y[CAP_GROUP:VMSPACE+1])
draw = [IPI, SYS, OBJ]
draw_count = 0
    bottom = np.zeros(len(workloads))  # running top of the stacked bars
width = 0.25
for i in draw:
plt.bar(x - width/2, y[i], width, bottom = bottom, color=colors[draw_count], label=labels[i], edgecolor='black')
draw_count += 1
bottom += y[i]
i = MIGREATE
# plt.bar(x + width/2, y[i], width, color=colors[draw_count], label=labels[i], hatch=hatches[draw_count], edgecolor='black')
plt.bar(x + width/2, y[i], width, color=colors[draw_count], label=labels[i], edgecolor='black')
plt.grid(True, axis='y', linestyle=':')
plt.legend(fontsize=18, frameon=False, ncol=2, loc='upper left', columnspacing=0.5)
elif str(arg2) == "b":
my_palette = sns.color_palette("RdGy", 6)
colors = [my_palette[i] for i in range(len(my_palette))]
allTimes = np.array(allTimes).T/1000.0
draw = [CAP_GROUP, THREAD, CONNECTION, NOTIFICATION, PMO, VMSPACE]
draw_count = 0
    bottom = np.zeros(len(workloads))  # running top of the stacked bars
# print(workloads, allTimes)
for i in draw:
plt.bar(workloads, allTimes[i], 0.5, bottom = bottom, color=colors[draw_count], label=labels[i], edgecolor='black')
draw_count += 1
bottom += allTimes[i]
plt.grid(True, axis='y', linestyle=':')
plt.legend(fontsize=18, frameon=False, ncol=2, loc='upper left', columnspacing=0.5)
else:
print("invalid args")
# plt.show()
path = './result/'
if not os.path.exists(path):
os.mkdir(path)
plt.savefig('./result/fig9{}.jpg'.format(arg2), format='jpg', dpi=1000)

View File

@ -0,0 +1,6 @@
#!/bin/bash
source ../config.sh
python draw_fig.py $logbasedir/ckpt-breakdown/ a
python draw_fig.py $logbasedir/ckpt-breakdown/ b

View File

@ -0,0 +1,43 @@
import sys
import numpy as np
import re
import os
import os.path as op
line_begin = "free mem size = "
line_end = " MB"
if __name__ == "__main__":
file_dir = sys.argv[1]
outfile = sys.argv[2]
files = []
workload = {}
for file in os.listdir(file_dir):
workload[file_dir+"/"+file] = file.split('.')[0]
files.append(file_dir+"/"+file)
files.sort()
out = open(outfile, "w+")
for file in files:
with open(file, 'rt') as f:
lines=f.readlines()
min_mem_size = sys.maxsize
max_mem_size = 0
for line in lines:
index_begin = line.find(line_begin)
index_end = line.find(line_end)
if index_begin >= 0:
try:
mem_size = int(line[index_begin + len(line_begin): index_end])
except ValueError:
continue
# print(mem_size)
if mem_size < min_mem_size:
min_mem_size = mem_size
if mem_size > max_mem_size:
max_mem_size = mem_size
out.write(("%s:\tmin_mem_size=%d,\tmax_mem_size=%d,\tdiff=%d\n") % (workload[file], min_mem_size, max_mem_size, max_mem_size - min_mem_size))
out.close()

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT ON)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID ON)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE ON)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM OFF)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK OFF)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE ON)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM OFF)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE ON)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK OFF)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,11 @@
#!/bin/bash
source ../config.sh
mkdir -p './result'
# Object Composition (Count)
python break_down.py $logbasedir/ckpt-breakdown/ count
# Size (MB)
python parse_mem_size.py $logbasedir/ckpt-size/ './result/mem_size.csv'

View File

@ -0,0 +1,7 @@
#!/bin/bash
source ../config.sh
mkdir -p './result'
python break_down.py $logbasedir/ckpt-breakdown/ ckpt
python break_down_restore.py $logbasedir/restore-breakdown/ restore

View File

@ -0,0 +1,6 @@
#!/bin/bash
source ../config.sh
mkdir -p './result'
python break_down.py $logbasedir/ckpt-breakdown/ extra

View File

@ -0,0 +1,22 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/ckpt-breakdown
loop=(0)
mkdir -p $logdir
# edit the list below to run a subset of the workloads
for workload in default memcached redis sqlite leveldb kmeans word_count pca
do
for run in ${loop[@]}
do
if [ $workload == "redis" ]; then
$appdir/$workload.exp ckpt-log set nopipe 1 3 2>&1 | tee $logdir/$workload.ckpt1ms.log3.$run.log
else
$appdir/$workload.exp ckpt-log 1 3 2>&1 | tee $logdir/$workload.ckpt1ms.log3.$run.log
fi
sleep 5
done
done

View File

@ -0,0 +1,43 @@
#!/usr/bin/expect -f
source ../config.exp
source tool.exp
set logdir $logbasedir/ckpt-size
set timeout 1200
# redisSet redisGet memcached kmeans word_count leveldb sqlite
set workloads {default redisSet memcached kmeans word_count sqlite}
spawn mkdir -p $logdir
foreach workload $workloads {
foreach flag {raw ckpt} {
foreach i {1 2 3} {
# launch chcore
spawn rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn ../../build/simulate.sh
expect "Welcome to ChCore shell!"
# open the log file
log_file "$logdir/${workload}_${flag}_${i}.log"
# get free mem
puts "before run ${workload}"
send -- "get_free_mem_size.bin -i %1000 &\r"
expect "free mem size = "
sleep 1
# run workload
run_test $workload $flag
# close the log file
sleep 1
log_file
# terminate qemu
send "\01"
send "x"
}
}
}

View File

@ -0,0 +1,25 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/restore-breakdown
loop=(0)
mkdir -p $logdir
# edit the list below to run a subset of the workloads
for workload in default word_count sqlite leveldb kmeans redis memcached
do
for run in ${loop[@]}
do
# $appdir/$workload.exp "restore-log" 500 4 2>&1 | tee $logdir/$workload.restore.$run.log
if [ $workload == "redis" ]; then
$appdir/$workload.exp restore-test set nopipe 1000 0 2>&1 | tee $logdir/$workload.restore.$run.log
    elif [ $workload == "leveldb" ]; then
$appdir/$workload.exp "restore-test" 100 0 2>&1 | tee $logdir/$workload.restore.$run.log
else
$appdir/$workload.exp "restore-test" 1000 0 2>&1 | tee $logdir/$workload.restore.$run.log
fi
sleep 5
done
done

View File

@ -0,0 +1,52 @@
proc run_test {workload flag} {
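    # Run one workload inside the ChCore shell and wait for its completion
    # marker; when flag is "ckpt", start the periodic checkpoint task first.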
set thread 8
if {$flag == "ckpt"} {
send -- "checkpoint.bin -i %10 -l 0 -a 14 &\r"
expect "Launching /checkpoint.bin"
puts "${workload}"
}
switch $workload {
"kmeans" {
send -- "kmeans.bin -p 500000 &\r"
expect "KMeans: MapReduce Completed"
}
"word_count" {
send -- "word_count.bin word_50MB.txt &\r"
expect "Wordcount: MapReduce Completed"
}
"redisSet" {
puts "redis set test"
send -- "redis-server --save \"\" -h 127.0.0.1 &\r"
expect "poll fd server is not lwip"
send -- "redis-benchmark -t set -n 1000000 -d 1024 --threads $thread -r 10000 &\r"
expect "requests per second"
}
"redisGet" {
send -- "redis-server --save \"\" -h 127.0.0.1 &\r"
expect "poll fd server is not lwip"
send -- "redis-benchmark -t get -n 1000000 -d 1024 --threads $thread -r 10000 &\r"
expect "requests per second"
}
"memcached" {
send -- "memcached -l 127.0.0.1 -p 123 & \r"
expect "poll fd server is not lwip"
send -- "memcachetest -h 127.0.0.1:123 -M 1024 -F -t $thread -i 1000000 &\r"
expect "Launching /memcachetest"
# expect "Total gets:"
expect "Total sets:"
}
"sqlite" {
send -- "test-sqlite3.bin tmpfs &\r"
expect "sqlite_test done"
}
"leveldb" {
send -- "leveldb-dbbench &\r"
expect "snappyuncomp"
expect "Min"
}
default {
puts "error workload"
}
}
}

View File

@ -0,0 +1,40 @@
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sys
import seaborn as sns
my_palette = sns.color_palette("RdGy",5)
colors = [my_palette[i] for i in range(len(my_palette))]
# Create a DataFrame from the CSV file
# df = pd.read_csv('result-moti.csv', index_col=0)
df = pd.read_csv(sys.argv[1], index_col=0)
# df = df.transpose()
# print(df)
plt.rcdefaults()
plt.rcParams.update({'font.size': 22, 'figure.figsize': (8, 4)})
# Normalize the DataFrame
df_norm = df.divide(df['base (no checkpoint)'], axis=0)
# Select the columns to plot
cols_to_plot = ['base (no checkpoint)', '+ checkpoint', '+ page fault', '+ page memcpy', '+ hybrid copy']
df_plot = df_norm[cols_to_plot]
# Create the bar chart
# colors = ['#E64B35','#4DBBD6','#00A086','#3D5488']
ax = df_plot.plot(kind='bar', color=colors, stacked=False, width=0.8, edgecolor='black')
# Configure the chart
ax.set_ylabel('Normalized Run Time')
# ax.set_title('Benchmark Results')
ax.set_xticklabels(df.index, rotation=0)
# plt.yticks(np.arange(0, 16, 2))
plt.grid(True, axis='y', linestyle=':')
plt.legend(fontsize=18, frameon=False, loc='upper right')
plt.tight_layout()
# plt.show()
plt.savefig('./result/fig10.jpg', format='jpg', dpi=1000)

View File

@ -0,0 +1,8 @@
#!/bin/bash
source ../config.sh
mkdir -p ./result
python read_data.py $logbasedir/hybrid-mem
python draw_fig10.py ./result/hybrid-mem.csv

View File

@ -0,0 +1,79 @@
import os, sys
import pandas as pd
# Define the directory path
ROOT_DIR = sys.argv[1]
# Define the prefixes and metrics
prefixes = ['raw.', 'plusckpt.', 'pluspf.','plusmemcpy.', 'ckpt1ms.']
# workloads = ['memcached', 'redis', 'kmeans', 'pca']
workloads = []
items = os.listdir(ROOT_DIR)
for item in items:
item_path = os.path.join(ROOT_DIR, item)
if os.path.isdir(item_path):
workloads.append(item)
# Initialize the dictionary
data = {}
data['raw'] = {}
data['cal'] = {}
def parse_exe_time(workload, lines):
for l in lines:
if workload == 'memcached':
if 'Tot: ' in l:
return float(l.split()[1])
if workload == 'redis':
if 'completed in' in l:
return float(l.split()[4])
if workload == 'pca' or workload == 'kmeans':
if 'library: ' in l and 'inter library: ' not in l:
return float(l.split()[1])
return 0
# Loop through the log files
for workload in workloads:
data['raw'][workload] = {}
data['cal'][workload] = {}
# Loop through the metrics
for prefix in prefixes:
data['raw'][workload][prefix] = []
data['cal'][workload][prefix] = 0
for workload in workloads:
for prefix in prefixes:
# Find log files with the given prefix
directory = ROOT_DIR + '/' + workload
file_names = [file for file in os.listdir(directory) if file.startswith(prefix)]
# Loop through the log files
for file_name in file_names:
file_path = os.path.join(directory, file_name)
# Read the log file
with open(file_path, 'r') as file:
lines = file.readlines()
# Parse the log file and extract the required metrics
time = parse_exe_time(workload, lines)
data['raw'][workload][prefix].append(time)
for prefix in prefixes:
for workload in workloads:
length = len(data['raw'][workload][prefix])
if length != 0:
data['cal'][workload][prefix] = sum(data['raw'][workload][prefix])/length
# Convert the dictionary to a DataFrame
df = pd.DataFrame.from_dict(data['cal']).transpose()
# 'raw.', 'plusckpt.', 'pluspf.','plusmemcpy.', 'ckpt1ms.'
df = df.rename(columns={'raw.': 'base (no checkpoint)',
'plusckpt.': '+ checkpoint',
'pluspf.': '+ page fault',
'plusmemcpy.': '+ page memcpy',
'ckpt1ms.': '+ hybrid copy'})
print(df)
# Save the DataFrame as a CSV file
df.to_csv('./result/hybrid-mem.csv')

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT ON)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM OFF)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF ON)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY ON)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM OFF)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM OFF)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY ON)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
./chbuild clean
./chbuild build

View File

@ -0,0 +1,33 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/hybrid-mem
loop=(0)
mkdir -p $logdir
for workload in memcached redis kmeans pca
do
mkdir -p $logdir/$workload
for run in ${loop[@]}
do
# test base (no checkpoint)
f1=$logdir/$workload/raw.$run.log
if [ $workload == "redis" ]; then
$appdir/$workload.exp raw set nopipe 2>&1 | tee $f1
else
$appdir/$workload.exp raw 2>&1 | tee $f1
fi
sleep 5
# test hybrid memory checkpoint
f2=$logdir/$workload/ckpt1ms.with-migration.$run.log
if [ $workload == "redis" ]; then
$appdir/$workload.exp ckpt set nopipe 1 0 2>&1 | tee $f2
else
$appdir/$workload.exp ckpt 1 0 2>&1 | tee $f2
fi
sleep 5
done
done

View File

@ -0,0 +1,23 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/hybrid-mem
loop=(0)
mode=$1
mkdir -p $logdir
for workload in memcached
do
mkdir -p $logdir/$workload
for run in ${loop[@]}
do
f=$logdir/$workload/$mode.$run.log
if [ $workload == "redis" ]; then
$appdir/$workload.exp ckpt set nopipe 1 0 2>&1 | tee $f
else
$appdir/$workload.exp ckpt 1 0 2>&1 | tee $f
fi
done
done

View File

@ -0,0 +1,3 @@
#!/bin/bash
./test_plus.sh plusckpt

View File

@ -0,0 +1,3 @@
#!/bin/bash
./test_plus.sh plusmemcpy

View File

@ -0,0 +1,3 @@
#!/bin/bash
./test_plus.sh pluspf

View File

@ -0,0 +1,114 @@
import os, sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, MaxNLocator
import seaborn as sns
my_palette = sns.color_palette("RdGy",4)
four_colors = [my_palette[i] for i in range(len(my_palette))]
csv_file1 = sys.argv[1]
df1 = pd.read_csv(csv_file1, index_col=0)
df1 = df1.transpose()
csv_file2 = sys.argv[2]
df2 = pd.read_csv(csv_file2, index_col=0)
df2 = df2.transpose()
print(df1)
print(df2)
freq_set = [1, 5, 10]
def draw_thp(thp, title, ofn, show_label=False):
x_list = freq_set
plt.rcdefaults()
plt.rcParams.update({'font.size': 20, 'figure.figsize': (4,4)})
plt.figure()
bench_list = [
('THP', 'TreeSLS', four_colors[0], '-', 'x'),
('THP-ext', 'TreeSLS-ExtSync', four_colors[1], '-', 'O'),
]
    # Plot TreeSLS (df1) and TreeSLS-ExtSync (df2) side by side, with the
    # no-checkpoint baseline as a dashed line.
    width = 0.4
    x = np.arange(0, 3)
    plt.bar(x - width/2, df1[thp][1:]/1000, label=bench_list[0][1], color=bench_list[0][2], width=width, edgecolor='black')
    plt.axhline(y=df1[thp][0]/1000, ls='--', c='black', label='Baseline')
    plt.bar(x + width/2, df2[thp][1:]/1000, label=bench_list[1][1], color=bench_list[1][2], width=width, edgecolor='black')
# plt.legend(fontsize=18, frameon=False, bbox_to_anchor=(1, -0.18), ncol=3)
plt.xticks(np.arange(0, 3), ['1', '5', '10'])
plt.xlabel('Checkpoint Interval (ms)')
plt.ylabel('Throughput (Kops/s)')
plt.grid(True, axis='y', linestyle=':')
plt.tight_layout()
plt.savefig(ofn, dpi=1200, format='jpg', bbox_inches='tight')
# plt.show()
def draw_lat(lat, title, ofn, show_label=False):
x_list = freq_set
plt.rcdefaults()
plt.rcParams.update({'font.size': 20, 'figure.figsize': (4,4)})
plt.figure()
bench_list = [
(lat, 'TreeSLS', four_colors[0], '-', 'x'),
(lat+'-ext', 'TreeSLS-ExtSync', four_colors[1], '-', 'o'),
]
    # Plot TreeSLS (df1) and TreeSLS-ExtSync (df2) side by side, with the
    # no-checkpoint baseline as a dashed line.
    width = 0.4
    x = np.arange(0, 3)
    plt.bar(x - width/2, df1[lat][1:], label=bench_list[0][1], color=bench_list[0][2], width=width, edgecolor='black')
    plt.axhline(y=df1[lat][0], ls='--', c='black', label='Baseline')
    plt.bar(x + width/2, df2[lat][1:], label=bench_list[1][1], color=bench_list[1][2], width=width, edgecolor='black')
plt.xticks(np.arange(0, 3), [1, 5, 10])
plt.xlabel('Checkpoint Interval (ms)')
plt.ylim(ymin=0)
plt.ylabel('Latency (ms)')
plt.grid(True, axis='y', linestyle=':')
plt.tight_layout()
plt.savefig(ofn, dpi=1200, format='jpg', bbox_inches='tight')
# plt.show()
def draw_legend(ofn):
# create the figure and subplots
fig, ax = plt.subplots()
bench_list = [
# ('P50', 'P50', four_colors[0], '-', 'x'),
('P95', 'TreeSLS', four_colors[0], '-', 'x'),
# ('P50-ext', 'P50-ext', four_colors[1], '-', 'o'),
('P95-ext', 'TreeSLS-ExtSync', four_colors[1], '-', 'o'),
]
for (tar, label, _color, linestyle, marker) in bench_list:
# plot the data on the ax
ax.bar(0, 0, color=_color, width=0, edgecolor='black', label=label)
ax.axhline(y=0, xmin=0, xmax=0, ls='--', c='black', label='Baseline')
# add the legend outside the plot area
legend = ax.legend(bbox_to_anchor=(1.05, -0.18), fontsize=12, frameon=False, ncol=3)
legend_fig = legend.figure
legend_fig.canvas.draw()
bbox = legend.get_window_extent().transformed(legend_fig.dpi_scale_trans.inverted())
legend_fig.savefig(ofn, format='jpg', dpi=1000, bbox_inches=bbox)
if __name__ == "__main__":
draw_lat('P50', 'Latency', "./result/fig12a.jpg")
draw_thp('Throughput', "Throughput", "./result/fig12b.jpg")
# draw_legend("./result/legend.jpg")

View File

@ -0,0 +1,9 @@
#!/bin/bash
source ../config.sh
mkdir -p ./result
python read_data.py $logbasedir/ext-sync/base base
python read_data.py $logbasedir/ext-sync/ext ext
python draw_fig12.py ./result/ext-sync-base.csv ./result/ext-sync-ext.csv

View File

@ -0,0 +1,55 @@
import os, sys
import pandas as pd
# Define the directory path
directory = sys.argv[1]
mode = sys.argv[2]
# Define the prefixes and metrics
prefixes = ['ckpt0.', 'ckpt1.', 'ckpt5.', 'ckpt10.']
metrics = ['P50', 'Throughput']
# Initialize the dictionary
data = {}
data['raw'] = {}
data['cal'] = {}
# Loop through the log files
for prefix in prefixes:
data['raw'][prefix] = {}
data['cal'][prefix] = {}
# Loop through the metrics
for metric in metrics:
data['raw'][prefix][metric] = []
data['cal'][prefix][metric] = 0
# Find log files with the given prefix
file_names = [file for file in os.listdir(directory) if file.startswith(prefix)]
# Loop through the log files
for file_name in file_names:
file_path = os.path.join(directory, file_name)
# Read the log file
with open(file_path, 'r') as file:
lines = file.readlines()
# Parse the log file and extract the required metrics
for l in lines:
if "50% <=" in l:
data['raw'][prefix]['P50'].append(float(l.split()[2]))
if "requests per second" in l:
data['raw'][prefix]['Throughput'].append(float(l.split()[0]))
print(data)
for prefix in prefixes:
for metric in metrics:
length = len(data['raw'][prefix][metric])
if length != 0:
data['cal'][prefix][metric] = sum(data['raw'][prefix][metric])/length
# Convert the dictionary to a DataFrame
df = pd.DataFrame.from_dict(data['cal'])
# Save the DataFrame as a CSV file
df.to_csv('./result/ext-sync-{}.csv'.format(mode))

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC ON)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,20 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/ext-sync/base
mkdir -p $logdir
loop=(0)
intervals=(1 5 10)
$appdir/redis.exp raw set 32 2>&1 | tee $logdir/ckpt0.pip32.0.log
for freq in ${intervals[@]}
do
mkdir -p $logdir/$freq
for run in ${loop[@]}
do
$appdir/redis.exp ckpt set 32 $freq 0 2>&1 | tee $logdir/ckpt$freq.pip32.$run.log
done
done

View File

@ -0,0 +1,18 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/ext-sync/ext
mkdir -p $logdir
loop=(0)
intervals=(1 5 10)
for freq in ${intervals[@]}
do
for run in ${loop[@]}
do
$appdir/redis.exp ckpt set 32 $freq 0 2>&1 | tee $logdir/ckpt$freq.pip32.$run.log
sleep 10
done
done

View File

@ -0,0 +1,49 @@
import os, sys
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, MaxNLocator
import seaborn as sns
my_palette = sns.color_palette("RdGy",4)
four_colors = [my_palette[i] for i in range(len(my_palette))]
freq_set = [1, 5, 10, 50]
def draw(csv_file, title, ofn, show_label=False):
x_list = freq_set
df = pd.read_csv(csv_file, index_col=0)
df = df.transpose()
plt.rcdefaults()
plt.rcParams.update({'font.size': 24, 'figure.figsize': (5, 4)})
plt.figure()
bench_list = [
('P50', 'P50-TreeSLS', four_colors[0], '-', 'x'),
('P95', 'P95-TreeSLS', four_colors[3], '-', 'x'),
# ('P99', 'TreeSLS-P99', '#d13c74', '-', 'o')
]
for (tar, label, color, linestyle, marker) in bench_list:
# print(df[tar][1:])
plt.plot(x_list, df[tar][1:], label=label, c=color, ls=linestyle, marker=marker, markersize=10, markeredgewidth=2, linewidth=2)
plt.axhline(y=df[tar][0], xmin=0, xmax=x_list[-1], ls='--', c=color, linewidth=2, label=tar+'-baseline')
if show_label:
plt.legend(frameon=False, fontsize=22)
plt.xticks([1, 5, 10, 50], [1, 5, 10, 50])
plt.xlabel('Checkpoint Interval (ms)')
plt.ylim(ymin=0)
plt.ylabel('Latency (us)')
plt.grid(True, 'major', linestyle=':')
plt.tight_layout()
plt.savefig(ofn, dpi=1200, format='jpg', bbox_inches='tight')
# plt.show()
if __name__ == "__main__":
draw(sys.argv[1]+'memcached-GET.csv', 'GET', "./result/fig11b.jpg", True)
draw(sys.argv[1]+'memcached-SET.csv', 'SET', "./result/fig11a.jpg", True)

View File

@ -0,0 +1,6 @@
#!/usr/bin/bash
source ../config.sh
# python read_memcached.py $logbasedir/memcached
python draw_fig11.py './result/'

View File

@ -0,0 +1,94 @@
import os, sys
import pandas as pd
# Define the directory path
directory = sys.argv[1]
# Define the prefixes and metrics
prefixes = ['ckpt0.', 'ckpt1.', 'ckpt5.', 'ckpt10.', 'ckpt50.']
metrics = ['P50', 'P95', 'P99']
workloads = ['SET', 'GET']
labels = ['SET', 'GET', 'SETraw', 'GETraw']
# Initialize the dictionary
data = {}
for l in labels:
data[l] = {}
def parse(words):
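    # memcachetest prints each percentile as '<value> <unit>'; convert ns and ms
    # values to microseconds so all latencies share one unit.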
idx = 7
if words[idx+1] == 'ns':
p50 = float(words[idx])/1000
elif words[idx+1] == 'ms':
p50 = float(words[idx])*1000
else:
p50 = float(words[idx])
idx = 11
if words[idx+1] == 'ns':
p95 = float(words[idx])/1000
elif words[idx+1] == 'ms':
p95 = float(words[idx])*1000
else:
p95 = float(words[idx])
idx=13
if words[idx+1] == 'ns':
p99 = float(words[idx])/1000
elif words[idx+1] == 'ms':
p99 = float(words[idx])*1000
else:
p99 = float(words[idx])
return p50, p95, p99
# Loop through the log files
for prefix in prefixes:
for l in labels:
data[l][prefix] = {}
# Loop through the metrics
for metric in metrics:
data['GET'][prefix][metric] = 0
data['SET'][prefix][metric] = 0
data['GETraw'][prefix][metric] = []
data['SETraw'][prefix][metric] = []
# Find log files with the given prefix
file_names = [file for file in os.listdir(directory) if file.startswith(prefix)]
# Loop through the log files
for file_name in file_names:
file_path = os.path.join(directory, file_name)
# Read the log file
with open(file_path, 'r') as file:
lines = file.readlines()
# Parse the log file and extract the required metrics
for i in range(0, len(lines)):
if "Get operations:" in lines[i]:
words = lines[i+2].split()
p50, p95, p99 = parse(words)
# print(p50, p95, p99)
data['GETraw'][prefix]['P50'].append(p50)
data['GETraw'][prefix]['P95'].append(p95)
data['GETraw'][prefix]['P99'].append(p99)
if "Set operations:" in lines[i]:
words = lines[i+2].split()
p50, p95, p99 = parse(words)
data['SETraw'][prefix]['P50'].append(p50)
data['SETraw'][prefix]['P95'].append(p95)
data['SETraw'][prefix]['P99'].append(p99)
print(data)
for i in workloads:
for prefix in prefixes:
for metric in metrics:
data[i][prefix][metric] = sum(data[i+'raw'][prefix][metric])/len(data[i+'raw'][prefix][metric])
# Convert the dictionary to a DataFrame
df = pd.DataFrame.from_dict(data[i])
# Save the DataFrame as a CSV file
df.to_csv('./result/memcached-{}.csv'.format(i))

View File

@ -0,0 +1,17 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
./chbuild build

View File

@ -0,0 +1,22 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/memcached
mkdir -p $logdir
loop=(0)
intervals=(1 5 10 50)
for freq in ${intervals[@]}
do
for run in ${loop[@]}
do
f=t8.$run.log
$appdir/memcached.exp raw 2>&1 | tee $logdir/ckpt0.$f
sleep 10
$appdir/memcached.exp ckpt $freq 0 2>&1 | tee $logdir/ckpt$freq.$f
sleep 10
done
done

View File

@ -0,0 +1,43 @@
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sys
csv_file=sys.argv[1]
# Create a DataFrame from the CSV file
df = pd.read_csv(csv_file, index_col=0)
# df = df.transpose()
print(df)
plt.rcdefaults()
plt.rcParams.update({'font.size': 22, 'figure.figsize': (8, 4)})
# Normalize the DataFrame
# df_norm = df.divide(df['raw'], axis=0)
# Select the columns to plot
# cols_to_plot = ['chcore-baseline', 'chcore-1msckpt', 'linux-baseline', 'linux-NVM-WAL', 'linux-disk-WAL']
# cols_to_plot = ['TreeSLS-base','TreeSLS-1ms','Linux-base','Linux-WAL']
# df_plot = df[cols_to_plot]
df_plot = df[df.columns]
# Create the bar chart
# colors = ['#BCCCA3', '#0072BD', '#8682BD', '#D96A73', '#FABC55']
my_palette = sns.color_palette("RdGy",4)
colors = [my_palette[i] for i in range(len(my_palette))]
ax = df_plot.plot(kind='bar', stacked=False, color=colors, width=0.8, edgecolor='black')
# Configure the chart
ax.set_ylabel('Throughput (KTPS)')
# ax.set_title('Benchmark Results')
ax.set_xticklabels(df.index, rotation=0)
plt.grid(True, axis='y', linestyle=':')
plt.yticks(np.arange(0, 50, 10))
plt.legend(fontsize=18, frameon=False, bbox_to_anchor=(0.1, 1), ncol=2)
plt.tight_layout()
# plt.show()
plt.savefig(sys.argv[2], format='jpg', dpi=1000)

View File

@ -0,0 +1,8 @@
#!/bin/bash
source ../config.sh
mkdir -p "./result"
python read_ycsb.py $logbasedir/ycsb
python draw_ycsb.py "./result/ycsb.csv" "./result/fig13.jpg"

View File

@ -0,0 +1,59 @@
import os
import re
import pandas as pd
import os, sys
ROOT_DIR=sys.argv[1]
# search_base='../logs/treesls/ycsb/'
# search_dirs = ['chcore-baseline', 'chcore-ckpt1ms', 'linux-baseline', 'nvm-log', 'disk-log']
search_dirs = []
items = os.listdir(ROOT_DIR)
for item in items:
item_path = os.path.join(ROOT_DIR, item)
if os.path.isdir(item_path):
search_dirs.append(item)
filename_pattern = r'(\w+)\..+\.(\w+)\.(\d+)\.log'
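# Matches log names like 'a.chcore-raw.t1.0.log' -> workload 'a', threads 't1', run '0'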
result = {}
for d in search_dirs:
d = ROOT_DIR + '/' + d
print(d)
files = [f for f in os.listdir(d) if re.match(filename_pattern, f)]
for f in files:
filepath = os.path.join(d, f)
match = re.match(filename_pattern, f)
workload, threads, run = match.groups()
# print(workload, threads, run)
with open(filepath) as logfile:
contents = logfile.read()
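            # The throughput is taken as the last number printed in the log
            # (assumed to be YCSB-C's final ops/sec figure).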
match = re.search(r'(\d+(\.\d+)?)\s*$', contents)
if match:
value = float(match.group(1))
label = d.split('/')[-1]
if threads not in result:
result[threads] = {}
if label not in result[threads]:
result[threads][label] = {}
if workload not in result[threads][label]:
result[threads][label][workload] = value
else:
result[threads][label][workload] = (result[threads][label][workload] + value)/2
# print(result)
for threads, labels in result.items():
# wls = []
# for label, workloads in labels.items():
# for workload, value in workloads.items():
# row = {
# 'label': label,
# 'workload': workload,
# 'value': value
# }
# wls.append(workload)
# df = pd.DataFrame(workloads, columns=['label', 'workload', 'value'])
df = pd.DataFrame.from_dict(labels)
# print(wls)
# df.insert(0, threads, wls)
df.to_csv("./result/ycsb.csv".format(threads))

View File

@ -0,0 +1,18 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
# ./chbuild clean
# ./chbuild defconfig x86_64
./chbuild build

View File

@ -0,0 +1,28 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/ycsb
loop=(0)
threads=(1)
mkdir -p $logdir
mkdir -p $logdir/chcore-baseline
mkdir -p $logdir/chcore-ckpt1ms
for workload in a b c g
do
for thread in ${threads[@]}
do
for run in ${loop[@]}
do
$appdir/ycsb.exp raw $workload $thread 2>&1 | tee $logdir/chcore-baseline/$workload.chcore-raw.t$thread.$run.log
sleep 10
done
for run in ${loop[@]}
do
$appdir/ycsb.exp ckpt $workload $thread 2>&1 | tee $logdir/chcore-ckpt1ms/$workload.chcore-1msckpt.t$thread.$run.log
sleep 10
done
done
done

View File

@ -0,0 +1,78 @@
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from brokenaxes import brokenaxes
from matplotlib.axis import Axis
import seaborn as sns
csv_file=sys.argv[1]
# Create a DataFrame from the CSV file
df = pd.read_csv(csv_file, index_col=0)
# df = df.transpose()
print(df)
# Create the bar chart
# colors = ['#BCCCA3', '#0072BD', '#8682BD', '#D96A73', '#FABC55', 'grey']
my_palette = sns.color_palette("RdGy",6)
colors = [my_palette[i] for i in range(len(my_palette))]
# x = df.transpose().columns
# print(x)
def draw_fig(col, name):
# create three bar charts for the data
# for i, col in enumerate(df.columns):
# print(col, df[col])
plt.figure()
plt.rcdefaults()
plt.rcParams.update({'font.size': 22, 'figure.figsize': (4, 4)})
x = np.arange(len(df))
y = df[col]
# print(x, y)
if col == 'Throughput':
y = y/1000
# plt.xticks([0.25, 2], ["TreeSLS", "Aurora"])
plt.bar(x, y, color=colors, edgecolor='black')
plt.ylabel("Throughput (Kops/s)")
else:
plt.bar(x, y, color=colors, edgecolor='black')
plt.ylabel("Latency (ms)")
# plt.xticks(rotation=30)
    # display the charts
plt.grid(True, axis='y', linestyle=':')
plt.tight_layout()
plt.savefig('./result/fig13{}.jpg'.format(name), format='jpg', dpi=1000)
# plt.show()
def draw_legend(col):
y = df[col]
x = np.arange(len(df))
# create the figure and subplots
fig, ax = plt.subplots()
# plot the data on the ax
ax.bar(x, y, color=colors, width=0, edgecolor='black', label=df.index)
# add the legend outside the plot area
legend = ax.legend(loc='upper left', fontsize=12, frameon=False, ncol=1)
legend_fig = legend.figure
legend_fig.canvas.draw()
bbox = legend.get_window_extent().transformed(legend_fig.dpi_scale_trans.inverted())
legend_fig.savefig('./result/fig13d.jpg', format='jpg', dpi=1000, bbox_inches=bbox)
# plt.legend(fontsize=11, frameon=False, ncol=1)
# plt.legend(loc='upper left')
if __name__ == "__main__":
draw_fig('Throughput', 'a')
draw_fig('P50', 'b')
draw_fig('P99', 'c')
draw_legend('P99')


@ -0,0 +1,8 @@
#!/bin/bash
source ../config.sh
mkdir -p ./result
# python read_rocksdb.py $logbasedir/rocksdb
python draw_rocksdb.py "./result/rocksdb.csv"


@ -0,0 +1,67 @@
import os
import sys

import pandas as pd

ROOT_DIR = sys.argv[1]

# Every subdirectory of ROOT_DIR holds the logs of one configuration
# (e.g. chcore-base, chcore-ckpt).
fs = []
items = os.listdir(ROOT_DIR)
for item in items:
    item_path = os.path.join(ROOT_DIR, item)
    if os.path.isdir(item_path):
        fs.append(item)

result = {}

def parse(label, path):
    files = [f for f in os.listdir(path)]
    for file in files:
        filepath = os.path.join(path, file)
        with open(filepath) as f:
            lines = f.readlines()
        found = False
        for i, l in enumerate(lines):
            if "Microseconds per write" in l:
                # The write histogram appears once for fillbatch and once for
                # mixgraph; only the second (mixgraph) occurrence is scored.
                if found:
                    # get the average and tail latencies from the histogram
                    for ll in lines[i+1:i+7]:
                        if "Average" in ll:
                            avg = float(ll.strip().split()[3])
                        if "P99" in ll:
                            nine = float(ll.strip().split()[6])
                            ninenine = float(ll.strip().split()[8])  # P99.9, currently unused
                            break
                    # add results to dict; repeated runs are averaged
                    if label not in result:
                        result[label] = {}
                    if "P50" not in result[label]:
                        result[label]["P50"] = avg
                    else:
                        result[label]["P50"] = (result[label]["P50"] + avg) / 2
                    if "P99" not in result[label]:
                        result[label]["P99"] = nine
                    else:
                        result[label]["P99"] = (result[label]["P99"] + nine) / 2
                else:
                    found = True
            if "mixgraph" in l and "ops/sec" in l:
                thp = int(l.strip().split()[4])
                if label not in result:
                    result[label] = {}
                if "Throughput" not in result[label]:
                    result[label]["Throughput"] = thp
                else:
                    result[label]["Throughput"] = (result[label]["Throughput"] + thp) / 2
        if not found:
            print("[Error] {} did not execute or shut down prematurely, please re-run fig14.sh".format(filepath))
            # drop the incomplete log so that the next run regenerates it
            os.unlink(filepath)

for f in fs:
    path = os.path.join(ROOT_DIR, f)
    parse(f, path)

df = pd.DataFrame.from_dict(result)
df.to_csv("./result/rocksdb.csv")

File diff suppressed because it is too large


@ -0,0 +1,18 @@
#!/usr/bin/bash
source ../config.sh
sed -i "/SLS_RESTORE/c\set(SLS_RESTORE OFF)" $kconfig
sed -i "/SLS_EXT_SYNC/c\set(SLS_EXT_SYNC OFF)" $kconfig
sed -i "/SLS_HYBRID_MEM/c\set(SLS_HYBRID_MEM ON)" $kconfig
sed -i "/SLS_REPORT_CKPT/c\set(SLS_REPORT_CKPT OFF)" $kconfig
sed -i "/SLS_REPORT_RESTORE/c\set(SLS_REPORT_RESTORE OFF)" $kconfig
sed -i "/SLS_REPORT_HYBRID/c\set(SLS_REPORT_HYBRID OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_PF/c\set(SLS_SPECIAL_OMIT_PF OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_MEMCPY/c\set(SLS_SPECIAL_OMIT_MEMCPY OFF)" $kconfig
sed -i "/SLS_SPECIAL_OMIT_BENCHMARK/c\set(SLS_SPECIAL_OMIT_BENCHMARK ON)" $kconfig
cd $basedir
./chbuild clean
# ./chbuild defconfig x86_64
./chbuild build


@ -0,0 +1,25 @@
#!/bin/bash
source ../config.sh
logdir=$logbasedir/rocksdb/
loop=(0)
mkdir -p $logdir
for run in ${loop[@]}
do
# baseline
mkdir -p $logdir/chcore-base
$appdir/rocksdb.exp raw 2>&1 | tee $logdir/chcore-base/$run.out
sleep 10
# baseline with WAL
# mkdir -p $logdir/chcore-base-wal
# $appdir/rocksdb.exp wal 2>&1 | tee $logdir/chcore-base-wal/$run.out
# sleep 10
# with ckpt
mkdir -p $logdir/chcore-ckpt
$appdir/rocksdb.exp ckpt 1 0 2>&1 | tee $logdir/chcore-ckpt/$run.out
sleep 10
done


@ -0,0 +1,80 @@
#!/usr/bin/expect -f
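# Checkpoint-only driver: boots the target into a clean shell and runs
# checkpoint.bin without a benchmark application.
# argv: <mode> [freq(ms)] [log]; mode is one of raw, ckpt, ckpt-log,
# restore-log, restore-test (see the mode branches below).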
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 300
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 300
if {$mode == "raw"} {
puts "do nothing"
}
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test default system with $freq ms checkpoint"
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\r"
expect "Launching /checkpoint.bin"
}
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test default system with $freq ms checkpoint\n"
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\n"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}


@ -0,0 +1,90 @@
#!/usr/bin/expect -f
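# kmeans (Phoenix) driver: boots the target, optionally starts
# checkpoint.bin, then runs kmeans.bin -p 500000.
# argv: <mode> [freq(ms)] [log]; modes as in the branches below.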
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 300
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 300
# Test 1
if {$mode == "raw"} {
puts "test raw kmeans"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test kmeans with 1ms checkpoint"
send -- "checkpoint.bin -i %$freq -l 0 -a 14 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
send -- "kmeans.bin -p 500000 &\r"
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "Final means"
expect "finalize"
}


@ -0,0 +1,103 @@
#!/usr/bin/expect -f
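# LevelDB driver: runs leveldb-dbbench (fillbatch) under the selected
# checkpoint mode. argv: <mode> [freq(ms)] [log].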
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 300
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 1200
send -- "idle.bin &\r"
expect "Launching /idle.bin"
sleep 1
# Test 1
if {$mode == "raw"} {
puts "test raw leveldb"
send -- "leveldb-dbbench --benchmarks=fillbatch --num=1000000 &\r"
expect "Launching /leveldb-dbbench"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test leveldb with $freq ms checkpoint"
send -- "leveldb-dbbench --benchmarks=fillbatch --num=1000000 &\r"
expect "Launching /leveldb-dbbench"
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "leveldb-dbbench --benchmarks=fillbatch --num=1000000 &\r"
expect "Launching /leveldb-dbbench"
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "leveldb-dbbench --benchmarks=fillbatch --num=5000000 &\r"
expect "Launching /leveldb-dbbench"
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "file:/chos/kernel/object/recycle.c"
}


@ -0,0 +1,96 @@
#!/usr/bin/expect -f
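# memcached driver: starts a memcached server and loads it with
# memcachetest under the selected checkpoint mode.
# argv: <mode> [freq(ms)] [log].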
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 300
while {1} {
send -- "\n"
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 1200
set thread 8
send -- "memcached -l 127.0.0.1 -p 123 & \r"
expect "poll fd server is not lwip"
# Test 1
if {$mode == "raw"} {
puts "do nothing"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\n"
expect "Launching /checkpoint.bin"
sleep 5
}
send -- "memcachetest -h 127.0.0.1:123 -M 1024 -F -t $thread -i 1000000 &\r"
expect "Launching /memcachetest"
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "Total gets:"
expect "Total sets:"
}


@ -0,0 +1,72 @@
#!/usr/bin/expect -f
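# PCA (Phoenix) driver: runs pca.bin -r 1000 -c 1000; the ckpt and
# ckpt-log modes start checkpoint.bin alongside it.
# argv: <mode> [freq(ms)] [log].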
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 300
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 300
# PCA
# Test 1
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test pca with $freq ms checkpoint"
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
send -- "pca.bin -r 1000 -c 1000 &\r"
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
}
expect "Covariance sum"
expect "finalize"


@ -0,0 +1,103 @@
#!/usr/bin/expect -f
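# Redis driver: starts redis-server and loads it with redis-benchmark.
# argv: <mode> <workload> <pipeline|nopipe> [freq(ms)] [log]; modes as
# in the branches below.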
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
set workload [lindex $argv 1]
set pipeline [lindex $argv 2]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 1200
# Test 1
if {$mode == "raw"} {
puts "do nothing"
}
# Launch the redis server
send -- "redis-server --save \"\" -h 127.0.0.1 &\r"
expect "poll fd server is not lwip"
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 3]
set log [lindex $argv 4]
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\n"
expect "Launching /checkpoint.bin"
sleep 5
}
set thread 8
if {$pipeline == "nopipe"} {
# no pipeline mode
send -- "redis-benchmark -t $workload -n 3000000 -d 1024 --threads $thread &\r"
} else {
# pipeline mode: $pipeline requests are sent together
send -- "redis-benchmark -t $workload -n 5000000 -d 1024 --threads $thread -P $pipeline &\r"
}
expect "Launching /redis-benchmark"
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 3]
set log [lindex $argv 4]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 3]
set log [lindex $argv 4]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "requests per second"
}


@ -0,0 +1,95 @@
#!/usr/bin/expect -f
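# RocksDB driver: runs rocksdb-dbbench (fillbatch,mixgraph). Mode "wal"
# enables a synchronous WAL, every other mode disables it; "ckpt" also
# starts checkpoint.bin. argv: <mode> [freq(ms)] [log].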
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 3000
# Test 1
if {$mode == "raw"} {
puts "test raw prefix_dist"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test prefix_dist with $freq ms checkpoint"
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
set ROCKSDB_NUM 1000000
set ROCKSDB_DUR 30
if {$mode == "wal"} {
send -- "rocksdb-dbbench --benchmarks=fillbatch,mixgraph \
--num=$ROCKSDB_NUM \
--duration=$ROCKSDB_DUR \
--write_buffer_size=17179869184 \
--db=/tmp --wal_dir=/tmp/wal \
--compression_type=none \
--sync=true --disable_wal=false \
--perf_level=2 \
--histogram=1 \
--threads=1 &\r"
} else {
send -- "rocksdb-dbbench --benchmarks=fillbatch,mixgraph \
--num=$ROCKSDB_NUM \
--duration=$ROCKSDB_DUR \
--write_buffer_size=17179869184 \
--db=/tmp --wal_dir=/tmp/wal \
--compression_type=none \
--sync=false --disable_wal=true \
--perf_level=2 \
--histogram=1 \
--threads=1 &\r"
}
expect "Launching /rocksdb-dbbench"
expect "Initializing RocksDB Options"
expect "mixgraph"
expect "Microseconds per read:"
expect "100.000%"


@ -0,0 +1,89 @@
#!/usr/bin/expect -f
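# SQLite3 driver: runs test-sqlite3.bin on tmpfs under the selected
# checkpoint mode. argv: <mode> [freq(ms)] [log].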
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 300
# Test 1
if {$mode == "raw"} {
puts "test raw sqlite3"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test sqlite3 with 1ms checkpoint"
send -- "checkpoint.bin -i %$freq -l $log -a 14 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
send -- "test-sqlite3.bin tmpfs &\r"
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "sqlite_test done"
}


@ -0,0 +1,41 @@
#!/usr/bin/expect -f
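# Shared boot helpers: start_qemu launches the QEMU simulator with a
# fresh NVM backing file; start_ipmi attaches to the real machine over
# IPMI SOL and reboots it until a clean ChCore shell appears.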
proc start_qemu {} {
source ../config.exp
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
}
proc start_ipmi {} {
source ../config.exp
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
}


@ -0,0 +1,89 @@
#!/usr/bin/expect -f
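# word_count (Phoenix) driver: runs word_count.bin on word_100MB.txt
# under the selected checkpoint mode. argv: <mode> [freq(ms)] [log].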
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set timeout 1200
# Test 1
if {$mode == "raw"} {
puts "test raw word count"
}
# Test 2
if {$mode == "ckpt"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
puts "test word count with 1ms checkpoint"
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 2000 &\r"
expect "Launching /checkpoint.bin"
sleep 5
}
send -- "word_count.bin word_100MB.txt &\r"
# Test 3
if {$mode == "ckpt-log"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t $ckptlogtimes &\r"
expect "Launching /checkpoint.bin"
expect "Checkpoint finished"
}
# Test 4
if {$mode == "restore-log" || $mode == "restore-test"} {
set freq [lindex $argv 1]
set log [lindex $argv 2]
send -- "checkpoint.bin -i %$freq -l $log -a 14 -t 4 &\r"
expect "Checkpoint finished"
send -- "shutdown.bin 0 & \r"
expect "restore from ckpt"
}
if {$mode != "ckpt-log" && $mode != "restore-log"} {
expect "process cost"
}


@ -0,0 +1,75 @@
#!/usr/bin/expect -f
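# Drives redis-server plus the YCSB-C client (ycsbc) on the target.
# argv: <mode: raw|ckpt> <workload: a|b|c|g> <threads>; ckpt mode starts
# checkpoint.bin at a 1ms interval before the benchmark runs.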
source ../config.exp
set timeout 20
set mode [lindex $argv 0]
if {$test_mode == "QEMU"} {
spawn -noecho rm [exec sh -c {echo "/tmp/nvm-file-$USER"}]
spawn $basedir/build/simulate.sh
expect "Welcome to ChCore shell!"
} elseif {$test_mode == "IPMI"} {
while {1} {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol activate
expect {
"SOL Session operational" {
puts "ipmi connected"
break
}
"SOL payload already active on another session" {
spawn ipmitool -H $ipmi_ip -U root -P calvin -I lanplus sol deactivate
continue
}
timeout {
puts "ipmitool failed"
exit
}
}
}
set timeout 60
while {1} {
send -- "shutdown.bin \r"
expect {
"Welcome to ChCore shell!" { break }
timeout {
puts "shutdown fail"
continue
}
}
}
} else {
puts "Currently, we only support IPMI and QEMU mode!"
}
set workload [lindex $argv 1]
set threads [lindex $argv 2]
# Start the redis server
send -- "redis-server --save \"\" -h 127.0.0.1 &\r"
expect "poll fd server is not lwip"
send -- "set_poll_loop_time.bin -n $threads &\r"
expect "Launching /set_poll_loop_time.bin"
# Launch the checkpoint thread if needed
if {$mode == "ckpt"} {
send -- "checkpoint.bin -i %1 -l 0 -a 7 &\r"
expect "Launching /checkpoint.bin"
}
sleep 5
# Start the YCSB benchmark
set timeout 1200
send -- "ycsbc -db redis -threads $threads -P workload$workload.spec -host 127.0.0.1 -port 6379 -slaves 0 &\r"
expect "Launching"
expect "Transaction throughput (KTPS)"
expect "redis"
sleep 5


@ -0,0 +1,18 @@
#!/usr/bin/expect
# test mode "IPMI" or "QEMU"
set test_mode "IPMI"
# set test_mode "QEMU"
set basedir "/home/<basedir>/treesls"
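# NOTE: <basedir> is a placeholder -- set it to the absolute path of your TreeSLS checkout.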
set aedir "$basedir/artificial_evaluation"
set logbasedir "$aedir/logs/$test_mode"
set appdir "$aedir/applications"
set kconfig "$basedir/kernel/sls_config.cmake"
# Set the ANSI escape sequence for colors
set white_color "\033\[37m"
set red_color "\033\[31m"
set ipmi_ip "192.168.12.93"
set ckptlogtimes 1000

artificial_evaluation/config.sh Executable file

@ -0,0 +1,11 @@
#!/usr/bin/bash
# test mode "IPMI" or "QEMU"
test_mode="IPMI"
# test_mode="QEMU"
basedir="/home/<basedir>/treesls"
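# NOTE: <basedir> is a placeholder -- set it to the absolute path of your TreeSLS checkout.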
aedir="$basedir/artificial_evaluation"
logbasedir="$aedir/logs/$test_mode"
appdir="$aedir/applications"
kconfig="$basedir/kernel/sls_config.cmake"


@ -0,0 +1,18 @@
#!/usr/bin/bash
curdir=$(pwd)
cd $curdir/2-hybrid-method
./test_base_and_hybrid.sh
cd $curdir/3-ext-sync
./test_base.sh
cd $curdir/4-memcached
./test_memcached.sh
cd $curdir/5-ycsb
./test_ycsb.sh
cd $curdir/6-rocksdb
./test_rocksdb.sh

aurora-rocksdb/rocksdb.sh Normal file

@ -0,0 +1,296 @@
#!/usr/local/bin/bash
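# Aurora (FreeBSD) counterpart of the TreeSLS RocksDB experiment: runs
# db_bench in four setups -- baseline with/without WAL and Aurora SLS
# with/without WAL -- and stores per-iteration logs under $OUT/rocksdb.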
. aurora.config
. helpers/util.sh
AURORACTL=$SRCROOT/tools/slsctl/slsctl
#rocksdb_dir=/mnt/treesls/testmnt
rocksdb_dir=/rocksdbmnt
db_bench_origin() {
cd dependencies/rocksdb
$1/db_bench \
--benchmarks=fillbatch,mixgraph \
--use_direct_io_for_flush_and_compaction=true \
--use_direct_reads=true \
--cache_size=$((256 << 20)) \
--key_dist_a=0.002312 \
--key_dist_b=0.3467 \
--keyrange_dist_a=14.18 \
--keyrange_dist_b=0.3467 \
--keyrange_dist_c=0.0164 \
--keyrange_dist_d=-0.08082 \
--keyrange_num=30 \
--value_k=0.2615 \
--value_sigma=25.45 \
--iter_k=2.517 \
--iter_sigma=14.236 \
--mix_get_ratio=0.83 \
--mix_put_ratio=0.14 \
--mix_seek_ratio=0.03 \
--sine_mix_rate_interval_milliseconds=5000 \
--sine_a=1000 \
--sine_b=0.000073 \
--sine_d=4500 \
--perf_level=2 \
--num=$ROCKSDB_NUM \
--key_size=48 \
--db=$rocksdb_dir/tmp-db \
--wal_dir=$rocksdb_dir/wal \
--duration=$ROCKSDB_DUR \
--histogram=1 \
--write_buffer_size=$((16 << 30)) \
--disable_auto_compactions \
--threads=24 \
"${@:2}"
cd -
return $?
}
db_bench() {
cd dependencies/rocksdb
$1/db_bench \
--benchmarks=fillbatch,mixgraph \
--key_dist_a=0.002312 \
--key_dist_b=0.3467 \
--keyrange_dist_a=14.18 \
--keyrange_dist_b=0.3467 \
--keyrange_dist_c=0.0164 \
--keyrange_dist_d=-0.08082 \
--keyrange_num=30 \
--value_k=0.2615 \
--value_sigma=25.45 \
--iter_k=2.517 \
--iter_sigma=14.236 \
--mix_get_ratio=0.83 \
--mix_put_ratio=0.14 \
--mix_seek_ratio=0.03 \
--sine_mix_rate_interval_milliseconds=5000 \
--sine_a=1000 \
--sine_b=0.000073 \
--sine_d=4500 \
--perf_level=2 \
--key_size=48 \
--db=$rocksdb_dir/tmp-db \
--wal_dir=$rocksdb_dir/wal \
--duration=$ROCKSDB_DUR \
--num=$ROCKSDB_NUM \
--histogram=1 \
--cache_size=$((256 << 20)) \
--write_buffer_size=$((16 << 30)) \
--compression_type=none \
--threads=1 \
"${@:2}"
cd -
return $?
}
run_base_wal()
{
DIR=$OUT/rocksdb/base-wal
mkdir -p $DIR
for ITER in `seq 0 $MAX_ITER`
do
# if check_completed "$DIR/$ITER.out"; then
# continue
# fi
echo "[Aurora] Running Rocksdb Baseline: WAL, Iteration $ITER"
#setup_zfs_rocksdb >> $LOG 2>> $LOG
stripe_setup_wal $MIN_FREQ
db_bench baseline --sync=true --disable_wal=false > /tmp/out
#teardown_zfs >> $LOG 2>> $LOG
mv /tmp/out $DIR/$ITER.out
fsync $DIR/$ITER.out
done
}
run_base_nowal()
{
DIR=$OUT/rocksdb/base-nowal
mkdir -p $DIR
for ITER in `seq 0 $MAX_ITER`
do
echo "[Aurora] Running Rocksdb Baseline: No WAL, Iteration $ITER"
setup_zfs_rocksdb >> $LOG 2>> $LOG
db_bench baseline --sync=false --disable_wal=true > /tmp/out
teardown_zfs
mv /tmp/out $DIR/$ITER.out
fsync $DIR/$ITER.out
done
}
stripe_setup_wal()
{
CKPT_FREQ=$1
createmd
newfs -j -S 4096 -b 65536 $DISKPATH
mkdir -p $rocksdb_dir
mount $DISKPATH $rocksdb_dir
}
stripe_setup_wal_old()
{
CKPT_FREQ=$1
gstripe load > /dev/null 2> /dev/null
gstripe stop "$STRIPENAME" > /dev/null 2> /dev/null
gstripe stop "st1" > /dev/null 2> /dev/null
# Sets up the two stripes needed for the RocksDB Aurora benchmark
# STRIPENAME is the default stripe used by all benchmarks which is used by the SLS and SLOS
# The secondary stripe "st1" is used for the persistent storage for the WAL.
# During operation, operations are written to the WAL (which is on st1); when this WAL fills, Aurora checkpoints its contents.
set -- $ROCKS_STRIPE1
if [ $# -gt 1 ]; then
gstripe create -s "$STRIPESIZE" -v "$STRIPENAME" $ROCKS_STRIPE1
DISK="stripe/$STRIPENAME"
else
if [ "$MODE" != "VM" ]; then
MAX_ITER=0
MIN_FREQ=10
fi
echo "[Aurora] Single device detected ($ROCKS_STRIPE1) Reducing period ($MIN_FREQ ms)"
DISK=$ROCKS_STRIPE1
# We are using 1 disk so we cannot keep up with checkpointing at 100Hz
fi
DISKPATH="/dev/$DISK"
set -- $ROCKS_STRIPE2
if [ $# -gt 1 ]; then
gstripe create -s "$STRIPESIZE" -v "st1" $ROCKS_STRIPE2
ln -s /dev/stripe/st1 /dev/wal
else
ln -s /dev/$ROCKS_STRIPE2 /dev/wal
fi
aursetup
if [ -z "$CKPT_FREQ" ]; then
sysctl aurora_slos.checkpointtime=$CKPT_FREQ > /dev/null
else
sysctl aurora_slos.checkpointtime=$MAX_FREQ > /dev/null
fi
}
stripe_teardown_wal()
{
umount /testmnt/dev > /dev/null 2> /dev/null
aurteardown 2> /dev/null
aurunstripe 2> /dev/null
gstripe destroy "st1" 2> /dev/null
rm /dev/wal
}
run_aurora_nowal()
{
DIR=$OUT/rocksdb/aurora-nowal
mkdir -p $DIR
for ITER in `seq 0 $MAX_ITER`
do
#if check_completed "$DIR/$ITER.out"; then
# continue
#fi
echo "[Aurora] Running Rocksdb SLS: No WAL, Iteration $ITER ($MIN_FREQ)"
rm /tmp/out 2> /dev/null > /dev/null
stripe_setup_wal $MIN_FREQ >> $LOG 2>> $LOG
setup_aurora
$AURORACTL partadd -o 1 -d -t $MIN_FREQ -b $BACKEND >> $LOG 2>> $LOG
db_bench baseline --sync=false --disable_wal=true 2>&1 | tee /tmp/out &
FUNC_PID="$!"
sleep 1
pid=`pidof db_bench`
$AURORACTL attach -o 1 -p $pid 2>> $LOG >> $LOG
$AURORACTL checkpoint -o 1 -r >> $LOG 2>> $LOG
wait $FUNC_PID
if [ $? -eq 124 ];then
echo "[Aurora] Issue with db_bench, restart required"
exit 1
fi
sleep 2
stripe_teardown_wal >> $LOG 2>> $LOG
teardown_aurora
mv /tmp/out $DIR/$ITER.out
fsync $DIR/$ITER.out
#stripe_teardown_wal >> $LOG 2>> $LOG
done
}
run_aurora_wal()
{
DIR=$OUT/rocksdb/aurora-wal
stripe_teardown_wal > /dev/null 2> /dev/null
mkdir -p $DIR
for ITER in `seq 0 $MAX_ITER`
do
# if check_completed "$DIR/$ITER.out"; then
# continue
# fi
# We need custom stripes for the WAL, as we use a separate stripe to write the WAL to directly
echo "[Aurora] Running Rocksdb SLS: WAL, Iteration $ITER"
stripe_setup_wal $MIN_FREQ
db_bench sls --sync=true --disable_wal=false 2>&1 | tee /tmp/out
# Wait for the final checkpoint to be done
stripe_teardown_wal
teardown_aurora >> $LOG 2>> $LOG
mv /tmp/out $DIR/$ITER.out
fsync $DIR/$ITER.out
done
}
setup_script
clear_log
if [ "$MODE" = "VM" ]; then
MAX_ITER=2
else
MAX_ITER=2
fi
set -- $ROCKS_STRIPE1
if [ $# -eq 0 ]; then
echo "[Aurora] RocksDB Requires at least 1 disk set (Virtual or otherwise) for ROCKS_STRIPE1"
exit 1
fi
set -- $ROCKS_STRIPE2
if [ $# -eq 0 ]; then
echo "[Aurora] RocksDB Requires at least 1 disk set (Virtual or
otherwise) for ROCKS_STRIPE2 which is different from ROCKS_STRIPE1"
exit 1
fi
echo "[Aurora] Running with $MAX_ITER iterations"
mkdir -p $OUT/rocksdb
set -x
run_base_wal
run_base_nowal
run_aurora_wal
run_aurora_nowal
#echo "[Aurora] Creating RocksDB Graphs"
# PYTHONPATH=$PYTHONPATH:$(pwd)/dependencies/progbg
# export PYTHONPATH
# export OUT
# python3.7 -m progbg --debug graphing/fig5.py


@ -0,0 +1,2 @@
#!/bin/bash
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor


@ -0,0 +1,8 @@
#!/bin/bash
pmem_dir=/mnt/treesls
pmem_dev=/dev/pmem0
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sudo rm -rf $pmem_dir/*
sudo umount $pmem_dir
sudo mkfs.ext4 -F -b 4096 $pmem_dev
sudo mount -o dax $pmem_dev $pmem_dir


@ -0,0 +1,6 @@
#!/bin/bash
disk_dir=/mnt/treesls
disk_dev=/dev/nvme0n1p1
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sudo umount $disk_dir
sudo mount $disk_dev $disk_dir


@ -0,0 +1,37 @@
#!/bin/bash
loop=(0)
threads=(1)
restart()
{
mode=$1
kill -9 $(pidof redis-server)
sleep 5
if [ $mode = "nvm-log" ]; then
./dax_config.sh
elif [ $mode = "disk-log" ]; then
./disk_config.sh
else
./config.sh
fi
sleep 5
}
for mode in "baseline" "nvm-log" "disk-log"
#for mode in "nvm-log" "disk-log"
do
for workload in a b c g
do
for thread in ${threads[@]}
do
for run in ${loop[@]}
do
restart $mode
./run_redis_server.sh $mode > /dev/null
sleep 5
./run_ycsb.sh $workload $thread $run $mode
sleep 5
done
done
done
done


@ -0,0 +1,22 @@
#[[
Dump cache variables from `CMakeCache.txt` to `.config`.
This script is intended to be used as the -C option
of the cmake command.
#]]
include(${CMAKE_CURRENT_LIST_DIR}/Modules/CommonTools.cmake)
set(_config_lines)
macro(chcore_config _config_name _config_type _default _description)
# Dump config lines in definition order
list(APPEND _config_lines
"${_config_name}:${_config_type}=${${_config_name}}")
endmacro()
include(${CMAKE_SOURCE_DIR}/config.cmake)
string(REPLACE ";" "\n" _config_str "${_config_lines}")
file(WRITE ${CMAKE_SOURCE_DIR}/.config "${_config_str}\n")


@ -0,0 +1,8 @@
#!/bin/bash
# This script prints a prompt and read user input.
# Should only be used in `LoadConfigAsk.cmake`.
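# The value read from the user is echoed to stderr so that CMake can
# capture it via ERROR_VARIABLE (see _ask_for_input in LoadConfigAsk.cmake).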
echo -n "$1 "
read -p "" input
echo $input >&2


@ -0,0 +1,59 @@
#[[
Load cache variables from `.config` and `config.cmake`.
This script is intended to be used as the -C option
of the cmake command.
#]]
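# Each line of `.config` has the form `NAME:TYPE=VALUE`, e.g.
# `CHCORE_PLAT:STRING=raspi3`; lines starting with `//` or `#` are comments.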
include(${CMAKE_CURRENT_LIST_DIR}/Modules/CommonTools.cmake)
if(EXISTS ${CMAKE_SOURCE_DIR}/.config)
# Read in config file
file(READ ${CMAKE_SOURCE_DIR}/.config _config_str)
string(REPLACE "\n" ";" _config_lines "${_config_str}")
unset(_config_str)
# Set config cache variables
foreach(_line ${_config_lines})
if(${_line} MATCHES "^//" OR ${_line} MATCHES "^#")
continue()
endif()
string(REGEX MATCHALL "^([^:=]+):([^:=]+)=(.*)$" _config "${_line}")
if("${_config}" STREQUAL "")
message(FATAL_ERROR "Invalid line in `.config`: ${_line}")
endif()
set(${CMAKE_MATCH_1}
${CMAKE_MATCH_3}
CACHE ${CMAKE_MATCH_2} "" FORCE)
endforeach()
unset(_config_lines)
else()
message(WARNING "There is no `.config` file")
endif()
# Check if there exists `chcore_config` macro, which will be used in
# `config.cmake`
if(NOT COMMAND chcore_config)
message(FATAL_ERROR "Don't directly use `LoadConfig.cmake`")
endif()
macro(chcore_config _config_name _config_type _default _description)
if(DEFINED ${_config_name})
# config is in `.config`, set description
set(${_config_name}
${${_config_name}}
CACHE ${_config_type} ${_description} FORCE)
else()
# config is not in `.config`, use previously-defined chcore_config
# Note: use quote marks to allow forwarding empty arguments
_chcore_config("${_config_name}" "${_config_type}" "${_default}"
"${_description}")
endif()
endmacro()
# Include the top-level config definition file
include(${CMAKE_SOURCE_DIR}/config.cmake)
# Hide irrelevant built-in cache variables
mark_as_advanced(CMAKE_BUILD_TYPE)
mark_as_advanced(CMAKE_INSTALL_PREFIX)


@ -0,0 +1,16 @@
#[[
Load config values from `.config` and check if all
cache variables defined in `config.cmake` are set;
if not, abort.
This script is intended to be used as the -C option
of the cmake command.
#]]
macro(chcore_config _config_name _config_type _default _description)
if(NOT DEFINED ${_config_name})
message(FATAL_ERROR "${_config_name} is not set")
endif()
endmacro()
include(${CMAKE_CURRENT_LIST_DIR}/LoadConfig.cmake)


@ -0,0 +1,62 @@
#[[
Load config values from `.config` and check if all
cache variables defined in `config.cmake` are set;
if not, ask the user interactively.
This script is intended to be used as the -C option
of the cmake command.
#]]
set(_input_sh ${CMAKE_CURRENT_LIST_DIR}/Helpers/input.sh)
# Ask user for input
function(_ask_for_input _prompt _result)
execute_process(
COMMAND bash ${_input_sh} ${_prompt}
ERROR_VARIABLE _tmp
ERROR_STRIP_TRAILING_WHITESPACE)
set(${_result}
${_tmp}
PARENT_SCOPE)
endfunction()
# Ask user for yes or no
function(_ask_for_yn _prompt _default _yn_var)
while(1)
_ask_for_input(${_prompt} _result)
if("${_result}" MATCHES "^(y|Y)")
set(${_yn_var}
"y"
PARENT_SCOPE)
elseif("${_result}" MATCHES "^(n|N)")
set(${_yn_var}
"n"
PARENT_SCOPE)
elseif("${_result}" STREQUAL "")
set(${_yn_var}
${_default}
PARENT_SCOPE)
else()
execute_process(COMMAND echo "Invalid input!")
continue()
endif()
break()
endwhile()
endfunction()
macro(chcore_config _config_name _config_type _default _description)
if(NOT DEFINED ${_config_name})
_ask_for_yn(
"${_config_name} is not set, use default (${_default})? (Y/n)" "y"
_answer)
set(_value ${_default})
if(NOT "${_answer}" STREQUAL "y")
_ask_for_input("Enter a value for ${_config_name}:" _value)
endif()
set(${_config_name}
${_value}
CACHE ${_config_type} ${_description} FORCE)
endif()
endmacro()
include(${CMAKE_CURRENT_LIST_DIR}/LoadConfig.cmake)


@ -0,0 +1,18 @@
#[[
Load config values from `.config` and default values
from `config.cmake`.
This script is intended to be used as the -C option
of the cmake command.
#]]
macro(chcore_config _config_name _config_type _default _description)
if(NOT DEFINED ${_config_name})
# config is not in `.config`, set default value
set(${_config_name}
${_default}
CACHE ${_config_type} ${_description})
endif()
endmacro()
include(${CMAKE_CURRENT_LIST_DIR}/LoadConfig.cmake)


@ -0,0 +1,70 @@
function(chcore_dump_cmake_vars)
message(STATUS "CMAKE_TOOLCHAIN_FILE: ${CMAKE_TOOLCHAIN_FILE}")
message(STATUS "CMAKE_MODULE_PATH: ${CMAKE_MODULE_PATH}")
message(STATUS "CMAKE_CROSSCOMPILING: ${CMAKE_CROSSCOMPILING}")
message(STATUS "CMAKE_SYSTEM_PROCESSOR: ${CMAKE_SYSTEM_PROCESSOR}")
message(STATUS "CMAKE_SYSTEM_NAME: ${CMAKE_SYSTEM_NAME}")
message(
STATUS "CMAKE_HOST_SYSTEM_PROCESSOR: ${CMAKE_HOST_SYSTEM_PROCESSOR}")
message(STATUS "CMAKE_HOST_SYSTEM_NAME: ${CMAKE_HOST_SYSTEM_NAME}")
message(STATUS "CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
message(STATUS "CMAKE_ASM_COMPILER: ${CMAKE_ASM_COMPILER}")
message(STATUS "CMAKE_C_COMPILER: ${CMAKE_C_COMPILER}")
message(STATUS "CMAKE_C_OUTPUT_EXTENSION: ${CMAKE_C_OUTPUT_EXTENSION}")
message(STATUS "CMAKE_LINKER: ${CMAKE_LINKER}")
message(STATUS "CMAKE_SOURCE_DIR: ${CMAKE_SOURCE_DIR}")
message(STATUS "CMAKE_BINARY_DIR: ${CMAKE_BINARY_DIR}")
message(STATUS "CMAKE_PREFIX_PATH: ${CMAKE_PREFIX_PATH}")
message(STATUS "CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX}")
chcore_dump_chcore_vars()
endfunction()
function(chcore_dump_chcore_vars)
get_cmake_property(_variable_names VARIABLES)
list(SORT _variable_names)
foreach(_variable_name ${_variable_names})
string(REGEX MATCH "^CHCORE_" _matched ${_variable_name})
if(NOT _matched)
continue()
endif()
message(STATUS "${_variable_name}: ${${_variable_name}}")
endforeach()
endfunction()
macro(chcore_config_include _config_rel_path)
include(${CMAKE_CURRENT_LIST_DIR}/${_config_rel_path})
endmacro()
function(chcore_target_remove_compile_options _target)
get_target_property(_target_options ${_target} COMPILE_OPTIONS)
if(_target_options)
foreach(_option ${ARGN})
list(REMOVE_ITEM _target_options ${_option})
endforeach()
set_target_properties(${_target} PROPERTIES COMPILE_OPTIONS
"${_target_options}")
endif()
endfunction()
function(chcore_target_remove_link_options _target)
get_target_property(_target_options ${_target} LINK_OPTIONS)
if(_target_options)
foreach(_option ${ARGN})
list(REMOVE_ITEM _target_options ${_option})
endforeach()
set_target_properties(${_target} PROPERTIES LINK_OPTIONS
"${_target_options}")
endif()
endfunction()
if(NOT COMMAND ProcessorCount)
include(ProcessorCount)
endif()
macro(chcore_get_nproc _nproc)
ProcessorCount(${_nproc})
if(${_nproc} EQUAL 0)
set(${_nproc} 16)
endif()
endmacro()


@ -0,0 +1,61 @@
# Add source files to target with specified scope, and output an object list.
macro(chcore_target_sources_out_objects _target _scope _objects)
target_sources(${_target} ${_scope} ${ARGN})
if(NOT CMAKE_CURRENT_BINARY_DIR MATCHES "^${CMAKE_BINARY_DIR}/")
message(
FATAL_ERROR
"CMAKE_CURRENT_BINARY_DIR (${CMAKE_BINARY_DIR}) must be in CMAKE_BINARY_DIR (${CMAKE_BINARY_DIR})."
)
endif()
string(REGEX REPLACE "^${CMAKE_BINARY_DIR}/" "" _curr_bin_rel_path
${CMAKE_CURRENT_BINARY_DIR})
foreach(_src ${ARGN})
if(_src MATCHES "\\.(c|C)$")
set(_obj_extension ${CMAKE_C_OUTPUT_EXTENSION})
elseif(_src MATCHES "\\.(s|S)$")
set(_obj_extension ${CMAKE_ASM_OUTPUT_EXTENSION})
elseif(_src MATCHES "\\.(cpp|CPP|cxx|CXX|cc|CC)$")
set(_obj_extension ${CMAKE_CXX_OUTPUT_EXTENSION})
else()
message(FATAL_ERROR "Unsupported file type: ${_src}")
endif()
list(
APPEND
${_objects}
CMakeFiles/${_target}.dir/${_curr_bin_rel_path}/${_src}${_obj_extension}
)
endforeach()
unset(_obj_extension)
unset(_curr_bin_rel_path)
endmacro()
# Add target to convert ELF kernel to binary image.
function(chcore_objcopy_binary _kernel_target _binary_name)
add_custom_target(
${_binary_name} ALL
COMMAND ${CMAKE_OBJCOPY} -O binary -S $<TARGET_FILE:${_kernel_target}>
${_binary_name}
DEPENDS ${_kernel_target})
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${_binary_name}
DESTINATION ${CMAKE_INSTALL_PREFIX})
endfunction()
# Add target to generate qemu emulation script.
function(chcore_generate_emulate_sh _qemu _qemu_options)
set(qemu ${_qemu})
set(qemu_options ${_qemu_options})
configure_file(${CHCORE_PROJECT_DIR}/scripts/qemu/emulate.tpl.sh emulate.sh
@ONLY)
unset(qemu)
unset(qemu_options)
install(PROGRAMS ${CMAKE_CURRENT_BINARY_DIR}/emulate.sh
DESTINATION ${CMAKE_INSTALL_PREFIX})
install(
PROGRAMS ${CMAKE_CURRENT_BINARY_DIR}/emulate.sh
DESTINATION ${CMAKE_INSTALL_PREFIX}
RENAME simulate.sh)
endfunction()


@ -0,0 +1,7 @@
#[[
This file defines "ChCore" platform.
Set CMAKE_SYSTEM_NAME to "ChCore" to use this platform.
#]]
# We actually act exactly like Linux, so just include it
include(Platform/Linux)


@ -0,0 +1,8 @@
# A simple wrapper to the built-in ExternalProject module.
include(ExternalProject)
macro(chcore_add_subproject)
# Note: may encounter problems when forwarding empty arguments
ExternalProject_Add(${ARGN})
endmacro()


@ -0,0 +1,109 @@
function(chcore_install_target_to_ramdisk _target)
install(TARGETS ${_target} DESTINATION ${CMAKE_INSTALL_PREFIX}/ramdisk)
set_property(GLOBAL PROPERTY ${_target}_INSTALLED TRUE)
endfunction()
function(chcore_install_binary_to_ramdisk _file)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/${_file}
DESTINATION ${CMAKE_INSTALL_PREFIX}/ramdisk)
endfunction()
# Get all "build system targets" defined in the current source dir,
# recursively.
function(chcore_get_all_targets _out_var)
set(_targets)
_get_all_targets_recursive(_targets ${CMAKE_CURRENT_SOURCE_DIR})
set(${_out_var}
${_targets}
PARENT_SCOPE)
endfunction()
macro(_get_all_targets_recursive _targets _dir)
get_property(
_subdirectories
DIRECTORY ${_dir}
PROPERTY SUBDIRECTORIES)
foreach(_subdir ${_subdirectories})
_get_all_targets_recursive(${_targets} ${_subdir})
endforeach()
get_property(
_current_targets
DIRECTORY ${_dir}
PROPERTY BUILDSYSTEM_TARGETS)
list(APPEND ${_targets} ${_current_targets})
endmacro()
function(chcore_copy_binary_to_ramdisk _target)
add_custom_target(
cp_${_target}_to_ramdisk
COMMAND cp ${CMAKE_CURRENT_BINARY_DIR}/${_target} ${build_ramdisk_dir}
DEPENDS ${_target})
add_dependencies(ramdisk cp_${_target}_to_ramdisk ${_target})
set_property(GLOBAL PROPERTY ${_target}_INSTALLED TRUE)
endfunction()
function(chcore_copy_target_to_ramdisk _target)
add_custom_target(
cp_${_target}_to_ramdisk
COMMAND cp $<TARGET_FILE:${_target}> ${build_ramdisk_dir}
DEPENDS ${_target})
add_dependencies(ramdisk cp_${_target}_to_ramdisk ${_target})
set_property(GLOBAL PROPERTY ${_target}_INSTALLED TRUE)
endfunction()
# Install all shared library and executable targets defined in
# the current source dir to ramdisk.
#
# This will exclude those that are already installed by
# `chcore_install_target_as_cpio` or `chcore_install_target_to_ramdisk`.
function(chcore_copy_all_targets_to_ramdisk)
set(_targets)
chcore_get_all_targets(_targets)
foreach(_target ${_targets})
get_property(_installed GLOBAL PROPERTY ${_target}_INSTALLED)
if(${_installed})
continue()
endif()
get_target_property(_target_type ${_target} TYPE)
if(${_target_type} STREQUAL SHARED_LIBRARY OR ${_target_type} STREQUAL
EXECUTABLE)
chcore_copy_target_to_ramdisk(${_target})
endif()
endforeach()
endfunction()
function(chcore_enable_clang_tidy)
set(_checks
"-bugprone-easily-swappable-parameters,-clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling,-bugprone-reserved-identifier"
)
set(_options)
set(_one_val_args EXTRA_CHECKS)
set(_multi_val_args)
cmake_parse_arguments(_clang_tidy "${_options}" "${_one_val_args}"
"${_multi_val_args}" ${ARGN})
if(_clang_tidy_EXTRA_CHECKS)
set(_checks "${_checks},${_clang_tidy_EXTRA_CHECKS}")
endif()
set(CMAKE_C_CLANG_TIDY
clang-tidy --checks=${_checks}
--extra-arg=-I${CHCORE_MUSL_LIBC_INSTALL_DIR}/include
--config-file=${CHCORE_PROJECT_DIR}/.clang-tidy
PARENT_SCOPE)
endfunction()
function(chcore_disable_clang_tidy)
unset(CMAKE_C_CLANG_TIDY PARENT_SCOPE)
endfunction()
function(chcore_copy_files_to_ramdisk)
file(COPY ${ARGN} DESTINATION ${build_ramdisk_dir})
endfunction()
function(chcore_objcopy_binary _user_target _binary_name)
add_custom_target(
${_binary_name} ALL
COMMAND ${CMAKE_OBJCOPY} -O binary -S $<TARGET_FILE:${_user_target}>
${_binary_name}
DEPENDS ${_user_target})
endfunction()


@ -0,0 +1,72 @@
add_compile_definitions(CHCORE)
# Get the target architecture
execute_process(
COMMAND ${CMAKE_C_COMPILER} -dumpmachine
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE _target_machine)
string(REGEX MATCH "^[^-]+" _target_arch ${_target_machine})
# Set CHCORE_ARCH cache var
# Note: set as cache variable so that it will be passed into C
# as compile definition later
set(CHCORE_ARCH
${_target_arch}
CACHE STRING "" FORCE)
unset(_target_machine)
unset(_target_arch)
# Set optimization level
add_compile_options("$<$<CONFIG:Debug>:-Og;-g>")
add_compile_options("$<$<CONFIG:Release>:-O3>")
# Convert config items to compile definition
get_cmake_property(_cache_var_names CACHE_VARIABLES)
foreach(_var_name ${_cache_var_names})
string(REGEX MATCH "^CHCORE_" _matched ${_var_name})
if(NOT _matched)
continue()
endif()
get_property(
_var_type
CACHE ${_var_name}
PROPERTY TYPE)
if(_var_type STREQUAL BOOL)
# for BOOL, add definition if ON/TRUE
if(${_var_name})
add_compile_definitions(${_var_name})
endif()
elseif(_var_type STREQUAL STRING)
# for STRING, always add definition with string literal value
add_compile_definitions(${_var_name}="${${_var_name}}")
endif()
endforeach()
unset(_cache_var_names)
unset(_var_name)
unset(_var_type)
unset(_matched)
# Set CHCORE_ARCH_XXX and CHCORE_PLAT_XXX compile definitions
string(TOUPPER ${CHCORE_ARCH} _arch_uppercase)
string(TOUPPER ${CHCORE_PLAT} _plat_uppercase)
add_compile_definitions(CHCORE_ARCH_${_arch_uppercase}
CHCORE_PLAT_${_plat_uppercase})
unset(_arch_uppercase)
unset(_plat_uppercase)
# Pass all CHCORE_* variables (cache and non-cache) to
# CMake try_compile projects
get_cmake_property(_var_names VARIABLES)
foreach(_var_name ${_var_names})
string(REGEX MATCH "^CHCORE_" _matched ${_var_name})
if(NOT _matched)
continue()
endif()
list(APPEND CMAKE_TRY_COMPILE_PLATFORM_VARIABLES ${_var_name})
endforeach()
unset(_var_names)
unset(_var_name)
unset(_matched)


@ -0,0 +1,37 @@
# CMake toolchain for building ChCore kernel.
if(NOT DEFINED CHCORE_PROJECT_DIR)
message(FATAL_ERROR "CHCORE_PROJECT_DIR is not defined")
else()
message(STATUS "CHCORE_PROJECT_DIR: ${CHCORE_PROJECT_DIR}")
endif()
if(NOT DEFINED CHCORE_USER_INSTALL_DIR)
message(FATAL_ERROR "CHCORE_USER_INSTALL_DIR is not defined")
else()
message(STATUS "CHCORE_USER_INSTALL_DIR: ${CHCORE_USER_INSTALL_DIR}")
endif()
# Set toolchain executables
set(CMAKE_ASM_COMPILER "${CHCORE_CROSS_COMPILE}gcc")
set(CMAKE_C_COMPILER "${CHCORE_CROSS_COMPILE}gcc")
# set(CMAKE_CXX_COMPILER "${CHCORE_CROSS_COMPILE}g++")
set(CMAKE_AR "${CHCORE_CROSS_COMPILE}ar")
set(CMAKE_NM "${CHCORE_CROSS_COMPILE}nm")
set(CMAKE_OBJCOPY "${CHCORE_CROSS_COMPILE}objcopy")
set(CMAKE_OBJDUMP "${CHCORE_CROSS_COMPILE}objdump")
set(CMAKE_RANLIB "${CHCORE_CROSS_COMPILE}ranlib")
set(CMAKE_STRIP "${CHCORE_CROSS_COMPILE}strip")
# Set build type
if(CHCORE_KERNEL_DEBUG)
set(CMAKE_BUILD_TYPE "Debug")
else()
set(CMAKE_BUILD_TYPE "Release")
endif()
include(${CMAKE_CURRENT_LIST_DIR}/_common.cmake)
# Set the target system (automatically set CMAKE_CROSSCOMPILING to true)
set(CMAKE_SYSTEM_NAME "Generic")
set(CMAKE_SYSTEM_PROCESSOR ${CHCORE_ARCH})


@ -0,0 +1,10 @@
set(CHCORE_CROSS_COMPILE
"aarch64-linux-gnu-"
CACHE STRING "" FORCE)
set(CHCORE_PLAT
"raspi3"
CACHE STRING "" FORCE)
set(FBINFER ON)
include(${CMAKE_CURRENT_LIST_DIR}/kernel.cmake)


@ -0,0 +1,80 @@
# CMake toolchain for building ChCore user-level libs and apps.
if(NOT DEFINED CHCORE_PROJECT_DIR)
message(FATAL_ERROR "CHCORE_PROJECT_DIR is not defined")
else()
message(STATUS "CHCORE_PROJECT_DIR: ${CHCORE_PROJECT_DIR}")
endif()
if(NOT DEFINED CHCORE_MUSL_LIBC_INSTALL_DIR)
message(FATAL_ERROR "CHCORE_MUSL_LIBC_INSTALL_DIR is not defined")
else()
message(
STATUS "CHCORE_MUSL_LIBC_INSTALL_DIR: ${CHCORE_MUSL_LIBC_INSTALL_DIR}")
endif()
# Set toolchain executables
set(CMAKE_ASM_COMPILER "${CHCORE_MUSL_LIBC_INSTALL_DIR}/bin/musl-gcc")
set(CMAKE_C_COMPILER "${CHCORE_MUSL_LIBC_INSTALL_DIR}/bin/musl-gcc")
set(CMAKE_CXX_COMPILER "${CHCORE_MUSL_LIBC_INSTALL_DIR}/bin/musl-gcc")
set(CMAKE_AR "${CHCORE_MUSL_LIBC_INSTALL_DIR}/bin/musl-ar")
set(CMAKE_NM "${CHCORE_CROSS_COMPILE}nm")
set(CMAKE_OBJCOPY "${CHCORE_CROSS_COMPILE}objcopy")
set(CMAKE_OBJDUMP "${CHCORE_CROSS_COMPILE}objdump")
set(CMAKE_RANLIB "${CHCORE_CROSS_COMPILE}ranlib")
set(CMAKE_STRIP "${CHCORE_MUSL_LIBC_INSTALL_DIR}/bin/musl-strip")
# Set build type
if(CHCORE_USER_DEBUG)
set(CMAKE_BUILD_TYPE "Debug")
else()
set(CMAKE_BUILD_TYPE "Release")
endif()
# Build position independent code, a.k.a -fPIC
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
include(${CMAKE_CURRENT_LIST_DIR}/_common.cmake)
# Set the target system (automatically set CMAKE_CROSSCOMPILING to true)
set(CMAKE_SYSTEM_NAME "ChCore")
set(CMAKE_SYSTEM_PROCESSOR ${CHCORE_ARCH})
# Set prefix path
if(CHCORE_CHPM_INSTALL_PREFIX)
# Get absolute path
get_filename_component(_chpm_install_prefix ${CHCORE_CHPM_INSTALL_PREFIX}
REALPATH BASE_DIR ${CHCORE_PROJECT_DIR})
# For find_package, find_library, etc.
set(CMAKE_PREFIX_PATH ${_chpm_install_prefix})
# C++ headers (FIXME: now we hardcode the version number)
if(CHCORE_ARCH STREQUAL "x86_64")
include_directories(
$<$<COMPILE_LANGUAGE:CXX>:${_chpm_install_prefix}/include/c++/9.2.0/x86_64-linux-musl>
)
elseif(CHCORE_ARCH STREQUAL "aarch64")
include_directories(
$<$<COMPILE_LANGUAGE:CXX>:${_chpm_install_prefix}/include/c++/9.2.0/aarch64-linux-musleabi>
)
elseif(CHCORE_ARCH STREQUAL "riscv64")
include_directories(
$<$<COMPILE_LANGUAGE:CXX>:${_chpm_install_prefix}/include/c++/9.2.0/riscv64-linux-musl>
)
else()
message(
WARNING
"Please set arch-specific C++ header location for ${CHCORE_ARCH}"
)
endif()
include_directories(
$<$<COMPILE_LANGUAGE:CXX>:${_chpm_install_prefix}/include/c++/9.2.0>)
# Link C++ standard library for C++ apps
if(EXISTS ${CMAKE_PREFIX_PATH}/lib/libstdc++.so)
set(CMAKE_CXX_FLAGS
"${CMAKE_CXX_FLAGS} -L${_chpm_install_prefix}/lib/ ${_chpm_install_prefix}/lib/libstdc++.so ${_chpm_install_prefix}/lib/libgcc_s.so"
)
endif()
endif()


@ -0,0 +1,5 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=hikey970
CHCORE_KERNEL_TEST:BOOL=ON
CHCORE_KERNEL_VIRT:BOOL=ON
CHCORE_APP_VMM:BOOL=ON


@ -0,0 +1,7 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=raspi3
CHCORE_KERNEL_TEST:BOOL=ON
CHCORE_KERNEL_VIRT:BOOL=ON
CHCORE_KERNEL_RT:BOOL=ON
CHCORE_SERVER_GUI:BOOL=ON
CHCORE_APP_VMM:BOOL=ON


@ -0,0 +1,6 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=raspi3
CHCORE_KERNEL_TEST:BOOL=ON
CHCORE_KERNEL_VIRT:BOOL=ON
CHCORE_SERVER_GUI:BOOL=ON
CHCORE_APP_VMM:BOOL=ON


@ -0,0 +1,3 @@
CHCORE_CROSS_COMPILE:STRING=riscv64-linux-gnu-
CHCORE_PLAT:STRING=qemu_virt
CHCORE_KERNEL_TEST:BOOL=ON


@ -0,0 +1,5 @@
CHCORE_CROSS_COMPILE:STRING=
CHCORE_PLAT:STRING=intel
CHCORE_KERNEL_TEST:BOOL=ON
CHCORE_KERNEL_RT:BOOL=ON
CHCORE_SERVER_GUI:BOOL=ON


@ -0,0 +1,2 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=ft2000


@ -0,0 +1,2 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=hikey970


@ -0,0 +1,2 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=raspi3


@ -0,0 +1,2 @@
CHCORE_CROSS_COMPILE:STRING=aarch64-linux-gnu-
CHCORE_PLAT:STRING=raspi4

Some files were not shown because too many files have changed in this diff