commit 862336de8f
Merge branch 'master' into mengxu/merge-to-master-PR
@@ -85,28 +85,6 @@ Steve Dekorte (libcoroutine)
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Jean-loup Gailly, Mark Adler (zlib)
Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler

This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.

Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:

1. The origin of this software must not be misrepresented; you must not
   claim that you wrote the original software. If you use this software
   in a product, an acknowledgment in the product documentation would be
   appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
   misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.

Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu

The Go Authors (Go Tools)
Copyright (c) 2009 The Go Authors. All rights reserved.

@@ -536,3 +514,51 @@ sse2neon Authors (sse2neon)
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


rte_memcpy.h (from DPDK):
SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2010-2014 Intel Corporation

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from this
   software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

folly_memcpy:

Copyright (c) Facebook, Inc. and its affiliates.
Author: Bin Liu <binliu@fb.com>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -18,7 +18,11 @@
# limitations under the License.
cmake_minimum_required(VERSION 3.13)
project(foundationdb
<<<<<<< HEAD
  VERSION 6.3.7
=======
  VERSION 7.0.0
>>>>>>> master
  DESCRIPTION "FoundationDB is a scalable, fault-tolerant, ordered key-value store with full ACID transactions."
  HOMEPAGE_URL "http://www.foundationdb.org/"
  LANGUAGES C CXX ASM)

@@ -177,6 +181,11 @@ else()
  include(CPack)
endif()

set(BUILD_FLOWBENCH OFF CACHE BOOL "Build microbenchmark program (builds google microbenchmark dependency)")
if(BUILD_FLOWBENCH)
  add_subdirectory(flowbench)
endif()

if(CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
  add_link_options(-lexecinfo)
endif()

@@ -32,7 +32,7 @@ We draw inspiration from the Apache Software Foundation's informal motto: ["comm

The project technical lead is Evan Tschannen (ejt@apple.com).

Members of the Apple FoundationDB team are part of the initial core committers helping review individual contributions; you'll see them commenting on your pull requests. Future committers to the open source project, and the process for adding individuals in this role will be formalized in the future.
Members of the Apple FoundationDB team are part of the core committers helping review individual contributions; you'll see them commenting on your pull requests. As the FDB open source community has grown, some members of the community have consistently produced high quality code reviews and other significant contributions to FoundationDB. The project technical lead maintains a list of external committers that actively contribute in this way, and gives them permission to review and merge pull requests.

## Contributing
### Opening a Pull Request

@@ -26,7 +26,7 @@ sys.path[:0] = [os.path.join(os.path.dirname(__file__), '..', '..', 'bindings',

import util

FDB_API_VERSION = 630
FDB_API_VERSION = 700

LOGGING = {
    'version': 1,

@@ -157,7 +157,7 @@ def choose_api_version(selected_api_version, tester_min_version, tester_max_vers
        api_version = min_version
    elif random.random() < 0.9:
        api_version = random.choice([v for v in [13, 14, 16, 21, 22, 23, 100, 200, 300, 400, 410, 420, 430,
                                                 440, 450, 460, 500, 510, 520, 600, 610, 620, 630] if v >= min_version and v <= max_version])
                                                 440, 450, 460, 500, 510, 520, 600, 610, 620, 630, 700] if v >= min_version and v <= max_version])
    else:
        api_version = random.randint(min_version, max_version)

@@ -20,7 +20,7 @@

import os

MAX_API_VERSION = 630
MAX_API_VERSION = 700
COMMON_TYPES = ['null', 'bytes', 'string', 'int', 'uuid', 'bool', 'float', 'double', 'tuple']
ALL_TYPES = COMMON_TYPES + ['versionstamp']

@@ -34,7 +34,7 @@ fdb.api_version(FDB_API_VERSION)


class ScriptedTest(Test):
    TEST_API_VERSION = 630
    TEST_API_VERSION = 700

    def __init__(self, subspace):
        super(ScriptedTest, self).__init__(subspace, ScriptedTest.TEST_API_VERSION, ScriptedTest.TEST_API_VERSION)

@@ -18,7 +18,7 @@
 * limitations under the License.
 */

#define FDB_API_VERSION 630
#define FDB_API_VERSION 700
#define FDB_INCLUDE_LEGACY_TYPES

#include "fdbclient/MultiVersionTransaction.h"

@@ -438,18 +438,17 @@ FDBFuture* fdb_transaction_get_range_impl(
	}

	/* Zero at the C API maps to "infinity" at lower levels */
	if (!limit)
		limit = CLIENT_KNOBS->ROW_LIMIT_UNLIMITED;
	if (!target_bytes)
		target_bytes = CLIENT_KNOBS->BYTE_LIMIT_UNLIMITED;
	if (!limit) limit = GetRangeLimits::ROW_LIMIT_UNLIMITED;
	if (!target_bytes) target_bytes = GetRangeLimits::BYTE_LIMIT_UNLIMITED;

	/* Unlimited/unlimited with mode _EXACT isn't permitted */
	if (limit == CLIENT_KNOBS->ROW_LIMIT_UNLIMITED && target_bytes == CLIENT_KNOBS->BYTE_LIMIT_UNLIMITED && mode == FDB_STREAMING_MODE_EXACT)
	if (limit == GetRangeLimits::ROW_LIMIT_UNLIMITED && target_bytes == GetRangeLimits::BYTE_LIMIT_UNLIMITED &&
	    mode == FDB_STREAMING_MODE_EXACT)
		return TSAV_ERROR(Standalone<RangeResultRef>, exact_mode_without_limits);

	/* _ITERATOR mode maps to one of the known streaming modes
	   depending on iteration */
	const int mode_bytes_array[] = { CLIENT_KNOBS->BYTE_LIMIT_UNLIMITED, 256, 1000, 4096, 80000 };
	const int mode_bytes_array[] = { GetRangeLimits::BYTE_LIMIT_UNLIMITED, 256, 1000, 4096, 80000 };

	/* The progression used for FDB_STREAMING_MODE_ITERATOR.
	   Goes from small -> medium -> large. Then 1.5 * previous until serial. */

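The comment above only describes the _ITERATOR byte-limit progression in words. Below is a minimal sketch of one reading of that progression, assuming the small/medium/large values from mode_bytes_array (256, 1000, 4096) and the 80000-byte serial cap; the helper name and iteration parameter are illustrative, not part of fdb_c.cpp.

#include <stdio.h>

/* Hypothetical helper: the documented _ITERATOR progression, read literally
   from the comment above (small -> medium -> large, then 1.5x previous until serial). */
static int iterator_mode_bytes(int iteration) {
    int limits[] = { 256, 1000, 4096 }; /* small, medium, large */
    if (iteration <= 3) return limits[iteration - 1];
    double b = 4096;
    for (int i = 3; i < iteration; i++) b *= 1.5;
    if (b > 80000) return 80000;        /* capped at the serial byte limit */
    return (int)b;
}

int main(void) {
    for (int it = 1; it <= 10; it++)
        printf("iteration %d -> byte limit %d\n", it, iterator_mode_bytes(it));
    return 0;
}
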
@@ -474,9 +473,9 @@ FDBFuture* fdb_transaction_get_range_impl(
	else
		return TSAV_ERROR(Standalone<RangeResultRef>, client_invalid_operation);

	if(target_bytes == CLIENT_KNOBS->BYTE_LIMIT_UNLIMITED)
	if (target_bytes == GetRangeLimits::BYTE_LIMIT_UNLIMITED)
		target_bytes = mode_bytes;
	else if(mode_bytes != CLIENT_KNOBS->BYTE_LIMIT_UNLIMITED)
	else if (mode_bytes != GetRangeLimits::BYTE_LIMIT_UNLIMITED)
		target_bytes = std::min(target_bytes, mode_bytes);

	return (FDBFuture*)( TXN(tr)->getRange(

@@ -597,7 +596,7 @@ fdb_error_t fdb_transaction_set_option_impl( FDBTransaction* tr,
void fdb_transaction_set_option_v13( FDBTransaction* tr,
                                     FDBTransactionOption option )
{
	fdb_transaction_set_option_impl( tr, option, NULL, 0 );
	fdb_transaction_set_option_impl( tr, option, nullptr, 0 );
}

extern "C" DLLEXPORT

@@ -28,10 +28,10 @@
#endif

#if !defined(FDB_API_VERSION)
#error You must #define FDB_API_VERSION prior to including fdb_c.h (current version is 630)
#error You must #define FDB_API_VERSION prior to including fdb_c.h (current version is 700)
#elif FDB_API_VERSION < 13
#error API version no longer supported (upgrade to 13)
#elif FDB_API_VERSION > 630
#elif FDB_API_VERSION > 700
#error Requested API version requires a newer version of this header
#endif

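The guard above forces callers to pin an API version before pulling in the header. A minimal consumer-side sketch of that contract, assuming the post-merge maximum of 700; the file itself is illustrative and not part of this diff.

/* client.c -- illustrative only */
#define FDB_API_VERSION 700   /* must come before the include, per the guard above */
#include <foundationdb/fdb_c.h>

int main(void) {
    /* selects runtime API behavior matching the compile-time version */
    fdb_error_t err = fdb_select_api_version(FDB_API_VERSION);
    return err ? 1 : 0;
}
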
@@ -91,7 +91,7 @@ extern "C" {
DLLEXPORT WARN_UNUSED_RESULT fdb_error_t fdb_add_network_thread_completion_hook(void (*hook)(void*), void *hook_parameter);

#pragma pack(push, 4)
#if FDB_API_VERSION >= 630
#if FDB_API_VERSION >= 700
typedef struct keyvalue {
	const uint8_t* key;
	int key_length;

@@ -9,6 +9,8 @@
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

#if defined(__linux__)
#include <linux/limits.h>

@@ -52,7 +54,8 @@ FILE* debugme; /* descriptor used for debug messages */
		int err = wait_future(_f); \
		if (err) { \
			int err2; \
			if ((err != 1020 /* not_committed */) && (err != 1021 /* commit_unknown_result */)) { \
			if ((err != 1020 /* not_committed */) && (err != 1021 /* commit_unknown_result */) && \
			    (err != 1213 /* tag_throttled */)) { \
				fprintf(stderr, "ERROR: Error %s (%d) occurred at %s\n", #_func, err, fdb_get_error(err)); \
			} else { \
				fprintf(annoyme, "ERROR: Error %s (%d) occurred at %s\n", #_func, err, fdb_get_error(err)); \

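The macro above only lowers the log level for errors that mako treats as retryable: not_committed (1020), commit_unknown_result (1021), and, with this change, tag_throttled (1213). A minimal sketch of that classification as a standalone predicate; the function name is illustrative and not part of mako.c.

#include <stdbool.h>

/* Hypothetical helper mirroring the macro's retryable-error check above. */
static bool is_retryable_mako_error(int err) {
    return err == 1020 /* not_committed */ ||
           err == 1021 /* commit_unknown_result */ ||
           err == 1213 /* tag_throttled */;
}
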
@ -97,7 +100,8 @@ int commit_transaction(FDBTransaction* transaction) {
|
|||
return FDB_SUCCESS;
|
||||
}
|
||||
|
||||
void update_op_lat_stats(struct timespec* start, struct timespec* end, int op, mako_stats_t* stats) {
|
||||
void update_op_lat_stats(struct timespec* start, struct timespec* end, int op, mako_stats_t* stats,
|
||||
lat_block_t* block[], int* elem_size, bool* is_memory_allocated) {
|
||||
uint64_t latencyus;
|
||||
|
||||
latencyus = (((uint64_t)end->tv_sec * 1000000000 + end->tv_nsec) -
|
||||
|
@ -111,6 +115,19 @@ void update_op_lat_stats(struct timespec* start, struct timespec* end, int op, m
|
|||
if (latencyus > stats->latency_us_max[op]) {
|
||||
stats->latency_us_max[op] = latencyus;
|
||||
}
|
||||
if (!is_memory_allocated[op]) return;
|
||||
if (elem_size[op] < stats->latency_samples[op]) {
|
||||
elem_size[op] = elem_size[op] + LAT_BLOCK_SIZE;
|
||||
lat_block_t* temp_block = (lat_block_t*)malloc(sizeof(lat_block_t));
|
||||
if (temp_block == NULL) {
|
||||
is_memory_allocated[op] = false;
|
||||
return;
|
||||
}
|
||||
temp_block->next_block = NULL;
|
||||
block[op]->next_block = (lat_block_t*)temp_block;
|
||||
block[op] = temp_block;
|
||||
}
|
||||
block[op]->data[(stats->latency_samples[op] - 1) % LAT_BLOCK_SIZE] = latencyus;
|
||||
}
|
||||
|
||||
/* FDB network thread */
|
||||
|
@ -155,11 +172,12 @@ failExit:
|
|||
|
||||
/* populate database */
|
||||
int populate(FDBTransaction* transaction, mako_args_t* args, int worker_id, int thread_id, int thread_tps,
|
||||
mako_stats_t* stats) {
|
||||
mako_stats_t* stats, lat_block_t* block[], int* elem_size, bool* is_memory_allocated) {
|
||||
int i;
|
||||
struct timespec timer_start, timer_end;
|
||||
struct timespec timer_prev, timer_now; /* for throttling */
|
||||
struct timespec timer_per_xact_start, timer_per_xact_end;
|
||||
struct timespec timer_start_commit;
|
||||
char* keystr;
|
||||
char* valstr;
|
||||
|
||||
|
@ -237,13 +255,22 @@ int populate(FDBTransaction* transaction, mako_args_t* args, int worker_id, int
|
|||
|
||||
/* commit every 100 inserts (default) */
|
||||
if (i % args->txnspec.ops[OP_INSERT][OP_COUNT] == 0) {
|
||||
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_start_commit);
|
||||
}
|
||||
if (commit_transaction(transaction) != FDB_SUCCESS) goto failExit;
|
||||
|
||||
/* xact latency stats */
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_COMMIT, stats);
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_start_commit, &timer_per_xact_end, OP_COMMIT, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_TRANSACTION, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
}
|
||||
|
||||
stats->ops[OP_COMMIT]++;
|
||||
stats->ops[OP_TRANSACTION]++;
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_start);
|
||||
|
||||
fdb_transaction_reset(transaction);
|
||||
|
@ -252,11 +279,19 @@ int populate(FDBTransaction* transaction, mako_args_t* args, int worker_id, int
|
|||
}
|
||||
}
|
||||
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_start_commit);
|
||||
}
|
||||
if (commit_transaction(transaction) != FDB_SUCCESS) goto failExit;
|
||||
|
||||
/* xact latency stats */
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_COMMIT, stats);
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_start_commit, &timer_per_xact_end, OP_COMMIT, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_TRANSACTION, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
}
|
||||
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_end);
|
||||
stats->xacts++;
|
||||
|
@ -393,12 +428,13 @@ int run_op_clearrange(FDBTransaction* transaction, char* keystr, char* keystr2)
|
|||
|
||||
/* run one transaction */
|
||||
int run_one_transaction(FDBTransaction* transaction, mako_args_t* args, mako_stats_t* stats, char* keystr,
|
||||
char* keystr2, char* valstr) {
|
||||
char* keystr2, char* valstr, lat_block_t* block[], int* elem_size, bool* is_memory_allocated) {
|
||||
int i;
|
||||
int count;
|
||||
int rc;
|
||||
struct timespec timer_start, timer_end;
|
||||
struct timespec timer_per_xact_start, timer_per_xact_end;
|
||||
struct timespec timer_start_commit;
|
||||
int docommit = 0;
|
||||
int keynum;
|
||||
int keyend;
|
||||
|
@ -416,7 +452,7 @@ int run_one_transaction(FDBTransaction* transaction, mako_args_t* args, mako_sta
|
|||
retryTxn:
|
||||
for (i = 0; i < MAX_OP; i++) {
|
||||
|
||||
if ((args->txnspec.ops[i][OP_COUNT] > 0) && (i != OP_COMMIT)) {
|
||||
if ((args->txnspec.ops[i][OP_COUNT] > 0) && (i != OP_TRANSACTION) && (i != OP_COMMIT)) {
|
||||
for (count = 0; count < args->txnspec.ops[i][OP_COUNT]; count++) {
|
||||
|
||||
/* note: for simplicity, always generate a new key(s) even when retrying */
|
||||
|
@ -492,11 +528,21 @@ retryTxn:
|
|||
rc = run_op_insert(transaction, keystr, valstr);
|
||||
if (rc == FDB_SUCCESS) {
|
||||
/* commit insert so mutation goes to storage */
|
||||
/* to measure commit latency */
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_start_commit);
|
||||
}
|
||||
rc = commit_transaction(transaction);
|
||||
if (rc == FDB_SUCCESS) {
|
||||
stats->ops[OP_COMMIT]++;
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_COMMIT, stats);
|
||||
stats->ops[OP_TRANSACTION]++;
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_start_commit, &timer_per_xact_end, OP_COMMIT, stats, block,
|
||||
elem_size, is_memory_allocated);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_TRANSACTION, stats,
|
||||
block, elem_size, is_memory_allocated);
|
||||
}
|
||||
} else {
|
||||
/* error */
|
||||
if (rc == FDB_ERROR_CONFLICT) {
|
||||
|
@ -542,17 +588,26 @@ retryTxn:
|
|||
}
|
||||
}
|
||||
/* commit insert so mutation goes to storage */
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_start_commit);
|
||||
}
|
||||
rc = commit_transaction(transaction);
|
||||
if (rc == FDB_SUCCESS) {
|
||||
stats->ops[OP_COMMIT]++;
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_COMMIT, stats);
|
||||
stats->ops[OP_TRANSACTION]++;
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_start_commit, &timer_per_xact_end, OP_COMMIT, stats, block,
|
||||
elem_size, is_memory_allocated);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_TRANSACTION, stats,
|
||||
block, elem_size, is_memory_allocated);
|
||||
}
|
||||
} else {
|
||||
/* error */
|
||||
if (rc == FDB_ERROR_CONFLICT) {
|
||||
stats->conflicts++;
|
||||
} else {
|
||||
stats->errors[OP_COMMIT]++;
|
||||
stats->errors[OP_TRANSACTION]++;
|
||||
}
|
||||
if (rc == FDB_ERROR_ABORT) {
|
||||
/* make sure to reset transaction */
|
||||
|
@ -574,7 +629,7 @@ retryTxn:
|
|||
clock_gettime(CLOCK_MONOTONIC, &timer_end);
|
||||
if (rc == FDB_SUCCESS) {
|
||||
/* per op latency, record successful transactions */
|
||||
update_op_lat_stats(&timer_start, &timer_end, i, stats);
|
||||
update_op_lat_stats(&timer_start, &timer_end, i, stats, block, elem_size, is_memory_allocated);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -586,7 +641,7 @@ retryTxn:
|
|||
if (rc == FDB_ERROR_CONFLICT) {
|
||||
stats->conflicts++;
|
||||
} else {
|
||||
stats->errors[OP_COMMIT]++;
|
||||
stats->errors[OP_TRANSACTION]++;
|
||||
}
|
||||
if (rc == FDB_ERROR_ABORT) {
|
||||
/* make sure to reset transaction */
|
||||
|
@ -601,18 +656,24 @@ retryTxn:
|
|||
|
||||
/* commit only successful transaction */
|
||||
if (docommit | args->commit_get) {
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_start_commit);
|
||||
}
|
||||
rc = commit_transaction(transaction);
|
||||
if (rc == FDB_SUCCESS) {
|
||||
/* success */
|
||||
stats->ops[OP_COMMIT]++;
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_COMMIT, stats);
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_start_commit, &timer_per_xact_end, OP_COMMIT, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
}
|
||||
} else {
|
||||
/* error */
|
||||
if (rc == FDB_ERROR_CONFLICT) {
|
||||
stats->conflicts++;
|
||||
} else {
|
||||
stats->errors[OP_COMMIT]++;
|
||||
stats->errors[OP_TRANSACTION]++;
|
||||
}
|
||||
if (rc == FDB_ERROR_ABORT) {
|
||||
/* make sure to reset transaction */
|
||||
|
@ -623,6 +684,13 @@ retryTxn:
|
|||
}
|
||||
}
|
||||
|
||||
stats->ops[OP_TRANSACTION]++;
|
||||
if (stats->xacts % args->sampling == 0) {
|
||||
clock_gettime(CLOCK_MONOTONIC, &timer_per_xact_end);
|
||||
update_op_lat_stats(&timer_per_xact_start, &timer_per_xact_end, OP_TRANSACTION, stats, block, elem_size,
|
||||
is_memory_allocated);
|
||||
}
|
||||
|
||||
stats->xacts++;
|
||||
|
||||
/* make sure to reset transaction */
|
||||
|
@ -631,7 +699,8 @@ retryTxn:
|
|||
}
|
||||
|
||||
int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps, volatile double* throttle_factor,
|
||||
int thread_iters, volatile int* signal, mako_stats_t* stats, int dotrace) {
|
||||
int thread_iters, volatile int* signal, mako_stats_t* stats, int dotrace, int dotagging, lat_block_t* block[],
|
||||
int* elem_size, bool* is_memory_allocated) {
|
||||
int xacts = 0;
|
||||
int64_t total_xacts = 0;
|
||||
int rc = 0;
|
||||
|
@ -642,6 +711,7 @@ int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps,
|
|||
int current_tps;
|
||||
char* traceid;
|
||||
int tracetimer = 0;
|
||||
char* tagstr;
|
||||
|
||||
if (thread_tps < 0) return 0;
|
||||
|
||||
|
@ -649,6 +719,12 @@ int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps,
|
|||
traceid = (char*)malloc(32);
|
||||
}
|
||||
|
||||
if(dotagging) {
|
||||
tagstr = (char*)calloc(16, 1);
|
||||
memcpy(tagstr, KEYPREFIX, KEYPREFIXLEN);
|
||||
memcpy(tagstr + KEYPREFIXLEN, args->txntagging_prefix, TAGPREFIXLENGTH_MAX);
|
||||
}
|
||||
|
||||
current_tps = (int)((double)thread_tps * *throttle_factor);
|
||||
|
||||
keystr = (char*)malloc(sizeof(char) * args->key_length + 1);
|
||||
|
@ -706,6 +782,7 @@ int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps,
|
|||
}
|
||||
}
|
||||
|
||||
|
||||
} else {
|
||||
if (thread_tps > 0) {
|
||||
/* 1 second not passed, throttle */
|
||||
|
@ -715,7 +792,19 @@ int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps,
|
|||
}
|
||||
} /* throttle or txntrace */
|
||||
|
||||
rc = run_one_transaction(transaction, args, stats, keystr, keystr2, valstr);
|
||||
/* enable transaction tagging */
|
||||
if (dotagging > 0) {
|
||||
sprintf(tagstr + KEYPREFIXLEN + TAGPREFIXLENGTH_MAX, "%03d", urand(0, args->txntagging - 1));
|
||||
fdb_error_t err = fdb_transaction_set_option(transaction, FDB_TR_OPTION_AUTO_THROTTLE_TAG,
|
||||
(uint8_t*)tagstr, 16);
|
||||
if (err) {
|
||||
fprintf(stderr, "ERROR: FDB_TR_OPTION_AUTO_THROTTLE_TAG: %s\n",
|
||||
fdb_get_error(err));
|
||||
}
|
||||
}
|
||||
|
||||
rc = run_one_transaction(transaction, args, stats, keystr, keystr2, valstr, block, elem_size,
|
||||
is_memory_allocated);
|
||||
if (rc) {
|
||||
/* FIXME: run_one_transaction should return something meaningful */
|
||||
fprintf(annoyme, "ERROR: run_one_transaction failed (%d)\n", rc);
|
||||
|
@ -739,10 +828,63 @@ int run_workload(FDBTransaction* transaction, mako_args_t* args, int thread_tps,
|
|||
if (dotrace) {
|
||||
free(traceid);
|
||||
}
|
||||
if(dotagging) {
|
||||
free(tagstr);
|
||||
}
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
void get_stats_file_name(char filename[], int worker_id, int thread_id, int op) {
|
||||
char str1[256];
|
||||
sprintf(str1, "/%d_%d_", worker_id + 1, thread_id + 1);
|
||||
strcat(filename, str1);
|
||||
switch (op) {
|
||||
case OP_GETREADVERSION:
|
||||
strcat(filename, "GRV");
|
||||
break;
|
||||
case OP_GET:
|
||||
strcat(filename, "GET");
|
||||
break;
|
||||
case OP_GETRANGE:
|
||||
strcat(filename, "GETRANGE");
|
||||
break;
|
||||
case OP_SGET:
|
||||
strcat(filename, "SGET");
|
||||
break;
|
||||
case OP_SGETRANGE:
|
||||
strcat(filename, "SGETRANGE");
|
||||
break;
|
||||
case OP_UPDATE:
|
||||
strcat(filename, "UPDATE");
|
||||
break;
|
||||
case OP_INSERT:
|
||||
strcat(filename, "INSERT");
|
||||
break;
|
||||
case OP_INSERTRANGE:
|
||||
strcat(filename, "INSERTRANGE");
|
||||
break;
|
||||
case OP_CLEAR:
|
||||
strcat(filename, "CLEAR");
|
||||
break;
|
||||
case OP_SETCLEAR:
|
||||
strcat(filename, "SETCLEAR");
|
||||
break;
|
||||
case OP_CLEARRANGE:
|
||||
strcat(filename, "CLEARRANGE");
|
||||
break;
|
||||
case OP_SETCLEARRANGE:
|
||||
strcat(filename, "SETCLRRANGE");
|
||||
break;
|
||||
case OP_COMMIT:
|
||||
strcat(filename, "COMMIT");
|
||||
break;
|
||||
case OP_TRANSACTION:
|
||||
strcat(filename, "TRANSACTION");
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/* mako worker thread */
|
||||
void* worker_thread(void* thread_args) {
|
||||
int worker_id = ((thread_args_t*)thread_args)->process->worker_id;
|
||||
|
@ -755,18 +897,30 @@ void* worker_thread(void* thread_args) {
|
|||
int thread_tps = 0;
|
||||
int thread_iters = 0;
|
||||
int op;
|
||||
int i, size;
|
||||
int dotrace = (worker_id == 0 && thread_id == 0 && args->txntrace) ? args->txntrace : 0;
|
||||
int dotagging = args->txntagging;
|
||||
volatile int* signal = &((thread_args_t*)thread_args)->process->shm->signal;
|
||||
volatile double* throttle_factor = &((thread_args_t*)thread_args)->process->shm->throttle_factor;
|
||||
volatile int* readycount = &((thread_args_t*)thread_args)->process->shm->readycount;
|
||||
volatile int* stopcount = &((thread_args_t*)thread_args)->process->shm->stopcount;
|
||||
mako_stats_t* stats = (void*)((thread_args_t*)thread_args)->process->shm + sizeof(mako_shmhdr_t) /* skip header */
|
||||
+ (sizeof(mako_stats_t) * (worker_id * args->num_threads + thread_id));
|
||||
|
||||
lat_block_t* block[MAX_OP];
|
||||
for (int i = 0; i < MAX_OP; i++) {
|
||||
block[i] = ((thread_args_t*)thread_args)->block[i];
|
||||
}
|
||||
int* elem_size = &((thread_args_t*)thread_args)->elem_size[0];
|
||||
pid_t* parent_id = &((thread_args_t*)thread_args)->process->parent_id;
|
||||
bool* is_memory_allocated = &((thread_args_t*)thread_args)->is_memory_allocated[0];
|
||||
|
||||
/* init latency */
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
stats->latency_us_min[op] = 0xFFFFFFFFFFFFFFFF; /* uint64_t */
|
||||
stats->latency_us_max[op] = 0;
|
||||
stats->latency_us_total[op] = 0;
|
||||
stats->latency_samples[op] = 0;
|
||||
}
|
||||
|
||||
fprintf(debugme, "DEBUG: worker_id:%d (%d) thread_id:%d (%d) (tid:%d)\n", worker_id, args->num_processes, thread_id,
|
||||
|
@ -801,7 +955,8 @@ void* worker_thread(void* thread_args) {
|
|||
|
||||
/* build/populate */
|
||||
else if (args->mode == MODE_BUILD) {
|
||||
rc = populate(transaction, args, worker_id, thread_id, thread_tps, stats);
|
||||
rc =
|
||||
populate(transaction, args, worker_id, thread_id, thread_tps, stats, block, elem_size, is_memory_allocated);
|
||||
if (rc < 0) {
|
||||
fprintf(stderr, "ERROR: populate failed\n");
|
||||
}
|
||||
|
@ -809,20 +964,63 @@ void* worker_thread(void* thread_args) {
|
|||
|
||||
/* run the workload */
|
||||
else if (args->mode == MODE_RUN) {
|
||||
rc = run_workload(transaction, args, thread_tps, throttle_factor, thread_iters, signal, stats, dotrace);
|
||||
rc = run_workload(transaction, args, thread_tps, throttle_factor, thread_iters,
|
||||
signal, stats, dotrace, dotagging, block, elem_size, is_memory_allocated);
|
||||
if (rc < 0) {
|
||||
fprintf(stderr, "ERROR: run_workload failed\n");
|
||||
}
|
||||
}
|
||||
|
||||
if (args->mode == MODE_BUILD || args->mode == MODE_RUN) {
|
||||
char str2[1000];
|
||||
sprintf(str2, "%s%d", TEMP_DATA_STORE, *parent_id);
|
||||
rc = mkdir(str2, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_COMMIT || op == OP_TRANSACTION) {
|
||||
FILE* fp;
|
||||
char file_name[NAME_MAX] = { '\0' };
|
||||
strcat(file_name, str2);
|
||||
get_stats_file_name(file_name, worker_id, thread_id, op);
|
||||
fp = fopen(file_name, "w");
|
||||
lat_block_t* temp_block = ((thread_args_t*)thread_args)->block[op];
|
||||
if (is_memory_allocated[op]) {
|
||||
size = stats->latency_samples[op] / LAT_BLOCK_SIZE;
|
||||
for (i = 0; i < size && temp_block != NULL; i++) {
|
||||
fwrite(&temp_block->data, sizeof(uint64_t) * LAT_BLOCK_SIZE, 1, fp);
|
||||
temp_block = temp_block->next_block;
|
||||
}
|
||||
size = stats->latency_samples[op] % LAT_BLOCK_SIZE;
|
||||
if (size != 0) fwrite(&temp_block->data, sizeof(uint64_t) * size, 1, fp);
|
||||
} else {
|
||||
while (temp_block) {
|
||||
fwrite(&temp_block->data, sizeof(uint64_t) * LAT_BLOCK_SIZE, 1, fp);
|
||||
temp_block = temp_block->next_block;
|
||||
}
|
||||
}
|
||||
fclose(fp);
|
||||
}
|
||||
}
|
||||
__sync_fetch_and_add(stopcount, 1);
|
||||
}
|
||||
|
||||
/* fall through */
|
||||
failExit:
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
lat_block_t* curr = ((thread_args_t*)thread_args)->block[op];
|
||||
lat_block_t* prev = NULL;
|
||||
size = elem_size[op] / LAT_BLOCK_SIZE;
|
||||
while (size--) {
|
||||
prev = curr;
|
||||
curr = curr->next_block;
|
||||
free(prev);
|
||||
}
|
||||
}
|
||||
fdb_transaction_destroy(transaction);
|
||||
pthread_exit(0);
|
||||
}
|
||||
|
||||
/* mako worker process */
|
||||
int worker_process_main(mako_args_t* args, int worker_id, mako_shmhdr_t* shm) {
|
||||
int worker_process_main(mako_args_t* args, int worker_id, mako_shmhdr_t* shm, pid_t* pid_main) {
|
||||
int i;
|
||||
pthread_t network_thread; /* handle for thread which invoked fdb_run_network() */
|
||||
pthread_t* worker_threads = NULL;
|
||||
|
@ -835,6 +1033,7 @@ int worker_process_main(mako_args_t* args, int worker_id, mako_shmhdr_t* shm) {
|
|||
fdb_error_t err;
|
||||
|
||||
process.worker_id = worker_id;
|
||||
process.parent_id = *pid_main;
|
||||
process.args = args;
|
||||
process.shm = (mako_shmhdr_t*)shm;
|
||||
|
||||
|
@ -948,6 +1147,19 @@ int worker_process_main(mako_args_t* args, int worker_id, mako_shmhdr_t* shm) {
|
|||
|
||||
for (i = 0; i < args->num_threads; i++) {
|
||||
thread_args[i].thread_id = i;
|
||||
for (int op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
thread_args[i].block[op] = (lat_block_t*)malloc(sizeof(lat_block_t));
|
||||
if (thread_args[i].block[op] == NULL) {
|
||||
thread_args[i].is_memory_allocated[op] = false;
|
||||
thread_args[i].elem_size[op] = 0;
|
||||
} else {
|
||||
thread_args[i].is_memory_allocated[op] = true;
|
||||
thread_args[i].block[op]->next_block = NULL;
|
||||
thread_args[i].elem_size[op] = LAT_BLOCK_SIZE;
|
||||
}
|
||||
}
|
||||
}
|
||||
thread_args[i].process = &process;
|
||||
rc = pthread_create(&worker_threads[i], NULL, worker_thread, (void*)&thread_args[i]);
|
||||
if (rc != 0) {
|
||||
|
@ -1021,6 +1233,8 @@ int init_args(mako_args_t* args) {
|
|||
args->tracepath[0] = '\0';
|
||||
args->traceformat = 0; /* default to client's default (XML) */
|
||||
args->txntrace = 0;
|
||||
args->txntagging = 0;
|
||||
memset(args->txntagging_prefix, 0, TAGPREFIXLENGTH_MAX);
|
||||
for (i = 0; i < MAX_OP; i++) {
|
||||
args->txnspec.ops[i][OP_COUNT] = 0;
|
||||
}
|
||||
|
@ -1042,6 +1256,9 @@ int parse_transaction(mako_args_t* args, char* optarg) {
|
|||
|
||||
op = 0;
|
||||
while (*ptr) {
|
||||
// Clang gives false positive array bounds warning, which must be ignored:
|
||||
#pragma clang diagnostic push
|
||||
#pragma clang diagnostic ignored "-Warray-bounds"
|
||||
if (strncmp(ptr, "grv", 3) == 0) {
|
||||
op = OP_GETREADVERSION;
|
||||
ptr += 3;
|
||||
|
@ -1088,6 +1305,7 @@ int parse_transaction(mako_args_t* args, char* optarg) {
|
|||
error = 1;
|
||||
break;
|
||||
}
|
||||
#pragma clang diagnostic pop
|
||||
|
||||
/* count */
|
||||
num = 0;
|
||||
|
@ -1174,6 +1392,8 @@ void usage() {
|
|||
printf("%-24s %s\n", " --tracepath=PATH", "Set trace file path");
|
||||
printf("%-24s %s\n", " --trace_format <xml|json>", "Set trace format (Default: json)");
|
||||
printf("%-24s %s\n", " --txntrace=sec", "Specify transaction tracing interval (Default: 0)");
|
||||
printf("%-24s %s\n", " --txntagging", "Specify the number of different transaction tags (Default: 0, max = 1000)");
|
||||
printf("%-24s %s\n", " --txntagging_prefix", "Specify the prefix of transaction tag - mako${txntagging_prefix} (Default: '')");
|
||||
printf("%-24s %s\n", " --knobs=KNOBS", "Set client knobs");
|
||||
printf("%-24s %s\n", " --flatbuffers", "Use flatbuffers");
|
||||
}
|
||||
|
@ -1215,6 +1435,8 @@ int parse_args(int argc, char* argv[], mako_args_t* args) {
|
|||
{ "commitget", no_argument, NULL, ARG_COMMITGET },
|
||||
{ "flatbuffers", no_argument, NULL, ARG_FLATBUFFERS },
|
||||
{ "trace", no_argument, NULL, ARG_TRACE },
|
||||
{ "txntagging", required_argument, NULL, ARG_TXNTAGGING },
|
||||
{ "txntagging_prefix", required_argument, NULL, ARG_TXNTAGGINGPREFIX},
|
||||
{ "version", no_argument, NULL, ARG_VERSION },
|
||||
{ NULL, 0, NULL, 0 }
|
||||
};
|
||||
|
@ -1330,8 +1552,25 @@ int parse_args(int argc, char* argv[], mako_args_t* args) {
|
|||
case ARG_TXNTRACE:
|
||||
args->txntrace = atoi(optarg);
|
||||
break;
|
||||
|
||||
case ARG_TXNTAGGING:
|
||||
args->txntagging = atoi(optarg);
|
||||
if(args->txntagging > 1000) {
|
||||
args->txntagging = 1000;
|
||||
}
|
||||
break;
|
||||
case ARG_TXNTAGGINGPREFIX: {
|
||||
if(strlen(optarg) > TAGPREFIXLENGTH_MAX) {
|
||||
fprintf(stderr, "Error: the length of txntagging_prefix is larger than %d\n", TAGPREFIXLENGTH_MAX);
|
||||
exit(0);
|
||||
}
|
||||
memcpy(args->txntagging_prefix, optarg, strlen(optarg));
|
||||
break;
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
if ((args->tpsmin == -1) || (args->tpsmin > args->tpsmax)) {
|
||||
args->tpsmin = args->tpsmax;
|
||||
}
|
||||
|
@ -1388,6 +1627,10 @@ int validate_args(mako_args_t* args) {
|
|||
fprintf(stderr, "ERROR: Must specify either seconds or iteration\n");
|
||||
return -1;
|
||||
}
|
||||
if(args->txntagging < 0) {
|
||||
fprintf(stderr, "ERROR: --txntagging must be a non-negative integer\n");
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
@ -1453,12 +1696,13 @@ void print_stats(mako_args_t* args, mako_stats_t* stats, struct timespec* now, s
|
|||
return;
|
||||
}
|
||||
|
||||
void print_stats_header(mako_args_t* args) {
|
||||
void print_stats_header(mako_args_t* args, bool show_commit, bool is_first_header_empty, bool show_op_stats) {
|
||||
int op;
|
||||
int i;
|
||||
|
||||
/* header */
|
||||
for (i = 0; i <= STATS_TITLE_WIDTH; i++) printf(" ");
|
||||
if (is_first_header_empty)
|
||||
for (i = 0; i <= STATS_TITLE_WIDTH; i++) printf(" ");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0) {
|
||||
switch (op) {
|
||||
|
@ -1501,8 +1745,14 @@ void print_stats_header(mako_args_t* args) {
|
|||
}
|
||||
}
|
||||
}
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "TPS");
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s\n", "Conflicts/s");
|
||||
|
||||
if (show_commit) printf("%" STR(STATS_FIELD_WIDTH) "s ", "COMMIT");
|
||||
if (show_op_stats) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s\n", "TRANSACTION");
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "TPS");
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s\n", "Conflicts/s");
|
||||
}
|
||||
|
||||
for (i = 0; i < STATS_TITLE_WIDTH; i++) printf("=");
|
||||
printf(" ");
|
||||
|
@ -1512,16 +1762,31 @@ void print_stats_header(mako_args_t* args) {
|
|||
printf(" ");
|
||||
}
|
||||
}
|
||||
/* TPS */
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
printf(" ");
|
||||
/* Conflicts */
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
|
||||
/* COMMIT */
|
||||
if (show_commit) {
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
printf(" ");
|
||||
}
|
||||
|
||||
if (show_op_stats) {
|
||||
/* TRANSACTION */
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
printf(" ");
|
||||
} else {
|
||||
/* TPS */
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
printf(" ");
|
||||
|
||||
/* Conflicts */
|
||||
for (i = 0; i < STATS_FIELD_WIDTH; i++) printf("=");
|
||||
}
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer_now, struct timespec* timer_start) {
|
||||
int i, j, op;
|
||||
void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer_now, struct timespec* timer_start,
|
||||
pid_t* pid_main) {
|
||||
int i, j, k, op, index;
|
||||
uint64_t totalxacts = 0;
|
||||
uint64_t conflicts = 0;
|
||||
uint64_t totalerrors = 0;
|
||||
|
@ -1548,7 +1813,7 @@ void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer
|
|||
totalxacts += stats[idx].xacts;
|
||||
conflicts += stats[idx].conflicts;
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if ((args->txnspec.ops[op][OP_COUNT] > 0) || (op == OP_COMMIT)) {
|
||||
if ((args->txnspec.ops[op][OP_COUNT] > 0) || (op == OP_TRANSACTION) || (op == OP_COMMIT)) {
|
||||
totalerrors += stats[idx].errors[op];
|
||||
ops_total[op] += stats[idx].ops[op];
|
||||
errors_total[op] += stats[idx].errors[op];
|
||||
|
@ -1594,33 +1859,51 @@ void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer
|
|||
printf("Overall TPS: %8lld\n\n", totalxacts * 1000000000 / durationns);
|
||||
|
||||
/* per-op stats */
|
||||
print_stats_header(args);
|
||||
print_stats_header(args, true, true, false);
|
||||
|
||||
/* OPS */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Total OPS");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 && op != OP_COMMIT) {
|
||||
if ((args->txnspec.ops[op][OP_COUNT] > 0 && op != OP_TRANSACTION) || op == OP_COMMIT) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", ops_total[op]);
|
||||
}
|
||||
}
|
||||
|
||||
/* TPS */
|
||||
printf("%" STR(STATS_FIELD_WIDTH) ".2f ", totalxacts * 1000000000.0 / durationns);
|
||||
|
||||
/* Conflicts */
|
||||
printf("%" STR(STATS_FIELD_WIDTH) ".2f\n", conflicts * 1000000000.0 / durationns);
|
||||
|
||||
/* Errors */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Errors");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 && op != OP_COMMIT) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 && op != OP_TRANSACTION) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", errors_total[op]);
|
||||
}
|
||||
}
|
||||
printf("\n\n");
|
||||
|
||||
printf("%s", "Latency (us)");
|
||||
print_stats_header(args, true, false, true);
|
||||
|
||||
/* Total Samples */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Samples");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (lat_total[op]) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", lat_samples[op]);
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
/* Min Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Lat Min (us)");
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Min");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_COMMIT) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (lat_min[op] == -1) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
} else {
|
||||
|
@ -1631,9 +1914,9 @@ void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer
|
|||
printf("\n");
|
||||
|
||||
/* Avg Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Lat Avg (us)");
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Avg");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_COMMIT) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (lat_total[op]) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", lat_total[op] / lat_samples[op]);
|
||||
} else {
|
||||
|
@ -1644,9 +1927,9 @@ void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer
|
|||
printf("\n");
|
||||
|
||||
/* Max Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Lat Max (us)");
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Max");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_COMMIT) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (lat_max[op] == 0) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
} else {
|
||||
|
@ -1655,9 +1938,122 @@ void print_report(mako_args_t* args, mako_stats_t* stats, struct timespec* timer
|
|||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
uint64_t* dataPoints[MAX_OP];
|
||||
uint64_t median;
|
||||
int point_99_9pct, point_99pct, point_95pct;
|
||||
|
||||
/* Median Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "Median");
|
||||
int num_points[MAX_OP] = { 0 };
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (lat_total[op]) {
|
||||
dataPoints[op] = (uint64_t*)malloc(sizeof(uint64_t) * lat_samples[op]);
|
||||
if (dataPoints[op] == NULL) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
continue;
|
||||
}
|
||||
k = 0;
|
||||
for (i = 0; i < args->num_processes; i++) {
|
||||
for (j = 0; j < args->num_threads; j++) {
|
||||
char file_name[NAME_MAX] = { '\0' };
|
||||
sprintf(file_name, "%s%d", TEMP_DATA_STORE, *pid_main);
|
||||
get_stats_file_name(file_name, i, j, op);
|
||||
FILE* f = fopen(file_name, "r");
|
||||
fseek(f, 0, SEEK_END);
|
||||
int numPoints = ftell(f) / sizeof(uint64_t);
|
||||
fseek(f, 0, 0);
|
||||
index = 0;
|
||||
while (index < numPoints) {
|
||||
fread(&dataPoints[op][k++], sizeof(uint64_t), 1, f);
|
||||
++index;
|
||||
}
|
||||
fclose(f);
|
||||
}
|
||||
}
|
||||
num_points[op] = k;
|
||||
quick_sort(dataPoints[op], num_points[op]);
|
||||
if (num_points[op] & 1) {
|
||||
median = dataPoints[op][num_points[op] / 2];
|
||||
} else {
|
||||
median = (dataPoints[op][num_points[op] / 2] + dataPoints[op][num_points[op] / 2 - 1]) >> 1;
|
||||
}
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", median);
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
/* 95%ile Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "95.0 pctile");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (dataPoints[op] == NULL) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
continue;
|
||||
}
|
||||
if (lat_total[op]) {
|
||||
point_95pct = ((float)(num_points[op]) * 0.95) - 1;
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", dataPoints[op][point_95pct]);
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
/* 99%ile Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "99.0 pctile");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (dataPoints[op] == NULL) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
continue;
|
||||
}
|
||||
if (lat_total[op]) {
|
||||
point_99pct = ((float)(num_points[op]) * 0.99) - 1;
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", dataPoints[op][point_99pct]);
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
/* 99.9%ile Latency */
|
||||
printf("%-" STR(STATS_TITLE_WIDTH) "s ", "99.9 pctile");
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION || op == OP_COMMIT) {
|
||||
if (dataPoints[op] == NULL) {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
continue;
|
||||
}
|
||||
if (lat_total[op]) {
|
||||
point_99_9pct = ((float)(num_points[op]) * 0.999) - 1;
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "lld ", dataPoints[op][point_99_9pct]);
|
||||
} else {
|
||||
printf("%" STR(STATS_FIELD_WIDTH) "s ", "N/A");
|
||||
}
|
||||
}
|
||||
}
|
||||
printf("\n");
|
||||
|
||||
char command_remove[NAME_MAX] = { '\0' };
|
||||
sprintf(command_remove, "rm -rf %s%d", TEMP_DATA_STORE, *pid_main);
|
||||
system(command_remove);
|
||||
|
||||
for (op = 0; op < MAX_OP; op++) {
|
||||
if (args->txnspec.ops[op][OP_COUNT] > 0 || op == OP_TRANSACTION) {
|
||||
if (lat_total[op]) free(dataPoints[op]);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
int stats_process_main(mako_args_t* args, mako_stats_t* stats, volatile double* throttle_factor, volatile int* signal) {
|
||||
int stats_process_main(mako_args_t* args, mako_stats_t* stats, volatile double* throttle_factor, volatile int* signal,
|
||||
volatile int* stopcount, pid_t* pid_main) {
|
||||
struct timespec timer_start, timer_prev, timer_now;
|
||||
double sin_factor;
|
||||
|
||||
|
@ -1666,7 +2062,7 @@ int stats_process_main(mako_args_t* args, mako_stats_t* stats, volatile double*
|
|||
usleep(10000); /* 10ms */
|
||||
}
|
||||
|
||||
if (args->verbose >= VERBOSE_DEFAULT) print_stats_header(args);
|
||||
if (args->verbose >= VERBOSE_DEFAULT) print_stats_header(args, false, true, false);
|
||||
|
||||
clock_gettime(CLOCK_MONOTONIC_COARSE, &timer_start);
|
||||
timer_prev.tv_sec = timer_start.tv_sec;
|
||||
|
@ -1717,7 +2113,10 @@ int stats_process_main(mako_args_t* args, mako_stats_t* stats, volatile double*
|
|||
/* print report */
|
||||
if (args->verbose >= VERBOSE_DEFAULT) {
|
||||
clock_gettime(CLOCK_MONOTONIC_COARSE, &timer_now);
|
||||
print_report(args, stats, &timer_now, &timer_start);
|
||||
while (*stopcount < args->num_threads * args->num_processes) {
|
||||
usleep(10000); /* 10ms */
|
||||
}
|
||||
print_report(args, stats, &timer_now, &timer_start, pid_main);
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@ -1737,6 +2136,7 @@ int main(int argc, char* argv[]) {
|
|||
char shmpath[NAME_MAX];
|
||||
size_t shmsize;
|
||||
mako_stats_t* stats;
|
||||
pid_t pid_main;
|
||||
|
||||
rc = init_args(&args);
|
||||
if (rc < 0) {
|
||||
|
@ -1764,8 +2164,9 @@ int main(int argc, char* argv[]) {
|
|||
}
|
||||
}
|
||||
|
||||
pid_main = getpid();
|
||||
/* create the shared memory for stats */
|
||||
sprintf(shmpath, "mako%d", getpid());
|
||||
sprintf(shmpath, "mako%d", pid_main);
|
||||
shmfd = shm_open(shmpath, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
|
||||
if (shmfd < 0) {
|
||||
fprintf(stderr, "ERROR: shm_open failed\n");
|
||||
|
@ -1795,6 +2196,7 @@ int main(int argc, char* argv[]) {
|
|||
/* get ready */
|
||||
shm->signal = SIGNAL_OFF;
|
||||
shm->readycount = 0;
|
||||
shm->stopcount = 0;
|
||||
shm->throttle_factor = 1.0;
|
||||
|
||||
/* fork worker processes + 1 stats process */
|
||||
|
@ -1837,7 +2239,7 @@ int main(int argc, char* argv[]) {
|
|||
|
||||
if (proc_type == proc_worker) {
|
||||
/* worker process */
|
||||
worker_process_main(&args, worker_id, shm);
|
||||
worker_process_main(&args, worker_id, shm, &pid_main);
|
||||
/* worker can exit here */
|
||||
exit(0);
|
||||
} else if (proc_type == proc_stats) {
|
||||
|
@ -1846,12 +2248,11 @@ int main(int argc, char* argv[]) {
|
|||
/* no stats needed for clean mode */
|
||||
exit(0);
|
||||
}
|
||||
stats_process_main(&args, stats, &shm->throttle_factor, &shm->signal);
|
||||
stats_process_main(&args, stats, &shm->throttle_factor, &shm->signal, &shm->stopcount, &pid_main);
|
||||
exit(0);
|
||||
}
|
||||
|
||||
/* master */
|
||||
|
||||
/* wait for everyone to be ready */
|
||||
while (shm->readycount < (args.num_processes * args.num_threads)) {
|
||||
usleep(1000);
|
||||
|
|
|
@ -3,12 +3,13 @@
|
|||
#pragma once
|
||||
|
||||
#ifndef FDB_API_VERSION
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#endif
|
||||
|
||||
#include <foundationdb/fdb_c.h>
|
||||
#include <pthread.h>
|
||||
#include <sys/types.h>
|
||||
#include <stdbool.h>
|
||||
#if defined(__linux__)
|
||||
#include <linux/limits.h>
|
||||
#elif defined(__APPLE__)
|
||||
|
@@ -32,6 +33,8 @@
#define FDB_ERROR_ABORT -2
#define FDB_ERROR_CONFLICT -3

#define LAT_BLOCK_SIZE 511 /* size of each block to get detailed latency for each operation */

/* transaction specification */
enum Operations {
	OP_GETREADVERSION,

@@ -47,6 +50,7 @@ enum Operations {
	OP_CLEARRANGE,
	OP_SETCLEARRANGE,
	OP_COMMIT,
	OP_TRANSACTION, /* pseudo-operation - cumulative time for the operation + commit */
	MAX_OP /* must be the last item */
};

@ -71,7 +75,9 @@ enum Arguments {
|
|||
ARG_TPSMIN,
|
||||
ARG_TPSINTERVAL,
|
||||
ARG_TPSCHANGE,
|
||||
ARG_TXNTRACE
|
||||
ARG_TXNTRACE,
|
||||
ARG_TXNTAGGING,
|
||||
ARG_TXNTAGGINGPREFIX
|
||||
};
|
||||
|
||||
enum TPSChangeTypes { TPS_SIN, TPS_SQUARE, TPS_PULSE };
|
||||
|
@ -79,6 +85,8 @@ enum TPSChangeTypes { TPS_SIN, TPS_SQUARE, TPS_PULSE };
|
|||
#define KEYPREFIX "mako"
|
||||
#define KEYPREFIXLEN 4
|
||||
|
||||
#define TEMP_DATA_STORE "/tmp/makoTemp"
|
||||
|
||||
/* we set mako_txnspec_t and mako_args_t only once in the master process,
|
||||
* and won't be touched by child processes.
|
||||
*/
|
||||
|
@ -89,6 +97,7 @@ typedef struct {
|
|||
} mako_txnspec_t;
|
||||
|
||||
#define KNOB_MAX 256
|
||||
#define TAGPREFIXLENGTH_MAX 8
|
||||
|
||||
/* benchmark parameters */
|
||||
typedef struct {
|
||||
|
@ -118,6 +127,8 @@ typedef struct {
|
|||
char knobs[KNOB_MAX];
|
||||
uint8_t flatbuffers;
|
||||
int txntrace;
|
||||
int txntagging;
|
||||
char txntagging_prefix[TAGPREFIXLENGTH_MAX];
|
||||
} mako_args_t;
|
||||
|
||||
/* shared memory */
|
||||
|
@@ -129,8 +140,15 @@
	int signal;
	int readycount;
	double throttle_factor;
	int stopcount;
} mako_shmhdr_t;

/* memory block allocated to each operation when collecting detailed latency */
typedef struct {
	uint64_t data[LAT_BLOCK_SIZE];
	void* next_block;
} lat_block_t;

typedef struct {
	uint64_t xacts;
	uint64_t conflicts;

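Per-operation latency samples are collected into fixed-size blocks of LAT_BLOCK_SIZE entries chained through next_block, so a thread only allocates another block when the current one fills up. A minimal sketch of that append path, assuming the lat_block_t layout above; append_latency_sample is an illustrative helper, not a function in mako.

#include <stdint.h>
#include <stdlib.h>

#define LAT_BLOCK_SIZE 511

typedef struct {
    uint64_t data[LAT_BLOCK_SIZE];
    void* next_block;
} lat_block_t;

/* Hypothetical helper: append one latency sample (microseconds) to the chain.
   'tail' tracks the last block; 'count' is the total samples stored so far.
   Returns 0 on success, -1 if a new block could not be allocated
   (in which case mako simply stops detailed sampling for that op). */
static int append_latency_sample(lat_block_t** tail, uint64_t* count, uint64_t latency_us) {
    if (*count > 0 && *count % LAT_BLOCK_SIZE == 0) {
        lat_block_t* next = malloc(sizeof(lat_block_t));
        if (next == NULL) return -1;
        next->next_block = NULL;
        (*tail)->next_block = next;
        *tail = next;
    }
    (*tail)->data[*count % LAT_BLOCK_SIZE] = latency_us;
    (*count)++;
    return 0;
}
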
@ -145,6 +163,7 @@ typedef struct {
|
|||
/* per-process information */
|
||||
typedef struct {
|
||||
int worker_id;
|
||||
pid_t parent_id;
|
||||
FDBDatabase* database;
|
||||
mako_args_t* args;
|
||||
mako_shmhdr_t* shm;
|
||||
|
@ -153,6 +172,10 @@ typedef struct {
|
|||
/* args for threads */
|
||||
typedef struct {
|
||||
int thread_id;
|
||||
int elem_size[MAX_OP]; /* stores the multiple of LAT_BLOCK_SIZE to check the memory allocation of each operation */
|
||||
bool is_memory_allocated[MAX_OP]; /* flag specified for each operation, whether the memory was allocated to that
|
||||
specific operation */
|
||||
lat_block_t* block[MAX_OP];
|
||||
process_info_t* process;
|
||||
} thread_args_t;
|
||||
|
||||
|
|
|
@ -77,3 +77,58 @@ void genkey(char* str, int num, int rows, int len) {
|
|||
}
|
||||
str[len - 1] = '\0';
|
||||
}
|
||||
|
||||
/* This is another sorting algorithm used to calculate latency parameters */
|
||||
/* We moved from radix sort to quick sort to avoid extra space used in radix sort */
|
||||
|
||||
#if 0
|
||||
uint64_t get_max(uint64_t arr[], int n) {
|
||||
uint64_t mx = arr[0];
|
||||
for (int i = 1; i < n; i++) {
|
||||
if (arr[i] > mx) {
|
||||
mx = arr[i];
|
||||
}
|
||||
}
|
||||
return mx;
|
||||
}
|
||||
|
||||
void bucket_data(uint64_t arr[], int n, uint64_t exp) {
|
||||
// uint64_t output[n];
|
||||
int i, count[10] = { 0 };
|
||||
uint64_t* output = (uint64_t*)malloc(sizeof(uint64_t) * n);
|
||||
|
||||
for (i = 0; i < n; i++) {
|
||||
count[(arr[i] / exp) % 10]++;
|
||||
}
|
||||
for (i = 1; i < 10; i++) {
|
||||
count[i] += count[i - 1];
|
||||
}
|
||||
for (i = n - 1; i >= 0; i--) {
|
||||
output[count[(arr[i] / exp) % 10] - 1] = arr[i];
|
||||
count[(arr[i] / exp) % 10]--;
|
||||
}
|
||||
for (i = 0; i < n; i++) {
|
||||
arr[i] = output[i];
|
||||
}
|
||||
free(output);
|
||||
}
|
||||
|
||||
// The main function is to sort arr[] of size n using Radix Sort
|
||||
void radix_sort(uint64_t* arr, int n) {
|
||||
// Find the maximum number to know number of digits
|
||||
uint64_t m = get_max(arr, n);
|
||||
for (uint64_t exp = 1; m / exp > 0; exp *= 10) bucket_data(arr, n, exp);
|
||||
}
|
||||
#endif
|
||||
|
||||
int compare(const void* a, const void* b) {
|
||||
const uint64_t* da = (const uint64_t*)a;
|
||||
const uint64_t* db = (const uint64_t*)b;
|
||||
|
||||
return (*da > *db) - (*da < *db);
|
||||
}
|
||||
|
||||
// The main function is to sort arr[] of size n using Quick Sort
|
||||
void quick_sort(uint64_t* arr, int n) {
|
||||
qsort(arr, n, sizeof(uint64_t), compare);
|
||||
}
|
||||
|
|
|
@@ -2,6 +2,8 @@
#define UTILS_H
#pragma once

#include <stdint.h>

/* uniform-distribution random */
/* return a uniform random number between low and high, both inclusive */
int urand(int low, int high);

@@ -48,4 +50,15 @@ int digits(int num);
/* len is the buffer size, key length + null */
void genkey(char* str, int num, int rows, int len);

#if 0
// The main function is to sort arr[] of size n using Radix Sort
void radix_sort(uint64_t arr[], int n);
void bucket_data(uint64_t arr[], int n, uint64_t exp);
uint64_t get_max(uint64_t arr[], int n);
#endif

// The main function is to sort arr[] of size n using Quick Sort
void quick_sort(uint64_t arr[], int n);
int compare(const void* a, const void* b);

#endif /* UTILS_H */

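mako sorts the collected latency samples with quick_sort and then reads the median and percentiles straight out of the sorted array. A small usage sketch under that assumption; the sample data and the percentile_us helper are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Same comparator/sort shape as utils.h declares. */
int compare(const void* a, const void* b) {
    const uint64_t* da = (const uint64_t*)a;
    const uint64_t* db = (const uint64_t*)b;
    return (*da > *db) - (*da < *db);
}
void quick_sort(uint64_t arr[], int n) { qsort(arr, n, sizeof(uint64_t), compare); }

/* Hypothetical helper: read a percentile (e.g. 0.95) from an already-sorted array,
   using the same index formula mako applies in print_report. */
static uint64_t percentile_us(const uint64_t sorted[], int n, double pct) {
    int idx = (int)(n * pct) - 1;
    if (idx < 0) idx = 0;
    return sorted[idx];
}

int main(void) {
    uint64_t lat[] = { 120, 85, 240, 95, 310, 150, 90, 410, 130, 100 }; /* made-up samples */
    int n = sizeof(lat) / sizeof(lat[0]);
    quick_sort(lat, n);
    printf("median %llu us, p95 %llu us\n",
           (unsigned long long)lat[n / 2],
           (unsigned long long)percentile_us(lat, n, 0.95));
    return 0;
}
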
@@ -603,7 +603,7 @@ void runTests(struct ResultSet *rs) {
int main(int argc, char **argv) {
	srand(time(NULL));
	struct ResultSet *rs = newResultSet();
	checkError(fdb_select_api_version(630), "select API version", rs);
	checkError(fdb_select_api_version(700), "select API version", rs);
	printf("Running performance test at client version: %s\n", fdb_get_client_version());

	valueStr = (uint8_t*)malloc((sizeof(uint8_t))*valueSize);

@ -244,7 +244,7 @@ void runTests(struct ResultSet *rs) {
|
|||
int main(int argc, char **argv) {
|
||||
srand(time(NULL));
|
||||
struct ResultSet *rs = newResultSet();
|
||||
checkError(fdb_select_api_version(630), "select API version", rs);
|
||||
checkError(fdb_select_api_version(700), "select API version", rs);
|
||||
printf("Running RYW Benchmark test at client version: %s\n", fdb_get_client_version());
|
||||
|
||||
keys = generateKeys(numKeys, keySize);
|
||||
|
|
|
@ -29,7 +29,7 @@
|
|||
#include <inttypes.h>
|
||||
|
||||
#ifndef FDB_API_VERSION
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#endif
|
||||
|
||||
#include <foundationdb/fdb_c.h>
|
||||
|
|
|
@ -97,7 +97,7 @@ void runTests(struct ResultSet *rs) {
|
|||
int main(int argc, char **argv) {
|
||||
srand(time(NULL));
|
||||
struct ResultSet *rs = newResultSet();
|
||||
checkError(fdb_select_api_version(630), "select API version", rs);
|
||||
checkError(fdb_select_api_version(700), "select API version", rs);
|
||||
printf("Running performance test at client version: %s\n", fdb_get_client_version());
|
||||
|
||||
keys = generateKeys(numKeys, KEY_SIZE);
|
||||
|
|
|
@ -18,7 +18,7 @@
|
|||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#include "foundationdb/fdb_c.h"
|
||||
#undef DLLEXPORT
|
||||
#include "workloads.h"
|
||||
|
@ -258,7 +258,7 @@ struct SimpleWorkload : FDBWorkload {
|
|||
insertsPerTx = context->getOption("insertsPerTx", 100ul);
|
||||
opsPerTx = context->getOption("opsPerTx", 100ul);
|
||||
runFor = context->getOption("runFor", 10.0);
|
||||
auto err = fdb_select_api_version(630);
|
||||
auto err = fdb_select_api_version(700);
|
||||
if (err) {
|
||||
context->trace(FDBSeverity::Info, "SelectAPIVersionFailed",
|
||||
{ { "Error", std::string(fdb_get_error(err)) } });
|
||||
|
|
|
@ -36,7 +36,7 @@ THREAD_FUNC networkThread(void* fdb) {
|
|||
}
|
||||
|
||||
ACTOR Future<Void> _test() {
|
||||
API *fdb = FDB::API::selectAPIVersion(630);
|
||||
API *fdb = FDB::API::selectAPIVersion(700);
|
||||
auto db = fdb->createDatabase();
|
||||
state Reference<Transaction> tr = db->createTransaction();
|
||||
|
||||
|
@ -79,7 +79,7 @@ ACTOR Future<Void> _test() {
|
|||
}
|
||||
|
||||
void fdb_flow_test() {
|
||||
API *fdb = FDB::API::selectAPIVersion(630);
|
||||
API *fdb = FDB::API::selectAPIVersion(700);
|
||||
fdb->setupNetwork();
|
||||
startThread(networkThread, fdb);
|
||||
|
||||
|
@ -157,16 +157,16 @@ namespace FDB {
|
|||
void cancel() override;
|
||||
void reset() override;
|
||||
|
||||
TransactionImpl() : tr(NULL) {}
|
||||
TransactionImpl(TransactionImpl&& r) BOOST_NOEXCEPT {
|
||||
tr = r.tr;
|
||||
r.tr = NULL;
|
||||
}
|
||||
TransactionImpl& operator=(TransactionImpl&& r) BOOST_NOEXCEPT {
|
||||
tr = r.tr;
|
||||
r.tr = NULL;
|
||||
TransactionImpl() : tr(nullptr) {}
|
||||
TransactionImpl(TransactionImpl&& r) noexcept {
|
||||
tr = r.tr;
|
||||
r.tr = nullptr;
|
||||
}
|
||||
TransactionImpl& operator=(TransactionImpl&& r) noexcept {
|
||||
tr = r.tr;
|
||||
r.tr = nullptr;
|
||||
return *this;
|
||||
}
|
||||
}
|
||||
|
||||
private:
|
||||
FDBTransaction* tr;
|
||||
|
@ -207,10 +207,10 @@ namespace FDB {
|
|||
if ( value.present() )
|
||||
throw_on_error( fdb_network_set_option( option, value.get().begin(), value.get().size() ) );
|
||||
else
|
||||
throw_on_error( fdb_network_set_option( option, NULL, 0 ) );
|
||||
throw_on_error( fdb_network_set_option( option, nullptr, 0 ) );
|
||||
}
|
||||
|
||||
API* API::instance = NULL;
|
||||
API* API::instance = nullptr;
|
||||
API::API(int version) : version(version) {}
|
||||
|
||||
API* API::selectAPIVersion(int apiVersion) {
|
||||
|
@ -234,11 +234,11 @@ namespace FDB {
|
|||
}
|
||||
|
||||
bool API::isAPIVersionSelected() {
|
||||
return API::instance != NULL;
|
||||
return API::instance != nullptr;
|
||||
}
|
||||
|
||||
API* API::getInstance() {
|
||||
if(API::instance == NULL) {
|
||||
if(API::instance == nullptr) {
|
||||
throw api_version_unset();
|
||||
}
|
||||
else {
|
||||
|
@ -280,7 +280,7 @@ namespace FDB {
|
|||
if (value.present())
|
||||
throw_on_error(fdb_database_set_option(db, option, value.get().begin(), value.get().size()));
|
||||
else
|
||||
throw_on_error(fdb_database_set_option(db, option, NULL, 0));
|
||||
throw_on_error(fdb_database_set_option(db, option, nullptr, 0));
|
||||
}
|
||||
|
||||
TransactionImpl::TransactionImpl(FDBDatabase* db) {
|
||||
|
@ -417,7 +417,7 @@ namespace FDB {
|
|||
if ( value.present() ) {
|
||||
throw_on_error( fdb_transaction_set_option( tr, option, value.get().begin(), value.get().size() ) );
|
||||
} else {
|
||||
throw_on_error( fdb_transaction_set_option( tr, option, NULL, 0 ) );
|
||||
throw_on_error( fdb_transaction_set_option( tr, option, nullptr, 0 ) );
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -23,7 +23,7 @@
|
|||
|
||||
#include <flow/flow.h>
|
||||
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#include <bindings/c/foundationdb/fdb_c.h>
|
||||
#undef DLLEXPORT
|
||||
|
||||
|
@ -31,7 +31,7 @@
|
|||
|
||||
namespace FDB {
|
||||
struct CFuture : NonCopyable, ReferenceCounted<CFuture>, FastAllocated<CFuture> {
|
||||
CFuture() : f(NULL) {}
|
||||
CFuture() : f(nullptr) {}
|
||||
explicit CFuture(FDBFuture* f) : f(f) {}
|
||||
~CFuture() {
|
||||
if (f) {
|
||||
|
|
|
@ -1817,7 +1817,7 @@ ACTOR void _test_versionstamp() {
|
|||
try {
|
||||
g_network = newNet2(TLSConfig());
|
||||
|
||||
API *fdb = FDB::API::selectAPIVersion(630);
|
||||
API *fdb = FDB::API::selectAPIVersion(700);
|
||||
|
||||
fdb->setupNetwork();
|
||||
startThread(networkThread, fdb);
|
||||
|
|
|
@ -9,7 +9,7 @@ This package requires:
- [Mono](http://www.mono-project.com/) (macOS or Linux) or [Visual Studio](https://www.visualstudio.com/) (Windows) (build-time only)
- FoundationDB C API 2.0.x-6.1.x (part of the [FoundationDB client packages](https://apple.github.io/foundationdb/downloads.html#c))

Use of this package requires the selection of a FoundationDB API version at runtime. This package currently supports FoundationDB API versions 200-630.
Use of this package requires the selection of a FoundationDB API version at runtime. This package currently supports FoundationDB API versions 200-700.

To install this package, you can run the "fdb-go-install.sh" script (for versions 5.0.x and greater):
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
import "C"
|
||||
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
import "C"
|
||||
|
||||
|
|
|
@ -46,7 +46,7 @@ A basic interaction with the FoundationDB API is demonstrated below:
|
|||
|
||||
func main() {
|
||||
// Different API versions may expose different runtime behaviors.
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
|
||||
// Open the default database from the system cluster
|
||||
db := fdb.MustOpenDefault()
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
import "C"
|
||||
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
// #include <stdlib.h>
|
||||
import "C"
|
||||
|
@ -108,7 +108,7 @@ func (opt NetworkOptions) setOpt(code int, param []byte) error {
|
|||
// library, an error will be returned. APIVersion must be called prior to any
|
||||
// other functions in the fdb package.
|
||||
//
|
||||
// Currently, this package supports API versions 200 through 630.
|
||||
// Currently, this package supports API versions 200 through 700.
|
||||
//
|
||||
// Warning: When using the multi-version client API, setting an API version that
|
||||
// is not supported by a particular client library will prevent that client from
|
||||
|
@ -116,7 +116,7 @@ func (opt NetworkOptions) setOpt(code int, param []byte) error {
|
|||
// the API version of your application after upgrading your client until the
|
||||
// cluster has also been upgraded.
|
||||
func APIVersion(version int) error {
|
||||
headerVersion := 630
|
||||
headerVersion := 700
|
||||
|
||||
networkMutex.Lock()
|
||||
defer networkMutex.Unlock()
|
||||
|
@ -128,7 +128,7 @@ func APIVersion(version int) error {
|
|||
return errAPIVersionAlreadySet
|
||||
}
|
||||
|
||||
if version < 200 || version > 630 {
|
||||
if version < 200 || version > 700 {
|
||||
return errAPIVersionNotSupported
|
||||
}
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@ import (
|
|||
func ExampleOpenDefault() {
|
||||
var e error
|
||||
|
||||
e = fdb.APIVersion(630)
|
||||
e = fdb.APIVersion(700)
|
||||
if e != nil {
|
||||
fmt.Printf("Unable to set API version: %v\n", e)
|
||||
return
|
||||
|
@ -52,7 +52,7 @@ func ExampleOpenDefault() {
|
|||
}
|
||||
|
||||
func TestVersionstamp(t *testing.T) {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
|
||||
setVs := func(t fdb.Transactor, key fdb.Key) (fdb.FutureKey, error) {
|
||||
|
@ -98,7 +98,7 @@ func TestVersionstamp(t *testing.T) {
|
|||
}
|
||||
|
||||
func ExampleTransactor() {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
|
||||
setOne := func(t fdb.Transactor, key fdb.Key, value []byte) error {
|
||||
|
@ -149,7 +149,7 @@ func ExampleTransactor() {
|
|||
}
|
||||
|
||||
func ExampleReadTransactor() {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
|
||||
getOne := func(rt fdb.ReadTransactor, key fdb.Key) ([]byte, error) {
|
||||
|
@ -202,7 +202,7 @@ func ExampleReadTransactor() {
|
|||
}
|
||||
|
||||
func ExamplePrefixRange() {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
|
||||
tr, e := db.CreateTransaction()
|
||||
|
@ -241,7 +241,7 @@ func ExamplePrefixRange() {
|
|||
}
|
||||
|
||||
func ExampleRangeIterator() {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
|
||||
tr, e := db.CreateTransaction()
|
||||
|
|
|
@ -23,7 +23,7 @@
|
|||
package fdb
|
||||
|
||||
// #cgo LDFLAGS: -lfdb_c -lm
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
// #include <string.h>
|
||||
//
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
import "C"
|
||||
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
package fdb
|
||||
|
||||
// #define FDB_API_VERSION 630
|
||||
// #define FDB_API_VERSION 700
|
||||
// #include <foundationdb/fdb_c.h>
|
||||
import "C"
|
||||
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
*/
|
||||
|
||||
#include <foundationdb/ClientWorkload.h>
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#include <foundationdb/fdb_c.h>
|
||||
|
||||
#include <jni.h>
|
||||
|
@ -373,7 +373,7 @@ struct JVM {
|
|||
jmethodID selectMethod =
|
||||
env->GetStaticMethodID(fdbClass, "selectAPIVersion", "(I)Lcom/apple/foundationdb/FDB;");
|
||||
checkException();
|
||||
auto fdbInstance = env->CallStaticObjectMethod(fdbClass, selectMethod, jint(630));
|
||||
auto fdbInstance = env->CallStaticObjectMethod(fdbClass, selectMethod, jint(700));
|
||||
checkException();
|
||||
env->CallObjectMethod(fdbInstance, getMethod(fdbClass, "disableShutdownHook", "()V"));
|
||||
checkException();
|
||||
|
|
|
@ -21,7 +21,7 @@
|
|||
#include <jni.h>
|
||||
#include <string.h>
|
||||
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
|
||||
#include <foundationdb/fdb_c.h>
|
||||
|
||||
|
@ -1089,13 +1089,13 @@ void JNI_OnUnload(JavaVM *vm, void *reserved) {
|
|||
return;
|
||||
} else {
|
||||
// delete global references so the GC can collect them
|
||||
if (range_result_summary_class != NULL) {
|
||||
if (range_result_summary_class != JNI_NULL) {
|
||||
env->DeleteGlobalRef(range_result_summary_class);
|
||||
}
|
||||
if (range_result_class != NULL) {
|
||||
if (range_result_class != JNI_NULL) {
|
||||
env->DeleteGlobalRef(range_result_class);
|
||||
}
|
||||
if (string_class != NULL) {
|
||||
if (string_class != JNI_NULL) {
|
||||
env->DeleteGlobalRef(string_class);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -35,7 +35,7 @@ import java.util.concurrent.atomic.AtomicInteger;
|
|||
* This call is required before using any other part of the API. The call allows
|
||||
* an error to be thrown at this point to prevent client code from accessing a later library
|
||||
* with incorrect assumptions from the current version. The API version documented here is version
|
||||
* {@code 630}.<br><br>
|
||||
* {@code 700}.<br><br>
|
||||
* FoundationDB encapsulates multiple versions of its interface by requiring
|
||||
* the client to explicitly specify the version of the API it uses. The purpose
|
||||
* of this design is to allow you to upgrade the server, client libraries, or
|
||||
|
@ -183,8 +183,8 @@ public class FDB {
|
|||
}
|
||||
if(version < 510)
|
||||
throw new IllegalArgumentException("API version not supported (minimum 510)");
|
||||
if(version > 630)
|
||||
throw new IllegalArgumentException("API version not supported (maximum 630)");
|
||||
if(version > 700)
|
||||
throw new IllegalArgumentException("API version not supported (maximum 700)");
|
||||
|
||||
Select_API_version(version);
|
||||
singleton = new FDB(version);
|
||||
|
|
|
@ -44,7 +44,7 @@ import com.apple.foundationdb.async.AsyncUtil;
|
|||
* operation, the remove is not durable until {@code commit()} on the {@code Transaction}
|
||||
* that yielded this query returns <code>true</code>.
|
||||
*/
|
||||
class RangeQuery implements AsyncIterable<KeyValue>, Iterable<KeyValue> {
|
||||
class RangeQuery implements AsyncIterable<KeyValue> {
|
||||
private final FDBTransaction tr;
|
||||
private final KeySelector begin;
|
||||
private final KeySelector end;
|
||||
|
|
|
@ -48,7 +48,7 @@ public interface ReadTransaction extends ReadTransactionContext {
|
|||
* whether read conflict ranges are omitted for any reads done through this {@code ReadTransaction}.
|
||||
* <br>
|
||||
* For more information about how to use snapshot reads correctly, see
|
||||
* <a href="/foundationdb/developer-guide.html#using-snapshot-reads" target="_blank">Using snapshot reads</a>.
|
||||
* <a href="/foundationdb/developer-guide.html#snapshot-reads" target="_blank">Using snapshot reads</a>.
|
||||
*
|
||||
* @return whether this is a snapshot view of the database with relaxed isolation properties
|
||||
* @see #snapshot()
|
||||
|
@ -58,11 +58,11 @@ public interface ReadTransaction extends ReadTransactionContext {
|
|||
/**
|
||||
* Return a special-purpose, read-only view of the database. Reads done through this interface are known as "snapshot reads".
|
||||
* Snapshot reads selectively relax FoundationDB's isolation property, reducing
|
||||
* <a href="/foundationdb/developer-guide.html#transaction-conflicts" target="_blank">Transaction conflicts</a>
|
||||
* <a href="/foundationdb/developer-guide.html#conflict-ranges" target="_blank">Transaction conflicts</a>
|
||||
* but making reasoning about concurrency harder.<br>
|
||||
* <br>
|
||||
* For more information about how to use snapshot reads correctly, see
|
||||
* <a href="/foundationdb/developer-guide.html#using-snapshot-reads" target="_blank">Using snapshot reads</a>.
|
||||
* <a href="/foundationdb/developer-guide.html#snapshot-reads" target="_blank">Using snapshot reads</a>.
|
||||
*
|
||||
* @return a read-only view of this {@code ReadTransaction} with relaxed isolation properties
|
||||
*/
|
||||
|
|
|
@ -20,8 +20,6 @@
|
|||
|
||||
package com.apple.foundationdb.testing;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
public class WorkloadContext {
|
||||
long impl;
|
||||
|
||||
|
|
|
@ -129,7 +129,7 @@ public class Tuple implements Comparable<Tuple>, Iterable<Object> {
|
|||
}
|
||||
return new Tuple(this, o,
|
||||
(o instanceof Versionstamp && !((Versionstamp)o).isComplete()) ||
|
||||
(o instanceof List<?> && TupleUtil.hasIncompleteVersionstamp(((List)o).stream())) ||
|
||||
(o instanceof List<?> && TupleUtil.hasIncompleteVersionstamp(((List<?>)o).stream())) ||
|
||||
(o instanceof Tuple && ((Tuple) o).hasIncompleteVersionstamp()));
|
||||
}
|
||||
|
||||
|
|
|
@ -788,7 +788,7 @@ class TupleUtil {
|
|||
return hasIncompleteVersionstamp(((Tuple) item).stream());
|
||||
}
|
||||
else if(item instanceof Collection<?>) {
|
||||
return hasIncompleteVersionstamp(((Collection) item).stream());
|
||||
return hasIncompleteVersionstamp(((Collection<?>) item).stream());
|
||||
}
|
||||
else {
|
||||
return false;
|
||||
|
|
|
@ -13,7 +13,7 @@ and then added to your classpath.<br>
|
|||
<h1>Getting started</h1>
|
||||
To start using FoundationDB from Java, create an instance of the
|
||||
{@link com.apple.foundationdb.FDB FoundationDB API interface} with the version of the
|
||||
API that you want to use (this release of the FoundationDB Java API supports versions between {@code 510} and {@code 630}).
|
||||
API that you want to use (this release of the FoundationDB Java API supports versions between {@code 510} and {@code 700}).
|
||||
With this API object you can then open {@link com.apple.foundationdb.Cluster Cluster}s and
|
||||
{@link com.apple.foundationdb.Database Database}s and start using
|
||||
{@link com.apple.foundationdb.Transaction Transaction}s.
|
||||
|
@ -29,7 +29,7 @@ import com.apple.foundationdb.tuple.Tuple;
|
|||
|
||||
public class Example {
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
|
||||
try(Database db = fdb.open()) {
|
||||
// Run an operation on the database
|
||||
|
|
|
@ -27,7 +27,7 @@ import com.apple.foundationdb.Database;
|
|||
import com.apple.foundationdb.FDB;
|
||||
|
||||
public abstract class AbstractTester {
|
||||
public static final int API_VERSION = 630;
|
||||
public static final int API_VERSION = 700;
|
||||
protected static final int NUM_RUNS = 25;
|
||||
protected static final Charset ASCII = Charset.forName("ASCII");
|
||||
|
||||
|
|
|
@ -33,7 +33,7 @@ public class BlockingBenchmark {
|
|||
private static final int PARALLEL = 100;
|
||||
|
||||
public static void main(String[] args) throws InterruptedException {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
|
||||
// The cluster file DOES NOT need to be valid, although it must exist.
|
||||
// This is because the database is never really contacted in this test.
|
||||
|
|
|
@ -48,7 +48,7 @@ public class ConcurrentGetSetGet {
|
|||
}
|
||||
|
||||
public static void main(String[] args) {
|
||||
try(Database database = FDB.selectAPIVersion(630).open()) {
|
||||
try(Database database = FDB.selectAPIVersion(700).open()) {
|
||||
new ConcurrentGetSetGet().apply(database);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -33,7 +33,7 @@ import com.apple.foundationdb.directory.DirectorySubspace;
|
|||
public class DirectoryTest {
|
||||
public static void main(String[] args) throws Exception {
|
||||
try {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database db = fdb.open()) {
|
||||
runTests(db);
|
||||
}
|
||||
|
|
|
@ -26,7 +26,7 @@ import com.apple.foundationdb.tuple.Tuple;
|
|||
|
||||
public class Example {
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
|
||||
try(Database db = fdb.open()) {
|
||||
// Run an operation on the database
|
||||
|
|
|
@ -31,7 +31,7 @@ public class IterableTest {
|
|||
public static void main(String[] args) throws InterruptedException {
|
||||
final int reps = 1000;
|
||||
try {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database db = fdb.open()) {
|
||||
runTests(reps, db);
|
||||
}
|
||||
|
|
|
@ -34,7 +34,7 @@ import com.apple.foundationdb.tuple.ByteArrayUtil;
|
|||
public class LocalityTests {
|
||||
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database database = fdb.open(args[0])) {
|
||||
try(Transaction tr = database.createTransaction()) {
|
||||
String[] keyAddresses = LocalityUtil.getAddressesForKey(tr, "a".getBytes()).join();
|
||||
|
|
|
@ -43,7 +43,7 @@ public class ParallelRandomScan {
|
|||
private static final int PARALLELISM_STEP = 5;
|
||||
|
||||
public static void main(String[] args) throws InterruptedException {
|
||||
FDB api = FDB.selectAPIVersion(630);
|
||||
FDB api = FDB.selectAPIVersion(700);
|
||||
try(Database database = api.open(args[0])) {
|
||||
for(int i = PARALLELISM_MIN; i <= PARALLELISM_MAX; i += PARALLELISM_STEP) {
|
||||
runTest(database, i, ROWS, DURATION_MS);
|
||||
|
|
|
@ -34,7 +34,7 @@ import com.apple.foundationdb.Transaction;
|
|||
import com.apple.foundationdb.async.AsyncIterable;
|
||||
|
||||
public class RangeTest {
|
||||
private static final int API_VERSION = 630;
|
||||
private static final int API_VERSION = 700;
|
||||
|
||||
public static void main(String[] args) {
|
||||
System.out.println("About to use version " + API_VERSION);
|
||||
|
|
|
@ -34,7 +34,7 @@ public class SerialInsertion {
|
|||
private static final int NODES = 1000000;
|
||||
|
||||
public static void main(String[] args) {
|
||||
FDB api = FDB.selectAPIVersion(630);
|
||||
FDB api = FDB.selectAPIVersion(700);
|
||||
try(Database database = api.open()) {
|
||||
long start = System.currentTimeMillis();
|
||||
|
||||
|
|
|
@ -39,7 +39,7 @@ public class SerialIteration {
|
|||
private static final int THREAD_COUNT = 1;
|
||||
|
||||
public static void main(String[] args) throws InterruptedException {
|
||||
FDB api = FDB.selectAPIVersion(630);
|
||||
FDB api = FDB.selectAPIVersion(700);
|
||||
try(Database database = api.open(args[0])) {
|
||||
for(int i = 1; i <= THREAD_COUNT; i++) {
|
||||
runThreadedTest(database, i);
|
||||
|
|
|
@ -30,7 +30,7 @@ public class SerialTest {
|
|||
public static void main(String[] args) throws InterruptedException {
|
||||
final int reps = 1000;
|
||||
try {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database db = fdb.open()) {
|
||||
runTests(reps, db);
|
||||
}
|
||||
|
|
|
@ -39,7 +39,7 @@ public class SnapshotTransactionTest {
|
|||
private static final Subspace SUBSPACE = new Subspace(Tuple.from("test", "conflict_ranges"));
|
||||
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database db = fdb.open()) {
|
||||
snapshotReadShouldNotConflict(db);
|
||||
snapshotShouldNotAddConflictRange(db);
|
||||
|
|
|
@ -50,7 +50,7 @@ public class TupleTest {
|
|||
public static void main(String[] args) throws NoSuchFieldException {
|
||||
final int reps = 1000;
|
||||
try {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
addMethods();
|
||||
comparisons();
|
||||
emptyTuple();
|
||||
|
|
|
@ -32,7 +32,7 @@ import com.apple.foundationdb.tuple.Versionstamp;
|
|||
|
||||
public class VersionstampSmokeTest {
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database db = fdb.open()) {
|
||||
db.run(tr -> {
|
||||
tr.clear(Tuple.from("prefix").range());
|
||||
|
|
|
@ -34,7 +34,7 @@ import com.apple.foundationdb.Transaction;
|
|||
public class WatchTest {
|
||||
|
||||
public static void main(String[] args) {
|
||||
FDB fdb = FDB.selectAPIVersion(630);
|
||||
FDB fdb = FDB.selectAPIVersion(700);
|
||||
try(Database database = fdb.open(args[0])) {
|
||||
database.options().setLocationCacheSize(42);
|
||||
try(Transaction tr = database.createTransaction()) {
|
||||
|
|
|
@ -52,7 +52,7 @@ def get_api_version():
|
|||
|
||||
|
||||
def api_version(ver):
|
||||
header_version = 630
|
||||
header_version = 700
|
||||
|
||||
if '_version' in globals():
|
||||
if globals()['_version'] != ver:
|
||||
|
|
|
@ -253,7 +253,7 @@ def transactional(*tr_args, **tr_kwargs):
|
|||
@functools.wraps(func)
|
||||
def wrapper(*args, **kwargs):
|
||||
# We can't throw this from the decorator, as when a user runs
|
||||
# >>> import fdb ; fdb.api_version(630)
|
||||
# >>> import fdb ; fdb.api_version(700)
|
||||
# the code above uses @transactional before the API version is set
|
||||
if fdb.get_api_version() >= 630 and inspect.isgeneratorfunction(func):
|
||||
raise ValueError("Generators can not be wrapped with fdb.transactional")
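To make the check above concrete, here is a minimal sketch, not taken from this diff, of how `@fdb.transactional` is normally applied to a plain function; a generator function decorated the same way is exactly what the `ValueError` above rejects once the selected API version is 630 or later. The key and value used here are illustrative only.

```python
import fdb

fdb.api_version(700)
db = fdb.open()

@fdb.transactional
def set_value(tr, key, value):
    # A plain function: the decorator runs it in a transaction and retries on retryable errors.
    tr[key] = value

@fdb.transactional
def get_value(tr, key):
    return tr[key]

set_value(db, b'hello', b'world')
print(get_value(db, b'hello'))
```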
|
||||
|
|
|
@ -22,7 +22,7 @@ import fdb
|
|||
import sys
|
||||
|
||||
if __name__ == '__main__':
|
||||
fdb.api_version(630)
|
||||
fdb.api_version(700)
|
||||
|
||||
@fdb.transactional
|
||||
def setValue(tr, key, value):
|
||||
|
|
|
@ -15,8 +15,8 @@ EOF
|
|||
s.email = 'fdb-dist@apple.com'
|
||||
s.files = ["${CMAKE_SOURCE_DIR}/LICENSE", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdb.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdbdirectory.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdbimpl.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdblocality.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdboptions.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdbsubspace.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdbtuple.rb", "${CMAKE_CURRENT_SOURCE_DIR}/lib/fdbimpl_v609.rb"]
|
||||
s.homepage = 'https://www.foundationdb.org'
|
||||
s.license = 'Apache v2'
|
||||
s.add_dependency('ffi', '>= 1.1.5')
|
||||
s.license = 'Apache-2.0'
|
||||
s.add_dependency('ffi', '~> 1.1', '>= 1.1.5')
|
||||
s.required_ruby_version = '>= 1.9.3'
|
||||
s.requirements << 'These bindings require the FoundationDB client. The client can be obtained from https://www.foundationdb.org/download/.'
|
||||
end
|
||||
|
|
|
@ -15,8 +15,8 @@ EOF
|
|||
s.email = 'fdb-dist@apple.com'
|
||||
s.files = ["LICENSE", "lib/fdb.rb", "lib/fdbdirectory.rb", "lib/fdbimpl.rb", "lib/fdblocality.rb", "lib/fdboptions.rb", "lib/fdbsubspace.rb", "lib/fdbtuple.rb", "lib/fdbimpl_v609.rb"]
|
||||
s.homepage = 'https://www.foundationdb.org'
|
||||
s.license = 'Apache v2'
|
||||
s.add_dependency('ffi', '>= 1.1.5')
|
||||
s.license = 'Apache-2.0'
|
||||
s.add_dependency('ffi', '~> 1.1', '>= 1.1.5')
|
||||
s.required_ruby_version = '>= 1.9.3'
|
||||
s.requirements << 'These bindings require the FoundationDB client. The client can be obtained from https://www.foundationdb.org/download/.'
|
||||
end
|
||||
|
|
|
@ -36,7 +36,7 @@ module FDB
|
|||
end
|
||||
end
|
||||
def self.api_version(version)
|
||||
header_version = 630
|
||||
header_version = 700
|
||||
if self.is_api_version_selected?()
|
||||
if @@chosen_version != version
|
||||
raise "FDB API already loaded at version #{@@chosen_version}."
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#include <foundationdb/fdb_c.h>
|
||||
|
||||
int main(int argc, char* argv[]) {
|
||||
fdb_select_api_version(630);
|
||||
fdb_select_api_version(700);
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -65,7 +65,7 @@ then
|
|||
python setup.py install
|
||||
successOr "Installing python bindings failed"
|
||||
popd
|
||||
python -c 'import fdb; fdb.api_version(630)'
|
||||
python -c 'import fdb; fdb.api_version(700)'
|
||||
successOr "Loading python bindings failed"
|
||||
|
||||
# Test cmake and pkg-config integration: https://github.com/apple/foundationdb/issues/1483
|
||||
|
|
|
@ -0,0 +1,63 @@
|
|||
#!/usr/bin/env python
|
||||
import sys
|
||||
|
||||
|
||||
def main():
|
||||
if len(sys.argv) != 2:
|
||||
print("Usage: txt-to-toml.py [src.txt]")
|
||||
return 1
|
||||
|
||||
filename = sys.argv[1]
|
||||
|
||||
indent = " "
|
||||
in_workload = False
|
||||
first_test = False
|
||||
keys_before_test = False
|
||||
|
||||
for line in open(filename):
|
||||
k = ""
|
||||
v = ""
|
||||
|
||||
if line.strip().startswith(";"):
|
||||
print((indent if in_workload else "") + line.strip().replace(";", "#"))
|
||||
continue
|
||||
|
||||
if "=" in line:
|
||||
(k, v) = line.strip().split("=")
|
||||
(k, v) = (k.strip(), v.strip())
|
||||
|
||||
if k == "testTitle":
|
||||
first_test = True
|
||||
if in_workload:
|
||||
print("")
|
||||
in_workload = False
|
||||
if keys_before_test:
|
||||
print("")
|
||||
keys_before_test = False
|
||||
print("[[test]]")
|
||||
|
||||
if k == "testName":
|
||||
in_workload = True
|
||||
print("")
|
||||
print(indent + "[[test.workload]]")
|
||||
|
||||
if not first_test:
|
||||
keys_before_test = True
|
||||
|
||||
if v.startswith("."):
|
||||
v = "0" + v
|
||||
|
||||
if any(c.isalpha() or c in ["/", "!"] for c in v):
|
||||
if v != "true" and v != "false":
|
||||
v = "'" + v + "'"
|
||||
|
||||
if k == "buggify":
|
||||
print("buggify = " + ("true" if v == "'on'" else "false"))
|
||||
elif k:
|
||||
print((indent if in_workload else "") + k + " = " + v)
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
|
@ -16,6 +16,8 @@ function(configure_testing)
|
|||
set(no_tests YES)
|
||||
if(CONFIGURE_TESTING_ERROR_ON_ADDITIONAL_FILES)
|
||||
file(GLOB_RECURSE candidates "${CONFIGURE_TESTING_TEST_DIRECTORY}/*.txt")
|
||||
file(GLOB_RECURSE toml_candidates "${CONFIGURE_TESTING_TEST_DIRECTORY}/*.toml")
|
||||
list(APPEND candidates ${toml_candidates})
|
||||
foreach(candidate IN LISTS candidates)
|
||||
set(candidate_is_test YES)
|
||||
foreach(pattern IN LISTS CONFIGURE_TESTING_IGNORE_PATTERNS)
|
||||
|
@ -79,7 +81,7 @@ function(add_fdb_test)
|
|||
set(test_type "test")
|
||||
endif()
|
||||
list(GET ADD_FDB_TEST_TEST_FILES 0 first_file)
|
||||
string(REGEX REPLACE "^(.*)\\.txt$" "\\1" test_name ${first_file})
|
||||
string(REGEX REPLACE "^(.*)\\.(txt|toml)$" "\\1" test_name ${first_file})
|
||||
if("${test_name}" MATCHES "(-\\d)$")
|
||||
string(REGEX REPLACE "(.*)(-\\d)$" "\\1" test_name_1 ${test_name})
|
||||
message(STATUS "new testname ${test_name_1}")
|
||||
|
|
|
@ -210,6 +210,41 @@ else()
|
|||
# -mavx
|
||||
# -msse4.2)
|
||||
|
||||
# Tentatively re-enabling vector instructions
|
||||
set(USE_AVX512F OFF CACHE BOOL "Enable AVX 512F instructions")
|
||||
if (USE_AVX512F)
|
||||
if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^x86")
|
||||
add_compile_options(-mavx512f)
|
||||
elseif(USE_VALGRIND)
|
||||
message(STATUS "USE_VALGRIND=ON make USE_AVX OFF to satisfy valgrind analysis requirement")
|
||||
set(USE_AVX512F OFF)
|
||||
else()
|
||||
message(STATUS "USE_AVX512F is supported on x86 or x86_64 only")
|
||||
set(USE_AVX512F OFF)
|
||||
endif()
|
||||
endif()
|
||||
set(USE_AVX ON CACHE BOOL "Enable AVX instructions")
|
||||
if (USE_AVX)
|
||||
if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^x86")
|
||||
add_compile_options(-mavx)
|
||||
elseif(USE_VALGRIND)
|
||||
message(STATUS "USE_VALGRIND=ON make USE_AVX OFF to satisfy valgrind analysis requirement")
|
||||
set(USE_AVX OFF)
|
||||
else()
|
||||
message(STATUS "USE_AVX is supported on x86 or x86_64 only")
|
||||
set(USE_AVX OFF)
|
||||
endif()
|
||||
endif()
|
||||
|
||||
# Intentionally using builtin memcpy. G++ does a good job on small memcpy's when the size is known at runtime.
|
||||
# If the size is not known, then it falls back on the memcpy that's available at runtime (rte_memcpy, as of this
|
||||
# writing; see flow.cpp).
|
||||
#
|
||||
# The downside of the builtin memcpy is that it's slower at large copies, so if we spend a lot of time on large
|
||||
# copies of sizes that are known at compile time, this might not be a win. See the output of performance/memcpy
|
||||
# for more information.
|
||||
#add_compile_options(-fno-builtin-memcpy)
|
||||
|
||||
if (USE_VALGRIND)
|
||||
add_compile_options(-DVALGRIND=1 -DUSE_VALGRIND=1)
|
||||
endif()
|
||||
|
@ -254,6 +289,7 @@ else()
|
|||
-Wno-tautological-pointer-compare
|
||||
-Wredundant-move
|
||||
-Wpessimizing-move
|
||||
-Woverloaded-virtual
|
||||
-Wno-unknown-pragmas
|
||||
-Wno-unknown-warning-option
|
||||
-Wno-unused-function
|
||||
|
@ -273,7 +309,6 @@ else()
|
|||
endif()
|
||||
if (GCC)
|
||||
add_compile_options(-Wno-pragmas)
|
||||
|
||||
# Otherwise `state [[maybe_unused]] int x;` will issue a warning.
|
||||
# https://stackoverflow.com/questions/50646334/maybe-unused-on-member-variable-gcc-warns-incorrectly-that-attribute-is
|
||||
add_compile_options(-Wno-attributes)
|
||||
|
@ -287,6 +322,9 @@ else()
|
|||
-fvisibility=hidden
|
||||
-Wreturn-type
|
||||
-fPIC)
|
||||
if (CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^x86")
|
||||
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-Wclass-memaccess>)
|
||||
endif()
|
||||
if (GPERFTOOLS_FOUND AND GCC)
|
||||
add_compile_options(
|
||||
-fno-builtin-malloc
|
||||
|
|
|
@ -115,6 +115,14 @@ else()
|
|||
set(WITH_ROCKSDB_EXPERIMENTAL OFF)
|
||||
endif()
|
||||
|
||||
################################################################################
|
||||
# TOML11
|
||||
################################################################################
|
||||
|
||||
# TOML can download and install itself into the binary directory, so it should
|
||||
# always be available.
|
||||
find_package(TOML11)
|
||||
|
||||
################################################################################
|
||||
|
||||
file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/packages)
|
||||
|
|
|
@ -0,0 +1,28 @@
|
|||
find_path(TOML11_INCLUDE_DIR
|
||||
NAMES
|
||||
toml.hpp
|
||||
PATH_SUFFIXES
|
||||
include
|
||||
toml11
|
||||
include/toml11
|
||||
HINTS
|
||||
"${_TOML11_HINTS}"
|
||||
)
|
||||
|
||||
if (NOT TOML11_INCLUDE_DIR)
|
||||
include(ExternalProject)
|
||||
|
||||
ExternalProject_add(toml11
|
||||
URL "https://github.com/ToruNiina/toml11/archive/v3.4.0.tar.gz"
|
||||
URL_HASH SHA256=bc6d733efd9216af8c119d8ac64a805578c79cc82b813e4d1d880ca128bd154d
|
||||
CMAKE_CACHE_ARGS
|
||||
-DCMAKE_INSTALL_PREFIX:PATH=${CMAKE_CURRENT_BINARY_DIR}/toml11
|
||||
-Dtoml11_BUILD_TEST:BOOL=OFF)
|
||||
|
||||
set(TOML11_INCLUDE_DIR "${CMAKE_CURRENT_BINARY_DIR}/toml11/include")
|
||||
endif()
|
||||
|
||||
find_package_handle_standard_args(TOML11
|
||||
REQUIRED_VARS
|
||||
TOML11_INCLUDE_DIR
|
||||
)
|
|
@ -17,12 +17,17 @@ AUDITLOG="${AUDITLOG:-/tmp/audit-cluster.log}"
|
|||
status=0
|
||||
messagetime=0
|
||||
messagecount=0
|
||||
let index2="${RANDOM} % 256"
|
||||
let index3="${RANDOM} % 256"
|
||||
let index4="(${RANDOM} % 255) + 1"
|
||||
let FDBPORT="(${RANDOM} % 1000) + ${FDBPORTSTART}"
|
||||
|
||||
# Define a random ip address and port on localhost
|
||||
IPADDRESS="127.${index2}.${index3}.${index4}"
|
||||
if [ -z ${IPADDRESS} ]; then
|
||||
let index2="${RANDOM} % 256"
|
||||
let index3="${RANDOM} % 256"
|
||||
let index4="(${RANDOM} % 255) + 1"
|
||||
IPADDRESS="127.${index2}.${index3}.${index4}"
|
||||
fi
|
||||
if [ -z ${FDBPORT} ]; then
|
||||
let FDBPORT="(${RANDOM} % 1000) + ${FDBPORTSTART}"
|
||||
fi
|
||||
CLUSTERSTRING="${IPADDRESS}:${FDBPORT}"
|
||||
|
||||
|
||||
|
@ -231,12 +236,17 @@ function startFdbServer
|
|||
log 'Failed to display user message'
|
||||
let status="${status} + 1"
|
||||
|
||||
elif ! "${BINDIR}/fdbserver" --knob_disable_posix_kernel_aio=1 -C "${FDBCONF}" -p "${CLUSTERSTRING}" -L "${LOGDIR}" -d "${WORKDIR}/fdb/${$}" &> "${LOGDIR}/fdbserver.log" &
|
||||
then
|
||||
log "Failed to start FDB Server"
|
||||
let status="${status} + 1"
|
||||
else
|
||||
FDBSERVERID="${!}"
|
||||
"${BINDIR}/fdbserver" --knob_disable_posix_kernel_aio=1 -C "${FDBCONF}" -p "${CLUSTERSTRING}" -L "${LOGDIR}" -d "${WORKDIR}/fdb/${$}" &> "${LOGDIR}/fdbserver.log" &
|
||||
fdbpid=$!
|
||||
fdbrc=$?
|
||||
if [ $fdbrc -ne 0 ]
|
||||
then
|
||||
log "Failed to start FDB Server"
|
||||
let status="${status} + 1"
|
||||
else
|
||||
FDBSERVERID="${fdbpid}"
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ -z "${FDBSERVERID}" ]; then
|
||||
|
|
|
@ -3,7 +3,7 @@ set(SRCS
|
|||
Properties/AssemblyInfo.cs)
|
||||
|
||||
set(TEST_HARNESS_REFERENCES
|
||||
"-r:System,System.Core,System.Xml.Linq,System.Data.DataSetExtensions,Microsoft.CSharp,System.Data,System.Xml,${TraceLogHelperDll}")
|
||||
"-r:System,System.Core,System.Xml.Linq,System.Data.DataSetExtensions,Microsoft.CSharp,System.Data,System.Xml,System.Runtime.Serialization,${TraceLogHelperDll}")
|
||||
|
||||
set(out_file ${CMAKE_BINARY_DIR}/packages/bin/TestHarness.exe)
|
||||
|
||||
|
|
|
@ -29,6 +29,7 @@ using System.Diagnostics;
|
|||
using System.ComponentModel;
|
||||
using System.Runtime.InteropServices;
|
||||
using System.Xml;
|
||||
using System.Runtime.Serialization.Json;
|
||||
|
||||
namespace SummarizeTest
|
||||
{
|
||||
|
@ -366,20 +367,22 @@ namespace SummarizeTest
|
|||
{
|
||||
ErrorOutputListener errorListener = new ErrorOutputListener();
|
||||
process.StartInfo.UseShellExecute = false;
|
||||
string tlsPluginArg = "";
|
||||
if (tlsPluginFile.Length > 0) {
|
||||
process.StartInfo.EnvironmentVariables["FDB_TLS_PLUGIN"] = tlsPluginFile;
|
||||
tlsPluginArg = "--tls_plugin=" + tlsPluginFile;
|
||||
}
|
||||
process.StartInfo.RedirectStandardOutput = true;
|
||||
var args = "";
|
||||
if (willRestart && oldBinaryName.EndsWith("alpha6"))
|
||||
{
|
||||
args = string.Format("-Rs 1000000000 -r simulation {0} -s {1} -f \"{2}\" -b {3} --tls_plugin={4} --crash",
|
||||
IsRunningOnMono() ? "" : "-q", seed, testFile, buggify ? "on" : "off", tlsPluginFile);
|
||||
args = string.Format("-Rs 1000000000 -r simulation {0} -s {1} -f \"{2}\" -b {3} {4} --crash",
|
||||
IsRunningOnMono() ? "" : "-q", seed, testFile, buggify ? "on" : "off", tlsPluginArg);
|
||||
}
|
||||
else
|
||||
{
|
||||
args = string.Format("-Rs 1GB -r simulation {0} -s {1} -f \"{2}\" -b {3} --tls_plugin={4} --crash",
|
||||
IsRunningOnMono() ? "" : "-q", seed, testFile, buggify ? "on" : "off", tlsPluginFile);
|
||||
args = string.Format("-Rs 1GB -r simulation {0} -s {1} -f \"{2}\" -b {3} {4} --crash",
|
||||
IsRunningOnMono() ? "" : "-q", seed, testFile, buggify ? "on" : "off", tlsPluginArg);
|
||||
}
|
||||
if (restarting) args = args + " --restarting";
|
||||
if (useValgrind && !willRestart)
|
||||
|
@ -484,7 +487,7 @@ namespace SummarizeTest
|
|||
memCheckThread.Join();
|
||||
consoleThread.Join();
|
||||
|
||||
var traceFiles = Directory.GetFiles(tempPath, "trace*.xml");
|
||||
var traceFiles = Directory.GetFiles(tempPath, "trace*.*").Where(s => s.EndsWith(".xml") || s.EndsWith(".json")).ToArray();
|
||||
if (traceFiles.Length == 0)
|
||||
{
|
||||
if (!traceToStdout)
|
||||
|
@ -665,6 +668,10 @@ namespace SummarizeTest
|
|||
return whats.ToArray();
|
||||
}
|
||||
|
||||
delegate IEnumerable<Magnesium.Event> parseDelegate(System.IO.Stream stream, string file,
|
||||
bool keepOriginalElement = false, double startTime = -1, double endTime = Double.MaxValue,
|
||||
double samplingFactor = 1.0);
|
||||
|
||||
static int Summarize(string[] traceFiles, string summaryFileName,
|
||||
string errorFileName, bool? killed, List<string> outputErrors, int? exitCode, long? peakMemory,
|
||||
string uid, string valgrindOutputFileName, int expectedUnseed, out int unseed, out bool retryableError, bool logOnRetryableError,
|
||||
|
@ -696,7 +703,12 @@ namespace SummarizeTest
|
|||
{
|
||||
try
|
||||
{
|
||||
foreach (var ev in Magnesium.XmlParser.Parse(traceFile, traceFileName))
|
||||
parseDelegate parse;
|
||||
if (traceFileName.EndsWith(".json"))
|
||||
parse = Magnesium.JsonParser.Parse;
|
||||
else
|
||||
parse = Magnesium.XmlParser.Parse;
|
||||
foreach (var ev in parse(traceFile, traceFileName))
|
||||
{
|
||||
Magnesium.Severity newSeverity;
|
||||
if (severityMap.TryGetValue(new KeyValuePair<string, Magnesium.Severity>(ev.Type, ev.Severity), out newSeverity))
|
||||
|
@ -1096,10 +1108,20 @@ namespace SummarizeTest
|
|||
|
||||
private static void AppendToSummary(string summaryFileName, XElement xout, bool traceToStdout = false, bool shouldLock = true)
|
||||
{
|
||||
bool useXml = true;
|
||||
if (summaryFileName != null && summaryFileName.EndsWith(".json")) {
|
||||
useXml = false;
|
||||
}
|
||||
|
||||
if (traceToStdout)
|
||||
{
|
||||
using (var wr = System.Xml.XmlWriter.Create(Console.OpenStandardOutput(), new System.Xml.XmlWriterSettings() { OmitXmlDeclaration = true, Encoding = new System.Text.UTF8Encoding(false) }))
|
||||
xout.WriteTo(wr);
|
||||
if (useXml) {
|
||||
using (var wr = System.Xml.XmlWriter.Create(Console.OpenStandardOutput(), new System.Xml.XmlWriterSettings() { OmitXmlDeclaration = true, Encoding = new System.Text.UTF8Encoding(false) }))
|
||||
xout.WriteTo(wr);
|
||||
} else {
|
||||
using (var wr = System.Runtime.Serialization.Json.JsonReaderWriterFactory.CreateJsonWriter(Console.OpenStandardOutput()))
|
||||
xout.WriteTo(wr);
|
||||
}
|
||||
Console.WriteLine();
|
||||
return;
|
||||
}
|
||||
|
@ -1110,7 +1132,6 @@ namespace SummarizeTest
|
|||
takeLock(summaryFileName);
|
||||
try
|
||||
{
|
||||
|
||||
using (var f = System.IO.File.Open(summaryFileName, System.IO.FileMode.Append, System.IO.FileAccess.Write))
|
||||
{
|
||||
if (f.Length == 0)
|
||||
|
@ -1118,8 +1139,13 @@ namespace SummarizeTest
|
|||
byte[] bytes = Encoding.UTF8.GetBytes("<Trace>");
|
||||
f.Write(bytes, 0, bytes.Length);
|
||||
}
|
||||
using (var wr = System.Xml.XmlWriter.Create(f, new System.Xml.XmlWriterSettings() { OmitXmlDeclaration = true }))
|
||||
xout.Save(wr);
|
||||
if (useXml) {
|
||||
using (var wr = System.Xml.XmlWriter.Create(f, new System.Xml.XmlWriterSettings() { OmitXmlDeclaration = true }))
|
||||
xout.Save(wr);
|
||||
} else {
|
||||
using (var wr = System.Runtime.Serialization.Json.JsonReaderWriterFactory.CreateJsonWriter(f))
|
||||
xout.WriteTo(wr);
|
||||
}
|
||||
var endl = Encoding.UTF8.GetBytes(Environment.NewLine);
|
||||
f.Write(endl, 0, endl.Length);
|
||||
}
|
||||
|
@ -1130,6 +1156,7 @@ namespace SummarizeTest
|
|||
releaseLock(summaryFileName);
|
||||
}
|
||||
}
|
||||
|
||||
private static void AppendXmlMessageToSummary(string summaryFileName, XElement xout, bool traceToStdout = false, string testFile = null,
|
||||
int? seed = null, bool? buggify = null, bool? determinismCheck = null, string oldBinaryName = null)
|
||||
{
|
||||
|
|
|
@ -51,7 +51,7 @@ namespace Magnesium
|
|||
}
|
||||
catch (Exception e)
|
||||
{
|
||||
throw new Exception(string.Format("Failed to parse {0}", root), e);
|
||||
throw new Exception(string.Format("Failed to parse JSON {0}", root), e);
|
||||
}
|
||||
if (ev != null) yield return ev;
|
||||
}
|
||||
|
@ -81,7 +81,7 @@ namespace Magnesium
|
|||
DDetails = xEvent.Elements()
|
||||
.Where(a=>a.Name != "Type" && a.Name != "Time" && a.Name != "Machine" && a.Name != "ID" && a.Name != "Severity" && (!rolledEvent || a.Name != "OriginalTime"))
|
||||
.ToDictionary(a=>string.Intern(a.Name.LocalName), a=>(object)a.Value),
|
||||
original = keepOriginalElement ? xEvent : null,
|
||||
original = keepOriginalElement ? xEvent : null
|
||||
};
|
||||
}
|
||||
|
||||
|
|
|
@ -53,7 +53,7 @@ namespace Magnesium
|
|||
}
|
||||
catch (Exception e)
|
||||
{
|
||||
throw new Exception(string.Format("Failed to parse {0}", xev), e);
|
||||
throw new Exception(string.Format("Failed to parse XML {0}", xev), e);
|
||||
}
|
||||
if (ev != null) yield return ev;
|
||||
}
|
||||
|
|
|
@ -24,22 +24,22 @@ def parse_args():
|
|||
# (e)nd of a span with a better given name
|
||||
locationToPhase = {
|
||||
"NativeAPI.commit.Before": [],
|
||||
"MasterProxyServer.batcher": [("b", "Commit")],
|
||||
"MasterProxyServer.commitBatch.Before": [],
|
||||
"MasterProxyServer.commitBatch.GettingCommitVersion": [("b", "CommitVersion")],
|
||||
"MasterProxyServer.commitBatch.GotCommitVersion": [("e", "CommitVersion")],
|
||||
"CommitProxyServer.batcher": [("b", "Commit")],
|
||||
"CommitProxyServer.commitBatch.Before": [],
|
||||
"CommitProxyServer.commitBatch.GettingCommitVersion": [("b", "CommitVersion")],
|
||||
"CommitProxyServer.commitBatch.GotCommitVersion": [("e", "CommitVersion")],
|
||||
"Resolver.resolveBatch.Before": [("b", "Resolver.PipelineWait")],
|
||||
"Resolver.resolveBatch.AfterQueueSizeCheck": [],
|
||||
"Resolver.resolveBatch.AfterOrderer": [("e", "Resolver.PipelineWait"), ("b", "Resolver.Conflicts")],
|
||||
"Resolver.resolveBatch.After": [("e", "Resolver.Conflicts")],
|
||||
"MasterProxyServer.commitBatch.AfterResolution": [("b", "Proxy.Processing")],
|
||||
"MasterProxyServer.commitBatch.ProcessingMutations": [],
|
||||
"MasterProxyServer.commitBatch.AfterStoreCommits": [("e", "Proxy.Processing")],
|
||||
"CommitProxyServer.commitBatch.AfterResolution": [("b", "Proxy.Processing")],
|
||||
"CommitProxyServer.commitBatch.ProcessingMutations": [],
|
||||
"CommitProxyServer.commitBatch.AfterStoreCommits": [("e", "Proxy.Processing")],
|
||||
"TLog.tLogCommit.BeforeWaitForVersion": [("b", "TLog.PipelineWait")],
|
||||
"TLog.tLogCommit.Before": [("e", "TLog.PipelineWait")],
|
||||
"TLog.tLogCommit.AfterTLogCommit": [("b", "TLog.FSync")],
|
||||
"TLog.tLogCommit.After": [("e", "TLog.FSync")],
|
||||
"MasterProxyServer.commitBatch.AfterLogPush": [("e", "Commit")],
|
||||
"CommitProxyServer.commitBatch.AfterLogPush": [("e", "Commit")],
|
||||
"NativeAPI.commit.After": [],
|
||||
}
|
||||
|
||||
|
|
|
@ -0,0 +1,52 @@
# fdbcstat
`fdbcstat` is a FoundationDB client monitoring tool which collects and displays transaction operation statistics inside the C API library (`libfdb_c.so`).

## How it works
`fdbcstat` uses [eBPF/bcc](https://github.com/iovisor/bcc) to attach probes to the `libfdb_c.so` shared library that collect statistics in several common `fdb_transaction_*` calls, then it periodically displays the aggregated statistics.
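The sketch below is only an illustration of the bcc mechanism the tool relies on; the full implementation is the `fdbcstat` script itself, and the library path is just an example:

```python
from bcc import BPF

# Minimal illustration of the probe setup fdbcstat performs: an entry uprobe on an
# fdb_transaction_* call and a return uprobe on fdb_future_block_until_ready.
program = r"""
#include <uapi/linux/ptrace.h>
int on_entry(struct pt_regs *ctx)  { bpf_trace_printk("get entry\n");    return 0; }
int on_return(struct pt_regs *ctx) { bpf_trace_printk("future ready\n"); return 0; }
"""
b = BPF(text=program)
b.attach_uprobe(name="/usr/lib64/libfdb_c.so", sym="fdb_transaction_get", fn_name="on_entry")
b.attach_uretprobe(name="/usr/lib64/libfdb_c.so", sym="fdb_future_block_until_ready", fn_name="on_return")
b.trace_print()  # stream raw trace lines; fdbcstat aggregates per-call latencies instead
```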

## How to use

### Syntax
`fdbcstat <full path to libfdb_c.so> <options...>`

### Options
- `-p` or `--pid` : Only capture statistics for the functions called by the specified process
- `-i` or `--interval` : Specify the time interval in seconds between two outputs (Default: 1)
- `-d` or `--duration` : Specify the total duration in seconds `fdbcstat` will run (Default: Unset / Forever)
- `-f` or `--functions` : Specify the comma-separated list of functions to monitor (Default: Unset / All supported functions)

### Supported Functions
- get
- get_range
- get_read_version
- set
- clear
- clear_range
- commit

### Examples
##### Collect all statistics and display every second
`fdbcstat /usr/lib64/libfdb_c.so`
##### Collect all statistics for PID 12345 for 60 seconds with 10 second interval
`fdbcstat /usr/lib64/libfdb_c.so -p 12345 -d 60 -i 10`
##### Collect statistics only for get and commit
`fdbcstat /usr/lib64/libfdb_c.so -f get,commit`

## Output Format
Each line contains multiple fields. The first field is the timestamp. The other fields are the statistics for each operation. Each operation field contains the following statistics in a slash (/) separated format.

- Function
- Number of calls per second
- Average latency in microseconds (us)
- Maximum latency in microseconds (us)

**Note**: The latency is computed as the time difference between the start time and the end time of the `fdb_transaction_*` function call, except for `get`, `get_range`, `get_read_version` and `commit`. For those 4 functions, the latency is the time difference between the start time of the function and the end time of the following `fdb_future_block_until_ready` call.
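
Purely as an illustration of this format, the snippet below splits one field taken from the sample output that follows:

```python
# Illustrative only: decode a single fdbcstat output field.
field = "commit/18290/859/15977"
op, calls_per_sec, avg_us, max_us = field.split("/")
print(f"{op}: {calls_per_sec} calls/s, {avg_us} us avg, {max_us} us max")
```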

## Sample Output
```
...
15:05:31 clear/22426/2/34 commit/18290/859/15977 get/56230/1110/12748 get_range/14141/23/75 set/6276/3/19
15:05:41 clear/24147/2/38 commit/18259/894/44259 get/57978/1098/15636 get_range/13171/23/90 set/6564/3/15
15:05:51 clear/21287/2/34 commit/18386/876/17824 get/58318/1106/30539 get_range/13018/23/68 set/6559/3/13
...
```
|
|
@ -0,0 +1,304 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
from __future__ import print_function
|
||||
from bcc import BPF
|
||||
from time import sleep, strftime, time
|
||||
import argparse
|
||||
import signal
|
||||
|
||||
description = """The fdbcstat utility displays FDB C API statistics on terminal
|
||||
that include calls-per-second, average latency and maximum latency
|
||||
within the given time interval.
|
||||
|
||||
Each field in the output represents the following elements
|
||||
in a slash-separated format:
|
||||
- Operation type
|
||||
- Number of calls per second
|
||||
- Average latency in microseconds (us)
|
||||
- Maximum latency in microseconds (us)
|
||||
"""
|
||||
|
||||
# supported APIs
|
||||
# note: the array index is important here.
|
||||
# it's used in BPF as the function identifier.
|
||||
# 0: get
|
||||
# 1: get_range
|
||||
# 2: get_read_version
|
||||
# 3: set
|
||||
# 4: clear
|
||||
# 5: clear_range
|
||||
# 6: commit
|
||||
fdbfuncs = [
|
||||
{ "name":"get", "waitfuture":True, "enabled":True },
|
||||
{ "name":"get_range", "waitfuture":True, "enabled":True },
|
||||
{ "name":"get_read_version", "waitfuture":True, "enabled":True },
|
||||
{ "name":"set", "waitfuture":False, "enabled":True },
|
||||
{ "name":"clear", "waitfuture":False, "enabled":True },
|
||||
{ "name":"clear_range", "waitfuture":False, "enabled":True },
|
||||
{ "name":"commit", "waitfuture":True, "enabled":True }
|
||||
]
|
||||
|
||||
# arguments
|
||||
parser = argparse.ArgumentParser(
|
||||
description="FoundationDB client statistics collector",
|
||||
formatter_class=argparse.RawTextHelpFormatter,
|
||||
epilog=description)
|
||||
parser.add_argument("-p", "--pid", type=int,
|
||||
help="Capture for this PID only")
|
||||
parser.add_argument("-i", "--interval", type=int,
|
||||
help="Print interval in seconds (Default: 1 second)")
|
||||
parser.add_argument("-d", "--duration", type=int,
|
||||
help="Duration in seconds (Default: unset)")
|
||||
parser.add_argument("-f", "--functions", type=str,
|
||||
help='''Capture for specific functions (comma-separated) (Default: unset)
|
||||
Supported functions: get, get_range, get_read_version,
|
||||
set, clear, clear_range, commit''')
|
||||
parser.add_argument("libpath",
|
||||
help="Full path to libfdb_c.so")
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.interval:
|
||||
args.interval = 1
|
||||
|
||||
if args.functions:
|
||||
# reset all
|
||||
idx=0
|
||||
while idx < len(fdbfuncs):
|
||||
fdbfuncs[idx]['enabled'] = False
|
||||
idx += 1
|
||||
|
||||
# enable specified functions
|
||||
for f in args.functions.split(','):
|
||||
idx=0
|
||||
while idx < len(fdbfuncs):
|
||||
if fdbfuncs[idx]['name'] == f:
|
||||
fdbfuncs[idx]['enabled'] = True
|
||||
idx += 1
|
||||
|
||||
# check for libfdb_c.so
|
||||
libpath = BPF.find_library(args.libpath) or BPF.find_exe(args.libpath)
|
||||
if libpath is None:
|
||||
print("Error: Can't find %s" % args.libpath)
|
||||
exit(1)
|
||||
|
||||
# main BPF program
|
||||
# we do not rely on PT_REGS_IP() and BPF.sym() to retrieve the symbol name
# because some "backward-compatible" symbols do not get resolved through BPF.sym().
|
||||
bpf_text = """
|
||||
#include <uapi/linux/ptrace.h>
|
||||
|
||||
typedef struct _stats_key_t {
|
||||
u32 pid;
|
||||
u32 func;
|
||||
} stats_key_t;
|
||||
|
||||
typedef struct _stats_val_t {
|
||||
u64 cnt;
|
||||
u64 total;
|
||||
u64 max;
|
||||
} stats_val_t;
|
||||
|
||||
BPF_HASH(starttime, u32, u64);
|
||||
BPF_HASH(startfunc, u32, u32);
|
||||
BPF_HASH(stats, stats_key_t, stats_val_t);
|
||||
|
||||
static int trace_common_entry(struct pt_regs *ctx, u32 func)
|
||||
{
|
||||
u64 pid_tgid = bpf_get_current_pid_tgid();
|
||||
u32 pid = pid_tgid; /* lower 32-bit = Process ID (Thread ID) */
|
||||
u32 tgid = pid_tgid >> 32; /* upper 32-bit = Thread Group ID (Process ID) */
|
||||
|
||||
/* if PID is specified, we'll filter by tgid here */
|
||||
FILTERPID
|
||||
|
||||
/* start time in ns */
|
||||
u64 ts = bpf_ktime_get_ns();
|
||||
|
||||
/* function type */
|
||||
u32 f = func;
|
||||
startfunc.update(&pid, &f);
|
||||
|
||||
/* update start time */
|
||||
starttime.update(&pid, &ts);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int trace_get_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 0);
|
||||
}
|
||||
|
||||
int trace_get_range_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 1);
|
||||
}
|
||||
|
||||
int trace_get_read_version_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 2);
|
||||
}
|
||||
|
||||
int trace_set_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 3);
|
||||
}
|
||||
|
||||
int trace_clear_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 4);
|
||||
}
|
||||
|
||||
int trace_clear_range_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 5);
|
||||
}
|
||||
|
||||
int trace_commit_entry(struct pt_regs *ctx)
|
||||
{
|
||||
return trace_common_entry(ctx, 6);
|
||||
}
|
||||
|
||||
int trace_func_return(struct pt_regs *ctx)
|
||||
{
|
||||
u64 *st; /* start time */
|
||||
u64 duration;
|
||||
u64 pid_tgid = bpf_get_current_pid_tgid();
|
||||
u32 pid = pid_tgid;
|
||||
u32 tgid = pid_tgid >> 32;
|
||||
|
||||
/* if PID is specified, we'll filter by tgid here */
|
||||
FILTERPID
|
||||
|
||||
/* calculate duration in ns */
|
||||
st = starttime.lookup(&pid);
|
||||
if (!st || st == 0) {
|
||||
return 0; /* missed start */
|
||||
}
|
||||
/* duration in ns */
|
||||
duration = bpf_ktime_get_ns() - *st;
|
||||
starttime.delete(&pid);
|
||||
|
||||
/* update stats */
|
||||
u32 func, *funcp = startfunc.lookup(&pid);
|
||||
if (funcp) {
|
||||
func = *funcp;
|
||||
stats_key_t key;
|
||||
stats_val_t *prev;
|
||||
stats_val_t cur;
|
||||
key.pid = pid; /* pid here is the thread ID in user space */
|
||||
key.func = func;
|
||||
prev = stats.lookup(&key);
|
||||
if (prev) {
|
||||
cur.cnt = prev->cnt + 1;
|
||||
cur.total = prev->total + duration;
|
||||
cur.max = (duration > prev->max) ? duration : prev->max;
|
||||
stats.update(&key, &cur);
|
||||
} else {
|
||||
cur.cnt = 1;
|
||||
cur.total = duration;
|
||||
cur.max = duration;
|
||||
stats.insert(&key, &cur);
|
||||
}
|
||||
startfunc.delete(&pid);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
"""
|
||||
|
||||
# If PID is specified, insert the PID filter
|
||||
if args.pid:
|
||||
bpf_text = bpf_text.replace('FILTERPID',
|
||||
'if (tgid != %d) { return 0; }' % args.pid)
|
||||
else:
|
||||
bpf_text = bpf_text.replace('FILTERPID', '')
|
||||
|
||||
# signal handler
|
||||
def signal_ignore(signal, frame):
|
||||
pass
|
||||
|
||||
# load BPF program
|
||||
b = BPF(text=bpf_text)
|
||||
|
||||
# attach probes
|
||||
waitfuture = False
|
||||
for f in fdbfuncs:
|
||||
|
||||
# skip disabled functions
|
||||
if not f['enabled']:
|
||||
continue
|
||||
|
||||
# attach the entry point
|
||||
b.attach_uprobe(name=libpath, sym='fdb_transaction_'+f['name'],
|
||||
fn_name='trace_' + f['name'] + '_entry', pid=args.pid or -1)
|
||||
if f['waitfuture']:
|
||||
waitfuture = True
|
||||
else:
|
||||
b.attach_uretprobe(name=libpath, sym='fdb_transaction_'+f['name'],
|
||||
fn_name="trace_func_return", pid=args.pid or -1)
|
||||
if waitfuture:
|
||||
b.attach_uretprobe(name=libpath, sym='fdb_future_block_until_ready',
|
||||
fn_name="trace_func_return", pid=args.pid or -1)
|
||||
|
||||
# open uprobes
|
||||
matched = b.num_open_uprobes()
|
||||
|
||||
if matched == 0:
|
||||
print("0 functions matched... Exiting.")
|
||||
exit()
|
||||
|
||||
stats = b.get_table("stats")
|
||||
|
||||
# aggregated stats dictionary
|
||||
agg = {}
|
||||
|
||||
exiting = 0
|
||||
seconds = 0
|
||||
prev = 0.0
|
||||
now = 0.0
|
||||
|
||||
# main loop
|
||||
while (1):
|
||||
try:
|
||||
sleep(args.interval)
|
||||
seconds += args.interval
|
||||
prev = now
|
||||
now = time()
|
||||
if prev == 0:
|
||||
stats.clear()
|
||||
continue
|
||||
except KeyboardInterrupt:
|
||||
exiting = 1
|
||||
signal.signal(signal.SIGINT, signal_ignore)
|
||||
|
||||
if args.duration and seconds >= args.duration:
|
||||
exiting = 1
|
||||
|
||||
# walk through the stats and aggregate by the functions
|
||||
for k,v in stats.items():
|
||||
f = fdbfuncs[k.func]['name']
|
||||
if f in agg:
|
||||
# update an existing entry
|
||||
agg[f]['cnt'] = agg[f]['cnt'] + v.cnt
|
||||
agg[f]['total'] = agg[f]['total'] + v.total
if v.max > agg[f]['max']:
    agg[f]['max'] = v.max
|
||||
else:
|
||||
# insert a new entry
|
||||
agg[f] = {'cnt':v.cnt, 'total':v.total, 'max':v.max}
|
||||
|
||||
# print out aggregated stats
|
||||
print("%-8s " % (strftime("%H:%M:%S")), end="", flush=True)
|
||||
for f in sorted(agg):
|
||||
print("%s/%d/%d/%d " % (f,
|
||||
agg[f]['cnt'] / (now - prev),
|
||||
agg[f]['total']/agg[f]['cnt'] / 1000, # us
|
||||
agg[f]['max'] / 1000), # us
|
||||
end="")
|
||||
print()
|
||||
|
||||
stats.clear()
|
||||
agg.clear()
|
||||
|
||||
if exiting:
|
||||
exit()
|
contrib/transaction_profiling_analyzer/transaction_profiling_analyzer.py (Executable file → Normal file)
|
@ -39,7 +39,9 @@ from json import JSONEncoder
|
|||
import logging
|
||||
import struct
|
||||
from bisect import bisect_left
|
||||
from bisect import bisect_right
|
||||
import time
|
||||
import datetime
|
||||
|
||||
PROTOCOL_VERSION_5_2 = 0x0FDB00A552000001
|
||||
PROTOCOL_VERSION_6_0 = 0x0FDB00A570010001
|
||||
|
@ -96,6 +98,9 @@ class ByteBuffer(object):
|
|||
def get_double(self):
|
||||
return struct.unpack("<d", self.get_bytes(8))[0]
|
||||
|
||||
def get_bool(self):
|
||||
return struct.unpack("<?", self.get_bytes(1))[0]
|
||||
|
||||
def get_bytes_with_length(self):
|
||||
length = self.get_int()
|
||||
return self.get_bytes(length)
|
||||
|
@ -221,6 +226,8 @@ class CommitInfo(BaseInfo):
|
|||
self.mutations = mutations
|
||||
|
||||
self.read_snapshot_version = bb.get_long()
|
||||
if protocol_version >= PROTOCOL_VERSION_6_3:
|
||||
self.report_conflicting_keys = bb.get_bool()
|
||||
|
||||
|
||||
class ErrorGetInfo(BaseInfo):
|
||||
|
@ -253,7 +260,8 @@ class ErrorCommitInfo(BaseInfo):
|
|||
self.mutations = mutations
|
||||
|
||||
self.read_snapshot_version = bb.get_long()
|
||||
|
||||
if protocol_version >= PROTOCOL_VERSION_6_3:
|
||||
self.report_conflicting_keys = bb.get_bool()
|
||||
|
||||
class UnsupportedProtocolVersionError(Exception):
|
||||
def __init__(self, protocol_version):
|
||||
|
@ -414,7 +422,7 @@ class TransactionInfoLoader(object):
|
|||
else:
|
||||
end_key = self.client_latency_end_key_selector
|
||||
|
||||
valid_transaction_infos = 0
|
||||
transaction_infos = 0
|
||||
invalid_transaction_infos = 0
|
||||
|
||||
def build_client_transaction_info(v):
|
||||
|
@ -446,11 +454,12 @@ class TransactionInfoLoader(object):
|
|||
info = build_client_transaction_info(v)
|
||||
if info.has_types():
|
||||
buffer.append(info)
|
||||
valid_transaction_infos += 1
|
||||
except UnsupportedProtocolVersionError as e:
|
||||
invalid_transaction_infos += 1
|
||||
except ValueError:
|
||||
invalid_transaction_infos += 1
|
||||
|
||||
transaction_infos += 1
|
||||
else:
|
||||
if chunk_num == 1:
|
||||
# first chunk
|
||||
|
@ -476,14 +485,15 @@ class TransactionInfoLoader(object):
|
|||
info = build_client_transaction_info(b''.join([chunk.value for chunk in c_list]))
|
||||
if info.has_types():
|
||||
buffer.append(info)
|
||||
valid_transaction_infos += 1
|
||||
except UnsupportedProtocolVersionError as e:
|
||||
invalid_transaction_infos += 1
|
||||
except ValueError:
|
||||
invalid_transaction_infos += 1
|
||||
|
||||
transaction_infos += 1
|
||||
self._check_and_adjust_chunk_cache_size()
|
||||
if (valid_transaction_infos + invalid_transaction_infos) % 1000 == 0:
|
||||
print("Processed valid: %d, invalid: %d" % (valid_transaction_infos, invalid_transaction_infos))
|
||||
if transaction_infos % 1000 == 0:
|
||||
print("Processed %d transactions, %d invalid" % (transaction_infos, invalid_transaction_infos))
|
||||
if found == 0:
|
||||
more = False
|
||||
except fdb.FDBError as e:
|
||||
|
@ -495,13 +505,15 @@ class TransactionInfoLoader(object):
|
|||
for item in buffer:
|
||||
yield item
|
||||
|
||||
print("Processed %d transactions, %d invalid\n" % (transaction_infos, invalid_transaction_infos))
|
||||
|
||||
|
||||
def has_sortedcontainers():
|
||||
try:
|
||||
import sortedcontainers
|
||||
return True
|
||||
except ImportError:
|
||||
logger.warn("Can't find sortedcontainers so disabling RangeCounter")
|
||||
logger.warn("Can't find sortedcontainers so disabling ReadCounter")
|
||||
return False
|
||||
|
||||
|
||||
|
@ -513,155 +525,200 @@ def has_dateparser():
|
|||
logger.warn("Can't find dateparser so disabling human date parsing")
|
||||
return False
|
||||
|
||||
|
||||
class RangeCounter(object):
|
||||
def __init__(self, k):
|
||||
self.k = k
|
||||
class ReadCounter(object):
|
||||
def __init__(self):
|
||||
from sortedcontainers import SortedDict
|
||||
self.ranges = SortedDict()
|
||||
self.reads = SortedDict()
|
||||
self.reads[b''] = [0, 0]
|
||||
|
||||
self.read_counts = {}
|
||||
self.hit_count = 0
|
||||
|
||||
def process(self, transaction_info):
|
||||
for get in transaction_info.gets:
|
||||
self._insert_read(get.key, None)
|
||||
for get_range in transaction_info.get_ranges:
|
||||
self._insert_range(get_range.key_range.start_key, get_range.key_range.end_key)
|
||||
self._insert_read(get_range.key_range.start_key, get_range.key_range.end_key)
|
||||
|
||||
def _insert_range(self, start_key, end_key):
|
||||
keys = self.ranges.keys()
|
||||
if len(keys) == 0:
|
||||
self.ranges[start_key] = end_key, 1
|
||||
return
|
||||
def _insert_read(self, start_key, end_key):
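# record one read starting at start_key and one ending at end_key;
# point reads (end_key is None) are treated as ending just past start_key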
|
||||
self.read_counts.setdefault((start_key, end_key), 0)
|
||||
self.read_counts[(start_key, end_key)] += 1
|
||||
|
||||
start_pos = bisect_left(keys, start_key)
|
||||
end_pos = bisect_left(keys, end_key)
|
||||
#print("start_pos=%d, end_pos=%d" % (start_pos, end_pos))
|
||||
self.reads.setdefault(start_key, [0, 0])[0] += 1
|
||||
if end_key is not None:
|
||||
self.reads.setdefault(end_key, [0, 0])[1] += 1
|
||||
else:
|
||||
self.reads.setdefault(start_key+b'\x00', [0, 0])[1] += 1
|
||||
|
||||
possible_intersection_keys = keys[max(0, start_pos - 1):min(len(keys), end_pos+1)]
|
||||
def get_total_reads(self):
|
||||
return sum([v for v in self.read_counts.values()])
|
||||
|
||||
start_range_left = start_key
|
||||
def matches_filter(addresses, required_addresses):
|
||||
for addr in required_addresses:
|
||||
if addr not in addresses:
|
||||
return False
|
||||
return True
|
||||
|
||||
for key in possible_intersection_keys:
|
||||
cur_end_key, cur_count = self.ranges[key]
|
||||
#logger.debug("key=%s, cur_end_key=%s, cur_count=%d, start_range_left=%s" % (key, cur_end_key, cur_count, start_range_left))
|
||||
if start_range_left < key:
|
||||
if end_key <= key:
|
||||
self.ranges[start_range_left] = end_key, 1
|
||||
return
|
||||
self.ranges[start_range_left] = key, 1
|
||||
start_range_left = key
|
||||
assert start_range_left >= key
|
||||
if start_range_left >= cur_end_key:
|
||||
continue
|
||||
def get_top_k_reads(self, num, filter_addresses, shard_finder=None):
|
||||
count_pairs = sorted([(v, k) for (k, v) in self.read_counts.items()], reverse=True, key=lambda item: item[0])
|
||||
if not filter_addresses:
|
||||
count_pairs = count_pairs[0:num]
|
||||
|
||||
# [key, start_range_left) = cur_count
|
||||
# if key == start_range_left this will get overwritten below
|
||||
self.ranges[key] = start_range_left, cur_count
|
||||
if shard_finder:
|
||||
results = []
|
||||
for (count, (start, end)) in count_pairs:
|
||||
results.append((start, end, count, shard_finder.get_addresses_for_key(start)))
|
||||
|
||||
if end_key <= cur_end_key:
|
||||
# [start_range_left, end_key) = cur_count+1
|
||||
# [end_key, cur_end_key) = cur_count
|
||||
self.ranges[start_range_left] = end_key, cur_count + 1
|
||||
if end_key != cur_end_key:
|
||||
self.ranges[end_key] = cur_end_key, cur_count
|
||||
start_range_left = end_key
|
||||
break
|
||||
else:
|
||||
# [start_range_left, cur_end_key) = cur_count+1
|
||||
self.ranges[start_range_left] = cur_end_key, cur_count+1
|
||||
start_range_left = cur_end_key
|
||||
assert start_range_left <= end_key
|
||||
shard_finder.wait_for_shard_addresses(results, 0, 3)
|
||||
|
||||
# there may be some range left
|
||||
if start_range_left < end_key:
|
||||
self.ranges[start_range_left] = end_key, 1
|
||||
if filter_addresses:
|
||||
filter_addresses = set(filter_addresses)
|
||||
results = [r for r in results if filter_addresses.issubset(set(r[3]))][0:num]
|
||||
else:
|
||||
results = [(start, end, count) for (count, (start, end)) in count_pairs[0:num]]
|
||||
|
||||
def get_count_for_key(self, key):
|
||||
if key in self.ranges:
|
||||
return self.ranges[key][1]
|
||||
return results
|
||||
|
||||
keys = self.ranges.keys()
|
||||
index = bisect_left(keys, key)
|
||||
if index == 0:
|
||||
return 0
|
||||
|
||||
index_key = keys[index-1]
|
||||
if index_key <= key < self.ranges[index_key][0]:
|
||||
return self.ranges[index_key][1]
|
||||
return 0
|
||||
|
||||
def get_range_boundaries(self, shard_finder=None):
|
||||
total = sum([count for _, (_, count) in self.ranges.items()])
|
||||
range_size = total // self.k
|
||||
def get_range_boundaries(self, num_buckets, shard_finder=None):
|
||||
total = sum([start_count for (start_count, end_count) in self.reads.values()])
|
||||
range_size = total // num_buckets
|
||||
output_range_counts = []
|
||||
|
||||
def add_boundary(start, end, count):
|
||||
if total == 0:
|
||||
return output_range_counts
|
||||
|
||||
def add_boundary(start, end, started_count, total_count):
|
||||
if shard_finder:
|
||||
shard_count = shard_finder.get_shard_count(start, end)
|
||||
if shard_count == 1:
|
||||
addresses = shard_finder.get_addresses_for_key(start)
|
||||
else:
|
||||
addresses = None
|
||||
output_range_counts.append((start, end, count, shard_count, addresses))
|
||||
output_range_counts.append((start, end, started_count, total_count, shard_count, addresses))
|
||||
else:
|
||||
output_range_counts.append((start, end, count, None, None))
|
||||
output_range_counts.append((start, end, started_count, total_count, None, None))
|
||||
|
||||
this_range_start_key = None
|
||||
last_end = None
|
||||
open_count = 0
|
||||
opened_this_range = 0
|
||||
count_this_range = 0
|
||||
for (start_key, (end_key, count)) in self.ranges.items():
|
||||
if not this_range_start_key:
|
||||
this_range_start_key = start_key
|
||||
count_this_range += count
|
||||
if count_this_range >= range_size:
|
||||
add_boundary(this_range_start_key, end_key, count_this_range)
|
||||
count_this_range = 0
|
||||
this_range_start_key = None
|
||||
if count_this_range > 0:
|
||||
add_boundary(this_range_start_key, end_key, count_this_range)
|
||||
|
||||
for (start_key, (start_count, end_count)) in self.reads.items():
|
||||
open_count -= end_count
|
||||
|
||||
if opened_this_range >= range_size:
|
||||
add_boundary(this_range_start_key, start_key, opened_this_range, count_this_range)
|
||||
count_this_range = open_count
|
||||
opened_this_range = 0
|
||||
this_range_start_key = None
|
||||
|
||||
count_this_range += start_count
|
||||
opened_this_range += start_count
|
||||
open_count += start_count
|
||||
|
||||
if count_this_range > 0 and this_range_start_key is None:
|
||||
this_range_start_key = start_key
|
||||
|
||||
if end_count > 0:
|
||||
last_end = start_key
|
||||
|
||||
if last_end is None:
|
||||
last_end = b'\xff'
|
||||
if count_this_range > 0:
|
||||
add_boundary(this_range_start_key, last_end, opened_this_range, count_this_range)
|
||||
|
||||
shard_finder.wait_for_shard_addresses(output_range_counts, 0, 5)
|
||||
return output_range_counts
|
||||
|
||||
|
||||
class ShardFinder(object):
|
||||
def __init__(self, db):
|
||||
def __init__(self, db, exclude_ports):
|
||||
self.db = db
|
||||
self.exclude_ports = exclude_ports
|
||||
|
||||
self.tr = db.create_transaction()
|
||||
self.refresh_tr()
|
||||
|
||||
self.outstanding = []
|
||||
self.boundary_keys = list(fdb.locality.get_boundary_keys(db, b'', b'\xff\xff'))
|
||||
self.shard_cache = {}
|
||||
|
||||
def _get_boundary_keys(self, begin, end):
|
||||
start_pos = max(0, bisect_right(self.boundary_keys, begin)-1)
|
||||
end_pos = max(0, bisect_right(self.boundary_keys, end)-1)
|
||||
|
||||
return self.boundary_keys[start_pos:end_pos]
|
||||
|
||||
def refresh_tr(self):
|
||||
self.tr.options.set_read_lock_aware()
|
||||
if not self.exclude_ports:
|
||||
self.tr.options.set_include_port_in_address()
|
||||
|
||||
@staticmethod
|
||||
@fdb.transactional
|
||||
def _get_boundary_keys(tr, begin, end):
|
||||
tr.options.set_read_lock_aware()
|
||||
return fdb.locality.get_boundary_keys(tr, begin, end)
|
||||
|
||||
@staticmethod
|
||||
@fdb.transactional
|
||||
def _get_addresses_for_key(tr, key):
|
||||
tr.options.set_read_lock_aware()
|
||||
return fdb.locality.get_addresses_for_key(tr, key)
|
||||
|
||||
def get_shard_count(self, start_key, end_key):
|
||||
return len(list(self._get_boundary_keys(self.db, start_key, end_key))) + 1
|
||||
return len(self._get_boundary_keys(start_key, end_key)) + 1
|
||||
|
||||
def get_addresses_for_key(self, key):
|
||||
return [a.decode('ascii') for a in self._get_addresses_for_key(self.db, key).wait()]
|
||||
shard = self.boundary_keys[max(0, bisect_right(self.boundary_keys, key)-1)]
|
||||
do_load = False
|
||||
if shard not in self.shard_cache:
|
||||
do_load = True
|
||||
elif self.shard_cache[shard].is_ready():
|
||||
try:
|
||||
self.shard_cache[shard].wait()
|
||||
except fdb.FDBError as e:
|
||||
self.tr.on_error(e).wait()
|
||||
self.refresh_tr()
|
||||
do_load = True
|
||||
|
||||
if do_load:
|
||||
if len(self.outstanding) > 1000:
|
||||
for f in self.outstanding:
|
||||
try:
|
||||
f.wait()
|
||||
except fdb.FDBError as e:
|
||||
pass
|
||||
|
||||
class TopKeysCounter(object):
|
||||
self.outstanding = []
|
||||
self.tr.reset()
|
||||
self.refresh_tr()
|
||||
|
||||
self.outstanding.append(self._get_addresses_for_key(self.tr, shard))
|
||||
self.shard_cache[shard] = self.outstanding[-1]
|
||||
|
||||
return self.shard_cache[shard]
|
||||
|
||||
def wait_for_shard_addresses(self, ranges, key_idx, addr_idx):
|
||||
for index in range(len(ranges)):
|
||||
item = ranges[index]
|
||||
if item[addr_idx] is not None:
|
||||
while True:
|
||||
try:
|
||||
ranges[index] = item[0:addr_idx] + ([a.decode('ascii') for a in item[addr_idx].wait()],) + item[addr_idx+1:]
|
||||
break
|
||||
except fdb.FDBError as e:
|
||||
ranges[index] = item[0:addr_idx] + (self.get_addresses_for_key(item[key_idx]),) + item[addr_idx+1:]
|
||||
|
||||
class WriteCounter(object):
|
||||
mutation_types_to_consider = frozenset([MutationType.SET_VALUE, MutationType.ADD_VALUE])
|
||||
|
||||
def __init__(self, k):
|
||||
self.k = k
|
||||
self.reads = defaultdict(lambda: 0)
|
||||
def __init__(self):
|
||||
self.writes = defaultdict(lambda: 0)
|
||||
|
||||
def process(self, transaction_info):
|
||||
for get in transaction_info.gets:
|
||||
self.reads[get.key] += 1
|
||||
if transaction_info.commit:
|
||||
for mutation in transaction_info.commit.mutations:
|
||||
if mutation.code in self.mutation_types_to_consider:
|
||||
self.writes[mutation.param_one] += 1
|
||||
|
||||
def _get_range_boundaries(self, counts, shard_finder=None):
|
||||
total = sum([v for (k, v) in counts.items()])
|
||||
range_size = total // self.k
|
||||
key_counts_sorted = sorted(counts.items())
|
||||
def get_range_boundaries(self, num_buckets, shard_finder=None):
|
||||
total = sum([v for (k, v) in self.writes.items()])
|
||||
range_size = total // num_buckets
|
||||
key_counts_sorted = sorted(self.writes.items())
|
||||
output_range_counts = []
|
||||
|
||||
def add_boundary(start, end, count):
|
||||
|
@ -671,9 +728,9 @@ class TopKeysCounter(object):
|
|||
addresses = shard_finder.get_addresses_for_key(start)
|
||||
else:
|
||||
addresses = None
|
||||
output_range_counts.append((start, end, count, shard_count, addresses))
|
||||
output_range_counts.append((start, end, count, None, shard_count, addresses))
|
||||
else:
|
||||
output_range_counts.append((start, end, count, None, None))
|
||||
output_range_counts.append((start, end, count, None, None, None))
|
||||
|
||||
start_key = None
|
||||
count_this_range = 0
|
||||
|
@ -688,24 +745,31 @@ class TopKeysCounter(object):
|
|||
if count_this_range > 0:
|
||||
add_boundary(start_key, k, count_this_range)
|
||||
|
||||
shard_finder.wait_for_shard_addresses(output_range_counts, 0, 5)
|
||||
return output_range_counts
|
||||
|
||||
def _get_top_k(self, counts):
|
||||
count_key_pairs = sorted([(v, k) for (k, v) in counts.items()], reverse=True)
|
||||
return count_key_pairs[0:self.k]
|
||||
def get_total_writes(self):
|
||||
return sum([v for v in self.writes.values()])
|
||||
|
||||
def get_top_k_reads(self):
|
||||
return self._get_top_k(self.reads)
|
||||
def get_top_k_writes(self, num, filter_addresses, shard_finder=None):
|
||||
count_pairs = sorted([(v, k) for (k, v) in self.writes.items()], reverse=True)
|
||||
if not filter_addresses:
|
||||
count_pairs = count_pairs[0:num]
|
||||
|
||||
def get_top_k_writes(self):
|
||||
return self._get_top_k(self.writes)
|
||||
if shard_finder:
|
||||
results = []
|
||||
for (count, key) in count_pairs:
|
||||
results.append((key, None, count, shard_finder.get_addresses_for_key(key)))
|
||||
|
||||
def get_k_read_range_boundaries(self, shard_finder=None):
|
||||
return self._get_range_boundaries(self.reads, shard_finder)
|
||||
shard_finder.wait_for_shard_addresses(results, 0, 3)
|
||||
|
||||
def get_k_write_range_boundaries(self, shard_finder=None):
|
||||
return self._get_range_boundaries(self.writes, shard_finder)
|
||||
if filter_addresses:
|
||||
filter_addresses = set(filter_addresses)
|
||||
results = [r for r in results if filter_addresses.issubset(set(r[3]))][0:num]
|
||||
else:
|
||||
results = [(key, end, count) for (count, key) in count_pairs[0:num]]
|
||||
|
||||
return results
|
||||
|
||||
def connect(cluster_file=None):
|
||||
db = fdb.open(cluster_file=cluster_file)
|
||||
|
@ -722,6 +786,8 @@ def main():
|
|||
help="Include get type. If no filter args are given all will be returned.")
|
||||
parser.add_argument("--filter-get-range", action="store_true",
|
||||
help="Include get_range type. If no filter args are given all will be returned.")
|
||||
parser.add_argument("--filter-reads", action="store_true",
|
||||
help="Include get and get_range type. If no filter args are given all will be returned.")
|
||||
parser.add_argument("--filter-commit", action="store_true",
|
||||
help="Include commit type. If no filter args are given all will be returned.")
|
||||
parser.add_argument("--filter-error-get", action="store_true",
|
||||
|
@ -737,21 +803,34 @@ def main():
|
|||
end_time_group = parser.add_mutually_exclusive_group()
|
||||
end_time_group.add_argument("--max-timestamp", type=int, help="Don't return events newer than this epoch time")
|
||||
end_time_group.add_argument("-e", "--end-time", type=str, help="Don't return events older than this parsed time")
|
||||
parser.add_argument("--top-keys", type=int, help="If specified will output this many top keys for reads or writes", default=0)
|
||||
parser.add_argument("--num-buckets", type=int, help="The number of buckets to partition the key-space into for operation counts", default=100)
|
||||
parser.add_argument("--top-requests", type=int, help="If specified will output this many top keys for reads or writes", default=0)
|
||||
parser.add_argument("--exclude-ports", action="store_true", help="Print addresses without the port number. Only works in versions older than 6.3, and is required in versions older than 6.2.")
|
||||
parser.add_argument("--single-shard-ranges-only", action="store_true", help="Only print range boundaries that exist in a single shard")
|
||||
parser.add_argument("-a", "--filter-address", action="append", help="Only print range boundaries that include the given address. This option can used multiple times to include more than one address in the filter, in which case all addresses must match.")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
type_filter = set()
|
||||
if args.filter_get_version: type_filter.add("get_version")
|
||||
if args.filter_get: type_filter.add("get")
|
||||
if args.filter_get_range: type_filter.add("get_range")
|
||||
if args.filter_get or args.filter_reads: type_filter.add("get")
|
||||
if args.filter_get_range or args.filter_reads: type_filter.add("get_range")
|
||||
if args.filter_commit: type_filter.add("commit")
|
||||
if args.filter_error_get: type_filter.add("error_get")
|
||||
if args.filter_error_get_range: type_filter.add("error_get_range")
|
||||
if args.filter_error_commit: type_filter.add("error_commit")
|
||||
top_keys = args.top_keys
|
||||
key_counter = TopKeysCounter(top_keys) if top_keys else None
|
||||
range_counter = RangeCounter(top_keys) if (has_sortedcontainers() and top_keys) else None
|
||||
full_output = args.full_output or (top_keys is not None)
|
||||
|
||||
if (not type_filter or "commit" in type_filter):
|
||||
write_counter = WriteCounter() if args.num_buckets else None
|
||||
else:
|
||||
write_counter = None
|
||||
|
||||
if (not type_filter or "get" in type_filter or "get_range" in type_filter):
|
||||
read_counter = ReadCounter() if (has_sortedcontainers() and args.num_buckets) else None
|
||||
else:
|
||||
read_counter = None
|
||||
|
||||
full_output = args.full_output or (args.num_buckets is not None)
|
||||
|
||||
if args.min_timestamp:
|
||||
min_timestamp = args.min_timestamp
|
||||
|
@ -784,48 +863,128 @@ def main():
|
|||
db = connect(cluster_file=args.cluster_file)
|
||||
loader = TransactionInfoLoader(db, full_output=full_output, type_filter=type_filter,
|
||||
min_timestamp=min_timestamp, max_timestamp=max_timestamp)
|
||||
|
||||
for info in loader.fetch_transaction_info():
|
||||
if info.has_types():
|
||||
if not key_counter and not range_counter:
|
||||
if not write_counter and not read_counter:
|
||||
print(info.to_json())
|
||||
else:
|
||||
if key_counter:
|
||||
key_counter.process(info)
|
||||
if range_counter:
|
||||
range_counter.process(info)
|
||||
if write_counter:
|
||||
write_counter.process(info)
|
||||
if read_counter:
|
||||
read_counter.process(info)
|
||||
|
||||
if key_counter:
|
||||
def print_top(top):
|
||||
for (count, key) in top:
|
||||
print("%s %d" % (key, count))
|
||||
|
||||
def print_range_boundaries(range_boundaries):
|
||||
for (start, end, count, shard_count, addresses) in range_boundaries:
|
||||
if not shard_count:
|
||||
print("[%s, %s] %d" % (start, end, count))
|
||||
def print_top(top, total, context):
|
||||
if top:
|
||||
running_count = 0
|
||||
for (idx, (start, end, count, addresses)) in enumerate(top):
|
||||
running_count += count
|
||||
if end is not None:
|
||||
op_str = 'Range %r - %r' % (start, end)
|
||||
else:
|
||||
addresses_string = "addresses=%s" % ','.join(addresses) if addresses else ''
|
||||
print("[%s, %s] %d shards=%d %s" % (start, end, count, shard_count, addresses_string))
|
||||
op_str = 'Key %r' % start
|
||||
|
||||
print(" %d. %s\n %d sampled %s (%.2f%%, %.2f%% cumulative)" % (idx+1, op_str, count, context, 100*count/total, 100*running_count/total))
|
||||
print(" shard addresses: %s\n" % ", ".join(addresses))
|
||||
|
||||
else:
|
||||
print(" No %s found" % context)
|
||||
|
||||
def print_range_boundaries(range_boundaries, context):
|
||||
omit_start = None
|
||||
for (idx, (start, end, start_count, total_count, shard_count, addresses)) in enumerate(range_boundaries):
|
||||
omit = args.single_shard_ranges_only and shard_count is not None and shard_count > 1
|
||||
if args.filter_address:
|
||||
if not addresses:
|
||||
omit = True
|
||||
else:
|
||||
for addr in args.filter_address:
|
||||
if addr not in addresses:
|
||||
omit = True
|
||||
break
|
||||
|
||||
if not omit:
|
||||
if omit_start is not None:
|
||||
if omit_start == idx-1:
|
||||
print(" %d. Omitted\n" % (idx))
|
||||
else:
|
||||
print(" %d - %d. Omitted\n" % (omit_start+1, idx))
|
||||
omit_start = None
|
||||
|
||||
if total_count is None:
|
||||
count_str = '%d sampled %s' % (start_count, context)
|
||||
else:
|
||||
count_str = '%d sampled %s (%d intersecting)' % (start_count, context, total_count)
|
||||
if not shard_count:
|
||||
print(" %d. [%s, %s]\n %d sampled %s\n" % (idx+1, start, end, count, context))
|
||||
else:
|
||||
addresses_string = "; addresses=%s" % ', '.join(addresses) if addresses else ''
|
||||
print(" %d. [%s, %s]\n %s spanning %d shard(s)%s\n" % (idx+1, start, end, count_str, shard_count, addresses_string))
|
||||
elif omit_start is None:
|
||||
omit_start = idx
|
||||
|
||||
if omit_start is not None:
|
||||
if omit_start == len(range_boundaries)-1:
|
||||
print(" %d. Omitted\n" % len(range_boundaries))
|
||||
else:
|
||||
print(" %d - %d. Omitted\n" % (omit_start+1, len(range_boundaries)))
|
||||
|
||||
shard_finder = ShardFinder(db, args.exclude_ports)
|
||||
|
||||
print("NOTE: shard locations are current and may not reflect where an operation was performed in the past\n")
|
||||
|
||||
if write_counter:
|
||||
if args.top_requests:
|
||||
top_writes = write_counter.get_top_k_writes(args.top_requests, args.filter_address, shard_finder=shard_finder)
|
||||
|
||||
range_boundaries = write_counter.get_range_boundaries(args.num_buckets, shard_finder=shard_finder)
|
||||
num_writes = write_counter.get_total_writes()
|
||||
|
||||
if args.top_requests or range_boundaries:
|
||||
print("WRITES")
|
||||
print("------\n")
|
||||
print("Processed %d total writes\n" % num_writes)
|
||||
|
||||
if args.top_requests:
|
||||
suffix = ""
|
||||
if args.filter_address:
|
||||
suffix = " (%s)" % ", ".join(args.filter_address)
|
||||
print("Top %d writes%s:\n" % (args.top_requests, suffix))
|
||||
|
||||
print_top(top_writes, write_counter.get_total_writes(), "writes")
|
||||
print("")
|
||||
|
||||
shard_finder = ShardFinder(db)
|
||||
top_reads = key_counter.get_top_k_reads()
|
||||
if top_reads:
|
||||
print("Top %d reads:" % min(top_keys, len(top_reads)))
|
||||
print_top(top_reads)
|
||||
print("Approx equal sized gets range boundaries:")
|
||||
print_range_boundaries(key_counter.get_k_read_range_boundaries(shard_finder=shard_finder))
|
||||
top_writes = key_counter.get_top_k_writes()
|
||||
if top_writes:
|
||||
print("Top %d writes:" % min(top_keys, len(top_writes)))
|
||||
print_top(top_writes)
|
||||
print("Approx equal sized commits range boundaries:")
|
||||
print_range_boundaries(key_counter.get_k_write_range_boundaries(shard_finder=shard_finder))
|
||||
if range_counter:
|
||||
range_boundaries = range_counter.get_range_boundaries(shard_finder=shard_finder)
|
||||
if range_boundaries:
|
||||
print("Approx equal sized get_ranges boundaries:")
|
||||
print_range_boundaries(range_boundaries)
|
||||
print("Key-space boundaries with approximately equal mutation counts:\n")
|
||||
print_range_boundaries(range_boundaries, "writes")
|
||||
|
||||
if args.top_requests or range_boundaries:
|
||||
print("")
|
||||
|
||||
if read_counter:
|
||||
if args.top_requests:
|
||||
top_reads = read_counter.get_top_k_reads(args.top_requests, args.filter_address, shard_finder=shard_finder)
|
||||
|
||||
range_boundaries = read_counter.get_range_boundaries(args.num_buckets, shard_finder=shard_finder)
|
||||
num_reads = read_counter.get_total_reads()
|
||||
|
||||
if args.top_requests or range_boundaries:
|
||||
print("READS")
|
||||
print("-----\n")
|
||||
print("Processed %d total reads\n" % num_reads)
|
||||
|
||||
if args.top_requests:
|
||||
suffix = ""
|
||||
if args.filter_address:
|
||||
suffix = " (%s)" % ", ".join(args.filter_address)
|
||||
print("Top %d reads%s:\n" % (args.top_requests, suffix))
|
||||
|
||||
print_top(top_reads, num_reads, "reads")
|
||||
print("")
|
||||
|
||||
if range_boundaries:
|
||||
print("Key-space boundaries with approximately equal read counts:\n")
|
||||
print_range_boundaries(range_boundaries, "reads")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
|
|
@ -16,7 +16,7 @@ As an essential component of a database system, backup and restore is commonly u
|
|||
|
||||
## Background
|
||||
|
||||
FDB backup system continuously scan the database’s key-value space, save key-value pairs and mutations at versions into range files and log files in blob storage. Specifically, mutation logs are generated at Proxy, and are written to transaction logs along with regular mutations. In production clusters like CK clusters, backup system is always on, which means each mutation is written twice to transaction logs, consuming about half of write bandwidth and about 40% of Proxy CPU time.
|
||||
The FDB backup system continuously scans the database’s key-value space and saves key-value pairs and mutations at versions into range files and log files in blob storage. Specifically, mutation logs are generated at the CommitProxy, and are written to transaction logs along with regular mutations. In production clusters like CK clusters, the backup system is always on, which means each mutation is written twice to transaction logs, consuming about half of the write bandwidth and about 40% of CommitProxy CPU time.
|
||||
|
||||
The design of old backup system is [here](https://github.com/apple/foundationdb/blob/master/design/backup.md), and the data format of range files and mutations files is [here](https://github.com/apple/foundationdb/blob/master/design/backup-dataFormat.md). The technical overview of FDB is [here](https://github.com/apple/foundationdb/wiki/Technical-Overview-of-the-Database). The FDB recovery is described in this [doc](https://github.com/apple/foundationdb/blob/master/design/recovery-internals.md).
|
||||
|
||||
|
@ -37,7 +37,7 @@ The design of old backup system is [here](https://github.com/apple/foundationdb/
|
|||
|
||||
Feature priorities: Feature 1, 2, 3, 4, 5 are must-have; Feature 6 is better to have.
|
||||
|
||||
1. **Write bandwidth reduction by half**: removes the requirement to generate backup mutations at the Proxy, thus reduce TLog write bandwidth usage by half and significantly improve Proxy CPU usage;
|
||||
1. **Write bandwidth reduction by half**: removes the requirement to generate backup mutations at the CommitProxy, thus reducing TLog write bandwidth usage by half and significantly improving CommitProxy CPU usage;
|
||||
2. **Correctness**: The restored database must be consistent: each *restored* state (i.e., key-value pair) at a version `v` must match the original state at version `v`.
|
||||
3. **Performance**: The backup system should be performant, mostly measured as a small CPU overhead on transaction logs and backup workers. The version lag on backup workers is an indicator of performance.
|
||||
4. **Fault-tolerant**: The backup system should be fault-tolerant to node failures in the FDB cluster.
|
||||
|
@ -153,10 +153,11 @@ The requirement of the new backup system raises several design challenges:
|
|||
|
||||
**Master**: The master is responsible for coordinating the transition of the FDB transaction sub-system from one generation to the next. In particular, the master recruits backup workers during the recovery.
|
||||
|
||||
**Transaction Logs (TLogs)**: The transaction logs make mutations durable to disk for fast commit latencies. The logs receive commits from the proxy in version order, and only respond to the proxy once the data has been written and fsync'ed to an append only mutation log on disk. Storage servers retrieve mutations from TLogs. Once the storage servers have persisted mutations, storage servers then pop the mutations from the TLogs.
|
||||
**Transaction Logs (TLogs)**: The transaction logs make mutations durable to disk for fast commit latencies. The logs receive commits from the commit proxy in version order, and only respond to the commit proxy once the data has been written and fsync'ed to an append only mutation log on disk. Storage servers retrieve mutations from TLogs. Once the storage servers have persisted mutations, storage servers then pop the mutations from the TLogs.
|
||||
|
||||
**Proxy**: The proxies are responsible for providing read versions, committing transactions, and tracking the storage servers responsible for each range of keys. In the old backup system, Proxies are responsible to group mutations into backup mutations and write them to the database.
|
||||
**CommitProxy**: The commit proxies are responsible for committing transactions and tracking the storage servers responsible for each range of keys. In the old backup system, proxies are responsible for grouping mutations into backup mutations and writing them to the database.
|
||||
|
||||
**GrvProxy**: The GRV proxies are responsible for providing read versions.
|
||||
## System overview
|
||||
|
||||
From an end-to-end perspective, the new backup system works in the following steps:
|
||||
|
|
|
@ -20,16 +20,16 @@ Consequently, the special-key-space framework wants to integrate all client func
|
|||
If your feature is exposing information to clients and the results are easily formatted as key-value pairs, then you can use special-key-space to implement your client function.
|
||||
|
||||
## How
|
||||
If you choose to use, you need to implement a function class that inherits from `SpecialKeyRangeBaseImpl`, which has an abstract method `Future<Standalone<RangeResultRef>> getRange(ReadYourWritesTransaction* ryw, KeyRangeRef kr)`.
|
||||
If you choose to use it, you need to implement a function class that inherits from `SpecialKeyRangeReadImpl`, which has an abstract method `Future<Standalone<RangeResultRef>> getRange(ReadYourWritesTransaction* ryw, KeyRangeRef kr)`.
|
||||
This method can be treated as a callback, whose implementation details are determined by the developer.
|
||||
Once you fill out the method, register the function class to the corresponding key range.
|
||||
Below is a detailed example.
|
||||
```c++
|
||||
// Implement the function class,
|
||||
// the corresponding key range is [\xff\xff/example/, \xff\xff/example/\xff)
|
||||
class SKRExampleImpl : public SpecialKeyRangeBaseImpl {
|
||||
class SKRExampleImpl : public SpecialKeyRangeReadImpl {
|
||||
public:
|
||||
explicit SKRExampleImpl(KeyRangeRef kr): SpecialKeyRangeBaseImpl(kr) {
|
||||
explicit SKRExampleImpl(KeyRangeRef kr): SpecialKeyRangeReadImpl(kr) {
|
||||
// Our implementation is quite simple here, the key-value pairs are formatted as:
|
||||
// \xff\xff/example/<country_name> : <capital_city_name>
|
||||
CountryToCapitalCity[LiteralStringRef("USA")] = LiteralStringRef("Washington, D.C.");
|
||||
|
|
Binary file not shown.
|
@ -0,0 +1,42 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<svg width="201px" height="98px" viewBox="0 0 201 98" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
|
||||
<!-- Generator: Sketch 55.1 (78136) - https://sketchapp.com -->
|
||||
<title>Artboard</title>
|
||||
<desc>Created with Sketch.</desc>
|
||||
<defs>
|
||||
<polygon id="path-1" points="0.0278974895 27.28142 0.0278974895 21.9743876 42.1452024 21.9743876 42.1452024 0.409628138 83.9914367 4.60308577 83.9914367 21.9743876 126.435607 21.9743876 126.435607 0.409628138 200.861925 8.19721234 200.861925 27.28142"></polygon>
|
||||
<polygon id="path-3" points="0.0278974895 28.0890523 42.1452024 21.9158028 42.1452024 0.35104341 83.9914367 10.7214702 83.9914367 28.0932369 126.435607 21.9158028 126.435607 0.35104341 200.861925 19.1079205 200.861925 38.2767505 126.435607 27.9058588 83.9914367 33.4844268 42.1452024 27.9058588 0.0278974895 33.4807071"></polygon>
|
||||
<polygon id="path-5" points="0.298968096 34.5115194 42.7431386 21.6321783 42.7431386 0.0674189331 117.169456 30.6174948 117.169456 49.1874587 42.7431386 28.2215654 0.298968096 39.9027092"></polygon>
|
||||
</defs>
|
||||
<g id="Artboard" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
|
||||
<g id="fdb-Logo">
|
||||
<g id="Group" transform="translate(0.000000, 69.743724)">
|
||||
<g id="Clipped">
|
||||
<mask id="mask-2" fill="white">
|
||||
<use xlink:href="#path-1"></use>
|
||||
</mask>
|
||||
<g id="c"></g>
|
||||
<polygon id="Path" fill="#0073E0" fill-rule="nonzero" mask="url(#mask-2)" points="0.0278974895 27.3209414 200.889822 27.3209414 200.889822 0.371966527 0.0278974895 0.371966527"></polygon>
|
||||
</g>
|
||||
</g>
|
||||
<g id="Group" transform="translate(0.000000, 34.871862)">
|
||||
<g id="Clipped">
|
||||
<mask id="mask-4" fill="white">
|
||||
<use xlink:href="#path-3"></use>
|
||||
</mask>
|
||||
<g id="e"></g>
|
||||
<polygon id="Path" fill="#289DFC" fill-rule="nonzero" mask="url(#mask-4)" points="0.0278974895 38.3125523 200.889822 38.3125523 200.889822 0.316171548 0.0278974895 0.316171548"></polygon>
|
||||
</g>
|
||||
</g>
|
||||
<g id="Group" transform="translate(83.692469, 0.000000)">
|
||||
<g id="Clipped">
|
||||
<mask id="mask-6" fill="white">
|
||||
<use xlink:href="#path-5"></use>
|
||||
</mask>
|
||||
<g id="g"></g>
|
||||
<polygon id="Path" fill="#9ACEFE" fill-rule="nonzero" mask="url(#mask-6)" points="0.251077406 49.1925732 117.197354 49.1925732 117.197354 0.0371966527 0.251077406 0.0371966527"></polygon>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 2.8 KiB |
|
@ -0,0 +1,34 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="1912" height="212" viewBox="0 0 1912 212">
|
||||
<defs>
|
||||
<path id="a" d="M22.551,151 L0.676,151 L0.676,10.082 L88.664,10.082 L88.664,30.004 L22.551,30.004 L22.551,73.266 L83.098,73.266 L83.098,92.602 L22.551,92.602 L22.551,151 Z M159.465,134.398 C177.629,134.398 187.98,120.922 187.98,97.777 C187.98,74.73 177.629,61.254 159.465,61.254 C141.203,61.254 130.949,74.73 130.949,97.777 C130.949,121.02 141.203,134.398 159.465,134.398 Z M159.465,153.051 C128.312,153.051 109.27,132.25 109.27,97.777 C109.27,63.5 128.41,42.602 159.465,42.602 C190.422,42.602 209.562,63.5 209.562,97.777 C209.562,132.25 190.52,153.051 159.465,153.051 Z M324.406,44.652 L324.406,151 L304.191,151 L304.191,134.105 L302.531,134.105 C297.355,146.215 286.516,153.051 270.402,153.051 C246.867,153.051 233.781,138.695 233.781,113.695 L233.781,44.652 L254.777,44.652 L254.777,108.227 C254.777,125.414 261.711,133.617 277.141,133.617 C294.133,133.617 303.41,123.559 303.41,106.859 L303.41,44.652 L324.406,44.652 Z M354.875,151 L354.875,44.652 L375.09,44.652 L375.09,61.547 L376.652,61.547 C381.828,49.73 392.375,42.602 408.391,42.602 C432.121,42.602 445.207,56.859 445.207,82.152 L445.207,151 L424.211,151 L424.211,87.426 C424.211,70.336 416.789,61.84 401.262,61.84 C385.734,61.84 375.871,72.191 375.871,88.793 L375.871,151 L354.875,151 Z M513.566,152.758 C486.516,152.758 469.426,131.469 469.426,97.777 C469.426,64.184 486.711,42.895 513.566,42.895 C528.117,42.895 540.422,49.828 546.184,61.547 L547.746,61.547 L547.746,3.148 L568.742,3.148 L568.742,151 L548.625,151 L548.625,134.203 L546.965,134.203 C540.617,145.824 528.215,152.758 513.566,152.758 Z M519.523,61.742 C501.848,61.742 491.105,75.414 491.105,97.777 C491.105,120.336 501.75,133.91 519.523,133.91 C537.199,133.91 548.137,120.141 548.137,97.875 C548.137,75.707 537.102,61.742 519.523,61.742 Z M634.953,135.082 C650.773,135.082 662.492,125.023 662.492,111.84 L662.492,102.953 L636.516,104.613 C621.867,105.59 615.227,110.57 615.227,119.945 C615.227,129.516 623.527,135.082 634.953,135.082 Z M629.582,152.758 C609.074,152.758 594.133,140.355 594.133,120.922 C594.133,101.781 608.391,90.746 633.684,89.184 L662.492,87.523 L662.492,78.344 C662.492,67.113 655.07,60.766 640.715,60.766 C628.996,60.766 620.891,65.062 618.547,72.582 L598.234,72.582 C600.383,54.32 617.57,42.602 641.691,42.602 C668.352,42.602 683.391,55.883 683.391,78.344 L683.391,151 L663.176,151 L663.176,136.059 L661.516,136.059 C655.168,146.703 643.547,152.758 629.582,152.758 Z M720.695,18.187 L741.691,18.187 L741.691,45.141 L764.738,45.141 L764.738,62.816 L741.691,62.816 L741.691,117.504 C741.691,128.637 746.281,133.52 756.73,133.52 C759.953,133.52 761.809,133.324 764.738,133.031 L764.738,150.512 C761.32,151.098 757.414,151.586 753.313,151.586 C729.973,151.586 720.695,143.383 720.695,122.875 L720.695,62.816 L703.801,62.816 L703.801,45.141 L720.695,45.141 L720.695,18.187 Z M789.836,151 L789.836,44.652 L810.734,44.652 L810.734,151 L789.836,151 Z M800.285,26 C792.473,26 786.711,20.434 786.711,13.207 C786.711,5.883 792.473,0.316 800.285,0.316 C808.098,0.316 813.859,5.883 813.859,13.207 C813.859,20.434 808.098,26 800.285,26 Z M886.516,134.398 C904.68,134.398 915.031,120.922 915.031,97.777 C915.031,74.73 904.68,61.254 886.516,61.254 C868.254,61.254 858,74.73 858,97.777 C858,121.02 868.254,134.398 886.516,134.398 Z M886.516,153.051 C855.363,153.051 836.32,132.25 836.32,97.777 C836.32,63.5 855.461,42.602 886.516,42.602 C917.473,42.602 936.613,63.5 936.613,97.777 C936.613,132.25 917.57,153.051 886.516,153.051 Z M961.809,151 L961.809,44.652 L982.023,44.652 L982.023,61.547 L983.586,61.547 C988.762,49.73 
999.309,42.602 1015.324,42.602 C1039.055,42.602 1052.141,56.859 1052.141,82.152 L1052.141,151 L1031.145,151 L1031.145,87.426 C1031.145,70.336 1023.723,61.84 1008.195,61.84 C992.668,61.84 982.805,72.191 982.805,88.793 L982.805,151 L961.809,151 Z M1083.488,9.984 L1138.957,9.984 C1180.852,9.984 1205.07,35.375 1205.07,79.516 C1205.07,125.316 1181.145,151 1138.957,151 L1083.488,151 L1083.488,9.984 Z M1112.98,35.18 L1112.98,125.805 L1134.27,125.805 C1160.344,125.805 1174.992,109.789 1174.992,80.004 C1174.992,51.488 1159.855,35.18 1134.27,35.18 L1112.98,35.18 Z M1296.184,151 L1232.902,151 L1232.902,10.082 L1294.523,10.082 C1321.867,10.082 1338.176,23.461 1338.176,45.238 C1338.176,60.18 1327.141,73.168 1312.687,75.316 L1312.687,77.074 C1331.34,78.441 1344.914,92.504 1344.914,110.668 C1344.914,135.375 1326.262,151 1296.184,151 Z M1262.395,32.641 L1262.395,68.48 L1284.563,68.48 C1300.48,68.48 1309.172,61.937 1309.172,50.609 C1309.172,39.379 1301.066,32.641 1287.004,32.641 L1262.395,32.641 Z M1262.395,128.441 L1288.664,128.441 C1305.656,128.441 1314.836,121.313 1314.836,108.129 C1314.836,95.238 1305.363,88.402 1287.98,88.402 L1262.395,88.402 L1262.395,128.441 Z"/>
|
||||
<polygon id="c" points=".06 58.675 .06 47.261 90.643 47.261 90.643 .881 180.643 9.9 180.643 47.261 271.929 47.261 271.929 .881 432 17.63 432 58.675 .06 58.675"/>
|
||||
<polygon id="e" points=".06 60.412 90.643 47.135 90.643 .755 180.643 23.059 180.643 60.421 271.929 47.135 271.929 .755 432 41.096 432 82.323 271.929 60.018 180.643 72.016 90.643 60.018 .06 72.008 .06 60.412"/>
|
||||
<polygon id="g" points=".643 74.225 91.929 46.525 91.929 .145 252 65.85 252 105.789 91.929 60.697 .643 85.82"/>
|
||||
</defs>
|
||||
<g fill="none" fill-rule="evenodd">
|
||||
<g transform="translate(567 58)">
|
||||
<mask id="b" fill="white">
|
||||
<use xlink:href="#a"/>
|
||||
</mask>
|
||||
<polygon fill="#0081FF" points=".66 153.16 1345.02 153.16 1345.02 .28 .66 .28" mask="url(#b)"/>
|
||||
</g>
|
||||
<g transform="translate(0 150)">
|
||||
<mask id="d" fill="white">
|
||||
<use xlink:href="#c"/>
|
||||
</mask>
|
||||
<polygon fill="#0073E0" points=".06 58.76 432.06 58.76 432.06 .8 .06 .8" mask="url(#d)"/>
|
||||
</g>
|
||||
<g transform="translate(0 75)">
|
||||
<mask id="f" fill="white">
|
||||
<use xlink:href="#e"/>
|
||||
</mask>
|
||||
<polygon fill="#289DFC" points=".06 82.4 432.06 82.4 432.06 .68 .06 .68" mask="url(#f)"/>
|
||||
</g>
|
||||
<g transform="translate(180)">
|
||||
<mask id="h" fill="white">
|
||||
<use xlink:href="#g"/>
|
||||
</mask>
|
||||
<polygon fill="#9ACEFE" points=".54 105.8 252.06 105.8 252.06 .08 .54 .08" mask="url(#h)"/>
|
||||
</g>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 6.2 KiB |
|
@ -228,6 +228,8 @@ If you interrupt the exclude command with Ctrl-C after seeing the "waiting for s
|
|||
|
||||
7) If you ever want to add a removed machine back to the cluster, you will have to take it off the excluded servers list to which it was added in step 3. This can be done using the ``include`` command of ``fdbcli``. If attempting to re-include a failed server, this can be done using the ``include failed`` command of ``fdbcli``. Typing ``exclude`` with no parameters will tell you the current list of excluded and failed machines.
|
||||
|
||||
As of api version 700, excluding servers can be done with the :ref:`special key space management module <special-key-space-management-module>` as well.
|
||||
|
||||
Moving a cluster
|
||||
================
|
||||
|
||||
|
|
|
@ -133,7 +133,7 @@ API versioning
|
|||
|
||||
Prior to including ``fdb_c.h``, you must define the ``FDB_API_VERSION`` macro. This, together with the :func:`fdb_select_api_version()` function, allows programs written against an older version of the API to compile and run with newer versions of the C library. The current version of the FoundationDB C API is |api-version|. ::
|
||||
|
||||
#define FDB_API_VERSION 630
|
||||
#define FDB_API_VERSION 700
|
||||
#include <foundationdb/fdb_c.h>
|
||||
|
||||
.. function:: fdb_error_t fdb_select_api_version(int version)
|
||||
|
|
|
@ -147,7 +147,7 @@
|
|||
.. |atomic-versionstamps-tuple-warning-value| replace::
|
||||
At this time, versionstamped values are not compatible with the Tuple layer except in Java, Python, and Go. Note that this implies versionstamped values may not be used with the Subspace and Directory layers except in those languages.
|
||||
|
||||
.. |api-version| replace:: 630
|
||||
.. |api-version| replace:: 700
|
||||
|
||||
.. |streaming-mode-blurb1| replace::
|
||||
When using |get-range-func| and similar interfaces, API clients can request large ranges of the database to iterate over. Making such a request doesn't necessarily mean that the client will consume all of the data in the range - sometimes the client doesn't know how far it intends to iterate in advance. FoundationDB tries to balance latency and bandwidth by requesting data for iteration in batches.
|
||||
|
|
|
@ -40,7 +40,7 @@ FoundationDB may return the following error codes from API functions. If you nee
|
|||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| external_client_already_loaded | 1040| External client has already been loaded |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| proxy_memory_limit_exceeded | 1042| Proxy commit memory limit exceeded |
|
||||
| proxy_memory_limit_exceeded | 1042| CommitProxy commit memory limit exceeded |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| batch_transaction_throttled | 1051| Batch GRV request rate limit exceeded |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
|
@ -146,6 +146,16 @@ FoundationDB may return the following error codes from API functions. If you nee
|
|||
| special_keys_no_module_found | 2113| Special key space range read does not intersect a module. |
|
||||
| | | Refer to the ``SPECIAL_KEY_SPACE_RELAXED`` transaction option for more details.|
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| special_keys_write_disabled | 2114| Special key space is not allowed to write by default. Refer |
|
||||
| | | to the ``SPECIAL_KEY_SPACE_ENABLE_WRITES`` transaction option for more details.|
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| special_keys_no_write_module_found | 2115| Special key space key or keyrange in set or clear does not intersect a module. |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| special_keys_cross_module_write | 2116| Special key space clear crosses modules |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| special_keys_api_failure | 2117| Api call through special keys failed. For more information, read the |
|
||||
| | | ``0xff0xff/error_message`` key |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| api_version_unset | 2200| API version is not set |
|
||||
+-----------------------------------------------+-----+--------------------------------------------------------------------------------+
|
||||
| api_version_already_set | 2201| API version may be set only once |
|
||||
|
|
|
@ -108,7 +108,7 @@ Opening a database
|
|||
After importing the ``fdb`` module and selecting an API version, you probably want to open a :class:`Database` using :func:`open`::
|
||||
|
||||
import fdb
|
||||
fdb.api_version(630)
|
||||
fdb.api_version(700)
|
||||
db = fdb.open()
|
||||
|
||||
.. function:: open( cluster_file=None, event_model=None )
|
||||
|
|
|
@ -93,7 +93,7 @@ Opening a database
|
|||
After requiring the ``FDB`` gem and selecting an API version, you probably want to open a :class:`Database` using :func:`open`::
|
||||
|
||||
require 'fdb'
|
||||
FDB.api_version 630
|
||||
FDB.api_version 700
|
||||
db = FDB.open
|
||||
|
||||
.. function:: open( cluster_file=nil ) -> Database
|
||||
|
|
|
@ -6,7 +6,7 @@ FoundationDB makes your architecture flexible and easy to operate. Your applicat
|
|||
|
||||
The following diagram details the logical architecture.
|
||||
|
||||
.. image:: /images/Architecture.png
|
||||
|image0|
|
||||
|
||||
|
||||
Detailed FoundationDB Architecture
|
||||
|
@ -362,6 +362,7 @@ Documentation <https://github.com/apple/foundationdb/blob/master/design/data-dis
|
|||
`Recovery
|
||||
Documentation <https://github.com/apple/foundationdb/blob/master/design/recovery-internals.md>`__
|
||||
|
||||
.. |image0| image:: images/Architecture.png
|
||||
.. |image1| image:: images/architecture-1.jpeg
|
||||
.. |image2| image:: images/architecture-2.jpeg
|
||||
.. |image3| image:: images/architecture-3.jpeg
|
||||
|
|
|
@ -29,7 +29,7 @@ Before using the API, we need to specify the API version. This allows programs t
|
|||
|
||||
.. code-block:: go
|
||||
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
|
||||
Next, we open a FoundationDB database. The API will connect to the FoundationDB cluster indicated by the :ref:`default cluster file <default-cluster-file>`.
|
||||
|
||||
|
@ -78,7 +78,7 @@ If this is all working, it looks like we are ready to start building a real appl
|
|||
|
||||
func main() {
|
||||
// Different API versions may expose different runtime behaviors.
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
|
||||
// Open the default database from the system cluster
|
||||
db := fdb.MustOpenDefault()
|
||||
|
@ -666,7 +666,7 @@ Here's the code for the scheduling tutorial:
|
|||
}
|
||||
|
||||
func main() {
|
||||
fdb.MustAPIVersion(630)
|
||||
fdb.MustAPIVersion(700)
|
||||
db := fdb.MustOpenDefault()
|
||||
db.Options().SetTransactionTimeout(60000) // 60,000 ms = 1 minute
|
||||
db.Options().SetTransactionRetryLimit(100)
|
||||
|
|
|
@ -30,7 +30,7 @@ Before using the API, we need to specify the API version. This allows programs t
|
|||
private static final Database db;
|
||||
|
||||
static {
|
||||
fdb = FDB.selectAPIVersion(630);
|
||||
fdb = FDB.selectAPIVersion(700);
|
||||
db = fdb.open();
|
||||
}
|
||||
|
||||
|
@ -66,7 +66,7 @@ If this is all working, it looks like we are ready to start building a real appl
|
|||
private static final Database db;
|
||||
|
||||
static {
|
||||
fdb = FDB.selectAPIVersion(630);
|
||||
fdb = FDB.selectAPIVersion(700);
|
||||
db = fdb.open();
|
||||
}
|
||||
|
||||
|
@ -441,7 +441,7 @@ Here's the code for the scheduling tutorial:
|
|||
private static final Database db;
|
||||
|
||||
static {
|
||||
fdb = FDB.selectAPIVersion(630);
|
||||
fdb = FDB.selectAPIVersion(700);
|
||||
db = fdb.open();
|
||||
db.options().setTransactionTimeout(60000); // 60,000 ms = 1 minute
|
||||
db.options().setTransactionRetryLimit(100);
|
||||
|
|
|
@ -23,7 +23,7 @@ Open a Ruby interactive interpreter and import the FoundationDB API module::
|
|||
|
||||
Before using the API, we need to specify the API version. This allows programs to maintain compatibility even if the API is modified in future versions::
|
||||
|
||||
> FDB.api_version 630
|
||||
> FDB.api_version 700
|
||||
=> nil
|
||||
|
||||
Next, we open a FoundationDB database. The API will connect to the FoundationDB cluster indicated by the :ref:`default cluster file <default-cluster-file>`. ::
|
||||
|
@ -46,7 +46,7 @@ If this is all working, it looks like we are ready to start building a real appl
|
|||
.. code-block:: ruby
|
||||
|
||||
require 'fdb'
|
||||
FDB.api_version 630
|
||||
FDB.api_version 700
|
||||
@db = FDB.open
|
||||
@db['hello'] = 'world'
|
||||
print 'hello ', @db['hello']
|
||||
|
@ -373,7 +373,7 @@ Here's the code for the scheduling tutorial:
|
|||
|
||||
require 'fdb'
|
||||
|
||||
FDB.api_version 630
|
||||
FDB.api_version 700
|
||||
|
||||
####################################
|
||||
## Initialization ##
|
||||
|
|
|
@ -30,7 +30,7 @@ Open a Python interactive interpreter and import the FoundationDB API module::
|
|||
|
||||
Before using the API, we need to specify the API version. This allows programs to maintain compatibility even if the API is modified in future versions::
|
||||
|
||||
>>> fdb.api_version(630)
|
||||
>>> fdb.api_version(700)
|
||||
|
||||
Next, we open a FoundationDB database. The API will connect to the FoundationDB cluster indicated by the :ref:`default cluster file <default-cluster-file>`. ::
|
||||
|
||||
|
@ -48,7 +48,7 @@ When this command returns without exception, the modification is durably stored
|
|||
If this is all working, it looks like we are ready to start building a real application. For reference, here's the full code for "hello world"::
|
||||
|
||||
import fdb
|
||||
fdb.api_version(630)
|
||||
fdb.api_version(700)
|
||||
db = fdb.open()
|
||||
db[b'hello'] = b'world'
|
||||
print 'hello', db[b'hello']
|
||||
|
@ -91,7 +91,7 @@ FoundationDB includes a few tools that make it easy to model data using this app
|
|||
opening a :ref:`directory <developer-guide-directories>` in the database::
|
||||
|
||||
import fdb
|
||||
fdb.api_version(630)
|
||||
fdb.api_version(700)
|
||||
|
||||
db = fdb.open()
|
||||
scheduling = fdb.directory.create_or_open(db, ('scheduling',))
|
||||
|
@ -337,7 +337,7 @@ Here's the code for the scheduling tutorial::
|
|||
import fdb
|
||||
import fdb.tuple
|
||||
|
||||
fdb.api_version(630)
|
||||
fdb.api_version(700)
|
||||
|
||||
|
||||
####################################
|
||||
|
|
|
@ -456,16 +456,20 @@ disable
|
|||
|
||||
``throttle disable auto``
|
||||
|
||||
Disables cluster auto-throttling for busy transaction tags. This does not disable any currently active throttles. To do so, run the following command after disabling auto-throttling::
|
||||
|
||||
> throttle off auto
|
||||
Disables cluster auto-throttling for busy transaction tags. This may not disable currently active throttles immediately; a delay of several seconds is expected.
|
||||
|
||||
list
|
||||
^^^^
|
||||
|
||||
``throttle list [LIMIT]``
|
||||
``throttle list [throttled|recommended|all] [LIMIT]``
|
||||
|
||||
Prints a list of currently active transaction tag throttles.
|
||||
Prints a list of currently active transaction tag throttles, or recommended transaction tag throttles if auto-throttling is disabled.
|
||||
|
||||
``throttled`` - list active transaction tag throttles.
|
||||
|
||||
``recommended`` - list transaction tag throttles recommended by the ratekeeper, but not active yet.
|
||||
|
||||
``all`` - list both active and recommended transaction tag throttles.
|
||||
|
||||
``LIMIT`` - The number of throttles to print. Defaults to 100.
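For example, to show up to 10 transaction tag throttles that are recommended by the ratekeeper but not yet active (an illustrative invocation of the syntax documented above)::

    > throttle list recommended 10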
|
||||
|
||||
|
|
|
@ -583,7 +583,7 @@ Clients should also specify their datacenter with the database option ``datacent
|
|||
Changing the region configuration
|
||||
---------------------------------
|
||||
|
||||
To change the region configure, use the ``fileconfigure`` command ``fdbcli``. For example::
|
||||
To change the region configuration, use the ``fileconfigure`` command of ``fdbcli``. For example::
|
||||
|
||||
user@host$ fdbcli
|
||||
Using cluster file `/etc/foundationdb/fdb.cluster'.
|
||||
|
|
|
@ -763,8 +763,8 @@ Special keys
|
|||
Keys starting with the bytes ``\xff\xff`` are called "special" keys, and they are materialized when read. :doc:`\\xff\\xff/status/json <mr-status>` is an example of a special key.
|
||||
As of api version 630, additional features have been exposed as special keys and are available to read as ranges instead of just individual keys. Additionally, the special keys are now organized into "modules".
|
||||
|
||||
Modules
|
||||
-------
|
||||
Read-only modules
|
||||
-----------------
|
||||
|
||||
A module is loosely defined as a key range in the special key space where a user can expect similar behavior from reading any key in that range.
|
||||
By default, users will see a ``special_keys_no_module_found`` error if they read from a range not contained in a module.
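As a small illustration (a sketch, not part of this patch), reading a key inside a read-only module from the Python bindings works like any other read; ``\xff\xff/status/json`` is used here because it is the example special key mentioned above::

    import json
    import fdb

    fdb.api_version(700)
    db = fdb.open()

    # Special keys are materialized when read; this returns the cluster status document.
    status = json.loads(db[b'\xff\xff/status/json'].decode('utf-8'))
    print(sorted(status.keys()))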
|
||||
|
@ -912,6 +912,59 @@ Caveats

#. ``\xff\xff/metrics/health/`` These keys may return data that's several seconds old, and the data may not be available for a brief period during recovery. This will be indicated by the keys being absent.


Read/write modules
------------------

As of api version 700, some modules in the special key space allow writes as
well as reads. In these modules, a user can expect that mutations (i.e. sets,
clears, etc) do not have side-effects outside of the current transaction
until commit is called (the same is true for writes to the normal key space).
A user can also expect the effects on commit to be atomic. Reads to
special keys may require reading system keys (whose format is an implementation
detail), and for those reads appropriate read conflict ranges are added on
the underlying system keys.

Writes to read/write modules in the special key space are disabled by
default. Use the ``special_key_space_enable_writes`` transaction option to
enable them [#special_key_space_enable_writes]_.
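
In the Java bindings this option corresponds to a generated transaction-option setter. A minimal sketch, assuming the setter is named ``setSpecialKeySpaceEnableWrites()`` (verify against your bindings):

.. code-block:: java

    // Opt a single transaction in to special-key-space writes.
    db.run(tr -> {
        tr.options().setSpecialKeySpaceEnableWrites();
        // ... writes to read/write special-key modules go here ...
        return null;
    });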

.. _special-key-space-management-module:

Management module
~~~~~~~~~~~~~~~~~

The management module is for temporary cluster configuration changes. For
example, in order to safely remove a process from the cluster, one can add an
exclusion to the ``\xff\xff/management/excluded/`` key prefix that matches
that process, and wait for necessary data to be moved away.

#. ``\xff\xff/management/excluded/<exclusion>`` Read/write. Indicates that the cluster should move data away from processes matching ``<exclusion>``, so that they can be safely removed. See :ref:`removing machines from a cluster <removing-machines-from-a-cluster>` for documentation for the corresponding fdbcli command.
#. ``\xff\xff/management/failed/<exclusion>`` Read/write. Indicates that the cluster should consider matching processes as permanently failed. This allows the cluster to avoid maintaining extra state and doing extra work in the hope that these processes come back. See :ref:`removing machines from a cluster <removing-machines-from-a-cluster>` for documentation for the corresponding fdbcli command.
#. ``\xff\xff/management/inProgressExclusion/<address>`` Read-only. Indicates that the process matching ``<address>`` matches an exclusion, but still has necessary data and can't yet be safely removed.
#. ``\xff\xff/management/options/excluded/force`` Read/write. Setting this key disables safety checks for writes to ``\xff\xff/management/excluded/<exclusion>``. Setting this key only has an effect in the current transaction and is not persisted on commit.
#. ``\xff\xff/management/options/failed/force`` Read/write. Setting this key disables safety checks for writes to ``\xff\xff/management/failed/<exclusion>``. Setting this key only has an effect in the current transaction and is not persisted on commit.

An exclusion is syntactically either an ip address (e.g. ``127.0.0.1``), or
an ip address and port (e.g. ``127.0.0.1:4500``). If no port is specified,
then all processes on that host match the exclusion.
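
Putting the pieces above together, a client could exclude a process and wait for its data to drain roughly as follows. This is a sketch rather than a documented procedure: the address, the empty value written to the exclusion key, and the option-setter name are assumptions made for illustration.

.. code-block:: java

    import com.apple.foundationdb.Database;
    import com.apple.foundationdb.FDB;
    import com.apple.foundationdb.Range;
    import com.apple.foundationdb.tuple.ByteArrayUtil;

    import java.nio.charset.StandardCharsets;

    public class ExcludeViaSpecialKeys {
        private static final byte[] SPECIAL = {(byte) 0xff, (byte) 0xff};

        private static byte[] specialKey(String path) {
            return ByteArrayUtil.join(SPECIAL, path.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws InterruptedException {
            FDB fdb = FDB.selectAPIVersion(700);
            try (Database db = fdb.open()) {
                // Ask the cluster to move data away from one process (address is illustrative).
                db.run(tr -> {
                    tr.options().setSpecialKeySpaceEnableWrites();
                    tr.set(specialKey("/management/excluded/127.0.0.1:4500"), new byte[0]);
                    return null;
                });

                // Poll until no exclusion is reported as still in progress.
                while (true) {
                    boolean stillMoving = db.read(tr -> !tr.getRange(
                            Range.startsWith(specialKey("/management/inProgressExclusion/")))
                            .asList().join().isEmpty());
                    if (!stillMoving) {
                        break;
                    }
                    Thread.sleep(5000);
                }
            }
        }
    }

Note that this sketch waits for all in-progress exclusions, not just the one it wrote; a real client would filter the range by address.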

Error message module
~~~~~~~~~~~~~~~~~~~~

Each module written to validates the transaction before committing, and a
failure of this validation is indicated by a ``special_keys_api_failure`` error.
More detailed information about why this validation failed can be accessed through the ``\xff\xff/error_message`` key, whose value is a JSON document with the following schema.

========================== ======== ===============
**Field**                  **Type** **Description**
-------------------------- -------- ---------------
retriable                  boolean  Whether or not this operation might succeed if retried
command                    string   The fdbcli command corresponding to this operation
message                    string   Help text explaining the reason this operation failed
========================== ======== ===============
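
For example, after catching a ``special_keys_api_failure`` a client might fetch this document with a fragment like the one below. It is a sketch only: it assumes the failing transaction object ``tr`` is still readable, and it leaves JSON parsing to the caller.

.. code-block:: java

    // Read the explanation for the most recent special-key validation failure.
    byte[] errorKey = ByteArrayUtil.join(new byte[] {(byte) 0xff, (byte) 0xff},
            "/error_message".getBytes(StandardCharsets.UTF_8));
    byte[] raw = tr.get(errorKey).join();
    if (raw != null) {
        // The value is a JSON document with "retriable", "command" and "message" fields.
        System.out.println(new String(raw, StandardCharsets.UTF_8));
    }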

Performance considerations
==========================
@ -1114,3 +1167,4 @@ At a first glance this looks very similar to an ``commit_unknown_result``. Howev

.. [#conflicting_keys] In practice, the transaction probably committed successfully. However, if you're running multiple resolvers then it's possible for a transaction to cause another to abort even if it doesn't commit successfully.
.. [#max_read_transaction_life_versions] The number 5000000 comes from the server knob MAX_READ_TRANSACTION_LIFE_VERSIONS.
.. [#special_key_space_enable_writes] Enabling this option enables other transaction options, such as ``ACCESS_SYSTEM_KEYS``. This may change in the future.
@ -104,7 +104,7 @@ Field Name Description
``Name for the snapshot file``   recommended name for the disk snapshot                   cluster-name:ip-addr:port:UID
================================ ======================================================== ========================================================

``snapshot create binary`` will not be invoked on processes which does not have any persistent data (for example, Cluster Controller or Master or MasterProxy). Since these processes are stateless, there is no need for a snapshot. Any specialized configuration knobs used for one of these stateless processes need to be copied and restored externally.
``snapshot create binary`` will not be invoked on processes which do not have any persistent data (for example, Cluster Controller or Master or CommitProxy). Since these processes are stateless, there is no need for a snapshot. Any specialized configuration knobs used for one of these stateless processes need to be copied and restored externally.

Management of disk snapshots
----------------------------
@ -51,7 +51,7 @@ C

FoundationDB's C bindings are installed with the FoundationDB client binaries. You can find more details in the :doc:`C API Documentation <api-c>`.

Python 2.7 - 3.5
Python 2.7 - 3.7
----------------

On macOS and Windows, the FoundationDB Python API bindings are installed as part of your FoundationDB installation.
@ -69,7 +69,7 @@ Here’s a basic implementation of the recipe.

    private static final long EMPTY_ARRAY = -1;

    static {
        fdb = FDB.selectAPIVersion(630);
        fdb = FDB.selectAPIVersion(700);
        db = fdb.open();
        docSpace = new Subspace(Tuple.from("D"));
    }