Merge branch 'release-6.0' into track-server-request-latencies

This commit is contained in:
A.J. Beamon 2019-01-16 13:39:01 -08:00
commit 7498c2308c
34 changed files with 1343 additions and 467 deletions


@ -10,38 +10,38 @@ macOS
The macOS installation package is supported on macOS 10.7+. It includes the client and (optionally) the server.
* `FoundationDB-6.0.15.pkg <https://www.foundationdb.org/downloads/6.0.15/macOS/installers/FoundationDB-6.0.15.pkg>`_
* `FoundationDB-6.0.18.pkg <https://www.foundationdb.org/downloads/6.0.18/macOS/installers/FoundationDB-6.0.18.pkg>`_
Ubuntu
------
The Ubuntu packages are supported on 64-bit Ubuntu 12.04+, but beware of the Linux kernel bug in Ubuntu 12.x.
* `foundationdb-clients-6.0.15-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.15/ubuntu/installers/foundationdb-clients_6.0.15-1_amd64.deb>`_
* `foundationdb-server-6.0.15-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.15/ubuntu/installers/foundationdb-server_6.0.15-1_amd64.deb>`_ (depends on the clients package)
* `foundationdb-clients-6.0.18-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.18/ubuntu/installers/foundationdb-clients_6.0.18-1_amd64.deb>`_
* `foundationdb-server-6.0.18-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.18/ubuntu/installers/foundationdb-server_6.0.18-1_amd64.deb>`_ (depends on the clients package)
RHEL/CentOS EL6
---------------
The RHEL/CentOS EL6 packages are supported on 64-bit RHEL/CentOS 6.x.
* `foundationdb-clients-6.0.15-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel6/installers/foundationdb-clients-6.0.15-1.el6.x86_64.rpm>`_
* `foundationdb-server-6.0.15-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel6/installers/foundationdb-server-6.0.15-1.el6.x86_64.rpm>`_ (depends on the clients package)
* `foundationdb-clients-6.0.18-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.18/rhel6/installers/foundationdb-clients-6.0.18-1.el6.x86_64.rpm>`_
* `foundationdb-server-6.0.18-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.18/rhel6/installers/foundationdb-server-6.0.18-1.el6.x86_64.rpm>`_ (depends on the clients package)
RHEL/CentOS EL7
---------------
The RHEL/CentOS EL7 packages are supported on 64-bit RHEL/CentOS 7.x.
* `foundationdb-clients-6.0.15-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel7/installers/foundationdb-clients-6.0.15-1.el7.x86_64.rpm>`_
* `foundationdb-server-6.0.15-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel7/installers/foundationdb-server-6.0.15-1.el7.x86_64.rpm>`_ (depends on the clients package)
* `foundationdb-clients-6.0.18-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.18/rhel7/installers/foundationdb-clients-6.0.18-1.el7.x86_64.rpm>`_
* `foundationdb-server-6.0.18-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.18/rhel7/installers/foundationdb-server-6.0.18-1.el7.x86_64.rpm>`_ (depends on the clients package)
Windows
-------
The Windows installer is supported on 64-bit Windows XP and later. It includes the client and (optionally) the server.
* `foundationdb-6.0.15-x64.msi <https://www.foundationdb.org/downloads/6.0.15/windows/installers/foundationdb-6.0.15-x64.msi>`_
* `foundationdb-6.0.18-x64.msi <https://www.foundationdb.org/downloads/6.0.18/windows/installers/foundationdb-6.0.18-x64.msi>`_
API Language Bindings
=====================
@ -58,18 +58,18 @@ On macOS and Windows, the FoundationDB Python API bindings are installed as part
If you need to use the FoundationDB Python API from other Python installations or paths, download the Python package:
* `foundationdb-6.0.15.tar.gz <https://www.foundationdb.org/downloads/6.0.15/bindings/python/foundationdb-6.0.15.tar.gz>`_
* `foundationdb-6.0.18.tar.gz <https://www.foundationdb.org/downloads/6.0.18/bindings/python/foundationdb-6.0.18.tar.gz>`_
Ruby 1.9.3/2.0.0+
-----------------
* `fdb-6.0.15.gem <https://www.foundationdb.org/downloads/6.0.15/bindings/ruby/fdb-6.0.15.gem>`_
* `fdb-6.0.18.gem <https://www.foundationdb.org/downloads/6.0.18/bindings/ruby/fdb-6.0.18.gem>`_
Java 8+
-------
* `fdb-java-6.0.15.jar <https://www.foundationdb.org/downloads/6.0.15/bindings/java/fdb-java-6.0.15.jar>`_
* `fdb-java-6.0.15-javadoc.jar <https://www.foundationdb.org/downloads/6.0.15/bindings/java/fdb-java-6.0.15-javadoc.jar>`_
* `fdb-java-6.0.18.jar <https://www.foundationdb.org/downloads/6.0.18/bindings/java/fdb-java-6.0.18.jar>`_
* `fdb-java-6.0.18-javadoc.jar <https://www.foundationdb.org/downloads/6.0.18/bindings/java/fdb-java-6.0.18-javadoc.jar>`_
Go 1.1+
-------


@ -2,9 +2,49 @@
Release Notes
#############
6.0.18
======
Fixes
-----
* Backup metadata could falsely indicate that a backup is not usable. `(PR #1007) <https://github.com/apple/foundationdb/pull/1007>`_
* Blobstore request failures could cause backup expire and delete operations to skip some files. `(PR #1007) <https://github.com/apple/foundationdb/pull/1007>`_
* Blobstore request failures could cause restore to fail to apply some files. `(PR #1007) <https://github.com/apple/foundationdb/pull/1007>`_
* Storage servers with large amounts of data would pause for a short period of time after rebooting. `(PR #1001) <https://github.com/apple/foundationdb/pull/1001>`_
* The client library could leak memory when a thread died. `(PR #1011) <https://github.com/apple/foundationdb/pull/1011>`_
Features
--------
* Added the ability to specify versions as a number of days' worth of versions before the latest log version in a backup. `(PR #1007) <https://github.com/apple/foundationdb/pull/1007>`_
6.0.17
======
Fixes
-----
* Existing backups did not make progress when upgraded to 6.0.16. `(PR #962) <https://github.com/apple/foundationdb/pull/962>`_
6.0.16
======
Performance
-----------
* Added a new backup folder scheme which results in far fewer kv range folders. `(PR #939) <https://github.com/apple/foundationdb/pull/939>`_
Fixes
-----
* Blobstore REST client attempted to create buckets that already existed. `(PR #923) <https://github.com/apple/foundationdb/pull/923>`_
* DNS would fail if IPv6 responses were received. `(PR #945) <https://github.com/apple/foundationdb/pull/945>`_
* Backup expiration would occasionally fail due to an incorrect assert. `(PR #926) <https://github.com/apple/foundationdb/pull/926>`_
6.0.15
======
Features
--------
@ -68,7 +108,6 @@ Fixes
* HTTP client used by backup-to-blobstore now correctly treats response header field names as case insensitive. [6.0.15] `(PR #904) <https://github.com/apple/foundationdb/pull/904>`_
* Blobstore REST client was not following the S3 API in several ways (bucket name, date, and response formats). [6.0.15] `(PR #914) <https://github.com/apple/foundationdb/pull/914>`_
* Data distribution could queue shard movements for restoring replication at a low priority. [6.0.15] `(PR #907) <https://github.com/apple/foundationdb/pull/907>`_
* Blobstore REST client will no longer attempt to create a bucket that already exists. [6.0.16] `(PR #923) <https://github.com/apple/foundationdb/pull/923>`_
Fixes only impacting 6.0.0+
---------------------------


@ -76,7 +76,7 @@ enum enumProgramExe {
};
enum enumBackupType {
BACKUP_UNDEFINED=0, BACKUP_START, BACKUP_STATUS, BACKUP_ABORT, BACKUP_WAIT, BACKUP_DISCONTINUE, BACKUP_PAUSE, BACKUP_RESUME, BACKUP_EXPIRE, BACKUP_DELETE, BACKUP_DESCRIBE, BACKUP_LIST
BACKUP_UNDEFINED=0, BACKUP_START, BACKUP_STATUS, BACKUP_ABORT, BACKUP_WAIT, BACKUP_DISCONTINUE, BACKUP_PAUSE, BACKUP_RESUME, BACKUP_EXPIRE, BACKUP_DELETE, BACKUP_DESCRIBE, BACKUP_LIST, BACKUP_DUMP
};
enum enumDBType {
@ -91,8 +91,10 @@ enum enumRestoreType {
enum {
// Backup constants
OPT_DESTCONTAINER, OPT_SNAPSHOTINTERVAL, OPT_ERRORLIMIT, OPT_NOSTOPWHENDONE,
OPT_EXPIRE_BEFORE_VERSION, OPT_EXPIRE_BEFORE_DATETIME, OPT_EXPIRE_RESTORABLE_AFTER_VERSION, OPT_EXPIRE_RESTORABLE_AFTER_DATETIME,
OPT_EXPIRE_BEFORE_VERSION, OPT_EXPIRE_BEFORE_DATETIME, OPT_EXPIRE_DELETE_BEFORE_DAYS,
OPT_EXPIRE_RESTORABLE_AFTER_VERSION, OPT_EXPIRE_RESTORABLE_AFTER_DATETIME, OPT_EXPIRE_MIN_RESTORABLE_DAYS,
OPT_BASEURL, OPT_BLOB_CREDENTIALS, OPT_DESCRIBE_DEEP, OPT_DESCRIBE_TIMESTAMPS,
OPT_DUMP_BEGIN, OPT_DUMP_END,
// Backup and Restore constants
OPT_TAGNAME, OPT_BACKUPKEYS, OPT_WAITFORDONE,
@ -118,7 +120,6 @@ CSimpleOpt::SOption g_rgAgentOptions[] = {
#endif
{ OPT_CLUSTERFILE, "-C", SO_REQ_SEP },
{ OPT_CLUSTERFILE, "--cluster_file", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_KNOB, "--knob_", SO_REQ_SEP },
{ OPT_VERSION, "--version", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
@ -126,6 +127,7 @@ CSimpleOpt::SOption g_rgAgentOptions[] = {
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_CRASHONERROR, "--crash", SO_NONE },
{ OPT_LOCALITY, "--locality_", SO_REQ_SEP },
{ OPT_MEMLIMIT, "-m", SO_REQ_SEP },
@ -161,6 +163,7 @@ CSimpleOpt::SOption g_rgBackupStartOptions[] = {
{ OPT_DRYRUN, "--dryrun", SO_NONE },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -190,6 +193,7 @@ CSimpleOpt::SOption g_rgBackupStatusOptions[] = {
{ OPT_TAGNAME, "--tagname", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_VERSION, "--version", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
{ OPT_QUIET, "-q", SO_NONE },
@ -215,6 +219,7 @@ CSimpleOpt::SOption g_rgBackupAbortOptions[] = {
{ OPT_TAGNAME, "--tagname", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -242,6 +247,7 @@ CSimpleOpt::SOption g_rgBackupDiscontinueOptions[] = {
{ OPT_WAITFORDONE, "--waitfordone", SO_NONE },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -269,6 +275,7 @@ CSimpleOpt::SOption g_rgBackupWaitOptions[] = {
{ OPT_NOSTOPWHENDONE, "--no-stop-when-done",SO_NONE },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -292,6 +299,7 @@ CSimpleOpt::SOption g_rgBackupPauseOptions[] = {
{ OPT_CLUSTERFILE, "--cluster_file", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -317,6 +325,7 @@ CSimpleOpt::SOption g_rgBackupExpireOptions[] = {
{ OPT_DESTCONTAINER, "--destcontainer", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
@ -336,6 +345,8 @@ CSimpleOpt::SOption g_rgBackupExpireOptions[] = {
{ OPT_EXPIRE_RESTORABLE_AFTER_DATETIME, "--restorable_after_timestamp", SO_REQ_SEP },
{ OPT_EXPIRE_BEFORE_VERSION, "--expire_before_version", SO_REQ_SEP },
{ OPT_EXPIRE_BEFORE_DATETIME, "--expire_before_timestamp", SO_REQ_SEP },
{ OPT_EXPIRE_MIN_RESTORABLE_DAYS, "--min_restorable_days", SO_REQ_SEP },
{ OPT_EXPIRE_DELETE_BEFORE_DAYS, "--delete_before_days", SO_REQ_SEP },
SO_END_OF_OPTIONS
};
@ -348,6 +359,7 @@ CSimpleOpt::SOption g_rgBackupDeleteOptions[] = {
{ OPT_DESTCONTAINER, "--destcontainer", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
@ -375,6 +387,7 @@ CSimpleOpt::SOption g_rgBackupDescribeOptions[] = {
{ OPT_DESTCONTAINER, "--destcontainer", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
@ -394,6 +407,36 @@ CSimpleOpt::SOption g_rgBackupDescribeOptions[] = {
SO_END_OF_OPTIONS
};
CSimpleOpt::SOption g_rgBackupDumpOptions[] = {
#ifdef _WIN32
{ OPT_PARENTPID, "--parentpid", SO_REQ_SEP },
#endif
{ OPT_CLUSTERFILE, "-C", SO_REQ_SEP },
{ OPT_CLUSTERFILE, "--cluster_file", SO_REQ_SEP },
{ OPT_DESTCONTAINER, "-d", SO_REQ_SEP },
{ OPT_DESTCONTAINER, "--destcontainer", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
{ OPT_CRASHONERROR, "--crash", SO_NONE },
{ OPT_MEMLIMIT, "-m", SO_REQ_SEP },
{ OPT_MEMLIMIT, "--memory", SO_REQ_SEP },
{ OPT_HELP, "-?", SO_NONE },
{ OPT_HELP, "-h", SO_NONE },
{ OPT_HELP, "--help", SO_NONE },
{ OPT_DEVHELP, "--dev-help", SO_NONE },
{ OPT_BLOB_CREDENTIALS, "--blob_credentials", SO_REQ_SEP },
{ OPT_KNOB, "--knob_", SO_REQ_SEP },
{ OPT_DUMP_BEGIN, "--begin", SO_REQ_SEP },
{ OPT_DUMP_END, "--end", SO_REQ_SEP },
SO_END_OF_OPTIONS
};
CSimpleOpt::SOption g_rgBackupListOptions[] = {
#ifdef _WIN32
{ OPT_PARENTPID, "--parentpid", SO_REQ_SEP },
@ -402,6 +445,7 @@ CSimpleOpt::SOption g_rgBackupListOptions[] = {
{ OPT_BASEURL, "--base_url", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
@ -439,6 +483,7 @@ CSimpleOpt::SOption g_rgRestoreOptions[] = {
{ OPT_DBVERSION, "-v", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_DRYRUN, "-n", SO_NONE },
@ -472,6 +517,7 @@ CSimpleOpt::SOption g_rgDBAgentOptions[] = {
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_CRASHONERROR, "--crash", SO_NONE },
{ OPT_LOCALITY, "--locality_", SO_REQ_SEP },
{ OPT_MEMLIMIT, "-m", SO_REQ_SEP },
@ -498,6 +544,7 @@ CSimpleOpt::SOption g_rgDBStartOptions[] = {
{ OPT_BACKUPKEYS, "--keys", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -527,6 +574,7 @@ CSimpleOpt::SOption g_rgDBStatusOptions[] = {
{ OPT_TAGNAME, "--tagname", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_VERSION, "--version", SO_NONE },
{ OPT_VERSION, "-v", SO_NONE },
{ OPT_QUIET, "-q", SO_NONE },
@ -554,6 +602,7 @@ CSimpleOpt::SOption g_rgDBSwitchOptions[] = {
{ OPT_TAGNAME, "--tagname", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -582,6 +631,7 @@ CSimpleOpt::SOption g_rgDBAbortOptions[] = {
{ OPT_TAGNAME, "--tagname", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -607,6 +657,7 @@ CSimpleOpt::SOption g_rgDBPauseOptions[] = {
{ OPT_DEST_CLUSTER, "--destination", SO_REQ_SEP },
{ OPT_TRACE, "--log", SO_NONE },
{ OPT_TRACE_DIR, "--logdir", SO_REQ_SEP },
{ OPT_TRACE_LOG_GROUP, "--loggroup", SO_REQ_SEP },
{ OPT_QUIET, "-q", SO_NONE },
{ OPT_QUIET, "--quiet", SO_NONE },
{ OPT_VERSION, "--version", SO_NONE },
@ -724,10 +775,16 @@ static void printBackupUsage(bool devhelp) {
" in the database to obtain a cutoff version very close to the timestamp given in YYYY-MM-DD.HH:MI:SS format (UTC).\n");
printf(" --expire_before_version VERSION\n"
" Version cutoff for expire operations. Deletes data files containing no data at or after VERSION.\n");
printf(" --delete_before_days NUM_DAYS\n"
" Another way to specify the version cutoff for expire operations. Deletes data files containing no data at or after a\n"
" version approximately NUM_DAYS days' worth of versions prior to the latest log version in the backup.\n");
printf(" --restorable_after_timestamp DATETIME\n"
" For expire operations, set minimum acceptable restorability to the version equivalent of DATETIME and later.\n");
printf(" --restorable_after_version VERSION\n"
" For expire operations, set minimum acceptable restorability to the VERSION and later.\n");
printf(" --min_restorable_days NUM_DAYS\n"
" For expire operations, set the minimum acceptable restorability to approximately NUM_DAYS days' worth of versions\n"
" prior to the latest log version in the backup.\n");
printf(" --version_timestamps\n");
printf(" For describe operations, lookup versions in the database to obtain timestamps. A cluster file is required.\n");
printf(" -f, --force For expire operations, force expiration even if minimum restorability would be violated.\n");
@ -736,7 +793,7 @@ static void printBackupUsage(bool devhelp) {
printf(" -e ERRORLIMIT The maximum number of errors printed by status (default is 10).\n");
printf(" -k KEYS List of key ranges to backup.\n"
" If not specified, the entire database will be backed up.\n");
printf(" -n, --dry-run For start or restore operations, performs a trial run with no actual changes made.\n");
printf(" -n, --dryrun For start or restore operations, performs a trial run with no actual changes made.\n");
printf(" -v, --version Print version information and exit.\n");
printf(" -w, --wait Wait for the backup to complete (allowed with `start' and `discontinue').\n");
printf(" -z, --no-stop-when-done\n"
@ -780,7 +837,7 @@ static void printRestoreUsage(bool devhelp ) {
printf(" -k KEYS List of key ranges from the backup to restore\n");
printf(" --remove_prefix PREFIX prefix to remove from the restored keys\n");
printf(" --add_prefix PREFIX prefix to add to the restored keys\n");
printf(" -n, --dry-run Perform a trial run with no changes made.\n");
printf(" -n, --dryrun Perform a trial run with no changes made.\n");
printf(" -v DBVERSION The version at which the database will be restored.\n");
printf(" -h, --help Display this help and exit.\n");
@ -969,6 +1026,7 @@ enumBackupType getBackupType(std::string backupType)
values["delete"] = BACKUP_DELETE;
values["describe"] = BACKUP_DESCRIBE;
values["list"] = BACKUP_LIST;
values["dump"] = BACKUP_DUMP;
}
auto i = values.find(backupType);
@ -1729,11 +1787,10 @@ ACTOR Future<Void> changeDBBackupResumed(Database src, Database dest, bool pause
return Void();
}
ACTOR Future<Void> runRestore(Database db, std::string tagName, std::string container, Standalone<VectorRef<KeyRangeRef>> ranges, Version dbVersion, bool performRestore, bool verbose, bool waitForDone, std::string addPrefix, std::string removePrefix) {
ACTOR Future<Void> runRestore(Database db, std::string tagName, std::string container, Standalone<VectorRef<KeyRangeRef>> ranges, Version targetVersion, bool performRestore, bool verbose, bool waitForDone, std::string addPrefix, std::string removePrefix) {
try
{
state FileBackupAgent backupAgent;
state int64_t restoreVersion = -1;
if(ranges.size() > 1) {
fprintf(stderr, "Currently only a single restore range is supported!\n");
@ -1742,53 +1799,46 @@ ACTOR Future<Void> runRestore(Database db, std::string tagName, std::string cont
state KeyRange range = (ranges.size() == 0) ? normalKeys : ranges.front();
if (performRestore) {
if(dbVersion == invalidVersion) {
BackupDescription desc = wait(IBackupContainer::openContainer(container)->describeBackup());
state Reference<IBackupContainer> bc = IBackupContainer::openContainer(container);
// If targetVersion is unset then use the maximum restorable version from the backup description
if(targetVersion == invalidVersion) {
if(verbose)
printf("No restore target version given, will use maximum restorable version from backup description.\n");
BackupDescription desc = wait(bc->describeBackup());
if(!desc.maxRestorableVersion.present()) {
fprintf(stderr, "The specified backup is not restorable to any version.\n");
throw restore_error();
}
dbVersion = desc.maxRestorableVersion.get();
}
Version _restoreVersion = wait(backupAgent.restore(db, KeyRef(tagName), KeyRef(container), waitForDone, dbVersion, verbose, range, KeyRef(addPrefix), KeyRef(removePrefix)));
restoreVersion = _restoreVersion;
}
else {
state Reference<IBackupContainer> bc = IBackupContainer::openContainer(container);
state BackupDescription description = wait(bc->describeBackup());
targetVersion = desc.maxRestorableVersion.get();
if(dbVersion <= 0) {
Void _ = wait(description.resolveVersionTimes(db));
if(description.maxRestorableVersion.present())
restoreVersion = description.maxRestorableVersion.get();
else {
fprintf(stderr, "Backup is not restorable\n");
throw restore_invalid_version();
}
}
else
restoreVersion = dbVersion;
state Optional<RestorableFileSet> rset = wait(bc->getRestoreSet(restoreVersion));
if(!rset.present()) {
fprintf(stderr, "Insufficient data to restore to version %lld\n", restoreVersion);
throw restore_invalid_version();
if(verbose)
printf("Using target restore version %lld\n", targetVersion);
}
// Display the restore information, if requested
if (verbose) {
printf("[DRY RUN] Restoring backup to version: %lld\n", (long long) restoreVersion);
printf("%s\n", description.toString().c_str());
}
}
if (performRestore) {
Version restoredVersion = wait(backupAgent.restore(db, KeyRef(tagName), KeyRef(container), waitForDone, targetVersion, verbose, range, KeyRef(addPrefix), KeyRef(removePrefix)));
if(waitForDone && verbose) {
// If restore completed then report version restored
printf("Restored to version %lld%s\n", (long long) restoreVersion, (performRestore) ? "" : " (DRY RUN)");
// If restore is now complete then report version restored
printf("Restored to version %lld\n", restoredVersion);
}
}
else {
state Optional<RestorableFileSet> rset = wait(bc->getRestoreSet(targetVersion));
if(!rset.present()) {
fprintf(stderr, "Insufficient data to restore to version %lld. Describe backup for more information.\n", targetVersion);
throw restore_invalid_version();
}
printf("Backup can be used to restore to version %lld\n", targetVersion);
}
}
catch (Error& e) {
if(e.code() == error_code_actor_cancelled)
throw;
@ -1823,6 +1873,33 @@ Reference<IBackupContainer> openBackupContainer(const char *name, std::string de
return c;
}
ACTOR Future<Void> dumpBackupData(const char *name, std::string destinationContainer, Version beginVersion, Version endVersion) {
state Reference<IBackupContainer> c = openBackupContainer(name, destinationContainer);
if(beginVersion < 0 || endVersion < 0) {
BackupDescription desc = wait(c->describeBackup());
if(!desc.maxLogEnd.present()) {
fprintf(stderr, "ERROR: Backup must have log data in order to use relative begin/end versions.\n");
throw backup_invalid_info();
}
if(beginVersion < 0) {
beginVersion += desc.maxLogEnd.get();
}
if(endVersion < 0) {
endVersion += desc.maxLogEnd.get();
}
}
printf("Scanning version range %lld to %lld\n", beginVersion, endVersion);
BackupFileList files = wait(c->dumpFileList(beginVersion, endVersion));
files.toStream(stdout);
return Void();
}
ACTOR Future<Void> expireBackupData(const char *name, std::string destinationContainer, Version endVersion, std::string endDatetime, Database db, bool force, Version restorableAfterVersion, std::string restorableAfterDatetime) {
if (!endDatetime.empty()) {
Version v = wait( timeKeeperVersionFromDatetime(endDatetime, db) );
@ -1842,8 +1919,35 @@ ACTOR Future<Void> expireBackupData(const char *name, std::string destinationCon
try {
Reference<IBackupContainer> c = openBackupContainer(name, destinationContainer);
Void _ = wait(c->expireData(endVersion, force, restorableAfterVersion));
printf("All data before version %lld is deleted.\n", endVersion);
state IBackupContainer::ExpireProgress progress;
state std::string lastProgress;
state Future<Void> expire = c->expireData(endVersion, force, &progress, restorableAfterVersion);
loop {
choose {
when(Void _ = wait(delay(5))) {
std::string p = progress.toString();
if(p != lastProgress) {
int spaces = lastProgress.size() - p.size();
printf("\r%s%s", p.c_str(), (spaces > 0 ? std::string(spaces, ' ').c_str() : "") );
lastProgress = p;
}
}
when(Void _ = wait(expire)) {
break;
}
}
}
std::string p = progress.toString();
int spaces = lastProgress.size() - p.size();
printf("\r%s%s\n", p.c_str(), (spaces > 0 ? std::string(spaces, ' ').c_str() : "") );
if(endVersion < 0)
printf("All data before %lld versions (%lld days) prior to latest backup log has been deleted.\n", -endVersion, -endVersion / ((int64_t)24 * 3600 * CLIENT_KNOBS->CORE_VERSIONSPERSECOND));
else
printf("All data before version %lld has been deleted.\n", endVersion);
}
catch (Error& e) {
if(e.code() == error_code_actor_cancelled)
@ -1864,18 +1968,25 @@ ACTOR Future<Void> deleteBackupContainer(const char *name, std::string destinati
state int numDeleted = 0;
state Future<Void> done = c->deleteContainer(&numDeleted);
state int lastUpdate = -1;
printf("Deleting %s...\n", destinationContainer.c_str());
loop {
choose {
when ( Void _ = wait(done) ) {
printf("The entire container has been deleted.\n");
break;
}
when ( Void _ = wait(delay(3)) ) {
printf("%d files have been deleted so far...\n", numDeleted);
when ( Void _ = wait(delay(5)) ) {
if(numDeleted != lastUpdate) {
printf("\r%d...", numDeleted);
lastUpdate = numDeleted;
}
}
}
}
printf("\r%d objects deleted\n", numDeleted);
printf("The entire container has been deleted.\n");
}
catch (Error& e) {
if(e.code() == error_code_actor_cancelled)
throw;
@ -2072,6 +2183,26 @@ static void addKeyRange(std::string optionValue, Standalone<VectorRef<KeyRangeRe
return;
}
Version parseVersion(const char *str) {
StringRef s((const uint8_t *)str, strlen(str));
if(s.endsWith(LiteralStringRef("days")) || s.endsWith(LiteralStringRef("d"))) {
float days;
if(sscanf(str, "%f", &days) != 1) {
fprintf(stderr, "Could not parse version: %s\n", str);
flushAndExit(FDB_EXIT_ERROR);
}
return (double)CLIENT_KNOBS->CORE_VERSIONSPERSECOND * 24 * 3600 * -days;
}
Version ver;
if(sscanf(str, "%lld", &ver) != 1) {
fprintf(stderr, "Could not parse version: %s\n", str);
flushAndExit(FDB_EXIT_ERROR);
}
return ver;
}
#ifdef ALLOC_INSTRUMENTATION
extern uint8_t *g_extra_memory;
#endif
@ -2150,6 +2281,9 @@ int main(int argc, char* argv[]) {
case BACKUP_DESCRIBE:
args = new CSimpleOpt(argc - 1, &argv[1], g_rgBackupDescribeOptions, SO_O_EXACT);
break;
case BACKUP_DUMP:
args = new CSimpleOpt(argc - 1, &argv[1], g_rgBackupDumpOptions, SO_O_EXACT);
break;
case BACKUP_LIST:
args = new CSimpleOpt(argc - 1, &argv[1], g_rgBackupListOptions, SO_O_EXACT);
break;
@ -2287,6 +2421,8 @@ int main(int argc, char* argv[]) {
uint64_t memLimit = 8LL << 30;
Optional<uint64_t> ti;
std::vector<std::string> blobCredentials;
Version dumpBegin = 0;
Version dumpEnd = std::numeric_limits<Version>::max();
if( argc == 1 ) {
printUsage(programExe, false);
@ -2396,6 +2532,8 @@ int main(int argc, char* argv[]) {
break;
case OPT_EXPIRE_BEFORE_VERSION:
case OPT_EXPIRE_RESTORABLE_AFTER_VERSION:
case OPT_EXPIRE_MIN_RESTORABLE_DAYS:
case OPT_EXPIRE_DELETE_BEFORE_DAYS:
{
const char* a = args->OptionArg();
long long ver = 0;
@ -2404,7 +2542,13 @@ int main(int argc, char* argv[]) {
printHelpTeaser(argv[0]);
return FDB_EXIT_ERROR;
}
if(optId == OPT_EXPIRE_BEFORE_VERSION)
// Interpret the value as days worth of versions relative to now (negative)
if(optId == OPT_EXPIRE_MIN_RESTORABLE_DAYS || optId == OPT_EXPIRE_DELETE_BEFORE_DAYS) {
ver = -ver * 24 * 60 * 60 * CLIENT_KNOBS->CORE_VERSIONSPERSECOND;
}
if(optId == OPT_EXPIRE_BEFORE_VERSION || optId == OPT_EXPIRE_DELETE_BEFORE_DAYS)
expireVersion = ver;
else
expireRestorableAfterVersion = ver;
@ -2536,6 +2680,12 @@ int main(int argc, char* argv[]) {
case OPT_BLOB_CREDENTIALS:
blobCredentials.push_back(args->OptionArg());
break;
case OPT_DUMP_BEGIN:
dumpBegin = parseVersion(args->OptionArg());
break;
case OPT_DUMP_END:
dumpEnd = parseVersion(args->OptionArg());
break;
}
}
@ -2857,11 +3007,17 @@ int main(int argc, char* argv[]) {
// Only pass a database option; describe will look up version timestamps if a cluster file was given, but quietly skip them if not.
f = stopAfter( describeBackup(argv[0], destinationContainer, describeDeep, describeTimestamps ? Optional<Database>(db) : Optional<Database>()) );
break;
case BACKUP_LIST:
initTraceFile();
f = stopAfter( listBackup(baseUrl) );
break;
case BACKUP_DUMP:
initTraceFile();
f = stopAfter( dumpBackupData(argv[0], destinationContainer, dumpBegin, dumpEnd) );
break;
case BACKUP_UNDEFINED:
default:
fprintf(stderr, "ERROR: Unsupported backup action %s\n", argv[1]);
@ -2872,8 +3028,13 @@ int main(int argc, char* argv[]) {
break;
case EXE_RESTORE:
if(!initCluster())
if(dryRun) {
initTraceFile();
}
else if(!initCluster()) {
return FDB_EXIT_ERROR;
}
switch(restoreType) {
case RESTORE_START:
f = stopAfter( runRestore(db, tagName, restoreContainer, backupKeys, dbVersion, !dryRun, !quietDisplay, waitForDone, addPrefix, removePrefix) );
@ -3009,5 +3170,5 @@ int main(int argc, char* argv[]) {
status = FDB_EXIT_MAIN_EXCEPTION;
}
return status;
flushAndExit(status);
}


@ -1624,6 +1624,11 @@ ACTOR Future<bool> configure( Database db, std::vector<StringRef> tokens, Refere
printf("Type `configure FORCE <TOKEN>*' to configure without this check\n");
ret=false;
break;
case ConfigurationResult::NOT_ENOUGH_WORKERS:
printf("ERROR: Not enough processes exist to support the specified configuration\n");
printf("Type `configure FORCE <TOKEN>*' to configure without this check\n");
ret=false;
break;
case ConfigurationResult::SUCCESS:
printf("Configuration changed\n");
ret=false;
@ -1730,7 +1735,12 @@ ACTOR Future<bool> fileConfigure(Database db, std::string filePath, bool isNewDa
break;
case ConfigurationResult::REGIONS_CHANGED:
printf("ERROR: The region configuration cannot be changed while simultaneously changing usable_regions\n");
printf("Type `fileconfigure FORCE <TOKEN>*' to configure without this check\n");
printf("Type `fileconfigure FORCE <FILENAME>' to configure without this check\n");
ret=false;
break;
case ConfigurationResult::NOT_ENOUGH_WORKERS:
printf("ERROR: Not enough processes exist to support the specified configuration\n");
printf("Type `fileconfigure FORCE <FILENAME>' to configure without this check\n");
ret=false;
break;
case ConfigurationResult::SUCCESS:


@ -276,7 +276,7 @@ public:
// stopWhenDone will return when the backup is stopped, if enabled. Otherwise, it
// will return when the backup directory is restorable.
Future<int> waitBackup(Database cx, std::string tagName, bool stopWhenDone = true);
Future<int> waitBackup(Database cx, std::string tagName, bool stopWhenDone = true, Reference<IBackupContainer> *pContainer = nullptr, UID *pUID = nullptr);
static const Key keyLastRestorable;
@ -615,6 +615,15 @@ public:
return configSpace.pack(LiteralStringRef(__FUNCTION__));
}
// Number of kv range files that were both committed to persistent storage AND inserted into
// the snapshotRangeFileMap. Note that since insertions could replace 1 or more existing
// map entries this is not necessarily the number of entries currently in the map.
// This value exists to help with sizing of kv range folders for BackupContainers that
// require it.
KeyBackedBinaryValue<int64_t> snapshotRangeFileCount() {
return configSpace.pack(LiteralStringRef(__FUNCTION__));
}
// Coalesced set of ranges already dispatched for writing.
typedef KeyBackedMap<Key, bool> RangeDispatchMapT;
RangeDispatchMapT snapshotRangeDispatchMap() {
@ -671,6 +680,7 @@ public:
copy.snapshotBeginVersion().set(tr, beginVersion.get());
copy.snapshotTargetEndVersion().set(tr, endVersion);
copy.snapshotRangeFileCount().set(tr, 0);
return Void();
});

File diff suppressed because it is too large Load Diff

View File

@ -96,10 +96,12 @@ struct KeyspaceSnapshotFile {
}
};
struct FullBackupListing {
struct BackupFileList {
std::vector<RangeFile> ranges;
std::vector<LogFile> logs;
std::vector<KeyspaceSnapshotFile> snapshots;
void toStream(FILE *fout) const;
};
// The byte counts here only include usable log files and byte counts from kvrange manifests
@ -108,10 +110,19 @@ struct BackupDescription {
std::string url;
std::vector<KeyspaceSnapshotFile> snapshots;
int64_t snapshotBytes;
// The version before which everything has been deleted by an expire
Optional<Version> expiredEndVersion;
// The latest version before which at least some data has been deleted by an expire
Optional<Version> unreliableEndVersion;
// The minimum log version in the backup
Optional<Version> minLogBegin;
// The maximum log version in the backup
Optional<Version> maxLogEnd;
// The maximum log version for which there is contiguous log version coverage extending back to minLogBegin
Optional<Version> contiguousLogEnd;
// The maximum version which this backup can be used to restore to
Optional<Version> maxRestorableVersion;
// The minimum version which this backup can be used to restore to
Optional<Version> minRestorableVersion;
std::string extendedDetail; // Freeform container-specific info.
@ -153,10 +164,11 @@ public:
// Create the container
virtual Future<Void> create() = 0;
virtual Future<bool> exists() = 0;
// Open a log file or range file for writing
virtual Future<Reference<IBackupFile>> writeLogFile(Version beginVersion, Version endVersion, int blockSize) = 0;
virtual Future<Reference<IBackupFile>> writeRangeFile(Version version, int blockSize) = 0;
virtual Future<Reference<IBackupFile>> writeRangeFile(Version snapshotBeginVersion, int snapshotFileCount, Version fileVersion, int blockSize) = 0;
// Write a KeyspaceSnapshotFile of range file names representing a full, non-overlapping
// snapshot of the key ranges this backup is targeting.
@ -165,23 +177,32 @@ public:
// Open a file for read by name
virtual Future<Reference<IAsyncFile>> readFile(std::string name) = 0;
struct ExpireProgress {
std::string step;
int total;
int done;
std::string toString() const;
};
// Delete backup files which do not contain any data at or after (more recent than) expireEndVersion.
// If force is false, then nothing will be deleted unless there is a restorable snapshot which
// - begins at or after expireEndVersion
// - ends at or before restorableBeginVersion
// If force is true, data is deleted unconditionally which could leave the backup in an unusable state. This is not recommended.
// Returns true if expiration was done.
virtual Future<Void> expireData(Version expireEndVersion, bool force = false, Version restorableBeginVersion = std::numeric_limits<Version>::max()) = 0;
virtual Future<Void> expireData(Version expireEndVersion, bool force = false, ExpireProgress *progress = nullptr, Version restorableBeginVersion = std::numeric_limits<Version>::max()) = 0;
// Delete entire container. During the process, if pNumDeleted is not null it will be
// updated with the count of deleted files so that progress can be seen.
virtual Future<Void> deleteContainer(int *pNumDeleted = nullptr) = 0;
// Return key details about a backup's contents, possibly using cached or stored metadata
// unless deepScan is true.
virtual Future<BackupDescription> describeBackup(bool deepScan = false) = 0;
// Return key details about a backup's contents.
// Unless deepScan is true, use cached metadata, if present, as initial contiguous available log range.
// If logStartVersionOverride is given, log data prior to that version will be ignored for the purposes
// of this describe operation. This can be used to calculate what the restorability of a backup would
// be after deleting all data prior to logStartVersionOverride.
virtual Future<BackupDescription> describeBackup(bool deepScan = false, Version logStartVersionOverride = invalidVersion) = 0;
virtual Future<FullBackupListing> dumpFileList() = 0;
virtual Future<BackupFileList> dumpFileList(Version begin = 0, Version end = std::numeric_limits<Version>::max()) = 0;
// Get exactly the files necessary to restore to targetVersion. Returns non-present if
// restore to given version is not possible.

View File

@ -258,8 +258,17 @@ ACTOR Future<Void> deleteObject_impl(Reference<BlobStoreEndpoint> b, std::string
std::string resource = std::string("/") + bucket + "/" + object;
HTTP::Headers headers;
// 200 or 204 means object successfully deleted, 404 means it already doesn't exist, so any of those are considered successful
Reference<HTTP::Response> r = wait(b->doRequest("DELETE", resource, headers, NULL, 0, {200, 204, 404}));
// 200 means the object was deleted and 404 means it already did not exist, so either success code passed above is fine.
// If the object did not already exist, the delete is still considered successful, but a warning is logged.
if(r->code == 404) {
TraceEvent(SevWarnAlways, "BlobStoreEndpointDeleteObjectMissing")
.detail("Host", b->host)
.detail("Bucket", bucket)
.detail("Object", object);
}
return Void();
}
@ -502,8 +511,8 @@ ACTOR Future<Reference<HTTP::Response>> doRequest_impl(Reference<BlobStoreEndpoi
Future<BlobStoreEndpoint::ReusableConnection> frconn = bstore->connect();
// Make a shallow copy of the queue by calling addref() on each buffer in the chain and then prepending that chain to contentCopy
if(pContent != nullptr) {
contentCopy.discardAll();
if(pContent != nullptr) {
PacketBuffer *pFirst = pContent->getUnsent();
PacketBuffer *pLast = nullptr;
for(PacketBuffer *p = pFirst; p != nullptr; p = p->nextPacketBuffer()) {

fdbclient/FileBackupAgent.actor.cpp Normal file → Executable file
View File

@ -1002,6 +1002,7 @@ namespace fileBackup {
// Update the range bytes written in the backup config
backup.rangeBytesWritten().atomicOp(tr, file->size(), MutationRef::AddValue);
backup.snapshotRangeFileCount().atomicOp(tr, 1, MutationRef::AddValue);
// See if there is already a file for this key which has an earlier begin, update the map if not.
Optional<BackupConfig::RangeSlice> s = wait(backup.snapshotRangeFileMap().get(tr, range.end));
@ -1127,11 +1128,31 @@ namespace fileBackup {
if(done)
return Void();
// Start writing a new file
// Start writing a new file after verifying this task should keep running as of a new read version (which must be >= outVersion)
outVersion = values.second;
// block size must be at least large enough for 3 max size keys and 2 max size values + overhead so 250k conservatively.
state int blockSize = BUGGIFY ? g_random->randomInt(250e3, 4e6) : CLIENT_KNOBS->BACKUP_RANGEFILE_BLOCK_SIZE;
Reference<IBackupFile> f = wait(bc->writeRangeFile(outVersion, blockSize));
state Version snapshotBeginVersion;
state int64_t snapshotRangeFileCount;
state Reference<ReadYourWritesTransaction> tr(new ReadYourWritesTransaction(cx));
loop {
try {
tr->setOption(FDBTransactionOptions::ACCESS_SYSTEM_KEYS);
tr->setOption(FDBTransactionOptions::LOCK_AWARE);
Void _ = wait(taskBucket->keepRunning(tr, task)
&& storeOrThrow(backup.snapshotBeginVersion().get(tr), snapshotBeginVersion)
&& store(backup.snapshotRangeFileCount().getD(tr), snapshotRangeFileCount)
);
break;
} catch(Error &e) {
Void _ = wait(tr->onError(e));
}
}
Reference<IBackupFile> f = wait(bc->writeRangeFile(snapshotBeginVersion, snapshotRangeFileCount, outVersion, blockSize));
outFile = f;
// Initialize range file writer and write begin key
@ -3358,8 +3379,9 @@ class FileBackupAgentImpl {
public:
static const int MAX_RESTORABLE_FILE_METASECTION_BYTES = 1024 * 8;
// This method will return the final status of the backup
ACTOR static Future<int> waitBackup(FileBackupAgent* backupAgent, Database cx, std::string tagName, bool stopWhenDone) {
// This method will return the final status of the backup at tag, and return the URL that was used on the tag
// when that status value was read.
ACTOR static Future<int> waitBackup(FileBackupAgent* backupAgent, Database cx, std::string tagName, bool stopWhenDone, Reference<IBackupContainer> *pContainer = nullptr, UID *pUID = nullptr) {
state std::string backTrace;
state KeyBackedTag tag = makeBackupTag(tagName);
@ -3377,13 +3399,20 @@ public:
state BackupConfig config(oldUidAndAborted.get().first);
state EBackupState status = wait(config.stateEnum().getD(tr, false, EBackupState::STATE_NEVERRAN));
// Break, if no longer runnable
if (!FileBackupAgent::isRunnable(status)) {
return status;
// Break, if one of the following is true
// - no longer runnable
// - in differential mode (restorable) and stopWhenDone is not enabled
if( !FileBackupAgent::isRunnable(status) || (!stopWhenDone) && (BackupAgentBase::STATE_DIFFERENTIAL == status) ) {
if(pContainer != nullptr) {
Reference<IBackupContainer> c = wait(config.backupContainer().getOrThrow(tr, false, backup_invalid_info()));
*pContainer = c;
}
if(pUID != nullptr) {
*pUID = oldUidAndAborted.get().first;
}
// Break, if in differential mode (restorable) and stopWhenDone is not enabled
if ((!stopWhenDone) && (BackupAgentBase::STATE_DIFFERENTIAL == status)) {
return status;
}
@ -4059,7 +4088,7 @@ void FileBackupAgent::setLastRestorable(Reference<ReadYourWritesTransaction> tr,
tr->set(lastRestorable.pack(tagName), BinaryWriter::toValue<Version>(version, Unversioned()));
}
Future<int> FileBackupAgent::waitBackup(Database cx, std::string tagName, bool stopWhenDone) {
return FileBackupAgentImpl::waitBackup(this, cx, tagName, stopWhenDone);
Future<int> FileBackupAgent::waitBackup(Database cx, std::string tagName, bool stopWhenDone, Reference<IBackupContainer> *pContainer, UID *pUID) {
return FileBackupAgentImpl::waitBackup(this, cx, tagName, stopWhenDone, pContainer, pUID);
}

View File

@ -31,7 +31,7 @@ namespace HTTP {
o.reserve(s.size() * 3);
char buf[4];
for(auto c : s)
if(std::isalnum(c))
if(std::isalnum(c) || c == '?' || c == '/' || c == '-' || c == '_' || c == '.')
o.append(&c, 1);
else {
sprintf(buf, "%%%.02X", c);
@ -293,15 +293,41 @@ namespace HTTP {
// Request content is provided as UnsentPacketQueue *pContent which will be depleted as bytes are sent but the queue itself must live for the life of this actor
// and be destroyed by the caller
// TODO: pSent is very hackish, do something better.
ACTOR Future<Reference<HTTP::Response>> doRequest(Reference<IConnection> conn, std::string verb, std::string resource, HTTP::Headers headers, UnsentPacketQueue *pContent, int contentLen, Reference<IRateControl> sendRate, int64_t *pSent, Reference<IRateControl> recvRate) {
ACTOR Future<Reference<HTTP::Response>> doRequest(Reference<IConnection> conn, std::string verb, std::string resource, HTTP::Headers headers, UnsentPacketQueue *pContent, int contentLen, Reference<IRateControl> sendRate, int64_t *pSent, Reference<IRateControl> recvRate, std::string requestIDHeader) {
state TraceEvent event(SevDebug, "HTTPRequest");
state UnsentPacketQueue empty;
if(pContent == NULL)
pContent = &empty;
// There is no standard HTTP request ID header field, so either a global default can be set via a knob
// or it can be set per-request with the requestIDHeader argument (which overrides the default)
if(requestIDHeader.empty()) {
requestIDHeader = CLIENT_KNOBS->HTTP_REQUEST_ID_HEADER;
}
state bool earlyResponse = false;
state int total_sent = 0;
event.detail("DebugID", conn->getDebugID());
event.detail("RemoteAddress", conn->getPeerAddress());
event.detail("Verb", verb);
event.detail("Resource", resource);
event.detail("RequestContentLen", contentLen);
try {
state std::string requestID;
if(!requestIDHeader.empty()) {
requestID = g_random->randomUniqueID().toString();
requestID = requestID.insert(20, "-");
requestID = requestID.insert(16, "-");
requestID = requestID.insert(12, "-");
requestID = requestID.insert(8, "-");
headers[requestIDHeader] = requestID;
event.detail("RequestIDSent", requestID);
}
// Write headers to a packet buffer chain
PacketBuffer *pFirst = new PacketBuffer();
PacketBuffer *pLast = writeRequestHeader(verb, resource, headers, pFirst);
@ -347,19 +373,59 @@ namespace HTTP {
}
Void _ = wait(responseReading);
double elapsed = timer() - send_start;
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 0)
printf("[%s] HTTP code=%d early=%d, time=%fs %s %s contentLen=%d [%d out, response content len %d]\n",
conn->getDebugID().toString().c_str(), r->code, earlyResponse, elapsed, verb.c_str(), resource.c_str(), contentLen, total_sent, (int)r->contentLen);
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 2)
event.detail("ResponseCode", r->code);
event.detail("ResponseContentLen", r->contentLen);
event.detail("Elapsed", elapsed);
Optional<Error> err;
if(!requestIDHeader.empty()) {
std::string responseID;
auto iid = r->headers.find(requestIDHeader);
if(iid != r->headers.end()) {
responseID = iid->second;
}
event.detail("RequestIDReceived", responseID);
if(requestID != responseID) {
err = http_bad_request_id();
// Log a non-debug error
TraceEvent(SevError, "HTTPRequestFailedIDMismatch")
.detail("DebugID", conn->getDebugID())
.detail("RemoteAddress", conn->getPeerAddress())
.detail("Verb", verb)
.detail("Resource", resource)
.detail("RequestContentLen", contentLen)
.detail("ResponseCode", r->code)
.detail("ResponseContentLen", r->contentLen)
.detail("RequestIDSent", requestID)
.detail("RequestIDReceived", responseID)
.error(err.get());
}
}
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 0) {
printf("[%s] HTTP %scode=%d early=%d, time=%fs %s %s contentLen=%d [%d out, response content len %d]\n",
conn->getDebugID().toString().c_str(),
(err.present() ? format("*ERROR*=%s ", err.get().name()).c_str() : ""),
r->code, earlyResponse, elapsed, verb.c_str(), resource.c_str(), contentLen, total_sent, (int)r->contentLen);
}
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 2) {
printf("[%s] HTTP RESPONSE: %s %s\n%s\n", conn->getDebugID().toString().c_str(), verb.c_str(), resource.c_str(), r->toString().c_str());
}
if(err.present()) {
throw err.get();
}
return r;
} catch(Error &e) {
double elapsed = timer() - send_start;
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 0)
if(CLIENT_KNOBS->HTTP_VERBOSE_LEVEL > 0 && e.code() != error_code_http_bad_request_id) {
printf("[%s] HTTP *ERROR*=%s early=%d, time=%fs %s %s contentLen=%d [%d out]\n",
conn->getDebugID().toString().c_str(), e.name(), earlyResponse, elapsed, verb.c_str(), resource.c_str(), contentLen, total_sent);
}
event.error(e);
throw;
}
}

View File

@ -51,5 +51,5 @@ namespace HTTP {
PacketBuffer * writeRequestHeader(std::string const &verb, std::string const &resource, HTTP::Headers const &headers, PacketBuffer *dest);
// Do an HTTP request to the blob store, parse the response.
Future<Reference<Response>> doRequest(Reference<IConnection> const &conn, std::string const &verb, std::string const &resource, HTTP::Headers const &headers, UnsentPacketQueue * const &pContent, int const &contentLen, Reference<IRateControl> const &sendRate, int64_t * const &pSent, Reference<IRateControl> const &recvRate);
Future<Reference<Response>> doRequest(Reference<IConnection> const &conn, std::string const &verb, std::string const &resource, HTTP::Headers const &headers, UnsentPacketQueue * const &pContent, int const &contentLen, Reference<IRateControl> const &sendRate, int64_t * const &pSent, Reference<IRateControl> const &recvRate, const std::string &requestHeader = std::string());
}

View File

@ -148,6 +148,7 @@ ClientKnobs::ClientKnobs(bool randomize) {
init( HTTP_READ_SIZE, 128*1024 );
init( HTTP_SEND_SIZE, 32*1024 );
init( HTTP_VERBOSE_LEVEL, 0 );
init( HTTP_REQUEST_ID_HEADER, "" );
init( BLOBSTORE_CONNECT_TRIES, 10 );
init( BLOBSTORE_CONNECT_TIMEOUT, 10 );
init( BLOBSTORE_MAX_CONNECTION_LIFE, 120 );

View File

@ -152,6 +152,7 @@ public:
int HTTP_SEND_SIZE;
int HTTP_READ_SIZE;
int HTTP_VERBOSE_LEVEL;
std::string HTTP_REQUEST_ID_HEADER;
int BLOBSTORE_CONNECT_TRIES;
int BLOBSTORE_CONNECT_TIMEOUT;
int BLOBSTORE_MAX_CONNECTION_LIFE;

View File

@ -293,6 +293,7 @@ ACTOR Future<ConfigurationResult::Type> changeConfig( Database cx, std::map<std:
if(!creating && !force) {
state Future<Standalone<RangeResultRef>> fConfig = tr.getRange(configKeys, CLIENT_KNOBS->TOO_MANY);
state Future<vector<ProcessData>> fWorkers = getWorkers(&tr);
Void _ = wait( success(fConfig) || tooLong );
if(!fConfig.isReady()) {
@ -377,6 +378,44 @@ ACTOR Future<ConfigurationResult::Type> changeConfig( Database cx, std::map<std:
}
}
}
Void _ = wait( success(fWorkers) || tooLong );
if(!fWorkers.isReady()) {
return ConfigurationResult::DATABASE_UNAVAILABLE;
}
if(newConfig.regions.size()) {
std::map<Optional<Key>, std::set<Optional<Key>>> dcId_zoneIds;
for(auto& it : fWorkers.get()) {
if( it.processClass.machineClassFitness(ProcessClass::Storage) <= ProcessClass::WorstFit ) {
dcId_zoneIds[it.locality.dcId()].insert(it.locality.zoneId());
}
}
for(auto& region : newConfig.regions) {
if(dcId_zoneIds[region.dcId].size() < std::max(newConfig.storageTeamSize, newConfig.tLogReplicationFactor)) {
return ConfigurationResult::NOT_ENOUGH_WORKERS;
}
if(region.satelliteTLogReplicationFactor > 0 && region.priority >= 0) {
int totalSatelliteProcesses = 0;
for(auto& sat : region.satellites) {
totalSatelliteProcesses += dcId_zoneIds[sat.dcId].size();
}
if(totalSatelliteProcesses < region.satelliteTLogReplicationFactor) {
return ConfigurationResult::NOT_ENOUGH_WORKERS;
}
}
}
} else {
std::set<Optional<Key>> zoneIds;
for(auto& it : fWorkers.get()) {
if( it.processClass.machineClassFitness(ProcessClass::Storage) <= ProcessClass::WorstFit ) {
zoneIds.insert(it.locality.zoneId());
}
}
if(zoneIds.size() < std::max(newConfig.storageTeamSize, newConfig.tLogReplicationFactor)) {
return ConfigurationResult::NOT_ENOUGH_WORKERS;
}
}
}
}

View File

@ -54,6 +54,7 @@ public:
REGION_NOT_FULLY_REPLICATED,
MULTIPLE_ACTIVE_REGIONS,
REGIONS_CHANGED,
NOT_ENOUGH_WORKERS,
SUCCESS
};
};

View File

@ -101,8 +101,9 @@ int eraseDirectoryRecursive(std::string const& dir) {
the directory we're deleting doesn't exist in the first
place */
if (error && errno != ENOENT) {
TraceEvent(SevError, "EraseDirectoryRecursiveError").detail("Directory", dir).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "EraseDirectoryRecursiveError").detail("Directory", dir).GetLastError().error(e);
throw e;
}
#else
#error Port me!

View File

@ -47,10 +47,10 @@ static int send_func(void* ctx, const uint8_t* buf, int len) {
int w = conn->conn->write( &sb );
return w;
} catch ( Error& e ) {
TraceEvent("TLSConnectionSendError", conn->getDebugID()).error(e);
TraceEvent("TLSConnectionSendError", conn->getDebugID()).error(e).suppressFor(1.0);
return -1;
} catch ( ... ) {
TraceEvent("TLSConnectionSendError", conn->getDebugID()).error( unknown_error() );
TraceEvent("TLSConnectionSendError", conn->getDebugID()).error( unknown_error() ).suppressFor(1.0);
return -1;
}
}
@ -63,10 +63,10 @@ static int recv_func(void* ctx, uint8_t* buf, int len) {
int r = conn->conn->read( buf, buf + len );
return r;
} catch ( Error& e ) {
TraceEvent("TLSConnectionRecvError", conn->getDebugID()).error(e);
TraceEvent("TLSConnectionRecvError", conn->getDebugID()).error(e).suppressFor(1.0);
return -1;
} catch ( ... ) {
TraceEvent("TLSConnectionRecvError", conn->getDebugID()).error( unknown_error() );
TraceEvent("TLSConnectionRecvError", conn->getDebugID()).error( unknown_error() ).suppressFor(1.0);
return -1;
}
}

View File

@ -1956,7 +1956,7 @@ ACTOR Future<Void> storageServerTracker(
}
}
} catch( Error &e ) {
if (e.code() != error_code_actor_cancelled)
if (e.code() != error_code_actor_cancelled && errorOut.canBeSet())
errorOut.sendError(e);
throw;
}

View File

@ -373,6 +373,8 @@ ServerKnobs::ServerKnobs(bool randomize, ClientKnobs* clientKnobs) {
init( MAX_STORAGE_SERVER_WATCH_BYTES, 100e6 ); if( randomize && BUGGIFY ) MAX_STORAGE_SERVER_WATCH_BYTES = 10e3;
init( MAX_BYTE_SAMPLE_CLEAR_MAP_SIZE, 1e9 ); if( randomize && BUGGIFY ) MAX_BYTE_SAMPLE_CLEAR_MAP_SIZE = 1e3;
init( LONG_BYTE_SAMPLE_RECOVERY_DELAY, 60.0 );
init( BYTE_SAMPLE_LOAD_PARALLELISM, 32 ); if( randomize && BUGGIFY ) BYTE_SAMPLE_LOAD_PARALLELISM = 1;
init( BYTE_SAMPLE_LOAD_DELAY, 0.0 ); if( randomize && BUGGIFY ) BYTE_SAMPLE_LOAD_DELAY = 0.1;
//Wait Failure
init( BUGGIFY_OUTSTANDING_WAIT_FAILURE_REQUESTS, 2 );

View File

@ -311,6 +311,8 @@ public:
int MAX_STORAGE_SERVER_WATCH_BYTES;
int MAX_BYTE_SAMPLE_CLEAR_MAP_SIZE;
double LONG_BYTE_SAMPLE_RECOVERY_DELAY;
int BYTE_SAMPLE_LOAD_PARALLELISM;
double BYTE_SAMPLE_LOAD_DELAY;
//Wait Failure
int BUGGIFY_OUTSTANDING_WAIT_FAILURE_REQUESTS;

View File

@ -2838,48 +2838,76 @@ void StorageServerDisk::changeLogProtocol(Version version, uint64_t protocol) {
data->addMutationToMutationLogOrStorage(version, MutationRef(MutationRef::SetValue, persistLogProtocol, BinaryWriter::toValue(protocol, Unversioned())));
}
ACTOR Future<Void> applyByteSampleResult( StorageServer* data, KeyRange range, Future<Standalone<VectorRef<KeyValueRef>>> result) {
Standalone<VectorRef<KeyValueRef>> bs = wait( result );
ACTOR Future<Void> applyByteSampleResult( StorageServer* data, IKeyValueStore* storage, Key begin, Key end, std::vector<Standalone<VectorRef<KeyValueRef>>>* results = NULL) {
state int totalFetches = 0;
state int totalKeys = 0;
state int totalBytes = 0;
loop {
Standalone<VectorRef<KeyValueRef>> bs = wait( storage->readRange( KeyRangeRef(begin, end), SERVER_KNOBS->STORAGE_LIMIT_BYTES, SERVER_KNOBS->STORAGE_LIMIT_BYTES ) );
if(results) results->push_back(bs);
int rangeSize = bs.expectedSize();
totalFetches++;
totalKeys += bs.size();
totalBytes += rangeSize;
for( int j = 0; j < bs.size(); j++ ) {
KeyRef key = bs[j].key.removePrefix(persistByteSampleKeys.begin);
if(!data->byteSampleClears.rangeContaining(key).value()) {
data->metrics.byteSample.sample.insert( key, BinaryReader::fromStringRef<int32_t>(bs[j].value, Unversioned()), false );
}
}
data->byteSampleClears.insert(range, true);
if( rangeSize >= SERVER_KNOBS->STORAGE_LIMIT_BYTES ) {
Key nextBegin = keyAfter(bs.back().key);
data->byteSampleClears.insert(KeyRangeRef(begin, nextBegin).removePrefix(persistByteSampleKeys.begin), true);
data->byteSampleClearsTooLarge.set(data->byteSampleClears.size() > SERVER_KNOBS->MAX_BYTE_SAMPLE_CLEAR_MAP_SIZE);
begin = nextBegin;
if(begin == end) {
break;
}
} else {
data->byteSampleClears.insert(KeyRangeRef(begin.removePrefix(persistByteSampleKeys.begin), end == persistByteSampleKeys.end ? LiteralStringRef("\xff\xff\xff") : end.removePrefix(persistByteSampleKeys.begin)), true);
data->byteSampleClearsTooLarge.set(data->byteSampleClears.size() > SERVER_KNOBS->MAX_BYTE_SAMPLE_CLEAR_MAP_SIZE);
break;
}
if(!results) {
Void _ = wait(delay(SERVER_KNOBS->BYTE_SAMPLE_LOAD_DELAY));
}
}
TraceEvent("RecoveredByteSampleRange", data->thisServerID).detail("Begin", printable(begin)).detail("End", printable(end)).detail("Fetches", totalFetches).detail("Keys", totalKeys).detail("ReadBytes", totalBytes);
return Void();
}
ACTOR Future<Void> restoreByteSample(StorageServer* data, IKeyValueStore* storage, Standalone<VectorRef<KeyValueRef>> bsSample) {
ACTOR Future<Void> restoreByteSample(StorageServer* data, IKeyValueStore* storage, Promise<Void> byteSampleSampleRecovered) {
state std::vector<Standalone<VectorRef<KeyValueRef>>> byteSampleSample;
Void _ = wait( applyByteSampleResult(data, storage, persistByteSampleSampleKeys.begin, persistByteSampleSampleKeys.end, &byteSampleSample) );
byteSampleSampleRecovered.send(Void());
Void _ = wait( delay( BUGGIFY ? g_random->random01() * 2.0 : 0.0001 ) );
TraceEvent("RecoveredByteSampleSample", data->thisServerID).detail("Keys", bsSample.size()).detail("ReadBytes", bsSample.expectedSize());
size_t bytes_per_fetch = 0;
// Since the expected size also includes (as of now) the space overhead of the container, we calculate our own number here
for( int i = 0; i < bsSample.size(); i++ )
bytes_per_fetch += BinaryReader::fromStringRef<int32_t>(bsSample[i].value, Unversioned());
bytes_per_fetch /= 32;
for( auto& it : byteSampleSample ) {
for( auto& kv : it ) {
bytes_per_fetch += BinaryReader::fromStringRef<int32_t>(kv.value, Unversioned());
}
}
bytes_per_fetch = (bytes_per_fetch/SERVER_KNOBS->BYTE_SAMPLE_LOAD_PARALLELISM) + 1;
state std::vector<Future<Void>> sampleRanges;
int accumulatedSize = 0;
std::string prefix = PERSIST_PREFIX "BS/";
Key lastStart = LiteralStringRef( PERSIST_PREFIX "BS/" ); // make sure the first range starts at the absolute beginning of the byte sample
for( auto it = bsSample.begin(); it != bsSample.end(); ++it ) {
Key lastStart = persistByteSampleKeys.begin; // make sure the first range starts at the absolute beginning of the byte sample
for( auto& it : byteSampleSample ) {
for( auto& kv : it ) {
if( accumulatedSize >= bytes_per_fetch ) {
accumulatedSize = 0;
Key realKey = it->key.removePrefix( prefix );
KeyRange sampleRange = KeyRangeRef( lastStart, realKey );
sampleRanges.push_back( applyByteSampleResult(data, sampleRange.removePrefix(persistByteSampleKeys.begin), storage->readRange( sampleRange )) );
Key realKey = kv.key.removePrefix( persistByteSampleKeys.begin );
sampleRanges.push_back( applyByteSampleResult(data, storage, lastStart, realKey) );
lastStart = realKey;
}
accumulatedSize += BinaryReader::fromStringRef<int32_t>(it->value, Unversioned());
accumulatedSize += BinaryReader::fromStringRef<int32_t>(kv.value, Unversioned());
}
}
// make sure that the last range goes all the way to the end of the byte sample
KeyRange sampleRange = KeyRangeRef( lastStart, LiteralStringRef( PERSIST_PREFIX "BS0" ));
sampleRanges.push_back( applyByteSampleResult(data, KeyRangeRef(lastStart.removePrefix(persistByteSampleKeys.begin), LiteralStringRef("\xff\xff\xff")), storage->readRange( sampleRange )) );
sampleRanges.push_back( applyByteSampleResult(data, storage, lastStart, persistByteSampleKeys.end) );
Void _ = wait( waitForAll( sampleRanges ) );
TraceEvent("RecoveredByteSampleChunkedRead", data->thisServerID).detail("Ranges",sampleRanges.size());
@ -2897,11 +2925,14 @@ ACTOR Future<bool> restoreDurableState( StorageServer* data, IKeyValueStore* sto
state Future<Optional<Value>> fLogProtocol = storage->readValue(persistLogProtocol);
state Future<Standalone<VectorRef<KeyValueRef>>> fShardAssigned = storage->readRange(persistShardAssignedKeys);
state Future<Standalone<VectorRef<KeyValueRef>>> fShardAvailable = storage->readRange(persistShardAvailableKeys);
state Future<Standalone<VectorRef<KeyValueRef>>> fByteSampleSample = storage->readRange(persistByteSampleSampleKeys);
state Promise<Void> byteSampleSampleRecovered;
data->byteSampleRecovery = restoreByteSample(data, storage, byteSampleSampleRecovered);
TraceEvent("ReadingDurableState", data->thisServerID);
Void _ = wait( waitForAll( (vector<Future<Optional<Value>>>(), fFormat, fID, fVersion, fLogProtocol) ) );
Void _ = wait( waitForAll( (vector<Future<Standalone<VectorRef<KeyValueRef>>>>(), fShardAssigned, fShardAvailable, fByteSampleSample) ) );
Void _ = wait( waitForAll( (vector<Future<Standalone<VectorRef<KeyValueRef>>>>(), fShardAssigned, fShardAvailable) ) );
Void _ = wait( byteSampleSampleRecovered.getFuture() );
TraceEvent("RestoringDurableState", data->thisServerID);
if (!fFormat.get().present()) {
@ -2956,8 +2987,6 @@ ACTOR Future<bool> restoreDurableState( StorageServer* data, IKeyValueStore* sto
Void _ = wait(yield());
}
Void _ = wait( applyByteSampleResult(data, persistByteSampleSampleKeys.removePrefix(persistByteSampleKeys.begin), fByteSampleSample) );
data->byteSampleRecovery = restoreByteSample(data, storage, fByteSampleSample.get());
Void _ = wait( delay( 0.0001 ) );
@ -3557,7 +3586,7 @@ void versionedMapTest() {
const int NSIZE = sizeof(VersionedMap<int,int>::PTreeT);
const int ASIZE = NSIZE<=64 ? 64 : NextPowerOfTwo<NSIZE>::Result;
auto before = FastAllocator< ASIZE >::getMemoryUsed();
auto before = FastAllocator< ASIZE >::getTotalMemory();
for(int v=1; v<=1000; ++v) {
vm.createNewVersion(v);
@ -3571,7 +3600,7 @@ void versionedMapTest() {
}
}
auto after = FastAllocator< ASIZE >::getMemoryUsed();
auto after = FastAllocator< ASIZE >::getTotalMemory();
int count = 0;
for(auto i = vm.atLatest().begin(); i != vm.atLatest().end(); ++i)

View File

@ -188,41 +188,44 @@ struct BackupAndRestoreCorrectnessWorkload : TestWorkload {
if (BUGGIFY) {
state KeyBackedTag backupTag = makeBackupTag(tag.toString());
TraceEvent("BARW_DoBackupWaitForRestorable", randomID).detail("Tag", backupTag.tagName);
// Wait until the backup is in a restorable state
state int resultWait = wait(backupAgent->waitBackup(cx, backupTag.tagName, false));
UidAndAbortedFlagT uidFlag = wait(backupTag.getOrThrow(cx));
state UID logUid = uidFlag.first;
state Reference<IBackupContainer> lastBackupContainer = wait(BackupConfig(logUid).backupContainer().getD(cx));
// Wait until the backup is in a restorable state and get the status, URL, and UID atomically
state Reference<IBackupContainer> lastBackupContainer;
state UID lastBackupUID;
state int resultWait = wait(backupAgent->waitBackup(cx, backupTag.tagName, false, &lastBackupContainer, &lastBackupUID));
state bool restorable = false;
if(lastBackupContainer) {
state BackupDescription desc = wait(lastBackupContainer->describeBackup());
state Future<BackupDescription> fdesc = lastBackupContainer->describeBackup();
Void _ = wait(ready(fdesc));
if(!fdesc.isError()) {
state BackupDescription desc = fdesc.get();
Void _ = wait(desc.resolveVersionTimes(cx));
printf("BackupDescription:\n%s\n", desc.toString().c_str());
restorable = desc.maxRestorableVersion.present();
}
}
TraceEvent("BARW_LastBackupContainer", randomID)
.detail("BackupTag", printable(tag))
.detail("LastBackupContainer", lastBackupContainer ? lastBackupContainer->getURL() : "")
.detail("LogUid", logUid).detail("WaitStatus", resultWait).detail("Restorable", restorable);
.detail("LastBackupUID", lastBackupUID).detail("WaitStatus", resultWait).detail("Restorable", restorable);
// Do not check the backup, if aborted
if (resultWait == BackupAgentBase::STATE_ABORTED) {
}
// Ensure that a backup container was found
else if (!lastBackupContainer) {
TraceEvent("BARW_MissingBackupContainer", randomID).detail("LogUid", logUid).detail("BackupTag", printable(tag)).detail("WaitStatus", resultWait);
TraceEvent(SevError, "BARW_MissingBackupContainer", randomID).detail("LastBackupUID", lastBackupUID).detail("BackupTag", printable(tag)).detail("WaitStatus", resultWait);
printf("BackupCorrectnessMissingBackupContainer tag: %s status: %d\n", printable(tag).c_str(), resultWait);
}
// Check that backup is restorable
else {
if(!restorable) {
TraceEvent("BARW_NotRestorable", randomID).detail("LogUid", logUid).detail("BackupTag", printable(tag))
else if(!restorable) {
TraceEvent(SevError, "BARW_NotRestorable", randomID).detail("LastBackupUID", lastBackupUID).detail("BackupTag", printable(tag))
.detail("BackupFolder", lastBackupContainer->getURL()).detail("WaitStatus", resultWait);
printf("BackupCorrectnessNotRestorable: tag: %s\n", printable(tag).c_str());
}
}
// Abort the backup, if not the first backup because the second backup may have aborted the backup by now
if (startDelay) {

View File

@ -115,5 +115,5 @@ void ErrorCodeTable::addCode(int code, const char *name, const char *description
}
bool isAssertDisabled(int line) {
return FLOW_KNOBS->DISABLE_ASSERTS == -1 || FLOW_KNOBS->DISABLE_ASSERTS == line;
return FLOW_KNOBS && (FLOW_KNOBS->DISABLE_ASSERTS == -1 || FLOW_KNOBS->DISABLE_ASSERTS == line);
}

View File

@ -68,6 +68,8 @@ private:
enum Flags { FLAG_INJECTED_FAULT=1 };
};
Error systemErrorCodeToError();
#undef ERROR
#define ERROR(name, number, description) inline Error name() { return Error( number ); }; enum { error_code_##name = number };
#include "error_definitions.h"

View File

@ -61,13 +61,16 @@
template<int Size>
INIT_SEG thread_local typename FastAllocator<Size>::ThreadData FastAllocator<Size>::threadData;
template<int Size>
thread_local bool FastAllocator<Size>::threadInitialized = false;
#ifdef VALGRIND
template<int Size>
unsigned long FastAllocator<Size>::vLock = 1;
#endif
template<int Size>
void* FastAllocator<Size>::freelist = 0;
void* FastAllocator<Size>::freelist = nullptr;
typedef void (*ThreadInitFunction)();
@ -118,11 +121,11 @@ void recordAllocation( void *ptr, size_t size ) {
#error Instrumentation not supported on this platform
#endif
if( nptrs > 0 ) {
uint32_t a = 0, b = 0;
if( nptrs > 0 ) {
hashlittle2( buffer, nptrs * sizeof(void *), &a, &b );
}
{
double countDelta = std::max(1.0, ((double)SAMPLE_BYTES) / size);
size_t sizeDelta = std::max(SAMPLE_BYTES, size);
ThreadSpinLockHolder holder( memLock );
@ -143,8 +146,6 @@ void recordAllocation( void *ptr, size_t size ) {
}
memSample[(int64_t)ptr] = std::make_pair(a, size);
}
}
}
memSample_entered = false;
#endif
}
@ -188,20 +189,28 @@ struct FastAllocator<Size>::GlobalData {
CRITICAL_SECTION mutex;
std::vector<void*> magazines; // These magazines are always exactly magazine_size ("full")
std::vector<std::pair<int, void*>> partial_magazines; // Magazines that are not "full" and their counts. Only created by releaseThreadMagazines().
long long memoryUsed;
GlobalData() : memoryUsed(0) {
long long totalMemory;
long long partialMagazineUnallocatedMemory;
long long activeThreads;
GlobalData() : totalMemory(0), partialMagazineUnallocatedMemory(0), activeThreads(0) {
InitializeCriticalSection(&mutex);
}
};
template <int Size>
long long FastAllocator<Size>::getMemoryUsed() {
return globalData()->memoryUsed;
long long FastAllocator<Size>::getTotalMemory() {
return globalData()->totalMemory;
}
// This does not include memory held by individual threads that is available for allocation
template <int Size>
long long FastAllocator<Size>::getApproximateMemoryUnused() {
return globalData()->magazines.size() * magazine_size * Size + globalData()->partialMagazineUnallocatedMemory;
}
template <int Size>
long long FastAllocator<Size>::getMemoryUnused() {
return globalData()->magazines.size() * magazine_size * Size;
long long FastAllocator<Size>::getActiveThreads() {
return globalData()->activeThreads;
}
static int64_t getSizeCode(int i) {
@ -221,22 +230,29 @@ static int64_t getSizeCode(int i) {
template<int Size>
void *FastAllocator<Size>::allocate() {
if(!threadInitialized) {
initThread();
}
#if FASTALLOC_THREAD_SAFE
ThreadData& thr = threadData;
if (!thr.freelist) {
ASSERT(thr.count == 0);
if (thr.alternate) {
thr.freelist = thr.alternate;
thr.alternate = 0;
thr.alternate = nullptr;
thr.count = magazine_size;
} else
} else {
getMagazine();
}
}
--thr.count;
void* p = thr.freelist;
#if VALGRIND
VALGRIND_MAKE_MEM_DEFINED(p, sizeof(void*));
#endif
thr.freelist = *(void**)p;
ASSERT(!thr.freelist == (thr.count == 0)); // freelist is empty if and only if count is 0
//check( p, true );
#else
void* p = freelist;
@ -257,15 +273,22 @@ void *FastAllocator<Size>::allocate() {
template<int Size>
void FastAllocator<Size>::release(void *ptr) {
if(!threadInitialized) {
initThread();
}
#if FASTALLOC_THREAD_SAFE
ThreadData& thr = threadData;
if (thr.count == magazine_size) {
if (thr.alternate) // Two full magazines, return one
releaseMagazine( thr.alternate );
thr.alternate = thr.freelist;
thr.freelist = 0;
thr.freelist = nullptr;
thr.count = 0;
}
ASSERT(!thr.freelist == (thr.count == 0)); // freelist is empty if and only if count is 0
++thr.count;
*(void**)ptr = thr.freelist;
//check(ptr, false);
@ -334,9 +357,27 @@ void FastAllocator<Size>::check(void* ptr, bool alloc) {
#endif
}
template <int Size>
void FastAllocator<Size>::initThread() {
threadInitialized = true;
if (threadInitFunction) {
threadInitFunction();
}
EnterCriticalSection(&globalData()->mutex);
++globalData()->activeThreads;
LeaveCriticalSection(&globalData()->mutex);
threadData.freelist = nullptr;
threadData.alternate = nullptr;
threadData.count = 0;
}
template <int Size>
void FastAllocator<Size>::getMagazine() {
if (threadInitFunction) threadInitFunction();
ASSERT(threadInitialized);
ASSERT(!threadData.freelist && !threadData.alternate && threadData.count == 0);
EnterCriticalSection(&globalData()->mutex);
if (globalData()->magazines.size()) {
void* m = globalData()->magazines.back();
@ -348,12 +389,13 @@ void FastAllocator<Size>::getMagazine() {
} else if (globalData()->partial_magazines.size()) {
std::pair<int, void*> p = globalData()->partial_magazines.back();
globalData()->partial_magazines.pop_back();
globalData()->partialMagazineUnallocatedMemory -= p.first * Size;
LeaveCriticalSection(&globalData()->mutex);
threadData.freelist = p.second;
threadData.count = p.first;
return;
}
globalData()->memoryUsed += magazine_size*Size;
globalData()->totalMemory += magazine_size*Size;
LeaveCriticalSection(&globalData()->mutex);
// Allocate a new page of data from the system allocator
@ -361,7 +403,7 @@ void FastAllocator<Size>::getMagazine() {
interlockedIncrement(&pageCount);
#endif
void** block = 0;
void** block = nullptr;
#if FAST_ALLOCATOR_DEBUG
#ifdef WIN32
static int alt = 0; alt++;
@ -386,30 +428,42 @@ void FastAllocator<Size>::getMagazine() {
check( &block[i*PSize], false );
}
block[(magazine_size-1)*PSize+1] = block[(magazine_size-1)*PSize] = 0;
block[(magazine_size-1)*PSize+1] = block[(magazine_size-1)*PSize] = nullptr;
check( &block[(magazine_size-1)*PSize], false );
threadData.freelist = block;
threadData.count = magazine_size;
}
template <int Size>
void FastAllocator<Size>::releaseMagazine(void* mag) {
ASSERT(threadInitialized);
EnterCriticalSection(&globalData()->mutex);
globalData()->magazines.push_back(mag);
LeaveCriticalSection(&globalData()->mutex);
}
template <int Size>
void FastAllocator<Size>::releaseThreadMagazines() {
if(threadInitialized) {
threadInitialized = false;
ThreadData& thr = threadData;
if (thr.freelist || thr.alternate) {
EnterCriticalSection(&globalData()->mutex);
if (thr.freelist) globalData()->partial_magazines.push_back( std::make_pair(thr.count, thr.freelist) );
if (thr.alternate) globalData()->magazines.push_back(thr.alternate);
LeaveCriticalSection(&globalData()->mutex);
if (thr.freelist || thr.alternate) {
if (thr.freelist) {
ASSERT(thr.count > 0 && thr.count <= magazine_size);
globalData()->partial_magazines.push_back( std::make_pair(thr.count, thr.freelist) );
globalData()->partialMagazineUnallocatedMemory += thr.count * Size;
}
if (thr.alternate) {
globalData()->magazines.push_back(thr.alternate);
}
}
--globalData()->activeThreads;
LeaveCriticalSection(&globalData()->mutex);
thr.count = 0;
thr.alternate = 0;
thr.freelist = 0;
thr.alternate = nullptr;
thr.freelist = nullptr;
}
}
void releaseAllThreadMagazines() {
@ -427,15 +481,15 @@ void releaseAllThreadMagazines() {
int64_t getTotalUnusedAllocatedMemory() {
int64_t unusedMemory = 0;
unusedMemory += FastAllocator<16>::getMemoryUnused();
unusedMemory += FastAllocator<32>::getMemoryUnused();
unusedMemory += FastAllocator<64>::getMemoryUnused();
unusedMemory += FastAllocator<128>::getMemoryUnused();
unusedMemory += FastAllocator<256>::getMemoryUnused();
unusedMemory += FastAllocator<512>::getMemoryUnused();
unusedMemory += FastAllocator<1024>::getMemoryUnused();
unusedMemory += FastAllocator<2048>::getMemoryUnused();
unusedMemory += FastAllocator<4096>::getMemoryUnused();
unusedMemory += FastAllocator<16>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<32>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<64>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<128>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<256>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<512>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<1024>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<2048>::getApproximateMemoryUnused();
unusedMemory += FastAllocator<4096>::getApproximateMemoryUnused();
return unusedMemory;
}

View File

@ -106,8 +106,9 @@ public:
static void release(void* ptr);
static void check( void* ptr, bool alloc );
static long long getMemoryUsed();
static long long getMemoryUnused();
static long long getTotalMemory();
static long long getApproximateMemoryUnused();
static long long getActiveThreads();
static void releaseThreadMagazines();
@ -129,6 +130,7 @@ private:
void* alternate; // alternate is either a full magazine, or an empty one
};
static thread_local ThreadData threadData;
static thread_local bool threadInitialized;
static GlobalData* globalData() {
#ifdef VALGRIND
ANNOTATE_RWLOCK_ACQUIRED(vLock, 1);
@ -144,7 +146,8 @@ private:
static void* freelist;
FastAllocator(); // not implemented
static void getMagazine(); // sets threadData.freelist and threadData.count
static void initThread();
static void getMagazine();
static void releaseMagazine(void*);
};

View File

@ -158,7 +158,6 @@ public:
ASIOReactor reactor;
INetworkConnections *network; // initially this, but can be changed
tcp::resolver tcpResolver;
int64_t tsc_begin, tsc_end;
double taskBegin;
@ -478,7 +477,6 @@ Net2::Net2(NetworkAddress localAddress, bool useThreadPool, bool useMetrics)
: useThreadPool(useThreadPool),
network(this),
reactor(this),
tcpResolver(reactor.ios),
stopped(false),
tasksIssued(0),
// Until run() is called, yield() will always yield
@ -835,11 +833,13 @@ Future< Reference<IConnection> > Net2::connect( NetworkAddress toAddr, std::stri
}
ACTOR static Future<std::vector<NetworkAddress>> resolveTCPEndpoint_impl( Net2 *self, std::string host, std::string service) {
Promise<std::vector<NetworkAddress>> result;
state tcp::resolver tcpResolver(self->reactor.ios);
Promise<std::vector<NetworkAddress>> promise;
state Future<std::vector<NetworkAddress>> result = promise.getFuture();
self->tcpResolver.async_resolve(tcp::resolver::query(host, service), [=](const boost::system::error_code &ec, tcp::resolver::iterator iter) {
tcpResolver.async_resolve(tcp::resolver::query(host, service), [=](const boost::system::error_code &ec, tcp::resolver::iterator iter) {
if(ec) {
result.sendError(lookup_failed());
promise.sendError(lookup_failed());
return;
}
@ -847,18 +847,27 @@ ACTOR static Future<std::vector<NetworkAddress>> resolveTCPEndpoint_impl( Net2 *
tcp::resolver::iterator end;
while(iter != end) {
// The easiest way to get an ip:port formatted endpoint with this interface is with a string stream because
// endpoint::to_string doesn't exist but operator<< does.
std::stringstream s;
s << iter->endpoint();
addrs.push_back(NetworkAddress::parse(s.str()));
auto endpoint = iter->endpoint();
// Currently only IPv4 is supported by NetworkAddress
auto addr = endpoint.address();
if(addr.is_v4()) {
addrs.push_back(NetworkAddress(addr.to_v4().to_ulong(), endpoint.port()));
}
++iter;
}
result.send(addrs);
if(addrs.empty()) {
promise.sendError(lookup_failed());
}
else {
promise.send(addrs);
}
});
std::vector<NetworkAddress> addresses = wait(result.getFuture());
return addresses;
Void _ = wait(ready(result));
tcpResolver.cancel();
return result.get();
}
Future<std::vector<NetworkAddress>> Net2::resolveTCPEndpoint( std::string host, std::string service) {
@ -1088,7 +1097,7 @@ void net2_test() {
Endpoint destination;
printf(" Used: %lld\n", FastAllocator<4096>::getMemoryUsed());
printf(" Used: %lld\n", FastAllocator<4096>::getTotalMemory());
char junk[100];
@ -1138,6 +1147,6 @@ void net2_test() {
printf("SimSend x 1Kx10K: %0.2f sec\n", timer()-before);
printf(" Bytes: %d\n", totalBytes);
printf(" Used: %lld\n", FastAllocator<4096>::getMemoryUsed());
printf(" Used: %lld\n", FastAllocator<4096>::getTotalMemory());
*/
};

View File

@ -432,22 +432,40 @@ void getMachineRAMInfo(MachineRAMInfo& memInfo) {
#endif
}
Error systemErrorCodeToError() {
#if defined(_WIN32)
if(GetLastError() == ERROR_IO_DEVICE) {
return io_error();
}
#elif defined(__unixish__)
if(errno == EIO || errno == EROFS) {
return io_error();
}
#else
#error Port me!
#endif
return platform_error();
}
void getDiskBytes(std::string const& directory, int64_t& free, int64_t& total) {
INJECT_FAULT( platform_error, "getDiskBytes" );
#if defined(__unixish__)
#ifdef __linux__
struct statvfs buf;
if (statvfs(directory.c_str(), &buf)) {
TraceEvent(SevError, "GetDiskBytesStatvfsError").detail("Directory", directory).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "GetDiskBytesStatvfsError").detail("Directory", directory).GetLastError().error(e);
throw e;
}
uint64_t blockSize = buf.f_frsize;
#elif defined(__APPLE__)
struct statfs buf;
if (statfs(directory.c_str(), &buf)) {
TraceEvent(SevError, "GetDiskBytesStatfsError").detail("Directory", directory).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "GetDiskBytesStatfsError").detail("Directory", directory).GetLastError().error(e);
throw e;
}
uint64_t blockSize = buf.f_bsize;
@ -466,7 +484,9 @@ void getDiskBytes(std::string const& directory, int64_t& free, int64_t& total) {
ULARGE_INTEGER totalSpace;
ULARGE_INTEGER totalFreeSpace;
if( !GetDiskFreeSpaceEx( fullPath.c_str(), &freeSpace, &totalSpace, &totalFreeSpace ) ) {
TraceEvent(SevError, "DiskFreeError").detail("Path", fullPath).GetLastError();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "DiskFreeError").detail("Path", fullPath).GetLastError().error(e);
throw e;
}
total = std::min( (uint64_t) std::numeric_limits<int64_t>::max(), totalSpace.QuadPart );
free = std::min( (uint64_t) std::numeric_limits<int64_t>::max(), freeSpace.QuadPart );
@ -814,8 +834,9 @@ void getDiskStatistics(std::string const& directory, uint64_t& currentIOs, uint6
struct statfs buf;
if (statfs(directory.c_str(), &buf)) {
TraceEvent(SevError, "GetDiskStatisticsStatfsError").detail("Directory", directory).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "GetDiskStatisticsStatfsError").detail("Directory", directory).GetLastError().error(e);
throw e;
}
const char* dev = strrchr(buf.f_mntfromname, '/');
@ -1709,8 +1730,9 @@ bool deleteFile( std::string const& filename ) {
#else
#error Port me!
#endif
TraceEvent(SevError, "DeleteFile").detail("Filename", filename).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "DeleteFile").detail("Filename", filename).GetLastError().error(e);
throw e;
}
static void createdDirectory() { INJECT_FAULT( platform_error, "createDirectory" ); }
@ -1734,8 +1756,9 @@ bool createDirectory( std::string const& directory ) {
return createDirectory( directory );
}
}
TraceEvent(SevError, "CreateDirectory").detail("Directory", directory).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "CreateDirectory").detail("Directory", directory).GetLastError().error(e);
throw e;
#elif (defined(__linux__) || defined(__APPLE__))
size_t sep = 0;
do {
@ -1744,12 +1767,16 @@ bool createDirectory( std::string const& directory ) {
if (errno == EEXIST)
continue;
TraceEvent(SevError, "CreateDirectory").detail("Directory", directory).GetLastError();
if (errno == EACCES)
throw file_not_writable();
else {
throw platform_error();
Error e;
if(errno == EACCES) {
e = file_not_writable();
}
else {
e = systemErrorCodeToError();
}
TraceEvent(SevError, "CreateDirectory").detail("Directory", directory).GetLastError().error(e);
throw e;
}
createdDirectory();
} while (sep != std::string::npos && sep != directory.length() - 1);
@ -1768,8 +1795,9 @@ std::string abspath( std::string const& filename ) {
#ifdef _WIN32
char nameBuffer[MAX_PATH];
if (!GetFullPathName(filename.c_str(), MAX_PATH, nameBuffer, NULL)) {
TraceEvent(SevError, "AbsolutePathError").detail("Filename", filename).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "AbsolutePathError").detail("Filename", filename).GetLastError().error(e);
throw e;
}
// Not totally obvious from the help whether GetFullPathName canonicalizes slashes, so let's do it...
for(char*x = nameBuffer; *x; x++)
@ -1789,8 +1817,9 @@ std::string abspath( std::string const& filename ) {
return joinPath( abspath( "." ), filename );
}
}
TraceEvent(SevError, "AbsolutePathError").detail("Filename", filename).GetLastError();
throw platform_error();
Error e = systemErrorCodeToError();
TraceEvent(SevError, "AbsolutePathError").detail("Filename", filename).GetLastError().error(e);
throw e;
}
return std::string(r);
#else
@ -2033,6 +2062,19 @@ bool fileExists(std::string const& filename) {
return true;
}
bool directoryExists(std::string const& path) {
#ifdef _WIN32
DWORD bits = ::GetFileAttributes(path.c_str());
return bits != INVALID_FILE_ATTRIBUTES && (bits & FILE_ATTRIBUTE_DIRECTORY);
#else
DIR *d = opendir(path.c_str());
if(d == nullptr)
return false;
closedir(d);
return true;
#endif
}
int64_t fileSize(std::string const& filename) {
#ifdef _WIN32
struct _stati64 file_status;
@ -2190,7 +2232,7 @@ std::string getDefaultPluginPath( const char* plugin_name ) {
}; // namespace platform
#ifdef ALLOC_INSTRUMENTATION
#define TRACEALLOCATOR( size ) TraceEvent("MemSample").detail("Count", FastAllocator<size>::getMemoryUnused()/size).detail("TotalSize", FastAllocator<size>::getMemoryUnused()).detail("SampleCount", 1).detail("Hash", "FastAllocatedUnused" #size ).detail("Bt", "na")
#define TRACEALLOCATOR( size ) TraceEvent("MemSample").detail("Count", FastAllocator<size>::getApproximateMemoryUnused()/size).detail("TotalSize", FastAllocator<size>::getApproximateMemoryUnused()).detail("SampleCount", 1).detail("Hash", "FastAllocatedUnused" #size ).detail("Bt", "na")
#ifdef __linux__
#include <cxxabi.h>
#endif

View File

@ -285,6 +285,9 @@ void threadYield(); // Attempt to yield to other processes or threads
// Returns true iff the file exists
bool fileExists(std::string const& filename);
// Returns true iff the directory exists
bool directoryExists(std::string const& path);
// Returns size of file in bytes
int64_t fileSize(std::string const& filename);

View File

@ -41,8 +41,8 @@ void systemMonitor() {
customSystemMonitor("ProcessMetrics", &statState, true );
}
#define TRACEALLOCATOR( size ) TraceEvent("MemSample").detail("Count", FastAllocator<size>::getMemoryUnused()/size).detail("TotalSize", FastAllocator<size>::getMemoryUnused()).detail("SampleCount", 1).detail("Hash", "FastAllocatedUnused" #size ).detail("Bt", "na")
#define DETAILALLOCATORMEMUSAGE( size ) detail("AllocatedMemory"#size, FastAllocator<size>::getMemoryUsed()).detail("ApproximateUnusedMemory"#size, FastAllocator<size>::getMemoryUnused())
#define TRACEALLOCATOR( size ) TraceEvent("MemSample").detail("Count", FastAllocator<size>::getApproximateMemoryUnused()/size).detail("TotalSize", FastAllocator<size>::getApproximateMemoryUnused()).detail("SampleCount", 1).detail("Hash", "FastAllocatedUnused" #size ).detail("Bt", "na")
#define DETAILALLOCATORMEMUSAGE( size ) detail("TotalMemory"#size, FastAllocator<size>::getTotalMemory()).detail("ApproximateUnusedMemory"#size, FastAllocator<size>::getApproximateMemoryUnused()).detail("ActiveThreads"#size, FastAllocator<size>::getActiveThreads())
SystemStatistics customSystemMonitor(std::string eventName, StatisticsState *statState, bool machineMetrics) {
SystemStatistics currentStats = getSystemStatistics(machineState.folder.present() ? machineState.folder.get() : "",

View File

@ -101,7 +101,7 @@ ERROR( io_timeout, 1521, "A disk IO operation failed to complete in a timely man
ERROR( file_corrupt, 1522, "A structurally corrupt data file was detected" )
ERROR( http_request_failed, 1523, "HTTP response code not received or indicated failure" )
ERROR( http_auth_failed, 1524, "HTTP request failed due to bad credentials" )
ERROR( http_bad_request_id, 1525, "HTTP response contained an unexpected X-Request-ID header" )
// 2xxx Attempt (presumably by a _client_) to do something illegal. If an error is known to
// be internally caused, it should be 41xx
@ -177,6 +177,7 @@ ERROR( backup_invalid_info, 2315, "Backup Container URL invalid")
ERROR( backup_cannot_expire, 2316, "Cannot expire requested data from backup without violating minimum restorability")
ERROR( backup_auth_missing, 2317, "Cannot find authentication details (such as a password or secret key) for the specified Backup Container URL")
ERROR( backup_auth_unreadable, 2318, "Cannot read or parse one or more sources of authentication information for Backup Container URLs")
ERROR( backup_does_not_exist, 2319, "Backup does not exist")
ERROR( restore_invalid_version, 2361, "Invalid restore version")
ERROR( restore_corrupted_data, 2362, "Corrupted backup data")
ERROR( restore_missing_data, 2363, "Missing backup data")

View File

@ -296,6 +296,16 @@ Future<Void> store(Future<T> what, T &out) {
return map(what, [&out](T const &v) { out = v; return Void(); });
}
template<class T>
Future<Void> storeOrThrow(Future<Optional<T>> what, T &out, Error e = key_not_found()) {
return map(what, [&out,e](Optional<T> const &o) {
if(!o.present())
throw e;
out = o.get();
return Void();
});
}
//Waits for a future to be ready, and then applies an asynchronous function to it.
ACTOR template<class T, class F, class U = decltype( fake<F>()(fake<T>()).getValue() )>
Future<U> mapAsync(Future<T> what, F actorFunc)

View File

@ -32,7 +32,7 @@
<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'>
<Product Name='$(var.Title)'
Id='{0EDB0964-987A-4CDC-8CC4-D059C20201DB}'
Id='{A4228020-2D9B-43FA-B3AE-4EE6297105F5}'
UpgradeCode='{A95EA002-686E-4164-8356-C715B7F8B1C8}'
Version='$(var.Version)'
Manufacturer='$(var.Manufacturer)'

View File

@ -1,7 +1,7 @@
<?xml version="1.0"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<Version>6.0.16</Version>
<Version>6.0.19</Version>
<PackageName>6.0</PackageName>
</PropertyGroup>
</Project>