Merge pull request #952 from etschannen/master

Merge 6.0 into master
This commit is contained in:
Evan Tschannen 2018-11-27 14:42:49 -08:00 committed by GitHub
commit 4681bebe45
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
10 changed files with 308 additions and 110 deletions


@ -10,38 +10,38 @@ macOS
The macOS installation package is supported on macOS 10.7+. It includes the client and (optionally) the server.
* `FoundationDB-6.0.15.pkg <https://www.foundationdb.org/downloads/6.0.15/macOS/installers/FoundationDB-6.0.15.pkg>`_
* `FoundationDB-6.0.16.pkg <https://www.foundationdb.org/downloads/6.0.16/macOS/installers/FoundationDB-6.0.16.pkg>`_
Ubuntu
------
The Ubuntu packages are supported on 64-bit Ubuntu 12.04+, but beware of the Linux kernel bug in Ubuntu 12.x.
* `foundationdb-clients-6.0.15-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.15/ubuntu/installers/foundationdb-clients_6.0.15-1_amd64.deb>`_
* `foundationdb-server-6.0.15-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.15/ubuntu/installers/foundationdb-server_6.0.15-1_amd64.deb>`_ (depends on the clients package)
* `foundationdb-clients-6.0.16-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.16/ubuntu/installers/foundationdb-clients_6.0.16-1_amd64.deb>`_
* `foundationdb-server-6.0.16-1_amd64.deb <https://www.foundationdb.org/downloads/6.0.16/ubuntu/installers/foundationdb-server_6.0.16-1_amd64.deb>`_ (depends on the clients package)
RHEL/CentOS EL6
---------------
The RHEL/CentOS EL6 packages are supported on 64-bit RHEL/CentOS 6.x.
* `foundationdb-clients-6.0.15-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel6/installers/foundationdb-clients-6.0.15-1.el6.x86_64.rpm>`_
* `foundationdb-server-6.0.15-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel6/installers/foundationdb-server-6.0.15-1.el6.x86_64.rpm>`_ (depends on the clients package)
* `foundationdb-clients-6.0.16-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.16/rhel6/installers/foundationdb-clients-6.0.16-1.el6.x86_64.rpm>`_
* `foundationdb-server-6.0.16-1.el6.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.16/rhel6/installers/foundationdb-server-6.0.16-1.el6.x86_64.rpm>`_ (depends on the clients package)
RHEL/CentOS EL7
---------------
The RHEL/CentOS EL7 packages are supported on 64-bit RHEL/CentOS 7.x.
* `foundationdb-clients-6.0.15-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel7/installers/foundationdb-clients-6.0.15-1.el7.x86_64.rpm>`_
* `foundationdb-server-6.0.15-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.15/rhel7/installers/foundationdb-server-6.0.15-1.el7.x86_64.rpm>`_ (depends on the clients package)
* `foundationdb-clients-6.0.16-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.16/rhel7/installers/foundationdb-clients-6.0.16-1.el7.x86_64.rpm>`_
* `foundationdb-server-6.0.16-1.el7.x86_64.rpm <https://www.foundationdb.org/downloads/6.0.16/rhel7/installers/foundationdb-server-6.0.16-1.el7.x86_64.rpm>`_ (depends on the clients package)
Windows
-------
The Windows installer is supported on 64-bit Windows XP and later. It includes the client and (optionally) the server.
* `foundationdb-6.0.15-x64.msi <https://www.foundationdb.org/downloads/6.0.15/windows/installers/foundationdb-6.0.15-x64.msi>`_
* `foundationdb-6.0.16-x64.msi <https://www.foundationdb.org/downloads/6.0.16/windows/installers/foundationdb-6.0.16-x64.msi>`_
API Language Bindings
=====================
@ -58,18 +58,18 @@ On macOS and Windows, the FoundationDB Python API bindings are installed as part
If you need to use the FoundationDB Python API from other Python installations or paths, download the Python package:
* `foundationdb-6.0.15.tar.gz <https://www.foundationdb.org/downloads/6.0.15/bindings/python/foundationdb-6.0.15.tar.gz>`_
* `foundationdb-6.0.16.tar.gz <https://www.foundationdb.org/downloads/6.0.16/bindings/python/foundationdb-6.0.16.tar.gz>`_
Ruby 1.9.3/2.0.0+
-----------------
* `fdb-6.0.15.gem <https://www.foundationdb.org/downloads/6.0.15/bindings/ruby/fdb-6.0.15.gem>`_
* `fdb-6.0.16.gem <https://www.foundationdb.org/downloads/6.0.16/bindings/ruby/fdb-6.0.16.gem>`_
Java 8+
-------
* `fdb-java-6.0.15.jar <https://www.foundationdb.org/downloads/6.0.15/bindings/java/fdb-java-6.0.15.jar>`_
* `fdb-java-6.0.15-javadoc.jar <https://www.foundationdb.org/downloads/6.0.15/bindings/java/fdb-java-6.0.15-javadoc.jar>`_
* `fdb-java-6.0.16.jar <https://www.foundationdb.org/downloads/6.0.16/bindings/java/fdb-java-6.0.16.jar>`_
* `fdb-java-6.0.16-javadoc.jar <https://www.foundationdb.org/downloads/6.0.16/bindings/java/fdb-java-6.0.16-javadoc.jar>`_
Go 1.1+
-------


@ -5,6 +5,21 @@ Release Notes
6.0.16
======
Performance
-----------
* Added a new backup folder scheme which results in far fewer kv range folders. `(PR #939) <https://github.com/apple/foundationdb/pull/939>`_
Fixes
-----
* Blobstore REST client attempted to create buckets that already existed. `(PR #923) <https://github.com/apple/foundationdb/pull/923>`_
* DNS would fail if IPv6 responses were received. `(PR #945) <https://github.com/apple/foundationdb/pull/945>`_
* Backup expiration would occasionally fail due to an incorrect assert. `(PR #926) <https://github.com/apple/foundationdb/pull/926>`_
6.0.15
======
Features
--------
@ -68,7 +83,6 @@ Fixes
* HTTP client used by backup-to-blobstore now correctly treats response header field names as case insensitive. [6.0.15] `(PR #904) <https://github.com/apple/foundationdb/pull/904>`_
* Blobstore REST client was not following the S3 API in several ways (bucket name, date, and response formats). [6.0.15] `(PR #914) <https://github.com/apple/foundationdb/pull/914>`_
* Data distribution could queue shard movements for restoring replication at a low priority. [6.0.15] `(PR #907) <https://github.com/apple/foundationdb/pull/907>`_
* Blobstore REST client will no longer attempt to create a bucket that already exists. [6.0.16] `(PR #923) <https://github.com/apple/foundationdb/pull/923>`_
Fixes only impacting 6.0.0+
---------------------------


@ -615,6 +615,15 @@ public:
return configSpace.pack(LiteralStringRef(__FUNCTION__));
}
// Number of kv range files that were both committed to persistent storage AND inserted into
// the snapshotRangeFileMap. Note that since insertions could replace 1 or more existing
// map entries this is not necessarily the number of entries currently in the map.
// This value exists to help with sizing of kv range folders for BackupContainers that
// require it.
KeyBackedBinaryValue<int64_t> snapshotRangeFileCount() {
return configSpace.pack(LiteralStringRef(__FUNCTION__));
}
// Coalesced set of ranges already dispatched for writing.
typedef KeyBackedMap<Key, bool> RangeDispatchMapT;
RangeDispatchMapT snapshotRangeDispatchMap() {
@ -671,6 +680,7 @@ public:
copy.snapshotBeginVersion().set(tr, beginVersion.get());
copy.snapshotTargetEndVersion().set(tr, endVersion);
copy.snapshotRangeFileCount().set(tr, 0);
return Void();
});


@ -143,16 +143,36 @@ std::string BackupDescription::toString() const {
/* BackupContainerFileSystem implements a backup container which stores files in a nested folder structure.
* Inheritors need only define methods for writing, reading, deleting, sizing, and listing files.
*
* BackupInfo is stored as a JSON document at
* /info
* Snapshots are stored as JSON at file paths like
* /snapshots/snapshot,startVersion,endVersion,totalBytes
* Log and Range data files at file paths like
* /logs/.../log,startVersion,endVersion,blockSize
* /ranges/.../range,version,uid,blockSize
* Snapshot manifests (a complete set of files constituting a database snapshot for the backup's target ranges)
* are stored as JSON files at paths like
* /snapshots/snapshot,minVersion,maxVersion,totalBytes
*
* Key range files for snapshots are stored at paths like
* /kvranges/snapshot,startVersion/N/range,version,uid,blockSize
* where startVersion is the version at which the backup snapshot execution began and N is a number
* that is increased as key range files are generated over time (at varying rates) such that there
* are around 5,000 key range files in each folder.
*
* Where ... is a multi-level path which sorts lexically into version order and targets 10,000 or fewer
* entries in each folder (though a full speed snapshot could exceed this count at the innermost folder level)
* Note that startVersion will NOT correspond to the minVersion of a snapshot manifest because
* snapshot manifest min/max versions are based on the actual contained data and the first data
* file written will be after the start version of the snapshot's execution.
*
* Log files are at file paths like
* /logs/.../log,startVersion,endVersion,blockSize
* where ... is a multi level path which sorts lexically into version order and results in approximately 1
* unique folder per day containing about 5,000 files.
*
* BACKWARD COMPATIBILITY
*
* Prior to FDB version 6.0.16, key range files were stored using a different folder scheme. Newer versions
* still support this scheme for all restore and backup management operations, but key range files generated
* by backups using version 6.0.16 or later use the scheme described above.
*
* The old format stored key range files at paths like
* /ranges/.../range,version,uid,blockSize
* where ... is a multi-level path which sorts lexically into version order and results in up to approximately
* 900 unique folders per day. The number of files per folder depends on the configured snapshot rate and
* database size and will vary from 1 to around 5,000.
*/
class BackupContainerFileSystem : public IBackupContainer {
public:
@ -166,8 +186,8 @@ public:
virtual Future<Void> create() = 0;
// Get a list of fileNames and their sizes in the container under the given path
// The implementation can (but does not have to) use the folder path filter to avoid traversing
// specific subpaths.
// Although not required, an implementation can avoid traversing unwanted subfolders
// by calling folderPathFilter(absoluteFolderPath) and checking for a false return value.
typedef std::vector<std::pair<std::string, int64_t>> FilesAndSizesT;
virtual Future<FilesAndSizesT> listFiles(std::string path = "", std::function<bool(std::string const &)> folderPathFilter = nullptr) = 0;
@ -207,10 +227,24 @@ public:
}
// The innermost folder covers 100 seconds (1e8 versions). During a full speed backup it is possible, though very unlikely, to write about 10,000 snapshot range files during that time.
static std::string rangeVersionFolderString(Version v) {
static std::string old_rangeVersionFolderString(Version v) {
return format("ranges/%s/", versionFolderString(v, 8).c_str());
}
// Get the root folder for a snapshot's data based on its begin version
static std::string snapshotFolderString(Version snapshotBeginVersion) {
return format("kvranges/snapshot.%018lld", snapshotBeginVersion);
}
// Extract the snapshot begin version from a path
static Version extractSnapshotBeginVersion(std::string path) {
Version snapshotBeginVersion;
if(sscanf(path.c_str(), "kvranges/snapshot.%018lld", &snapshotBeginVersion) == 1) {
return snapshotBeginVersion;
}
return invalidVersion;
}
// The innermost folder covers 100,000 seconds (1e11 versions) which is 5,000 mutation log files at current settings.
static std::string logVersionFolderString(Version v) {
return format("logs/%s/", versionFolderString(v, 11).c_str());
@ -220,8 +254,15 @@ public:
return writeFile(logVersionFolderString(beginVersion) + format("log,%lld,%lld,%s,%d", beginVersion, endVersion, g_random->randomUniqueID().toString().c_str(), blockSize));
}
Future<Reference<IBackupFile>> writeRangeFile(Version version, int blockSize) {
return writeFile(rangeVersionFolderString(version) + format("range,%lld,%s,%d", version, g_random->randomUniqueID().toString().c_str(), blockSize));
Future<Reference<IBackupFile>> writeRangeFile(Version snapshotBeginVersion, int snapshotFileCount, Version fileVersion, int blockSize) {
std::string fileName = format("range,%lld,%s,%d", fileVersion, g_random->randomUniqueID().toString().c_str(), blockSize);
// In order to test backward compatibility in simulation, sometimes write to the old path format
if(g_network->isSimulated() && g_random->coinflip()) {
return writeFile(old_rangeVersionFolderString(fileVersion) + fileName);
}
return writeFile(snapshotFolderString(snapshotBeginVersion) + format("/%d/", snapshotFileCount / (BUGGIFY ? 1 : 5000)) + fileName);
}
static bool pathToRangeFile(RangeFile &out, std::string path, int64_t size) {
@ -265,6 +306,7 @@ public:
// TODO: Do this more efficiently, as the range file list for a snapshot could potentially be hundreds of megabytes.
ACTOR static Future<std::vector<RangeFile>> readKeyspaceSnapshot_impl(Reference<BackupContainerFileSystem> bc, KeyspaceSnapshotFile snapshot) {
// Read the range file list for the specified version range, and then index them by fileName.
// This is so we can verify that each of the files listed in the manifest file is also in the container at this time.
std::vector<RangeFile> files = wait(bc->listRangeFiles(snapshot.beginVersion, snapshot.endVersion));
state std::map<std::string, RangeFile> rangeIndex;
for(auto &f : files)
@ -386,11 +428,12 @@ public:
});
}
// List range files, in sorted version order, which contain data at or between beginVersion and endVersion
Future<std::vector<RangeFile>> listRangeFiles(Version beginVersion = 0, Version endVersion = std::numeric_limits<Version>::max()) {
// List range files which contain data at or between beginVersion and endVersion
// NOTE: This reads the range file folder scheme from FDB 6.0.15 and earlier and is provided for backward compatibility
Future<std::vector<RangeFile>> old_listRangeFiles(Version beginVersion, Version endVersion) {
// Get the cleaned (without slashes) first and last folders that could contain relevant results.
std::string firstPath = cleanFolderString(rangeVersionFolderString(beginVersion));
std::string lastPath = cleanFolderString(rangeVersionFolderString(endVersion));
std::string firstPath = cleanFolderString(old_rangeVersionFolderString(beginVersion));
std::string lastPath = cleanFolderString(old_rangeVersionFolderString(endVersion));
std::function<bool(std::string const &)> pathFilter = [=](const std::string &folderPath) {
// Remove slashes in the given folder path so that the '/' positions in the version folder string do not matter
@ -407,6 +450,39 @@ public:
if(pathToRangeFile(rf, f.first, f.second) && rf.version >= beginVersion && rf.version <= endVersion)
results.push_back(rf);
}
return results;
});
}
// List range files, sorted in version order, which contain data at or between beginVersion and endVersion
// Note: The contents of each top level snapshot.N folder do not necessarily constitute a valid snapshot
// and therefore listing files is not how RestoreSets are obtained.
// Note: Snapshots partially written using FDB versions prior to 6.0.16 will have some range files stored
// using the old folder scheme read by old_listRangeFiles
Future<std::vector<RangeFile>> listRangeFiles(Version beginVersion, Version endVersion) {
// Until the old folder scheme is no longer supported, also read files stored using the old folder scheme
Future<std::vector<RangeFile>> oldFiles = old_listRangeFiles(beginVersion, endVersion);
// Define filter function (for listFiles() implementations that use it) to reject any folder
// starting after endVersion
std::function<bool(std::string const &)> pathFilter = [=](std::string const &path) {
return extractSnapshotBeginVersion(path) <= endVersion;
};
Future<std::vector<RangeFile>> newFiles = map(listFiles("kvranges/", pathFilter), [=](const FilesAndSizesT &files) {
std::vector<RangeFile> results;
RangeFile rf;
for(auto &f : files) {
if(pathToRangeFile(rf, f.first, f.second) && rf.version >= beginVersion && rf.version <= endVersion)
results.push_back(rf);
}
return results;
});
return map(success(oldFiles) && success(newFiles), [=](Void _) {
std::vector<RangeFile> results = std::move(newFiles.get());
std::vector<RangeFile> oldResults = std::move(oldFiles.get());
results.insert(results.end(), std::make_move_iterator(oldResults.begin()), std::make_move_iterator(oldResults.end()));
std::sort(results.begin(), results.end());
return results;
});
@ -1362,6 +1438,15 @@ ACTOR Future<Optional<int64_t>> timeKeeperEpochsFromVersion(Version v, Reference
return found.first + (v - found.second) / CLIENT_KNOBS->CORE_VERSIONSPERSECOND;
}
int chooseFileSize(std::vector<int> &sizes) {
int size = 1000;
if(!sizes.empty()) {
size = sizes.back();
sizes.pop_back();
}
return size;
}
ACTOR Future<Void> writeAndVerifyFile(Reference<IBackupContainer> c, Reference<IBackupFile> f, int size) {
state Standalone<StringRef> content;
if(size > 0) {
@ -1384,6 +1469,12 @@ ACTOR Future<Void> writeAndVerifyFile(Reference<IBackupContainer> c, Reference<I
return Void();
}
// Randomly advance version by up to 1 second of versions
Version nextVersion(Version v) {
int64_t increment = g_random->randomInt64(1, CLIENT_KNOBS->CORE_VERSIONSPERSECOND);
return v + increment;
}
ACTOR Future<Void> testBackupContainer(std::string url) {
printf("BackupContainerTest URL %s\n", url.c_str());
@ -1399,86 +1490,115 @@ ACTOR Future<Void> testBackupContainer(std::string url) {
wait(c->create());
state int64_t versionShift = g_random->randomInt64(0, std::numeric_limits<Version>::max() - 500);
state std::vector<Future<Void>> writes;
state std::map<Version, std::vector<std::string>> snapshots;
state std::map<Version, int64_t> snapshotSizes;
state int nRangeFiles = 0;
state std::map<Version, std::string> logs;
state Version v = g_random->randomInt64(0, std::numeric_limits<Version>::max() / 2);
state Reference<IBackupFile> log1 = wait(c->writeLogFile(100 + versionShift, 150 + versionShift, 10));
state Reference<IBackupFile> log2 = wait(c->writeLogFile(150 + versionShift, 300 + versionShift, 10));
state Reference<IBackupFile> range1 = wait(c->writeRangeFile(160 + versionShift, 10));
state Reference<IBackupFile> range2 = wait(c->writeRangeFile(300 + versionShift, 10));
state Reference<IBackupFile> range3 = wait(c->writeRangeFile(310 + versionShift, 10));
// List of sizes to use to test edge cases on underlying file implementations
state std::vector<int> fileSizes = {0, 10000000, 5000005};
wait(
writeAndVerifyFile(c, log1, 0)
&& writeAndVerifyFile(c, log2, g_random->randomInt(0, 10000000))
&& writeAndVerifyFile(c, range1, g_random->randomInt(0, 1000))
&& writeAndVerifyFile(c, range2, g_random->randomInt(0, 100000))
&& writeAndVerifyFile(c, range3, g_random->randomInt(0, 3000000))
);
loop {
state Version logStart = v;
state int kvfiles = g_random->randomInt(0, 3);
wait(
c->writeKeyspaceSnapshotFile({range1->getFileName(), range2->getFileName()}, range1->size() + range2->size())
&& c->writeKeyspaceSnapshotFile({range3->getFileName()}, range3->size())
);
while(kvfiles > 0) {
if(snapshots.empty()) {
snapshots[v] = {};
snapshotSizes[v] = 0;
if(g_random->coinflip()) {
v = nextVersion(v);
}
}
Reference<IBackupFile> range = wait(c->writeRangeFile(snapshots.rbegin()->first, 0, v, 10));
++nRangeFiles;
v = nextVersion(v);
snapshots.rbegin()->second.push_back(range->getFileName());
printf("Checking file list dump\n");
FullBackupListing listing = wait(c->dumpFileList());
ASSERT(listing.logs.size() == 2);
ASSERT(listing.ranges.size() == 3);
ASSERT(listing.snapshots.size() == 2);
int size = chooseFileSize(fileSizes);
snapshotSizes.rbegin()->second += size;
writes.push_back(writeAndVerifyFile(c, range, size));
if(g_random->random01() < .2) {
writes.push_back(c->writeKeyspaceSnapshotFile(snapshots.rbegin()->second, snapshotSizes.rbegin()->second));
snapshots[v] = {};
snapshotSizes[v] = 0;
break;
}
--kvfiles;
}
if(logStart == v || g_random->coinflip()) {
v = nextVersion(v);
}
state Reference<IBackupFile> log = wait(c->writeLogFile(logStart, v, 10));
logs[logStart] = log->getFileName();
int size = chooseFileSize(fileSizes);
writes.push_back(writeAndVerifyFile(c, log, size));
// Randomly stop after a snapshot has finished and all manually seeded file sizes have been used.
if(fileSizes.empty() && !snapshots.empty() && snapshots.rbegin()->second.empty() && g_random->random01() < .2) {
snapshots.erase(snapshots.rbegin()->first);
break;
}
}
wait(waitForAll(writes));
state FullBackupListing listing = wait(c->dumpFileList());
ASSERT(listing.ranges.size() == nRangeFiles);
ASSERT(listing.logs.size() == logs.size());
ASSERT(listing.snapshots.size() == snapshots.size());
state BackupDescription desc = wait(c->describeBackup());
printf("Backup Description 1\n%s", desc.toString().c_str());
printf("\n%s\n", desc.toString().c_str());
ASSERT(desc.maxRestorableVersion.present());
Optional<RestorableFileSet> rest = wait(c->getRestoreSet(desc.maxRestorableVersion.get()));
ASSERT(rest.present());
ASSERT(rest.get().logs.size() == 0);
ASSERT(rest.get().ranges.size() == 1);
// Do a series of expirations and verify resulting state
state int i = 0;
for(; i < listing.snapshots.size(); ++i) {
// Ensure we can still restore to the latest version
Optional<RestorableFileSet> rest = wait(c->getRestoreSet(desc.maxRestorableVersion.get()));
ASSERT(rest.present());
Optional<RestorableFileSet> rest = wait(c->getRestoreSet(150 + versionShift));
ASSERT(!rest.present());
// Ensure we can restore to the end version of snapshot i
Optional<RestorableFileSet> rest = wait(c->getRestoreSet(listing.snapshots[i].endVersion));
ASSERT(rest.present());
Optional<RestorableFileSet> rest = wait(c->getRestoreSet(300 + versionShift));
ASSERT(rest.present());
ASSERT(rest.get().logs.size() == 1);
ASSERT(rest.get().ranges.size() == 2);
// Test expiring to the end of this snapshot
state Version expireVersion = listing.snapshots[i].endVersion;
printf("Expire 1\n");
wait(c->expireData(100 + versionShift));
BackupDescription d = wait(c->describeBackup());
printf("Backup Description 2\n%s", d.toString().c_str());
ASSERT(d.minLogBegin == 100 + versionShift);
ASSERT(d.maxRestorableVersion == desc.maxRestorableVersion);
// Expire everything up to but not including the snapshot end version
printf("EXPIRE TO %lld\n", expireVersion);
state Future<Void> f = c->expireData(expireVersion);
wait(ready(f));
printf("Expire 2\n");
wait(c->expireData(101 + versionShift));
BackupDescription d = wait(c->describeBackup());
printf("Backup Description 3\n%s", d.toString().c_str());
ASSERT(d.minLogBegin == 100 + versionShift);
ASSERT(d.maxRestorableVersion == desc.maxRestorableVersion);
// If there is an error, it must be backup_cannot_expire and we have to be on the last snapshot
if(f.isError()) {
ASSERT(f.getError().code() == error_code_backup_cannot_expire);
ASSERT(i == listing.snapshots.size() - 1);
wait(c->expireData(expireVersion, true));
}
printf("Expire 3\n");
wait(c->expireData(300 + versionShift));
BackupDescription d = wait(c->describeBackup());
printf("Backup Description 4\n%s", d.toString().c_str());
ASSERT(d.minLogBegin.present());
ASSERT(d.snapshots.size() == desc.snapshots.size());
ASSERT(d.maxRestorableVersion == desc.maxRestorableVersion);
printf("Expire 4\n");
wait(c->expireData(301 + versionShift, true));
BackupDescription d = wait(c->describeBackup());
printf("Backup Description 4\n%s", d.toString().c_str());
ASSERT(d.snapshots.size() == 1);
ASSERT(!d.minLogBegin.present());
BackupDescription d = wait(c->describeBackup());
printf("\n%s\n", d.toString().c_str());
}
printf("DELETING\n");
wait(c->deleteContainer());
BackupDescription d = wait(c->describeBackup());
printf("Backup Description 5\n%s", d.toString().c_str());
printf("\n%s\n", d.toString().c_str());
ASSERT(d.snapshots.size() == 0);
ASSERT(!d.minLogBegin.present());
FullBackupListing empty = wait(c->dumpFileList());
ASSERT(empty.ranges.size() == 0);
ASSERT(empty.logs.size() == 0);
ASSERT(empty.snapshots.size() == 0);
printf("BackupContainerTest URL=%s PASSED.\n", url.c_str());
return Void();


@ -156,7 +156,7 @@ public:
// Open a log file or range file for writing
virtual Future<Reference<IBackupFile>> writeLogFile(Version beginVersion, Version endVersion, int blockSize) = 0;
virtual Future<Reference<IBackupFile>> writeRangeFile(Version version, int blockSize) = 0;
virtual Future<Reference<IBackupFile>> writeRangeFile(Version snapshotBeginVersion, int snapshotFileCount, Version fileVersion, int blockSize) = 0;
// Write a KeyspaceSnapshotFile of range file names representing a full non overlapping
// snapshot of the key ranges this backup is targeting.


@ -226,7 +226,21 @@ std::string BlobStoreEndpoint::getResourceURL(std::string resource) {
}
ACTOR Future<bool> bucketExists_impl(Reference<BlobStoreEndpoint> b, std::string bucket) {
Void _ = wait(b->requestRateRead->getAllowance(1));
wait(b->requestRateRead->getAllowance(1));
std::string resource = std::string("/") + bucket;
HTTP::Headers headers;
Reference<HTTP::Response> r = wait(b->doRequest("HEAD", resource, headers, NULL, 0, {200, 404}));
return r->code == 200;
}
Future<bool> BlobStoreEndpoint::bucketExists(std::string const &bucket) {
return bucketExists_impl(Reference<BlobStoreEndpoint>::addRef(this), bucket);
}
ACTOR Future<bool> objectExists_impl(Reference<BlobStoreEndpoint> b, std::string bucket, std::string object) {
wait(b->requestRateRead->getAllowance(1));
std::string resource = std::string("/") + bucket;
HTTP::Headers headers;


@ -1004,6 +1004,7 @@ namespace fileBackup {
// Update the range bytes written in the backup config
backup.rangeBytesWritten().atomicOp(tr, file->size(), MutationRef::AddValue);
backup.snapshotRangeFileCount().atomicOp(tr, 1, MutationRef::AddValue);
// See if there is already a file for this key which has an earlier begin, update the map if not.
Optional<BackupConfig::RangeSlice> s = wait(backup.snapshotRangeFileMap().get(tr, range.end));
@ -1129,11 +1130,31 @@ namespace fileBackup {
if(done)
return Void();
// Start writing a new file
// Start writing a new file after verifying this task should keep running as of a new read version (which must be >= outVersion)
outVersion = values.second;
// Block size must be large enough for 3 max-size keys and 2 max-size values plus overhead, so 250k is a conservative lower bound.
state int blockSize = BUGGIFY ? g_random->randomInt(250e3, 4e6) : CLIENT_KNOBS->BACKUP_RANGEFILE_BLOCK_SIZE;
Reference<IBackupFile> f = wait(bc->writeRangeFile(outVersion, blockSize));
state Version snapshotBeginVersion;
state int64_t snapshotRangeFileCount;
state Reference<ReadYourWritesTransaction> tr(new ReadYourWritesTransaction(cx));
loop {
try {
tr->setOption(FDBTransactionOptions::ACCESS_SYSTEM_KEYS);
tr->setOption(FDBTransactionOptions::LOCK_AWARE);
wait(taskBucket->keepRunning(tr, task)
&& storeOrThrow(backup.snapshotBeginVersion().get(tr), snapshotBeginVersion)
&& storeOrThrow(backup.snapshotRangeFileCount().get(tr), snapshotRangeFileCount)
);
break;
} catch(Error &e) {
wait(tr->onError(e));
}
}
Reference<IBackupFile> f = wait(bc->writeRangeFile(snapshotBeginVersion, snapshotRangeFileCount, outVersion, blockSize));
outFile = f;
// Initialize range file writer and write begin key


@ -164,7 +164,6 @@ public:
ASIOReactor reactor;
INetworkConnections *network; // initially this, but can be changed
tcp::resolver tcpResolver;
int64_t tsc_begin, tsc_end;
double taskBegin;
@ -484,7 +483,6 @@ Net2::Net2(NetworkAddress localAddress, bool useThreadPool, bool useMetrics)
: useThreadPool(useThreadPool),
network(this),
reactor(this),
tcpResolver(reactor.ios),
stopped(false),
tasksIssued(0),
// Until run() is called, yield() will always yield
@ -841,11 +839,13 @@ Future< Reference<IConnection> > Net2::connect( NetworkAddress toAddr, std::stri
}
ACTOR static Future<std::vector<NetworkAddress>> resolveTCPEndpoint_impl( Net2 *self, std::string host, std::string service) {
Promise<std::vector<NetworkAddress>> result;
state tcp::resolver tcpResolver(self->reactor.ios);
Promise<std::vector<NetworkAddress>> promise;
state Future<std::vector<NetworkAddress>> result = promise.getFuture();
self->tcpResolver.async_resolve(tcp::resolver::query(host, service), [=](const boost::system::error_code &ec, tcp::resolver::iterator iter) {
tcpResolver.async_resolve(tcp::resolver::query(host, service), [=](const boost::system::error_code &ec, tcp::resolver::iterator iter) {
if(ec) {
result.sendError(lookup_failed());
promise.sendError(lookup_failed());
return;
}
@ -853,18 +853,27 @@ ACTOR static Future<std::vector<NetworkAddress>> resolveTCPEndpoint_impl( Net2 *
tcp::resolver::iterator end;
while(iter != end) {
// The easiest way to get an ip:port formatted endpoint with this interface is with a string stream because
// endpoint::to_string doesn't exist but operator<< does.
std::stringstream s;
s << iter->endpoint();
addrs.push_back(NetworkAddress::parse(s.str()));
auto endpoint = iter->endpoint();
// Currently only IPv4 is supported by NetworkAddress
auto addr = endpoint.address();
if(addr.is_v4()) {
addrs.push_back(NetworkAddress(addr.to_v4().to_ulong(), endpoint.port()));
}
++iter;
}
result.send(addrs);
if(addrs.empty()) {
promise.sendError(lookup_failed());
}
else {
promise.send(addrs);
}
});
std::vector<NetworkAddress> addresses = wait(result.getFuture());
return addresses;
wait(ready(result));
tcpResolver.cancel();
return result.get();
}
Future<std::vector<NetworkAddress>> Net2::resolveTCPEndpoint( std::string host, std::string service) {


@ -298,6 +298,16 @@ Future<Void> store(Future<T> what, T &out) {
return map(what, [&out](T const &v) { out = v; return Void(); });
}
template<class T>
Future<Void> storeOrThrow(Future<Optional<T>> what, T &out, Error e = key_not_found()) {
return map(what, [&out,e](Optional<T> const &o) {
if(!o.present())
throw e;
out = o.get();
return Void();
});
}
//Waits for a future to be ready, and then applies an asynchronous function to it.
ACTOR template<class T, class F, class U = decltype( fake<F>()(fake<T>()).getValue() )>
Future<U> mapAsync(Future<T> what, F actorFunc)


@ -32,7 +32,7 @@
<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'>
<Product Name='$(var.Title)'
Id='{0EDB0964-987A-4CDC-8CC4-D059C20201DB}'
Id='{46B49D33-3A61-4867-A65A-7436A1A6FF4A}'
UpgradeCode='{A95EA002-686E-4164-8356-C715B7F8B1C8}'
Version='$(var.Version)'
Manufacturer='$(var.Manufacturer)'