Patch improves handling of scenarios where either the commit or GRV proxies
value is updated to -1 OR `proxies_count` is being reset.
The code splits the proxies between the two proxy types by ensuring that,
for an invalid input configuration, the minimum (read: 1) number of proxies
gets provisioned; otherwise, the split is done based on the input values.
Patch handles the scenario where the mutation-supplied value used to update
grv_proxies and/or commit_proxies is -1 while the total proxy count is > 1;
in that case it uses DEFAULT_COMMIT_GRV_PROXIES_RATIO to split the proxies
between grv_proxies & commit_proxies.
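A minimal sketch of the split described above, assuming a simple standalone
helper; only the name DEFAULT_COMMIT_GRV_PROXIES_RATIO is taken from the
patch, and its value here is illustrative:

#include <algorithm>
#include <utility>

constexpr int DEFAULT_COMMIT_GRV_PROXIES_RATIO = 3; // assumed value, for illustration only

// Hypothetical helper: returns {commit_proxies, grv_proxies} for a requested total.
std::pair<int, int> splitProxies(int totalProxies) {
    if (totalProxies <= 1) {
        // Invalid or reset configuration: provision the minimum (1) of each proxy type.
        return { 1, 1 };
    }
    // Otherwise split the total using the configured ratio, keeping at least one of each.
    int grvProxies = std::max(1, totalProxies / (DEFAULT_COMMIT_GRV_PROXIES_RATIO + 1));
    int commitProxies = std::max(1, totalProxies - grvProxies);
    return { commitProxies, grvProxies };
}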
When the database is not fully replicated to the remote region, fault
tolerance can be calculated as the total number of tLog replicas in the
primary and satellite minus 1, since we can lose all but one of those, but
we were subtracting 2 instead of 1.
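As a concrete illustration (the replica counts are hypothetical): with 4 tLog
replicas in the primary and 2 in the satellite, the reported fault tolerance
should be 4 + 2 - 1 = 5 lost replicas, whereas the old code reported
4 + 2 - 2 = 4.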
With this patch, if a configuration special key is present in the database,
it will still be forwarded when constructing the JSON string even if its
value has been reverted to the default/hidden value. This should address the
requirements described in https://github.com/apple/foundationdb/issues/3551.
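A minimal sketch of the intended behavior, assuming a hypothetical
JSON-building helper (none of these names come from the patch):

#include <map>
#include <string>

// Illustrative only: build a JSON object from configuration special keys.
// Before the patch, keys whose value had been reverted to the default/hidden
// value could be dropped; with the patch every key present is still forwarded.
std::string configurationToJson(const std::map<std::string, std::string>& configKeys) {
    std::string json = "{";
    bool first = true;
    for (const auto& [key, value] : configKeys) {
        if (!first) json += ",";
        json += "\"" + key + "\":\"" + value + "\"";
        first = false;
    }
    return json + "}";
}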
This causes the following to not compile anymore
#include <utility>
#include <vector>
using namespace std::rel_ops;
int main() {
    std::vector<int> xs;
    return xs.rbegin() != xs.rend();
}
See https://godbolt.org/z/s1977n; the using-directive makes the generic
std::rel_ops::operator!= ambiguous with the standard library's own
operator!= for reverse iterators.
Right now, the default is to keep the old backup behavior, i.e., do NOT use
backup workers. Specifically, if BackupType is not set (or is set to default),
the master will not recruit backup workers and will not add pseudo locality for
backup workers.
The StartFullBackupTaskFunc is updated to check whether backup workers are
enabled. Only when they are enabled does starting a backup wait for all
backup workers to be started.
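A minimal sketch of that control flow (plain C++ rather than actor code; the
type and helper names are assumptions, not the actual FDB API):

// Illustrative stand-in for the relevant part of the database configuration.
struct BackupConfigSketch {
    bool backupWorkerEnabled = false; // derived from whether BackupType is set
};

void startFullBackupSketch(const BackupConfigSketch& config) {
    if (config.backupWorkerEnabled) {
        // Only when backup workers are enabled does the backup start path
        // block until every backup worker has reported that it started.
        // waitForAllBackupWorkersStarted();  // hypothetical helper
    }
    // ...continue with the pre-existing backup start logic...
}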
This mega-commit introduces a new configuration setting, `log_version`,
that controls the TLog implementations and features that are available
within FDB, so that users can opt in to new features if they're willing
to sacrifice backwards compatibility.
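A minimal sketch of how a setting like this can gate TLog selection (the enum
values and function here are illustrative, not the actual FDB code):

#include <stdexcept>

// Map the configured log_version to a TLog implementation/feature level.
enum class TLogVersionSketch { V2 = 2, V3 = 3, V4 = 4 };

TLogVersionSketch selectTLogVersion(int configuredLogVersion) {
    switch (configuredLogVersion) {
    case 2: return TLogVersionSketch::V2; // default: keeps backwards compatibility
    case 3: return TLogVersionSketch::V3; // opt-in: newer TLog features
    case 4: return TLogVersionSketch::V4;
    default: throw std::invalid_argument("unsupported log_version");
    }
}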
fix: the cluster controller attempts to do a commit to determine if the cluster is alive, since its own internal recoveryState might not be up-to-date.
fix: forceMasterFailure on the cluster controller did not always cause the current master to be re-recruited