Merge pull request #6589 from sfc-gh-tclinkenbeard/fix-typos

Fix typos
Jingyu Zhou 2022-03-15 10:10:23 -07:00 committed by GitHub
commit e89ee7d5a0
41 changed files with 59 additions and 60 deletions

@@ -149,7 +149,7 @@ Format
------
| One operation type is defined as ``<Type><Count>`` or ``<Type><Count>:<Range>``.
| When Count is omitted, it's equivalent to setting it to 1. (e.g. ``g`` is equivalent to ``g1``)
| Multiple operation types within the same trancaction can be concatenated. (e.g. ``g9u1`` = 9 GETs and 1 update)
| Multiple operation types within the same transaction can be concatenated. (e.g. ``g9u1`` = 9 GETs and 1 update)
Transaction Specification Examples
----------------------------------

@@ -161,7 +161,7 @@ struct RangeResultRef : VectorRef<KeyValueRef> {
// False implies that no such values remain
Optional<KeyRef> readThrough; // Only present when 'more' is true. When present, this value represent the end (or
// beginning if reverse) of the range
// which was read to produce these results. This is guarenteed to be less than the requested range.
// which was read to produce these results. This is guaranteed to be less than the requested range.
bool readToBegin;
bool readThroughEnd;

@@ -24,7 +24,6 @@ import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;
/**
* Used to represent values written by versionstamp operations with a {@link Tuple}.
* This wraps a single array which should contain twelve bytes. The first ten bytes
@@ -37,7 +36,7 @@ import java.util.Arrays;
* over time. The final two bytes are the "user" version and should be set by the client.
* This allows the user to use this class to impose a total order of items across multiple
* transactions in the database in a consistent and conflict-free way. The user can elect to
* ignore this parameter by instantiating the class with the paramaterless {@link #incomplete() incomplete()}
* ignore this parameter by instantiating the class with the parameterless {@link #incomplete() incomplete()}
* and one-parameter {@link #complete(byte[]) complete} static initializers. If they do so,
* then versions are written with a default (constant) user version.
*
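The same 12-byte layout (10 transaction-version bytes plus a 2-byte user version) is exposed by the other bindings' tuple layers. As an illustration only, here is a minimal Python sketch, assuming a default cluster file and the Python binding's fdb.tuple names, of using an incomplete versionstamp with an explicit user version:

    import fdb

    fdb.api_version(630)
    db = fdb.open()  # assumes a default cluster file

    @fdb.transactional
    def append_event(tr, payload, user_version):
        # Incomplete versionstamp: the cluster fills in the 10
        # transaction-version bytes at commit time; the 2-byte user
        # version orders multiple writes within the same transaction.
        vs = fdb.tuple.Versionstamp(user_version=user_version)
        key = fdb.tuple.pack_with_versionstamp((b'events', vs))
        tr.set_versionstamped_key(key, payload)

    append_event(db, b'payload-0', 0)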

@@ -454,7 +454,7 @@ An |database-blurb1| Modifications to a database are performed via transactions.
The function will change the region configuration to have a positive priority for the chosen dcId, and a negative priority for all other dcIds.
In particular, no error will be thrown if the given dcId does not exist. It will just not attemp to force a recovery.
In particular, no error will be thrown if the given dcId does not exist. It will just not attempt to force a recovery.
If the database has already recovered, the function does nothing. Thus it's safe to call it multiple times.

@@ -115,7 +115,7 @@ Here is a complete list of valid parameters:
*request_timeout_min* (or *rtom*) - Minimum number of seconds to wait for a request to succeed after a connection is established.
*request_tries* (or *rt*) - Number of times to try each request until a parseable HTTP response other than 429 is received.
*request_tries* (or *rt*) - Number of times to try each request until a parsable HTTP response other than 429 is received.
*requests_per_second* (or *rps*) - Max number of requests to start per second.

@@ -11,7 +11,7 @@ Testing Error Handling with Buggify
FoundationDB clients need to handle errors correctly. Wrong error handling can lead to many bugs - in the worst case it can
lead to a corrupted database. Because of this it is important that an application or layer author tests properly their
application during failure scenarios. But this is non-trivial. In a developement environment cluster failures are very
application during failure scenarios. But this is non-trivial. In a development environment cluster failures are very
unlikely and it is therefore possible that certain types of exceptions are never tested in a controlled environment.
The simplest way of testing for these kind of errors is a simple mechanism called ``Buggify``. If this option is enabled
@@ -327,7 +327,7 @@ processes with the class test. So above 2-step process becomes a bit more comple
1. Write the test (same as above).
2. Set up a cluster with as many test clients as you want.
3. Run the orchestor to actually execute the test.
3. Run the orchestrator to actually execute the test.
Step 1. is explained further up. For step 2., please refer to the general FoundationDB
configuration. The main difference to a normal FoundationDB cluster is that some processes

@@ -915,7 +915,7 @@ When using FoundationDB we strongly recommend users to use the retry-loop. In Py
except FDBError as e:
tr.on_error(e.code).wait()
This is also what the transaction decoration in python does, if you pass a ``Database`` object to a decorated function. There are some interesting properies of this retry loop:
This is also what the transaction decoration in python does, if you pass a ``Database`` object to a decorated function. There are some interesting properties of this retry loop:
* We never create a new transaction within that loop. Instead ``tr.on_error`` will create a soft reset on the transaction.
* ``tr.on_error`` returns a future. This is because ``on_error`` will do back off to make sure we don't overwhelm the cluster.
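Putting the surrounding pieces together, the manual retry loop reads roughly as below; this is a minimal sketch assuming a default cluster file and an already-selected API version:

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    tr = db.create_transaction()
    while True:
        try:
            tr.set(b'some-key', b'some-value')
            tr.commit().wait()
            break
        except fdb.FDBError as e:
            # Soft-resets the transaction and backs off before retrying,
            # so a hot retry loop cannot overwhelm the cluster.
            tr.on_error(e.code).wait()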

@@ -121,8 +121,8 @@ Aggregate stats about cluster health. Reading this key alone is slightly cheaper
**Field** **Type** **Description**
----------------------------------- -------- ---------------
batch_limited boolean Whether or not the cluster is limiting batch priority transactions
limiting_storage_durability_lag number storage_durability_lag that ratekeeper is using to determing throttling (see the description for storage_durability_lag)
limiting_storage_queue number storage_queue that ratekeeper is using to determing throttling (see the description for storage_queue)
limiting_storage_durability_lag number storage_durability_lag that ratekeeper is using to determine throttling (see the description for storage_durability_lag)
limiting_storage_queue number storage_queue that ratekeeper is using to determine throttling (see the description for storage_queue)
tps_limit number The rate at which normal priority transactions are allowed to start
worst_storage_durability_lag number See the description for storage_durability_lag
worst_storage_queue number See the description for storage_queue
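For illustration only, a hedged sketch of reading this aggregate key via the Python binding; the special-key path is an assumption to check against the special-key-space docs for your FDB version:

    import fdb
    import json

    fdb.api_version(630)
    db = fdb.open()

    @fdb.transactional
    def aggregate_health(tr):
        # Path assumed here; verify against your version's special keys.
        val = tr[b'\xff\xff/metrics/health/aggregate']
        return json.loads(bytes(val).decode('utf-8')) if val.present() else None

    stats = aggregate_health(db)
    if stats and stats.get('batch_limited'):
        print('cluster is limiting batch-priority transactions')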

@@ -13,7 +13,7 @@ This document covers the operation and architecture of the Testing Storage Serve
Summary
============
The TSS feature allows FoundationDB to run an "untrusted" storage engine (the *testing storage engine*) directly in a QA or production envronment with identical workload to the current storage engine, with zero impact on durability or correctness, and minimal impact on performance.
The TSS feature allows FoundationDB to run an "untrusted" storage engine (the *testing storage engine*) directly in a QA or production environment with identical workload to the current storage engine, with zero impact on durability or correctness, and minimal impact on performance.
This allows a FoundationDB cluster operator to validate the correctness and performance of a different storage engine on the exact cluster workload before migrating data to the different storage engine.
@@ -44,10 +44,10 @@ The ``status`` command in the FDB :ref:`command line interface <command-line-int
Trace Events
----------------------
Whenever a client detects a *TSS Mismatch*, or when the SS and TSS response differ, and the difference can only be explained by different storage engine contents, it will emit an error-level trace event with a type starting with ``TSSMismatch``, with a different type for each read request. This trace event will include all of the information necessary to investgate the mismatch, such as the TSS storage ID, the full request data, and the summarized replies (full keys and checksummed values) from both the SS and TSS.
Whenever a client detects a *TSS Mismatch*, or when the SS and TSS response differ, and the difference can only be explained by different storage engine contents, it will emit an error-level trace event with a type starting with ``TSSMismatch``, with a different type for each read request. This trace event will include all of the information necessary to investigate the mismatch, such as the TSS storage ID, the full request data, and the summarized replies (full keys and checksummed values) from both the SS and TSS.
Each client emits a ``TSSClientMetrics`` trace event for each TSS pair in the cluster that it has sent requests to recently, similar to the ``TransactionMetrics`` trace event.
It contains the TSS storage ID, and latency statistics for each type of read request. It also includes a count of any mismatches, and a histogram of error codes recieved by the SS and TSS to ensure the storage engines have similar error rates and types.
It contains the TSS storage ID, and latency statistics for each type of read request. It also includes a count of any mismatches, and a histogram of error codes received by the SS and TSS to ensure the storage engines have similar error rates and types.
The ``StorageMetrics`` trace event emitted by storage servers includes the storage ID of its pair if part of a TSS pairing, and includes a ``TSSJointID`` detail with a unique id for the SS/TSS pair that enables correlating the separate StorageMetrics events from the SS and TSS.
@@ -101,7 +101,7 @@ The pair recruitment logic is as follows:
* Once DD gets a candidate worker from the Cluster Controller, hold that worker as a desired TSS process.
* Once DD gets a second candidate worker from the Cluster Controller, initialize that worker as a normal SS.
* Once the second candidate worker is successfully initialized, initialize the first candidate worker as a TSS, passing it the storage ID, starting tag + version, and other information from its SS pair. Because the TSS reads from the same tag starting at the same version, it is guaranteed to recieve the same mutations and data movements as its pair.
* Once the second candidate worker is successfully initialized, initialize the first candidate worker as a TSS, passing it the storage ID, starting tag + version, and other information from its SS pair. Because the TSS reads from the same tag starting at the same version, it is guaranteed to receive the same mutations and data movements as its pair.
One implication of this is, during TSS recruitment, the cluster is effectively down one storage process until a second storage process becomes available.
While clusters should be able to handle being down a single storage process anyway to tolerate machine failure, an active TSS recruitment will be cancelled if the lack of that single storage process is causing the cluster to be unhealthy. Similarly, if the cluster is unhealthy and unable to find new teams to replicate data to, any existing TSS processes may be killed to make room for new storage servers.
@@ -121,4 +121,4 @@ Because it is only enabled on a small percentage of the cluster and only compare
TSS testing using the recommended small number of TSS pairs may also miss performance pathologies from workloads not experienced by the specific storage teams with TSS pairs in their membership.
TSS testing is not a substitute for full-cluster performance and correctness testing or simulation testing.
TSS testing is not a substitute for full-cluster performance and correctness testing or simulation testing.

@@ -1661,7 +1661,7 @@ ACTOR Future<std::string> getLayerStatus(Reference<ReadYourWritesTransaction> tr
return json;
}
// Check for unparseable or expired statuses and delete them.
// Check for unparsable or expired statuses and delete them.
// First checks the first doc in the key range, and if it is valid, alive and not "me" then
// returns. Otherwise, checks the rest of the range as well.
ACTOR Future<Void> cleanupStatus(Reference<ReadYourWritesTransaction> tr,

@@ -1201,7 +1201,7 @@ void printStatus(StatusObjectReader statusObj,
// "db" is the handler to the multiversion database
// localDb is the native Database object
// localDb is rarely needed except the "db" has not establised a connection to the cluster where the operation will
// localDb is rarely needed except the "db" has not established a connection to the cluster where the operation will
// return Never as we expect status command to always return, we use "localDb" to return the default result
ACTOR Future<bool> statusCommandActor(Reference<IDatabase> db,
Database localDb,
@@ -1255,4 +1255,4 @@ CommandFactory statusFactory(
"statistics.\n\nSpecifying `minimal' will provide a minimal description of the status of your "
"database.\n\nSpecifying `details' will provide load information for individual "
"workers.\n\nSpecifying `json' will provide status information in a machine readable JSON format."));
} // namespace fdb_cli
} // namespace fdb_cli

@@ -30,7 +30,7 @@
#include "fdbclient/BlobWorkerInterface.h"
#include "flow/actorcompiler.h" // This must be the last #include.
// TODO more efficient data structure besides std::map? PTree is unecessary since this isn't versioned, but some other
// TODO more efficient data structure besides std::map? PTree is unnecessary since this isn't versioned, but some other
// sorted thing could work. And if it used arenas it'd probably be more efficient with allocations, since everything
// else is in 1 arena and discarded at the end.

@@ -308,7 +308,7 @@ struct SplitShardReply {
};
// Split keyrange [shard.begin, shard.end) into num shards.
// Split points are chosen as the arithmeticlly equal division points of the given range.
// Split points are chosen as the arithmetically equal division points of the given range.
struct SplitShardRequest {
constexpr static FileIdentifier file_identifier = 1384443;
KeyRange shard;

@@ -656,7 +656,7 @@ struct RangeResultRef : VectorRef<KeyValueRef> {
// limits requested) False implies that no such values remain
Optional<KeyRef> readThrough; // Only present when 'more' is true. When present, this value represent the end (or
// beginning if reverse) of the range which was read to produce these results. This is
// guarenteed to be less than the requested range.
// guaranteed to be less than the requested range.
bool readToBegin;
bool readThroughEnd;

@@ -207,7 +207,7 @@ ACTOR Future<Void> read_http_response_headers(Reference<IConnection> conn,
// Reads an HTTP response from a network connection
// If the connection fails while being read the exception will emitted
// If the response is not parseable or complete in some way, http_bad_response will be thrown
// If the response is not parsable or complete in some way, http_bad_response will be thrown
ACTOR Future<Void> read_http_response(Reference<HTTP::Response> r, Reference<IConnection> conn, bool header_only) {
state std::string buf;
state size_t pos = 0;

@@ -555,7 +555,7 @@ ACTOR static Future<Void> transactionInfoCommitActor(Transaction* tr, std::vecto
for (auto& chunk : *chunks) {
tr->atomicOp(chunk.key, chunk.value, MutationRef::SetVersionstampedKey);
numCommitBytes += chunk.key.size() + chunk.value.size() -
4; // subtract number of bytes of key that denotes verstion stamp index
4; // subtract number of bytes of key that denotes version stamp index
}
tr->atomicOp(clientLatencyAtomicCtr, StringRef((uint8_t*)&numCommitBytes, 8), MutationRef::AddValue);
wait(tr->commit());
@@ -3263,7 +3263,7 @@ ACTOR Future<Void> sameVersionDiffValue(Database cx, Reference<WatchParameters>
state Optional<Value> valSS = wait(tr.get(parameters->key));
Reference<WatchMetadata> metadata = cx->getWatchMetadata(parameters->key.contents());
// val_3 != val_1 (storage server value doesnt match value in map)
// val_3 != val_1 (storage server value doesn't match value in map)
if (metadata.isValid() && valSS != metadata->parameters->value) {
cx->deleteWatchMetadata(parameters->key.contents());
@@ -6582,9 +6582,9 @@ ACTOR Future<Standalone<VectorRef<ReadHotRangeWithMetrics>>> getReadHotRanges(Da
UseProvisionalProxies::False));
try {
// TODO: how to handle this?
// This function is called whenever a shard becomes read-hot. But somehow the shard was splitted across more
// than one storage server after become read-hot and before this function is called, i.e. a race condition.
// Should we abort and wait the newly splitted shards to be hot again?
// This function is called whenever a shard becomes read-hot. But somehow the shard was split across more
// than one storage server after becoming read-hot and before this function is called, i.e. a race
// condition. Should we abort and wait for the newly split shards to be hot again?
state int nLocs = locations.size();
// if (nLocs > 1) {
// TraceEvent("RHDDebug")

@@ -132,12 +132,12 @@ void setupNetwork(uint64_t transportId = 0, UseMetrics = UseMetrics::False);
// call stopNetwork (from a non-networking thread) can cause the runNetwork() call to
// return.
//
// Throws network_already_setup if g_network has already been initalized
// Throws network_already_setup if g_network has already been initialized
void runNetwork();
// See above. Can be called from a thread that is not the "networking thread"
//
// Throws network_not_setup if g_network has not been initalized
// Throws network_not_setup if g_network has not been initialized
void stopNetwork();
struct StorageMetrics;

@@ -68,7 +68,7 @@ public:
"connect_tries (or ct) Number of times to try to connect for each request.",
"connect_timeout (or cto) Number of seconds to wait for a connect request to succeed.",
"max_connection_life (or mcl) Maximum number of seconds to use a single TCP connection.",
"request_tries (or rt) Number of times to try each request until a parseable HTTP "
"request_tries (or rt) Number of times to try each request until a parsable HTTP "
"response other than 429 is received.",
"request_timeout_min (or rtom) Number of seconds to wait for a request to succeed after a "
"connection is established.",

@@ -2568,7 +2568,7 @@ void includeLocalities(ReadYourWritesTransaction* ryw) {
}
}
// Reads the excludedlocality and failed locality keys using managment api,
// Reads the excludedlocality and failed locality keys using management api,
// parses them and returns the list.
bool parseLocalitiesFromKeys(ReadYourWritesTransaction* ryw,
bool failed,

@@ -281,7 +281,7 @@ public:
// Use special key prefix "\xff\xff/transaction/conflicting_keys/<some_key>",
// to retrieve keys which caused latest not_committed(conflicting with another transaction) error.
// The returned key value pairs are interpretted as :
// The returned key value pairs are interpreted as :
// prefix/<key1> : '1' - any keys equal or larger than this key are (probably) conflicting keys
// prefix/<key2> : '0' - any keys equal or larger than this key are (definitely) not conflicting keys
// Currently, the conflicting keyranges returned are original read_conflict_ranges or union of them.
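A hedged usage sketch in Python follows; the report_conflicting_keys transaction option and error code 1020 for not_committed are assumptions to verify against your binding version:

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    PREFIX = b'\xff\xff/transaction/conflicting_keys/'

    tr = db.create_transaction()
    tr.options.set_report_conflicting_keys()
    try:
        tr.set(b'contested-key', b'new-value')
        tr.commit().wait()
    except fdb.FDBError as e:
        if e.code == 1020:  # not_committed, i.e. a transaction conflict
            # '1' starts a (probably) conflicting range; '0' ends it.
            for kv in tr.get_range(PREFIX, PREFIX + b'\xff'):
                print(kv.key[len(PREFIX):], kv.value)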

@@ -490,7 +490,7 @@ const Value healthyZoneValue(StringRef const& zoneId, Version version);
std::pair<Key, Version> decodeHealthyZoneValue(ValueRef const&);
// All mutations done to this range are blindly copied into txnStateStore.
// Used to create artifically large txnStateStore instances in testing.
// Used to create artificially large txnStateStore instances in testing.
extern const KeyRangeRef testOnlyTxnStateStorePrefixRange;
// Snapshot + Incremental Restore

@@ -52,7 +52,7 @@ FDB_DECLARE_BOOLEAN_PARAM(UpdateParams);
// 4. If the executor loses contact with FDB, another executor may begin at step 2. The first
// Task execution can detect this by checking the result of keepRunning() periodically.
// 5. Once a Task execution's _execute() call returns, the _finish() step is called.
// _finish() is transactional and is guaraunteed to never be called more than once for the
// _finish() is transactional and is guaranteed to never be called more than once for the
// same Task
class Task : public ReferenceCounted<Task> {
public:

@@ -921,7 +921,7 @@ ACTOR static void deliver(TransportData* self,
bool inReadSocket) {
// We want to run the task at the right priority. If the priority is higher than the current priority (which is
// ReadSocket) we can just upgrade. Otherwise we'll context switch so that we don't block other tasks that might run
// with a higher priority. ReplyPromiseStream needs to guarentee that messages are recieved in the order they were
// with a higher priority. ReplyPromiseStream needs to guarantee that messages are received in the order they were
// sent, so we are using orderedDelay.
// NOTE: don't skip delay(0) when it's local deliver since it could cause out of order object deconstruction.
if (priority < TaskPriority::ReadSocket || !inReadSocket) {

@@ -36,7 +36,7 @@ public:
virtual void delref() = 0;
};
// An IRateControl implemenation that allows at most hands out at most windowLimit units of 'credit' in windowSeconds
// An IRateControl implementation that allows at most hands out at most windowLimit units of 'credit' in windowSeconds
// seconds
class SpeedLimit final : public IRateControl, ReferenceCounted<SpeedLimit> {
public:
@@ -89,7 +89,7 @@ private:
Promise<Void> m_stop;
};
// An IRateControl implemenation that enforces no limit
// An IRateControl implementation that enforces no limit
class Unlimited final : public IRateControl, ReferenceCounted<Unlimited> {
public:
Unlimited() {}

@@ -274,7 +274,7 @@ struct AcknowledgementReply {
}
};
// Registered on the server to recieve acknowledgements that the client has received stream data. This prevents the
// Registered on the server to receive acknowledgements that the client has received stream data. This prevents the
// server from sending too much data to the client if the client is not consuming it.
struct AcknowledgementReceiver final : FlowReceiver, FastAllocated<AcknowledgementReceiver> {
using FastAllocated<AcknowledgementReceiver>::operator new;

@@ -865,7 +865,7 @@ public:
if (!ordered && !currentProcess->rebooting && machine == currentProcess &&
!currentProcess->shutdownSignal.isSet() && FLOW_KNOBS->MAX_BUGGIFIED_DELAY > 0 &&
deterministicRandom()->random01() < 0.25) { // FIXME: why doesnt this work when we are changing machines?
deterministicRandom()->random01() < 0.25) { // FIXME: why doesn't this work when we are changing machines?
seconds += FLOW_KNOBS->MAX_BUGGIFIED_DELAY * pow(deterministicRandom()->random01(), 1000.0);
}

@@ -703,7 +703,7 @@ private:
}
}
// Might be a tss removal, which doesn't store a tag there.
// Chained if is a little verbose, but avoids unecessary work
// Chained if is a little verbose, but avoids unnecessary work
if (toCommit && !initialCommit && !serverKeysCleared.size()) {
KeyRangeRef maybeTssRange = range & serverTagKeys;
if (maybeTssRange.singleKeyRange()) {

@@ -473,7 +473,7 @@ ACTOR Future<std::pair<BlobGranuleSplitState, Version>> getGranuleSplitState(Tra
}
// writeDelta file writes speculatively in the common case to optimize throughput. It creates the s3 object even though
// the data in it may not yet be committed, and even though previous delta fiels with lower versioned data may still be
// the data in it may not yet be committed, and even though previous delta files with lower versioned data may still be
// in flight. The synchronization happens after the s3 file is written, but before we update the FDB index of what files
// exist. Before updating FDB, we ensure the version is committed and all previous delta files have updated FDB.
ACTOR Future<BlobFileIndex> writeDeltaFile(Reference<BlobWorkerData> bwData,

@@ -598,7 +598,7 @@ ACTOR Future<GetReadVersionReply> getLiveCommittedVersion(SpanID parentSpan,
return rep;
}
// Returns the current read version (or minimum known committed verison if requested),
// Returns the current read version (or minimum known committed version if requested),
// to each request in the provided list. Also check if the request should be throttled.
// Update GRV statistics according to the request's priority.
ACTOR Future<Void> sendGrvReplies(Future<GetReadVersionReply> replyFuture,

@@ -2531,7 +2531,7 @@ void removeLog(TLogData* self, Reference<LogData> logData) {
}
}
// copy data from old gene to new gene without desiarlzing
// copy data from old gene to new gene without deserializing
ACTOR Future<Void> pullAsyncData(TLogData* self,
Reference<LogData> logData,
std::vector<Tag> tags,

@@ -1140,7 +1140,7 @@ ACTOR static Future<Void> signalRestoreCompleted(Reference<RestoreControllerData
}
/*
// Update the most recent time when controller receives hearbeat from each loader and applier
// Update the most recent time when controller receives heartbeat from each loader and applier
// TODO: Replace the heartbeat mechanism with FDB failure monitoring mechanism
ACTOR static Future<Void> updateHeartbeatTime(Reference<RestoreControllerData> self) {
wait(self->recruitedRoles.getFuture());

@@ -893,8 +893,8 @@ ACTOR Future<Void> sendMutationsToApplier(
UID applierID = nodeIDs[splitMutationIndex];
DEBUG_MUTATION("RestoreLoaderSplitMutation", commitVersion.version, mutation)
.detail("CommitVersion", commitVersion.toString());
// CAREFUL: The splitted mutations' lifetime is shorter than the for-loop
// Must use deep copy for splitted mutations
// CAREFUL: The split mutations' lifetime is shorter than the for-loop
// Must use deep copy for split mutations
applierVersionedMutationsBuffer[applierID].push_back_deep(
applierVersionedMutationsBuffer[applierID].arena(), VersionedMutation(mutation, commitVersion));
msgSize += mutation.expectedSize();
@@ -986,7 +986,7 @@ ACTOR Future<Void> sendMutationsToApplier(
return Void();
}
// Splits a clear range mutation for Appliers and puts results of splitted mutations and
// Splits a clear range mutation for Appliers and puts results of split mutations and
// Applier IDs into "mvector" and "nodeIDs" on return.
void splitMutation(const KeyRangeMap<UID>& krMap,
MutationRef m,
@@ -1180,7 +1180,7 @@ void _parseSerializedMutation(KeyRangeMap<Version>* pRangeVersions,
}
// Parsing the data blocks in a range file
// kvOpsIter: saves the parsed versioned-mutations for the sepcific LoadingParam;
// kvOpsIter: saves the parsed versioned-mutations for the specific LoadingParam;
// samplesIter: saves the sampled mutations from the parsed versioned-mutations;
// bc: backup container to read the backup file
// version: the version the parsed mutations should be at

@@ -105,7 +105,7 @@ void updateProcessStats(Reference<RestoreRoleData> self) {
}
}
// An actor is schedulable to run if the current worker has enough resourc, i.e.,
// An actor is schedulable to run if the current worker has enough resources, i.e.,
// the worker's memory usage is below the threshold;
// Exception: If the actor is working on the current version batch, we have to schedule
// the actor to run to avoid dead-lock.

@@ -251,7 +251,7 @@ struct RestoreControllerInterface : RestoreRoleInterface {
// RestoreAsset uniquely identifies the work unit done by restore roles;
// It is used to ensure exact-once processing on restore loader and applier;
// By combining all RestoreAssets across all verstion batches, restore should process all mutations in
// By combining all RestoreAssets across all version batches, restore should process all mutations in
// backup range and log files up to the target restore version.
struct RestoreAsset {
UID uid;

@@ -1984,8 +1984,8 @@ ACTOR static Future<std::vector<std::pair<GrvProxyInterface, EventMap>>> getGrvP
return results;
}
// Returns the number of zones eligble for recruiting new tLogs after zone failures, to maintain the current replication
// factor.
// Returns the number of zones eligible for recruiting new tLogs after zone failures, to maintain the current
// replication factor.
static int getExtraTLogEligibleZones(const std::vector<WorkerDetails>& workers,
const DatabaseConfiguration& configuration) {
std::set<StringRef> allZones;

@@ -2061,7 +2061,7 @@ class DWALPagerSnapshot;
// oldest pager version being maintained the remap can be "undone" by popping it from the remap queue,
// copying the alternate page ID's data over top of the original page ID's data, and deleting the remap from memory.
// This process basically describes a "Delayed" Write-Ahead-Log (DWAL) because the remap queue and the newly allocated
// alternate pages it references basically serve as a write ahead log for pages that will eventially be copied
// alternate pages it references basically serve as a write ahead log for pages that will eventually be copied
// back to their original location once the original version is no longer needed.
class DWALPager final : public IPager2 {
public:

@@ -465,7 +465,7 @@ ACTOR Future<Void> databaseWarmer(Database cx) {
}
}
// Tries indefinitly to commit a simple, self conflicting transaction
// Tries indefinitely to commit a simple, self conflicting transaction
ACTOR Future<Void> pingDatabase(Database cx) {
state Transaction tr(cx);
loop {

@@ -132,7 +132,7 @@ extern bool isAssertDisabled(int line);
enum assert_op { EQ, NE, LT, GT, LE, GE };
// TODO: magic so this works even if const-ness doesn not match.
// TODO: magic so this works even if const-ness doesn't not match.
template <typename T, typename U>
void assert_num_impl(char const* a_nm,
T const& a,

@@ -33,7 +33,7 @@
#include <optional>
// Helper macros to allow the init macro to be called with an optional third
// paramater, used to explicit set atomicity of knobs.
// parameter, used to explicit set atomicity of knobs.
#define KNOB_FN(_1, _2, _3, FN, ...) FN
#define INIT_KNOB(knob, value) initKnob(knob, value, #knob)
#define INIT_ATOMIC_KNOB(knob, value, atomic) initKnob(knob, value, #knob, atomic)

@@ -655,7 +655,7 @@ struct NullDescriptor {
};
// Descriptor must have the methods name() and typeName(). They can be either static or member functions (such as for
// runtime configurability). Descriptor is inherited so that syntatically Descriptor::fn() works in either case and so
// runtime configurability). Descriptor is inherited so that syntactically Descriptor::fn() works in either case and so
// that an empty Descriptor with static methods will take up 0 space. EventField() accepts an optional Descriptor
// instance.
template <class T, class Descriptor = NullDescriptor, class FieldLevelType = FieldLevel<T>>

@@ -21,7 +21,7 @@
// This file, similar to `type_traits` in the standard library, contains utility types that can be used for template
// metaprogramming. While they can be very useful and simplify certain things, please be aware that their use will
// increase compilation times significantly. Therefore it is not recommended to use them in header file if not
// absosultely necessary.
// absolutely necessary.
#pragma once
#include <variant>
@@ -50,6 +50,6 @@ struct variant_map_t<std::variant<Args...>, Fun> {
};
// Helper definition for variant_map_t. Instead of using `typename variant_map<...>::type` one can simple use
// `varirant_map<...>` which is equivalent but shorter.
// `variant_map<...>` which is equivalent but shorter.
template <class T, template <class> class Fun>
using variant_map = typename variant_map_t<T, Fun>::type;