* Recruit new singleton for consistency checker.
* Recruit the consistency checker only if enabled.
* Add a yield in monitorConsistencyChecker().
* Minor fixes.
* Consistency check workload enhancements.
* Minor fixes and clarifications.
* clang format
* Clang format.
* Minor fixes, cleanup, debug tracing.
* Misc.
* Move the consistency scan information from dbconfig to a key backed object.
* Move consistency scan config out of db config to a state object and feature rename.
* ConsistencyCheck workload refactor.
* devFormat
* Update fdbcli/ConsistencyScanCommand.actor.cpp
* Review Comments.
Co-authored-by: negoyal <neelam.goyal@gmail.com>
Co-authored-by: Ata E Husain Bohra <ata.husain@snowflake.com>
Configuration database data lives on the coordinators. When a change
coordinators command is issued, the data must be sent to the new
coordinators to keep the database consistent.
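A minimal sketch of that migration, with hypothetical types standing in for the real coordinator machinery (an illustration of the requirement, not the actual implementation):
```cpp
#include <map>
#include <string>
#include <vector>

struct Coordinator {
    std::map<std::string, std::string> configDB; // configuration key -> value
};

// Install a snapshot of the configuration database on the new coordinators
// before the coordinator change commits, so the database stays consistent.
void migrateConfigDB(const std::vector<Coordinator*>& oldCoords,
                     std::vector<Coordinator*>& newCoords) {
    if (oldCoords.empty())
        return;
    const auto snapshot = oldCoords.front()->configDB; // any up-to-date replica
    for (Coordinator* c : newCoords)
        c->configDB = snapshot;
}
```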
* Encryption data at-rest db-config
Description
diff-1: Handle 'force' updates to encryption_at_rest db-config
Major changes proposed:
1. Introduce an 'encryption_data_at_rest_mode' option for 'configure new'
to enable encryption of data at-rest. The feature is disabled by default.
2. The configuration is meant to be set at the time of database creation;
additional checks to prevent updating the config later will be added in a
subsequent PR.
3. Extend the DatabaseConfiguration validity check to require "tenant_mode"
set to `required` when encryption of data at-rest is selected, since
encryption domains match tenant boundaries (see the sketch below).
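A minimal sketch of the validity rule in item 3, with hypothetical enum names:
```cpp
// Hypothetical enums mirroring the described configuration values.
enum class EncryptionAtRestMode { DISABLED, ENABLED };
enum class TenantMode { DISABLED, OPTIONAL_TENANT, REQUIRED };

// Encryption domains match tenant boundaries, so an encrypted cluster must
// run with tenant_mode set to 'required'.
bool isValidConfiguration(EncryptionAtRestMode encryption, TenantMode tenants) {
    if (encryption == EncryptionAtRestMode::ENABLED && tenants != TenantMode::REQUIRED)
        return false;
    return true;
}
```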
Testing
devCorrectness - 100K
* Granule purge cannot delete the history entry for a fully deleted granule until all children are completely done splitting
* Several purging fixes related to granule history
* Fixed typo in refactor
* fixing memory model for purgeRange
* formatting
* weakening granule purge test for now
* cleanup
* First version of force purging granules
* fixing issue in BW range assignment reporting
* Fixing incorrect assert with force purging
* Error handling when checking force purged state
* fixed force purging and recover/reassign range races and check
* Handling force purge + boundary change race
* more places to check for force purged status
* fixed manager restart in the middle of force purge bug
* fixing same-BM purge and assignment races in all cases
* weakening orphaned granule history check a bit because of difficult-to-solve races
* fixing txn options on retry
* loading force purged ranges at start to avoid resuming a merge that is being force purged
* cleanup
* Enabling purging in granule tests, and adding check for leaked change feeds in force purge
* formatting
* missed parameter in merge conflicts
* Fixing leaked change feed race with merge and force purge
* adding change feed cleanup when new blob manager recovers in-progress merge that raced with force purge
* added forcepurge fdbcli command
* Enable configuring the next future protocol version as the current protocol version in FDB client, fdbserver, and fdbcli
* Auto format python files used in upgrade tests
* Add a test for upgrading to a future FDB version
* Emphasize that the options for using future protocol version are intended for test purposes only
* Make the global variable for current protocol version visible only locally
* Refactoring to avoid using currentProtocolVersion() in static initialization
* Update go bindings
* blob: read TenantMap during recovery
Future functionality in the blob subsystem will rely on the tenant data
being loaded. This change loads the tenant data before completing
recovery, so that continued actions on existing blob granules have access
to it.
Example scenario with failover, splits are restarted before loading the
tenant data:
BM - BlobManager
(epoch 3) BM records intent to split.
(epoch 3 fails) BM recovery begins in epoch 4.
(epoch 3) BM fails to persist the split.
(epoch 4) BM recovery finishes.
(epoch 4) BM.checkBlobWorkerList() calls maybeSplitRange(), restarting the split.
(epoch 4) BM.monitorClientRanges() loads the tenant data, after the split has already restarted.
bin/fdbserver -r simulation -f tests/slow/BlobGranuleCorrectness.toml \
-s 223570924 -b on --crash --trace_format json
* blob: add tuple key truncation for blob granule alignment
FDB has a backup system available using the blob manager and blob
granule subsystem. If we want to audit the data in the blobs, it's a lot
easier if we can align them to something meaningful.
When a blob granule is being split, we ask the storage metrics system
for split points as it holds approximate data distribution metrics.
These keys are then processed to determine if they are a tuple and
should be truncated according to the new knob,
BG_KEY_TUPLE_TRUNCATE_OFFSET.
Here we keep all aligned keys together in the same granule even if it is
larger than the allowed granule size. The following commit will address
this by adding merge boundaries.
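A rough sketch of the truncation rule, assuming '/'-separated tuple elements purely for illustration (the real code uses FDB's Tuple encoding, and the knob value here is made up):
```cpp
#include <sstream>
#include <string>
#include <vector>

constexpr size_t BG_KEY_TUPLE_TRUNCATE_OFFSET = 2; // illustrative knob value

// Truncate a proposed split key to its leading tuple elements so granule
// boundaries land on meaningful tuple prefixes.
std::string truncateSplitKey(const std::string& key) {
    std::vector<std::string> elems;
    std::stringstream ss(key);
    std::string part;
    while (std::getline(ss, part, '/'))
        elems.push_back(part);
    if (elems.size() <= BG_KEY_TUPLE_TRUNCATE_OFFSET)
        return key; // not a tuple worth truncating
    std::string out = elems[0];
    for (size_t i = 1; i < BG_KEY_TUPLE_TRUNCATE_OFFSET; ++i)
        out += "/" + elems[i];
    return out; // aligned granule boundary on the tuple prefix
}
```
Several proposed split points can truncate to the same prefix; after deduplication, all keys sharing that prefix land in one granule, which is why a granule can exceed the allowed size.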
* blob: minor clean ups in merging code
1. Rename mergeNow -> seen. This is more in line with clock-sweep naming
and removes the confusion between mergeNow and canMergeNow.
2. Make clearMergeCandidate() reset to MergeCandidateCannotMerge to make
it clear what we're accomplishing.
3. Rename canMergeNow() -> mergeEligible().
* blob: add explicit (hard) boundaries
Blob ranges can be specified either through explicit ranges or at the
tenant level. Right now this is managed implicitly. This commit aims to
make it a little more explicit.
Blobification begins in monitorClientRanges() which parses either the
explicit blob ranges or the tenant map. As we do this and add new
ranges, let's explicitly track what is a hard boundary and what isn't.
When blob merging occurs, we respect this boundary. When a hard boundary
is encountered, we submit the found eligible ranges and start looking
for a new range beginning with this hard boundary.
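A simplified sketch of that scan, with illustrative types:
```cpp
#include <string>
#include <vector>

struct Candidate {
    std::string beginKey;
    bool hardBoundaryBefore; // true if a hard boundary starts at beginKey
};

// Walk merge candidates in key order; at each hard boundary, submit the
// eligible ranges collected so far and start a fresh run at the boundary.
std::vector<std::vector<Candidate>> groupForMerge(const std::vector<Candidate>& cands) {
    std::vector<std::vector<Candidate>> runs;
    std::vector<Candidate> current;
    for (const auto& c : cands) {
        if (c.hardBoundaryBefore && !current.empty()) {
            runs.push_back(current); // submit what we found so far
            current.clear();         // new run begins at the hard boundary
        }
        current.push_back(c);
    }
    if (!current.empty())
        runs.push_back(current);
    return runs;
}
```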
* blob: create BlobGranuleSplitPoints struct
This is a setup for the following commit. Our goal here is to provide a
structure for split points to be passed around. We need to be able to
carry uncommitted state until it commits, at which point we can apply
these mutations to the in-memory data structures.
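A rough shape for the struct (field types are guesses from this description, not the source):
```cpp
#include <string>
#include <vector>

// Uncommitted split state: proposed boundary keys are carried in this
// struct until the transaction commits, and only then applied to the
// in-memory data structures. (The following commit adds per-key merge
// boundary metadata alongside the keys.)
struct BlobGranuleSplitPoints {
    std::vector<std::string> keys; // proposed granule boundaries, in order
};
```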
* blob: implement soft boundaries
An earlier commit establishes the need to create data boundaries within
a tenant. The reality is we may encounter a set of keys that degenerate
to the same key prefix. We'll need to be able to split those across
granules, but we want to ensure we merge the split granules together
before merging with other granules.
This adds new BlobGranuleMergeBoundary items to the BlobGranuleSplitPoints
state. BlobGranuleMergeBoundary records whether it is a left or right
boundary. This information is used, like hard boundaries, to force merging
of like granules first.
We read the BlobGranuleMergeBoundary map into memory at recovery.
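A hedged sketch of how the boundary map might gate merges; the names and the exact rule are assumptions based on this description:
```cpp
#include <map>
#include <string>

// Left or right end of a run of split "siblings" within a tenant.
struct BlobGranuleMergeBoundary {
    bool leftBoundary = false; // false: right boundary
};

// beginKey -> boundary; persisted, and read back into memory at recovery.
using MergeBoundaryMap = std::map<std::string, BlobGranuleMergeBoundary>;

// May the granules adjacent at key k be merged right now? A soft boundary
// at k only admits merges that reunite the siblings it delimits.
bool mergeAllowedAcross(const MergeBoundaryMap& boundaries,
                        const std::string& k,
                        bool mergingSiblings) {
    auto it = boundaries.find(k);
    if (it == boundaries.end())
        return true;        // no soft boundary at k
    return mergingSiblings; // like granules must merge together first
}
```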
* Storage server shard management with physical shards.
* Cleanup.
* Resolved comments.
* Added `UnlimintedCommitBytes`.
Co-authored-by: He Liu <heliu@apple.com>
This change adds:
* the ability to store the mapping from tenants to quotas in the system keyspace,
* setter and getter functions, and
* a new workload to test this functionality.
FDBCORE-2437
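A minimal sketch of the setter and getter, assuming an illustrative key prefix and an in-memory map standing in for the system keyspace:
```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Illustrative prefix; not the exact key layout used by the change.
static const std::string kQuotaPrefix = "\xff/tenantQuota/";
std::map<std::string, std::string> systemKeyspace; // stand-in for the database

void setTenantQuota(const std::string& tenant, int64_t quota) {
    systemKeyspace[kQuotaPrefix + tenant] = std::to_string(quota);
}

std::optional<int64_t> getTenantQuota(const std::string& tenant) {
    auto it = systemKeyspace.find(kQuotaPrefix + tenant);
    if (it == systemKeyspace.end())
        return std::nullopt;
    return std::stoll(it->second);
}
```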
* Shard based move.
* Clean up.
* Clear results on retry in getInitialDataDistribution.
* Remove assertion on SHARD_ENCODE_LOCATION_METADATA for compatibility.
* Resolved comments.
Co-authored-by: He Liu <heliu@apple.com>
* Fixing leaked stream with explicit notify failed before destructor
* better logic to prevent races in change feed fetching
* Found new race that makes assert incorrect
* handle server overloaded in initial read from fdb
* Handling more blob error types in granule retry
* Fixing rollback metadata problem, added better debugging
* Fixing version race when fetching change feed metadata
* Better racing split request handling
* fixing assert
* Handle change feed popped check in the blob worker
* fix: do not use a RYW transaction for a versionstamp because of randomize API version (#6768)
* more merge conflict issues
* Change feed destroy fixes
* Fixing change feed destroy and move race
* Check error condition in BG file req
* Using relative endpoints for blob worker interface
* Fixing bug in previous fix
* More destroy and move race fixes
* Don't update empty version on destroy in case it gets rolled back. moved() and removing will take care of ensuring it is not read
* Bug fix (#6796)
* fix: do not use a RYW transaction for a versionstamp because of randomize API version
* fix: if the initialSnapshotVersion was pruned, granule history was incorrect
* added a way to compress null bytes in printable()
* Fixing durability issue with moving and destroying change feeds
* Adding fix for not fully deleting files for a granule that child granules need to re-snapshot
* More destroy and move races
* Fixing change feed destroy and pop races
* Renaming bg prune to purge, and adding a C api and unit test for it
* more cleanup
* review comments
* Observability for granule purging
* better handling for change feed not registered
* Fixed purging bugs (#6815)
* fix: do not use a RYW transaction for a versionstamp because of randomize API version
* fix: if the initialSnapshotVersion was pruned, granule history was incorrect
* added a way to compress null bytes in printable()
* fixed a few purging bugs
Co-authored-by: Evan Tschannen <evan.tschannen@snowflake.com>
* Initialize cluster version at wall-clock time
Previously, new clusters would begin at version 0. After this change,
clusters will initialize at a version matching wall-clock time. Instead
of using the Unix epoch (or Windows epoch), FDB clusters will use a new
epoch, defaulting to January 1, 2010, 01:00:00+00:00. In the future,
this base epoch will be modifiable through fdbcli, allowing
administrators to advance the cluster version.
Basing the version off of time allows different FDB clusters to share
data without running into version issues.
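As a worked sketch of the computation (the helper below is hypothetical, not FDB's API): the starting version is the number of microseconds elapsed since the base epoch, matching the roughly one-million-versions-per-second rate noted later in this section.
```cpp
#include <chrono>
#include <cstdint>

// Starting version for a new cluster: microseconds elapsed since the base
// epoch, since versions advance at about one million per second.
int64_t initialClusterVersion(int64_t versionEpochMicros) {
    using namespace std::chrono;
    const int64_t nowMicros =
        duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
    return nowMicros - versionEpochMicros;
}
```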
* Send version epoch to master
* Cleanup
* Update fdbserver/storageserver.actor.cpp
Co-authored-by: A.J. Beamon <aj.beamon@snowflake.com>
* Jump directly to expected version if possible
* Fix initial version issue on storage servers
* Add random recovery offset to start version in simulation
* Type fixes
* Disable reference time by default
Enable on a cluster using the fdbcli command `versionepoch add 0`.
* Use correct recoveryTransactionVersion when recovering
* Allow version epoch to be adjusted forwards (to decrease the version)
* Set version epoch in simulation
* Add quiet database check to ensure small version offset
* Add fdbcli command to read/write version epoch
* Cause recovery when version epoch is set
* Handle optional version epoch key
* Add ability to clear the version epoch
This causes version advancement to revert to the old methodology, in which
versions attempt to advance by about a million versions per second instead
of trying to match the clock.
* Update transaction access
* Modify version epoch to use microseconds instead of seconds
* Modify fdbcli version target API
Move commands from `versionepoch` to `targetversion` top level command.
* Add fdbcli tests for the targetversion commands
* Temporarily disable targetversion cli tests
* Fix version epoch fetch issue
* Fix Arena issue
* Reduce max version jump in simulation to 1,000,000
* Rework fdbcli API
It now requires two commands to fully switch a cluster to using the
version epoch. First, enable the version epoch with `versionepoch
enable` or `versionepoch set <versionepoch>`. At this point, versions
will be given out at a faster or slower rate in an attempt to reach the
expected version. Then, run `versionepoch commit` to perform a one time
jump to the expected version. This is essentially irreversible.
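An annotated transcript of that two-step switch, using only the commands named above:
```
fdb> versionepoch enable    # or: versionepoch set <versionepoch>
fdb> versionepoch commit    # one-time, essentially irreversible jump
```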
* Temporarily disable old targetversion tests
* Cleanup
* Move version epoch buggify to sequencer
This will cause some issues with the QuietDatabase check for the version
offset - namely, it won't do anything, since the version epoch is not
being written to the txnStateStore in simulation. This will get fixed in
the future.
Co-authored-by: A.J. Beamon <aj.beamon@snowflake.com>