This PR updates the storage server and Redwood to enable encryption based on the encryption mode in the DB config, which was previously controlled by a knob. High-level changes:
1. Passing encryption mode in DB config to storage server
1.1 If it is a new storage server, the encryption mode is passed through `InitializeStorageRequest` and then on to Redwood for initialization
1.2 If it is an existing storage server, on restart the storage server sends `GetStorageServerRejoinInfoRequest` to a commit proxy, and the commit proxy returns the current encryption mode, which it gets from the DB config during its own initialization. The storage server compares the DB config encryption mode to the local storage encryption mode, and fails if they don't match
2. Adding a new `encryptionMode()` method to `IKeyValueStore`, which returns a future of the local encryption mode of the KV store instance. A KV store supporting encryption needs to persist its own encryption mode and return the mode via this API.
3. Redwood accepts the encryption mode through its constructor. For a new Redwood instance, the caller has to specify the encryption mode, which is stored in Redwood's per-instance file header. For an existing instance, the caller should not pass an encryption mode, letting Redwood read it from its own header.
4. Refactoring in Redwood to accommodate the above changes.
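The restart-time check in 1.2 can be sketched as follows. This is a minimal illustration, not the actual fdbserver code; the function name `check_encryption_mode` and the string mode values are hypothetical stand-ins for the real types:

```python
# Sketch of the restart-time encryption mode check described in 1.2.
# All names here are illustrative, not actual fdbserver identifiers.

def check_encryption_mode(db_config_mode: str, local_mode: str) -> str:
    """Compare the encryption mode the commit proxy returned (from the DB
    config) with the mode the KV store persisted for itself; fail on
    mismatch, as the PR describes."""
    if db_config_mode != local_mode:
        raise ValueError(
            f"encryption mode mismatch: config={db_config_mode}, "
            f"local={local_mode}")
    return local_mode
```

In the real code path, the local mode would come from the new `IKeyValueStore::encryptionMode()` future rather than a plain argument.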
* added dynamic write amp calculations for blob granule compaction
* changing blob worker parallelism counts to a bytes budget to handle less uniform operation sizes
* more snapshotting parallelism for behind feeds
* add a bit of observability when this happens
* adding knobs
* typo
* adjusting some knobs up with buggified granule size
* fixing bugs in dynamic write amp
* fixing formatting
* fixing bug in knob buggification
* fix formatting
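The "dynamic write amp" bullets above can be illustrated with the classic write amplification ratio; this is only a sketch under the assumption that compaction rewrites the existing snapshot plus the new deltas, and none of these names appear in the blob worker code:

```python
# Illustrative write amplification for a granule compaction: total bytes
# written (snapshot rewritten + incoming deltas) per byte of new delta
# data. A sketch only, not the actual blob granule calculation.

def write_amp(snapshot_bytes: int, delta_bytes: int) -> float:
    """Higher values mean each new byte forces more rewriting."""
    if delta_bytes == 0:
        return 1.0  # nothing new to absorb; no amplification
    return (snapshot_bytes + delta_bytes) / delta_bytes
```

A dynamic policy could compare this ratio against a knob-controlled threshold to decide when re-snapshotting is worthwhile.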
* [EaR]: Update KMS request/response to embed version details
Description
diff-1 : Address review comments
The patch embeds a 'version_tag' detail in the KMS JSON request/response
payload. This feature enables future expansion and opens the path to
supporting multiple versions simultaneously if needed
Testing
RESTKmsConnectorUnit.toml updated as per new code
devRunCorrectness - 100K
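A request payload with the embedded tag might look like the sketch below. Only the `version_tag` field name comes from the commit; the surrounding field names (`cipher_key_details`, `domain_id`) and the builder function are assumptions for illustration:

```python
import json

# Hedged sketch of embedding a version tag in a KMS JSON request payload.
# "version_tag" is named by the patch; all other field names are
# illustrative assumptions.

def build_kms_request(domain_ids, version_tag="1"):
    payload = {
        "version_tag": version_tag,  # lets the schema evolve per version
        "cipher_key_details": [{"domain_id": d} for d in domain_ids],
    }
    return json.dumps(payload)
```

Carrying the tag in every request/response lets the connector dispatch on payload version if multiple versions must coexist.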
* [EaR]: Update KMS APIs to split encryption keys endpoints
Description
diff-1: Address review comments
Major changes proposed:
1. Extend fdbserver to allow parsing two endpoints for encryption at-rest
support: getEncryptionKeys, getLatestEncryptionKeys
2. Update RESTKmsConnector to do the following:
2.1. Split the getLatest and getCipher requests.
2.2. "domain_id" for point lookup marked as 'optional'
Testing
devRunCorrectness - 100K
Bug behavior:
When DD has zero healthy machine teams but more unhealthy machine teams
than the max machine teams DD plans to build, DD will stop building
new machine teams. With zero healthy machine teams (and zero healthy
server teams), DD cannot find a healthy destination team to relocate data.
When data relocation stops, exclusion stops progressing and gets stuck.
The bug happens when we *shrink* a k-host cluster by
first adding k/2 new hosts,
then quickly excluding all old hosts.
Fix:
Let DD build temporary extra teams to relocate data.
The extra teams will be cleaned up later by DD's remove extra teams logic.
Simulation test:
There is no simulation test covering the cluster expansion scenario.
To most closely simulate this behavior, we intentionally overbuild all possible
machine teams to trigger the condition that the number of unhealthy teams is
larger than the maximum number of teams DD wants to build later.
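The fix's decision can be sketched as below. This is a simplified model under the assumption that DD caps total teams at `max_teams`; the function and parameter names are hypothetical, not the actual DataDistribution code:

```python
# Sketch of the fix: when no healthy machine team exists and the team cap
# is already exhausted by unhealthy teams, allow a temporary extra team so
# relocation (and thus exclusion) can make progress. Names are illustrative.

def teams_to_build(healthy: int, unhealthy: int, max_teams: int) -> int:
    desired = max_teams - (healthy + unhealthy)
    if desired <= 0 and healthy == 0:
        # Build at least one temporary extra team; DD's remove-extra-teams
        # logic cleans it up once healthy teams exist again.
        return 1
    return max(desired, 0)
```

Without the `healthy == 0` escape hatch, the first branch never fires and the original deadlock (zero healthy destinations, cap reached) persists.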
* Allow multiple keyranges in CheckpointRequest.
Include DataMove ID in CheckpointMetaData.
* Use UID dataMoveId instead of Optional<UID>.
* Implemented ShardedRocks::checkpoint().
* Implementing createCheckpoint().
* Attempted to change getCheckpointMetaData*() for a single keyrange.
* Added getCheckpointMetaDataForRange.
* Minor fixes for NativeAPI.actor.cpp.
* Replace UID CheckpointMetaData::ssId with std::vector<UID>
CheckpointMetaData::src;
* Implemented getCheckpointMetaData() and completed checkpoint creation
and fetch in test.
* Refactoring CheckpointRequest and CheckpointMetaData:
rename `dataMoveId` to `actionId` and make it Optional.
* Fixed ctor of CheckpointMetaData.
* Implemented ShardedRocksDB::restore().
* Tested checkpoint restore, and added range check for restore, so that
the target ranges can be a subset of the checkpoint ranges.
* Added test to partially restore a checkpoint.
* Refactor: added checkpointRestore().
* Sort ranges for comparison.
* Cleanups.
* Check restore ranges are empty; Add ranges in main thread.
* Resolved comments.
* Fixed GetCheckpointMetaData range check issue.
* Refactor CheckpointReader for CF checkpoint.
* Added CheckpointAsKeyValues as a parameter for newCheckpointReader.
* PhysicalShard::restoreKvs().
* Added `ranges` in fetchCheckpoint.
* Added RocksDBCheckpointKeyValues::ranges.
* Added ICheckpointIterator and implemented for RocksDBCheckpointReader.
* Refactored OpenAction for CheckpointReader, handled failure cases.
* Use RocksDBCheckpointIterator::end() in readRange.
* Set CheckpointReader timeout and other RocksDB read options.
* Implementing fetchCheckpointRange().
* Added more CheckpointReader tests.
* Cleanup.
* More cleanup.
* Resolved comments.
Co-authored-by: He Liu <heliu@apple.com>
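The restore range check added above ("the target ranges can be a subset of the checkpoint ranges") can be sketched as follows. Ranges are modeled as half-open `(begin, end)` key pairs rather than `KeyRangeRef`, and the simplification that each target range must fit inside a single checkpoint range is an assumption of this sketch:

```python
# Sketch of the partial-restore range check: every target range must be
# covered by the checkpoint's ranges. Simplified model; the real code
# operates on KeyRangeRef and may merge adjacent ranges.

def ranges_covered(targets, checkpoint_ranges):
    """True iff each (begin, end) target fits within some checkpoint range."""
    def covered(t):
        return any(cb <= t[0] and t[1] <= ce for cb, ce in checkpoint_ranges)
    return all(covered(t) for t in targets)
```

Rejecting uncovered targets up front is what makes it safe for `ShardedRocksDB::restore()` to materialize only the requested subset of a checkpoint.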
* Add cleanIdempotencyIds
Delete zero or more idempotency ids older than minAgeSeconds
* Automatically clean idempotency ids from first proxy
* Add test for cleaner
* Fix formatting
* Address review comments
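The cleaner's core rule ("delete zero or more idempotency ids older than minAgeSeconds") can be sketched as below. Storage is modeled as a plain dict of id to insertion timestamp; the real implementation operates on the idempotency id keyspace from the first commit proxy, and the function name is illustrative:

```python
import time

# Sketch of cleanIdempotencyIds: drop ids whose age exceeds
# min_age_seconds. Dict-based model, not the actual keyspace code.

def clean_idempotency_ids(ids: dict, min_age_seconds: float, now=None) -> int:
    """Remove expired ids in place; return how many were deleted."""
    now = time.time() if now is None else now
    expired = [k for k, ts in ids.items() if now - ts > min_age_seconds]
    for k in expired:
        del ids[k]
    return len(expired)
```

Running this periodically from a single proxy (as the commits describe) avoids having every proxy race to delete the same range.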