* Granule purge cannot delete the history entry for a fully-deleted granule until all of its children are completely done splitting
* Several purging fixes related to granule history
* Fixed typo in refactor
* fixing memory model for purgeRange
* formatting
* weakening granule purge test for now
* cleanup
* First version of force purging granules
* fixing issue in BW range assignment reporting
* Fixing incorrect assert with force purging
* Error handling when checking force purged state
* fixed races between force purging and recover/reassign range, and the related check
* Handling force purge + boundary change race
* more places to check for force purged status
* fixed manager restart in the middle of force purge bug
* fixing same-BM purge and assignment races in all cases
* weakening orphaned granule history check a bit because of difficult-to-solve races
* fixing txn options on retry
* loading force purged ranges at start to avoid resuming a merge that is being force purged
* cleanup
* Enabling purging in granule tests, and adding check for leaked change feeds in force purge
* formatting
* missed parameter in merge conflicts
* Fixing leaked change feed race with merge and force purge
* adding change feed cleanup when new blob manager recovers in-progress merge that raced with force purge
* added forcepurge fdbcli command
* Using knownBlobRanges for blob granule ranges whether tenants are enabled or not
* Effectively disabled blob granule tests when tenants enabled to fix ctest
* blob: read TenantMap during recovery
Future functionality in the blob subsystem will rely on the tenant data
being loaded. This change loads the tenant data before completing
recovery, so that continued actions on existing blob granules have
access to it.
Example scenario with failover, where splits are restarted before the
tenant data is loaded:
BM - BlobManager

epoch 3: BM records intent to split.
epoch 3: epoch fails.
epoch 4: BM recovery begins.
epoch 3: BM fails to persist the split.
epoch 4: BM recovery finishes.
epoch 4: BM.checkBlobWorkerList() -> maybeSplitRange().
epoch 4: BM.monitorClientRanges() -> loads tenant data.
bin/fdbserver -r simulation -f tests/slow/BlobGranuleCorrectness.toml \
-s 223570924 -b on --crash --trace_format json
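A minimal sketch of the ordering change, using hypothetical stand-in types (TenantMap, BlobManagerState, loadTenantMap below are placeholders, not the actual BlobManager actor code): the fix is simply to populate the tenant map before recovery is marked complete, so any split resumed during recovery already sees tenant data.

```cpp
// Hypothetical stand-ins for illustration only; not the real BlobManager types.
#include <map>
#include <string>

struct TenantMap {
    std::map<std::string, std::string> prefixByName; // tenant name -> key prefix
};

struct BlobManagerState {
    TenantMap tenants;
    bool recoveryDone = false;
};

// Stand-in for the database read that populates the tenant map.
void loadTenantMap(BlobManagerState& bm) {
    bm.tenants.prefixByName.emplace("tenant1", "prefix1");
}

void recoverBlobManager(BlobManagerState& bm) {
    loadTenantMap(bm);      // moved ahead of recovery completion by this change
    bm.recoveryDone = true; // only now may in-progress splits be resumed
}

int main() {
    BlobManagerState bm;
    recoverBlobManager(bm);
    return bm.recoveryDone ? 0 : 1;
}
```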
* blob: add tuple key truncation for blob granule alignment
FDB has a backup system built on the blob manager and blob granule
subsystem. If we want to audit the data in the blobs, it is much easier
when granules are aligned to something meaningful.
When a blob granule is being split, we ask the storage metrics system
for split points as it holds approximate data distribution metrics.
These keys are then processed to determine whether they encode a tuple
and should be truncated according to the new knob,
BG_KEY_TUPLE_TRUNCATE_OFFSET.
Here we keep all aligned keys together in the same granule even if it is
larger than the allowed granule size. The following commit will address
this by adding merge boundaries.
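A hedged sketch of the alignment idea: the tuple codec here is a toy stand-in (elements joined by '/'), not FDB's actual tuple encoding, and the assumption that the knob names the number of trailing tuple elements to drop is mine.

```cpp
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Toy stand-in codec: "/a/b/c" decodes to {"a","b","c"}. Not the FDB tuple format.
std::optional<std::vector<std::string>> decodeTuple(const std::string& key) {
    if (key.empty() || key[0] != '/') return std::nullopt; // "not a tuple"
    std::vector<std::string> elems;
    std::stringstream ss(key.substr(1));
    std::string part;
    while (std::getline(ss, part, '/')) elems.push_back(part);
    return elems;
}

std::string encodeTuple(const std::vector<std::string>& elems) {
    std::string out;
    for (const auto& e : elems) out += "/" + e;
    return out;
}

// Truncate a proposed split key to a tuple boundary; truncateOffset plays the
// role of BG_KEY_TUPLE_TRUNCATE_OFFSET (assumed here to count trailing elements).
std::string alignSplitKey(const std::string& proposedKey, int truncateOffset) {
    auto elems = decodeTuple(proposedKey);
    if (!elems || truncateOffset <= 0 || (int)elems->size() <= truncateOffset)
        return proposedKey; // not a tuple, or too short to truncate: use as-is
    elems->resize(elems->size() - truncateOffset);
    return encodeTuple(*elems);
}
```

All keys sharing the truncated prefix then sort into the same granule, which is what keeps aligned keys together even when the granule exceeds the allowed size.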
* blob: minor clean ups in merging code
1. Rename mergeNow -> seen. This is more in line with clock-sweep naming
and removes the confusion between mergeNow and canMergeNow.
2. Make clearMergeCandidate() reset to MergeCandidateCannotMerge to make
a clear distinction of what we're accomplishing.
3. Rename canMergeNow() -> mergeEligible().
* blob: add explicit (hard) boundaries
Blob ranges can be specified either through explicit ranges or at the
tenant level. Right now this is managed implicitly. This commit aims to
make it a little more explicit.
Blobification begins in monitorClientRanges() which parses either the
explicit blob ranges or the tenant map. As we do this and add new
ranges, let's explicitly track what is a hard boundary and what isn't.
When blob merging occurs, we respect this boundary. When a hard boundary
is encountered, we submit the found eligible ranges and start looking
for a new range beginning with this hard boundary.
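Roughly how a merge scan can respect such a boundary; the types and function below are illustrative placeholders, not the BlobManager's actual data structures.

```cpp
#include <set>
#include <string>
#include <vector>

struct Granule {
    std::string begin, end;
    bool eligible; // is this granule currently a merge candidate?
};

// Collect runs of adjacent merge-eligible granules, never letting a run cross a
// key that is marked as a hard boundary.
std::vector<std::vector<Granule>> planMerges(const std::vector<Granule>& granules,
                                             const std::set<std::string>& hardBoundaries) {
    std::vector<std::vector<Granule>> groups;
    std::vector<Granule> current;
    auto flush = [&] {
        if (current.size() > 1) groups.push_back(current); // need at least two to merge
        current.clear();
    };
    for (const Granule& g : granules) {
        if (hardBoundaries.count(g.begin)) flush(); // never merge across a hard boundary
        if (g.eligible) current.push_back(g);
        else flush();                               // an ineligible granule breaks the run
    }
    flush();
    return groups;
}
```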
* blob: create BlobGranuleSplitPoints struct
This is setup for the following commit. Our goal here is to provide a
structure for split points to be passed around. We need to be able to
carry uncommitted state until it is committed, at which point we can
apply these mutations to the in-memory data structures.
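The rough shape of the idea, with hypothetical field names rather than the exact BlobGranuleSplitPoints members: bundle the proposed split keys with per-key boundary metadata so the whole thing can travel as uncommitted state and be applied to in-memory structures only after the transaction commits.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical per-key boundary metadata (see the soft-boundary commit below).
struct MergeBoundaryInfo {
    bool leftBoundary = false;
};

// Hypothetical sketch of a split-points bundle: computed once, passed around,
// and applied to in-memory state only after the owning transaction commits.
struct SplitPointsSketch {
    std::vector<std::string> keys;                        // proposed granule boundaries
    std::map<std::string, MergeBoundaryInfo> boundaries;  // extra state keyed by boundary key
};
```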
* blob: implement soft boundaries
An earlier commit establishes the need to create data boundaries within
a tenant. In reality we may encounter a set of keys that degenerate to
the same key prefix. We'll need to be able to split those across
granules, but we want to ensure we merge the split granules back
together before merging with other granules.
This adds new BlobGranuleMergeBoundary items to the
BlobGranuleSplitPoints state. A BlobGranuleMergeBoundary records whether
it is a left or right boundary. Like hard boundaries, this information
is used to force merging of like granules first.
We read the BlobGranuleMergeBoundary map into memory at recovery.
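A hedged sketch of how such a boundary could gate merging (names hypothetical): while a merge-boundary entry still exists at a key, it is treated like a hard boundary so the granules from the degenerate split are merged back together first; once that region has collapsed, the entries are removed and normal merging with neighbours resumes.

```cpp
#include <map>
#include <string>

// Hypothetical merge-boundary record: which side of the degenerate region it marks.
struct MergeBoundary {
    bool leftBound = false;
};

// Treat any key that still carries a merge-boundary entry like a hard boundary.
bool blocksMerge(const std::map<std::string, MergeBoundary>& boundaries,
                 const std::string& sharedKey) {
    return boundaries.count(sharedKey) > 0;
}

// Once the split-off granules have merged back into one, drop both boundary
// entries so normal merging with neighbouring granules can proceed.
void onRegionCollapsed(std::map<std::string, MergeBoundary>& boundaries,
                       const std::string& leftKey, const std::string& rightKey) {
    boundaries.erase(leftKey);
    boundaries.erase(rightKey);
}
```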
* initial structure for remote IKVS server
* moved struct to .h file, added new files to CMakeList
* happy path implementation, connection error when testing
* saved minor local change
* changed tracing to debug
* fixed onClosed and getError being called before init is finished
* fix spawn process bug, now use absolute path
* added server knob to set ikvs process port number
* added server knob for remote/local kv store
* implement simulator remote process spawning
* fixed bug for simulator timeout
* commit all changes
* removed print lines in trace
* added FlowProcess implementation by Markus
* initial debug of FlowProcess, stuck at parent sending OpenKVStoreRequest to child
* temporary fix for process factory throwing segfault on create
* specify public address in command
* change remote kv store knob to false for jenkins build
* made port 0 open random unused port
* change remote store knob to true for benchmark
* set listening port to randomly opened port
* added print lines for jenkins run open kv store timeout debug
* removed most tracing and print lines
* removed tutorial changes
* update handleIOErrors error handling to handle remote-ikvs cases
* Push all debugging changes
* A version where worker bug exists
* A version where restarting tests fail
* Use both the name and the port to determine the child process
* Remove unnecessary update on local address
* Disable remote-kvs for DiskFailureCycle test
* A version where restarting tests are stuck
* A version where most restarting tests green
* Reset connection with child process explicitly
* Remove change on unnecessary files
* Unify flags from _ to -
* fix merging unexpected changes
* fix trace .error to .errorUnsuppressed
* Add license header
* Remove unnecessary header in FlowProcess.actor.cpp
* Fix Windows build
* Fix Windows build, add missing ;
* Fix a stupid bug caused by code dropped during merging
* Disable remote kvs by default
* Pass the conn_file path to the flow process; it is not strictly needed, but buildNetwork is difficult to tune otherwise
* serialization change on readrange
* Update traces
* Refactor the RemoteIKVS interface
* Format files
* Update sim2 interface to not clog connections between parent and child processes in simulation
* Update comments; remove debugging symbols; Add error handling for remote_kvs_cancelled
* Add comments, format files
* Change method name from isBuggifyDisabled to isStableConnection; decrease latency (0.1x) for stable connections
* Commit the IConnection interface change, forgot in previous commit
* Fix the issue that onClosed request is cancelled by ActorCollection
* Enable the remote kv store knob
* Remove FlowProcess.actor.cpp and move functions to RemoteIKeyValueStore.actor.cpp; Add remote kv store delay to avoid race; Bind the child process to die with parent process
* Fix the bug where one process starts storage server more than once
* Add a please_reboot_remote_kv_store error to restart the storage server worker if the remote kvs dies abnormally
* Remove unreachable code path and add comments
* Clang format the code
* Fix a simple wait error
* Clang format after merging the main branch
* Testing mixed mode in simulation if remote_kvs knob is enabled, setting the default to false
* Disable remote kvs for PhysicalShardMove which is for RocksDB
* Cleanup #include orders, remove debugging traces
* Revert the reorder in fdbserver.actor.cpp, which fails the gcc build
Co-authored-by: Lincoln <lincoln.xiao@snowflake.com>