This PR updates the storage server and Redwood to enable encryption based on the encryption mode in the DB config, which was previously controlled by a knob. High-level changes:
1. Passing the encryption mode in the DB config to the storage server
1.1 If it is a new storage server, the encryption mode is passed through `InitializeStorageRequest`, and then passed on to Redwood for initialization
1.2 If it is an existing storage server, on restart it sends `GetStorageServerRejoinInfoRequest` to the commit proxy, and the commit proxy returns the current encryption mode, which it gets from the DB config during its own initialization. The storage server compares the DB config encryption mode to the local storage encryption mode, and fails if they don't match
2. Adding a new `encryptionMode()` method to `IKeyValueStore`, which returns a future of the local encryption mode of the KV store instance. A KV store supporting encryption needs to persist its own encryption mode and return it via this API (see the sketch after this list).
3. Redwood accepts the encryption mode in its constructor. For a new Redwood instance, the caller has to specify the encryption mode, which is stored in Redwood's per-instance file header. For an existing instance, the caller should not pass the encryption mode, and instead let Redwood recover it from its own header.
4. Refactoring in Redwood to accommodate the above changes.
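A declaration-only sketch of the interface shape described in items 2 and 3; the mode type `EncryptionAtRestMode`, the factory signature, and the parameter names are illustrative assumptions, not verbatim from the PR:

```cpp
// Illustrative sketch; type and parameter names are assumptions.
class IKeyValueStore {
public:
	// ... existing interface ...

	// Returns the encryption mode persisted by this KV store instance.
	// Stores that support encryption persist the mode (e.g. in a file
	// header) and report it here; the storage server compares it against
	// the DB config mode and fails on mismatch.
	virtual Future<EncryptionAtRestMode> encryptionMode() = 0;
};

// Redwood: the mode is optional at construction time.
//   - New instance:      the caller supplies the mode; Redwood stores it in
//                        the per-instance file header.
//   - Existing instance: the caller passes an empty Optional; Redwood
//                        recovers the mode from its own header.
IKeyValueStore* keyValueStoreRedwoodV1(const std::string& filename,
                                       UID logID,
                                       Optional<EncryptionAtRestMode> encryptionMode);
```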
The `safeThreadFutureToFuture` function converts a `ThreadFuture` to a `Future`. There are a few cases where we have a
result of type `ThreadFuture<Standalone<T>>` or `ThreadFuture<Optional<Standalone<T>>>` where the memory is not
actually owned by the `Standalone`; rather, it is owned by the `ThreadFuture`.
Eventually we should fix this so that the memory is properly owned by the `Standalone`, but that is beyond the scope of
this PR. Until then, we update the implementation of `safeThreadFutureToFuture` to take ownership away from the
`Standalone`/`Optional<Standalone>` so we can detect this problem in simulation, as sketched below.
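A simplified sketch of the idea, not the real Flow implementation; `waitForReady` is a hypothetical stand-in for the existing plumbing that signals `ThreadFuture` readiness, and the simulation branch is the new behavior:

```cpp
// Simplified sketch; waitForReady() is hypothetical.
ACTOR template <class T>
Future<Standalone<T>> safeThreadFutureToFuture(ThreadFuture<Standalone<T>> threadFuture) {
	wait(waitForReady(threadFuture));
	Standalone<T> result = threadFuture.get();
	if (g_network->isSimulated()) {
		// Re-wrap the contents with a fresh, empty Arena so the Standalone
		// no longer keeps the memory alive; the ThreadFuture remains the
		// sole owner. Callers that incorrectly rely on the Standalone
		// owning the memory now fail visibly (e.g. under valgrind) instead
		// of silently working.
		result = Standalone<T>(result.contents(), Arena());
	}
	return result;
}
```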
Test plan:
- correctness test with and without valgrind
Also, to minimize audit log loss, token usage audit logging is now handled at each usage.
This has the side effect of making the token usage log less bursty.
This also subtly changes the dedup cache policy.
The dedup time window used to be 5 seconds (the default) starting from the beginning of each batch-logging cycle.
Now it is 5 seconds from the first usage after the previous dedup window closed, as sketched below.
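A small self-contained sketch of the new dedup semantics; the 5-second default matches the description above, but the structure and names are hypothetical, not the actual implementation:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical sketch of per-usage logging with the new dedup-window policy.
struct TokenUsageDedup {
	double windowSeconds = 5.0; // default dedup window
	std::unordered_map<std::string, double> windowStart; // token -> time the current window opened

	// Called at each token usage. Returns true if this usage should be
	// logged immediately, i.e. it is the first usage since the previous
	// window for this token closed.
	bool shouldLog(const std::string& tokenId, double now) {
		auto it = windowStart.find(tokenId);
		if (it == windowStart.end() || now - it->second >= windowSeconds) {
			// The window is anchored at this first usage, not at the start
			// of a batch-logging cycle as before.
			windowStart[tokenId] = now;
			return true;
		}
		return false; // deduplicated: still inside the open window
	}
};
```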