Now that we've dropped support for Ruby 2.7, we no longer
need to check whether variables are defined before accessing them
to avoid undefined variable warnings.
The change to serialization mirrors the test just below it that also
uses a conditional for the assertion instead of a skip. The conditional
is necessary because memcached entries are not strings.
The class_serial test should not run on Ruby 3.2+ because class_serial
was replaced with Object Shapes. The class_serial value in RubyVM.stat
was removed in ruby/ruby@13bd617ea6
Updates `MemoryStore#write_entry` to pass a `nil` `namespace` to
`exist?`, which expects a _name_ rather than an already "normalized"
_key_. This fixes a bug where `unless_exist` would overwrite any
existing entry if a `namespace` was used.
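A minimal sketch of the double-normalization bug, using a hypothetical `normalize_key` helper (illustrative, not the actual Rails implementation):

```ruby
# Hypothetical helper: prepends the namespace, as key normalization does.
def normalize_key(name, namespace)
  namespace ? "#{namespace}:#{name}" : name
end

key = normalize_key("foo", "baz")   # the already-normalized key: "baz:foo"

# Passing the normalized key where a *name* is expected applies the
# namespace twice, so the existence check never finds the entry and
# `unless_exist` overwrites it:
normalize_key(key, "baz")           # => "baz:baz:foo"
```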
The return value was not specified before. Now it returns `true` on a
successful write, `nil` if there was an error talking to the cache
backend, and `false` if the write failed for another reason (e.g. the
key already exists and `unless_exist: true` was passed).
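A sketch of the new contract using a hypothetical in-memory store (`TinyStore` is illustrative, not Rails code):

```ruby
class TinyStore
  def initialize
    @data = {}
  end

  # Returns true on a successful write, false if the write failed for a
  # non-error reason (here: unless_exist with an existing key). A real
  # store would return nil on a backend error, which is not simulated.
  def write(key, value, unless_exist: false)
    return false if unless_exist && @data.key?(key)
    @data[key] = value
    true
  end
end

store = TinyStore.new
store.write("k", 1)                      # => true
store.write("k", 2, unless_exist: true)  # => false
```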
Previously, the `Cache::Store` instrumentation would call
`normalize_key` when adding a key to the log. However, this resulted in
the logged key not always matching the actual key written/read from the
cache:
```irb
irb(main):004> cache.write("foo", "bar", namespace: "baz")
D, [2023-11-10T12:44:59.286362 #169586] DEBUG -- : Cache write: baz:foo ({:compress=>false, :compress_threshold=>1024, :namespace=>"baz"})
=> true
irb(main):005> cache.delete("foo", namespace: "baz")
D, [2023-11-10T12:45:03.071300 #169586] DEBUG -- : Cache delete: foo
=> true
```
In this example, `#write` would correctly log that the key written to
was `baz:foo` because the `namespace` option would be passed to the
`instrument` method. However, other methods like `#delete` would log
that the `foo` key was deleted because the `namespace` option was _not_
passed to `instrument`.
This commit fixes the issue by making the caller responsible for passing
the correct key to `#instrument`. This allows `normalize_key` to be
removed from the log generation which both prevents the key from being
normalized a second time and removes the need to pass the full options
hash into `#instrument`.
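The shape of the change, sketched with standalone stand-ins for the real methods (assumed names, not the actual Rails internals):

```ruby
LOG = []

def normalize_key(name, options)
  ns = options[:namespace]
  ns ? "#{ns}:#{name}" : name
end

# instrument no longer normalizes; it logs whatever key it is given.
def instrument(operation, key)
  LOG << "Cache #{operation}: #{key}"
  yield
end

# The caller normalizes once and passes the normalized key along.
def delete(name, options = {})
  key = normalize_key(name, options)
  instrument(:delete, key) { true }  # stand-in for delete_entry(key)
end

delete("foo", namespace: "baz")
LOG.last  # => "Cache delete: baz:foo"
```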
Co-authored-by: Jonathan Hefner <jonathan@hefner.pro>
The reason for removing `dup` 5 years ago in 6da99b4e99 was to decrease memory allocations, so it makes sense to only dup options when we know they will be overwritten.
We had a few cases of tests being accidentally skipped on CI,
hence not being run for a long time.
Skipping makes sense when running the test suite locally, e.g.
you may not have Redis or some other dependency running.
But on CI, a test not being run should be considered an error.
This commit fixes a discrepancy in the behavior of the `#increment` and
`#decrement` methods in `RedisCacheStore` when used with Redis versions less
than 7.0.0. The existing condition `count != amount` prevented setting the
Time-To-Live (TTL) for keys that were equal to the increment/decrement amount
after the `INCRBY`/`DECRBY` operation. This occurs when incrementing a
non-existent key by `1`, for example.
Using Redis pipelining, we minimize the network overhead incurred by checking
for existing TTLs. It decouples the TTL operations from the increment/decrement
operation, allowing the TTL to be set correctly regardless of the resulting
value from the `INCRBY`/`DECRBY`.
New tests have been added to verify the correct behavior of `#increment` and
`#decrement` methods, specifically when the `expires_in` option is not used.
Using a separate cache store instance (`@cache_no_ttl`), these tests ensure that
keys are correctly incremented or decremented and that their TTL remains unset.
Co-authored-by: Benjamin Quorning <benjamin@quorning.net>
Co-authored-by: Jury Razumau <jury.razumau@zendesk.com>
Co-authored-by: Edyta Rozczypała <edyta.rozczypala@zendesk.com>
Right now we are using both to test the Rails applications we generate
and to test Rails itself. Let's keep CI for the app and BUILDKITE for
the framework.
Since Rails 6 (3ea2857943), it's been possible to let the framework create Event objects, but the guides and docs weren't updated to lead with this example.
Manually instantiating an Event doesn't record CPU time or allocations, and I've seen more than once that people copy-pasting the example code get confused when these stats return 0. The tests here show that, just as in the apps I've worked on, the old pattern keeps getting copy-pasted.
Since Rails 6.1 (via c4845aa779), it has
been possible to specify `coder: nil` to allow the store to handle cache
entries directly.
This commit adds documentation and regression tests for the behavior.
Since we clear the entire cache after each test, we can't run these in
parallel; otherwise one process can clear the cache of another before the
assertion is run.
This adds a cache optimization such that expired and version-mismatched
cache entries can be detected without deserializing their values. This
optimization is enabled when using cache format version >= 7.1 or a
custom serializer.
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
`Rails.cache.delete('key')` is supposed to return `true` if an entry
exists and `false` otherwise. This is how most stores behave.
However, the `RedisCacheStore` would return a `1` when deleting an entry
that does exist and a `0` otherwise.
As `0` is truthy, this is unexpected behaviour.
`RedisCacheStore` now returns true if the entry exists and false
otherwise, making it consistent with the other cache stores.
Similarly the `FileCacheStore` now returns `false` instead of `nil` if
the entry doesn't exist.
A test is added to make sure this behaviour applies to all stores.
The documentation for `delete` has been updated to make the behaviour
explicit.
This commit adds support for replacing the compressor used for
serialized cache entries. Custom compressors must respond to `deflate`
and `inflate`. For example:
```ruby
module MyCompressor
  def self.deflate(string)
    # compression logic...
  end

  def self.inflate(compressed)
    # decompression logic...
  end
end

config.cache_store = :redis_cache_store, { compressor: MyCompressor }
```
As part of this work, cache stores now also support a `:serializer`
option. Similar to the `:coder` option, serializers must respond to
`dump` and `load`. However, serializers are only responsible for
serializing a cached value, whereas coders are responsible for
serializing the entire `ActiveSupport::Cache::Entry` instance.
Additionally, the output from serializers can be automatically
compressed, whereas coders are responsible for their own compression.
Specifying a serializer instead of a coder also enables performance
optimizations, including the bare string optimization introduced by cache
format version 7.1.
To use a binary-encoded string as a byte buffer, appended strings should
be force-encoded as binary. Otherwise, appending a non-ASCII-only
string will raise `Encoding::CompatibilityError`.
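For example (plain Ruby, independent of the cache internals):

```ruby
buffer = "\xFF".b  # binary (ASCII-8BIT) buffer containing a high byte

# Appending a UTF-8 string with non-ASCII characters raises:
begin
  buffer << "é"
rescue Encoding::CompatibilityError
  # incompatible character encodings: ASCII-8BIT and UTF-8
end

# Force-encoding the appended string as binary succeeds:
buffer << "é".b
buffer.encoding  # => #<Encoding:ASCII-8BIT>
```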
Fixes #48748.
This helps treat cache entries as expendable.
Because Marshal will happily serialize almost anything, it's not uncommon to
inadvertently cache a class that's not particularly stable, causing deserialization
errors on rollout when the implementation changes.
E.g. https://github.com/sporkmonger/addressable/pull/508
With this change, in the event of such an error, the hit rate will suffer for a
bit, but the application won't return 500 responses.
While working on fixing some deprecation warnings introduced when the
6.1 cache_format deprecation was [moved][1] to be on usage, I found that
the MemCacheStore actually has its own `default_coder` method.
This adds the warning to MemCacheStore's `default_coder` method so that
every cache store will warn on using the 6.1 format.
[1]: 088551c802
This commit factors compression-related cache tests out into a dedicated
module, and reorganizes the tests to make clear what behavior is being
tested.
In #48104, `:message_pack` was added as a supported value for the cache
format version because the format version essentially specifies the
default coder. However, as an API, the format version could potentially
be used to affect other aspects of serialization, such as the default
compression format.
This commit removes `:message_pack` as a supported value for the format
version, and, as a replacement, adds support for specifying
`coder: :message_pack`:
```ruby
# BEFORE
config.active_support.cache_format_version = :message_pack
# AFTER
config.cache_store = :redis_cache_store, { coder: :message_pack }
```
Fix: https://github.com/rails/rails/issues/48352
While we should ensure instantiating the store doesn't immediately
attempt to connect, we should eagerly process arguments so that
if they are somehow invalid we fail early during boot rather than at
runtime.
Additionally, since it's common to get pool parameters from environment
variables, we can use `Integer` and `Float` so that string representations
are valid.
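A sketch of that coercion (the environment variable names are illustrative):

```ruby
# Kernel#Integer and Kernel#Float accept numeric and string input alike,
# and raise ArgumentError on a malformed value -- so a bad setting fails
# at boot rather than at runtime.
pool_size    = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
pool_timeout = Float(ENV.fetch("RAILS_POOL_TIMEOUT", 5.0))

Integer("8")  # => 8
Float("2.5")  # => 2.5
# Integer("oops") would raise ArgumentError during boot
```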
A handful of tests test `write_multi` indirectly, but none test it
directly. This test parallels `test_read_multi`, `test_fetch_multi`,
and `test_delete_multi`.
Follow-up to #48154.
This adds short-circuiting behavior to `delete_multi` when an empty key
list is specified in order to prevent an `ArgumentError` from being
raised, similar to `read_multi`.
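A sketch of the guard (standalone; `delete_entry` is a stand-in for the real backend call):

```ruby
def delete_entry(name)
  true  # stand-in for the store-specific delete
end

# An empty key list short-circuits to 0 deletions instead of reaching
# the backend and raising ArgumentError.
def delete_multi(names)
  return 0 if names.empty?
  names.count { |name| delete_entry(name) }
end

delete_multi([])         # => 0
delete_multi(%w[a b c])  # => 3
```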
This commit addresses a few problems:
1. `read_multi` (and `fetch_multi` and `delete_multi`) logs multiple
keys as if they were a single composite key. For example,
`read_multi("posts/1", "posts/2")` will log "Cache read_multi:
posts/1/posts/2". This can make the log confusing or indecipherable
when keys contain more slashes, such as with view fragments.
2. `write_multi` logs its entire argument as a single composite key.
For example, `write_multi("p1" => post1, "p2" => post2)` will log
"Cache write_multi: p1=#<Post:0x...>/p2=#<Post:0x...>".
3. `MemoryStore#cleanup` logs its instrumentation payload instead of
setting it on the event. For example, when 10 entries are in the
cache, `cleanup` will log "Cache cleanup: size=10" instead of
merging `{ size: 10 }` into the event payload.
Multi-key logging was first added in ca6aba7f30,
then reverted in c4a46fa781 due to being
unwieldy, and then re-added in 2b96d5822b
(for `write_multi`) and 62023884f7 (for
`read_multi`) but without any handling or formatting.
This commit changes the way multi-key operations are logged in order to
prevent these problems. For example, `read_multi("posts/1", "posts/2")`
will now log "Cache read_multi: 2 key(s) specified", and
`write_multi("p1" => post1, "p2" => post2)` will now log "Cache
write_multi: 2 key(s) specified".
Fix: https://github.com/rails/rails/pull/48145
`read_multi`, `write_multi` and `fetch_multi` should all
bail out early if somehow called with an empty list.
Co-Authored-By: Joshua Young <djry1999@gmail.com>
This commit introduces a performance optimization for cache entries with
bare string values such as view fragments.
A new `7.1` cache format has been added which includes the optimization,
and the `:message_pack` cache format now includes the optimization as
well. (A new cache format is necessary because, during a rolling
deploy, unupgraded servers must be able to read cache entries from
upgraded servers, which means the optimization cannot be enabled for
existing apps by default.)
New apps will use the `7.1` cache format by default, and existing apps
can enable the format by setting `config.load_defaults 7.1`. Cache
entries written using the `6.1` or `7.0` cache formats can be read when
using the `7.1` format.
**Benchmark**
```ruby
# frozen_string_literal: true

require "benchmark/ips"

serializer_7_0 = ActiveSupport::Cache::SerializerWithFallback[:marshal_7_0]
serializer_7_1 = ActiveSupport::Cache::SerializerWithFallback[:marshal_7_1]

entry = ActiveSupport::Cache::Entry.new(Random.bytes(10_000), version: "123")

Benchmark.ips do |x|
  x.report("dump 7.0") do
    $dumped_7_0 = serializer_7_0.dump(entry)
  end

  x.report("dump 7.1") do
    $dumped_7_1 = serializer_7_1.dump(entry)
  end

  x.compare!
end

Benchmark.ips do |x|
  x.report("load 7.0") do
    serializer_7_0.load($dumped_7_0)
  end

  x.report("load 7.1") do
    serializer_7_1.load($dumped_7_1)
  end

  x.compare!
end
```
```
Warming up --------------------------------------
            dump 7.0     5.482k i/100ms
            dump 7.1    10.987k i/100ms
Calculating -------------------------------------
            dump 7.0     73.966k (± 6.9%) i/s -    367.294k in   5.005176s
            dump 7.1    127.193k (±17.8%) i/s -    615.272k in   5.081387s

Comparison:
            dump 7.1:   127192.9 i/s
            dump 7.0:    73966.5 i/s - 1.72x  (± 0.00) slower

Warming up --------------------------------------
            load 7.0     7.425k i/100ms
            load 7.1    26.237k i/100ms
Calculating -------------------------------------
            load 7.0     85.574k (± 1.7%) i/s -    430.650k in   5.034065s
            load 7.1    264.877k (± 1.6%) i/s -      1.338M in   5.052976s

Comparison:
            load 7.1:   264876.7 i/s
            load 7.0:    85573.7 i/s - 3.10x  (± 0.00) slower
```
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
This commit adds support for `:message_pack` as an option for
`config.active_support.cache_format_version`.
Cache entries written using the `6.1` or `7.0` formats can be read when
using the `:message_pack` format. Additionally, cache entries written
using the `:message_pack` format can now be read when using the `6.1` or
`7.0` format. These behaviors make it easy to migrate between formats
without invalidating the entire cache.
In Rails 7, if you do `Rails.cache.write(key, value, expires_in: 1.minute.from_now)`, it will work. The actual expiration will be much more than a minute away, but it won't raise. (The correct code is `expires_in: 1.minute` or `expires_at: 1.minute.from_now`.)
Since https://github.com/rails/rails/pull/45892 the same code will error with:
```
NoMethodError: undefined method `negative?' for 2008-04-24 00:01:00 -0600:Time
/Users/alex/Code/rails/activesupport/lib/active_support/cache.rb:743:in `merged_options'
/Users/alex/Code/rails/activesupport/lib/active_support/cache.rb:551:in `write'
```
To make it a bit easier to upgrade to Rails 7.1, this PR introduces a better error if you pass a `Time` object to `expires_in:`
```
ArgumentError: expires_in parameter should not be a Time. Did you mean to use expires_at? Got: 2023-04-07 14:47:45 -0600
/Users/alex/Code/rails/activesupport/lib/active_support/cache.rb:765:in `handle_invalid_expires_in'
/Users/alex/Code/rails/activesupport/lib/active_support/cache.rb:745:in `merged_options'
/Users/alex/Code/rails/activesupport/lib/active_support/cache.rb:551:in `write'
```
Why:
----
Following up on [#47323](https://github.com/rails/rails/issues/47323).
Many options are not forwarded to the Dalli client when it is
initialized from the `ActiveSupport::Cache::MemCacheStore`. This is to
support a broader set of features powered by the implementation. When an
instance of a client is passed to the initializer, it takes precedence,
and we have no control over which attributes will be overridden or
re-processed on the client side; this is by design and should remain as
such to allow both projects to progress independently. Having this
option introduces several potential bugs that are difficult to pinpoint
and get multiplied by which version of the tool is used and how each
evolves. During the conversation on the issue, the `Dalli` client
maintainer supports [deprecating](https://github.com/rails/rails/issues/47323#issuecomment-1424292456)
this option.
How:
----
Removing this implicit dependency will ensure each library can evolve
separately and cements the usage of `Dalli::Client` as an [implementation
detail](https://github.com/rails/rails/issues/21595#issuecomment-139815433)
We cannot remove a supported feature overnight, so I propose we add a
deprecation warning for the next minor release (7.2 at this time).
There was a constant on the `Cache` namespace only used to restrict
options passed to the `Dalli::Client` initializer that now lives on the
`MemCacheStore` class.
Co-authored-by: Eileen M. Uchitelle <eileencodes@users.noreply.github.com>
It sounds like the default timeout of 1 second can sometimes not be enough.
In normal operation this should be fine (it will result in a cache miss),
but in these tests we always expect the cache to return the value, hence this change is scoped to these tests.
Co-authored-by: Matthew Draper <matthew@trebex.net>