A big problem with the current `SchemaCache` implementation is that it holds
a reference to a connection, and uses it to lazily query the schema when the
information is missing.
This causes issues in multi-threaded contexts because all the connections
of a pool share the same schema cache, so they are constantly setting themselves
as the connection the schema cache should use, and you can end up in a
situation where Thread#1 queries the schema cache, which ends up using the
connection currently attributed to Thread#2.
For a very long time this worked more or less by accident, because all
connection adapters would have an internal Monitor.
However in https://github.com/rails/rails/pull/46519 we got rid of this
synchronization, revealing the pre-existing issue. Previously it would
work™, but still wasn't ideal because of the unexpected connection
sharing between threads.
The idea here is to refactor the schema cache so that it doesn't hold a
reference to any connection.
Instead of a `SchemaCache`, connections now have access to a
`SchemaReflection`. The connection that needs schema information
is always provided as an argument, so that in case of a miss, it can be
used to populate the cache.
That should fix the database thread safety issues that were witnessed.
However note that at this stage, the underlying `SchemaCache` isn't
synchronized, so in case of a race condition, the same schema query
can be performed more than once. It could make sense to add synchronization
in a followup.
This refactoring also opens the door to querying the cache without a connection,
making it easier to eagerly define model attribute methods on boot without
establishing a connection to the database.
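A minimal, hypothetical sketch of the new shape (the real `SchemaReflection` API is richer than this): the reflection owns the cache but no connection, and callers pass a connection in on every lookup, where it is only used to populate the cache on a miss.

```ruby
# Hypothetical sketch: the reflection holds no connection reference.
class SchemaReflection
  def initialize
    @cache = {} # table name => column list
  end

  # The caller's connection is passed in and only used on a cache miss.
  def columns(connection, table_name)
    @cache[table_name] ||= connection.columns(table_name)
  end
end
```

Because the connection is an argument rather than state, two threads can query the same reflection with their own pool connections without stepping on each other.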
Co-Authored-By: Jean Boussier <jean.boussier@gmail.com>
Co-Authored-By: zzak <zzakscott@gmail.com>
Fix when multiple `belongs_to` associations map to the same counter_cache column.
In such a situation `inverse_which_updates_counter_cache` may find the
wrong relation, which leads to an invalid increment of the
counter_cache.
This is done by relying on the `inverse_of` property of the relation
as well as comparing the models the associations point to.
Note however that this second check doesn't work for polymorphic
associations.
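For illustration, a hypothetical pair of associations that triggers the bug: both `belongs_to` declarations map to the same `comments_count` column, and without this fix `inverse_which_updates_counter_cache` could match the wrong one.

```ruby
class Comment < ActiveRecord::Base
  # Both associations point at Post and share the comments_count
  # counter cache column (hypothetical models).
  belongs_to :post, counter_cache: :comments_count, inverse_of: :comments
  belongs_to :featured_post, class_name: "Post",
             counter_cache: :comments_count
end
```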
Fixes #41250
Co-Authored-By: Jean Boussier <jean.boussier@gmail.com>
A developer may not always want to specify query constraints. If, for
example, they would like to use a non-composite primary key in an
association, they need to use the primary_key/foreign_key options instead.
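For example (hypothetical models), a model declaring composite query constraints can still set up an association over a single column by specifying the keys explicitly:

```ruby
class Order < ActiveRecord::Base
  query_constraints :shop_id, :id
end

class OrderAgreement < ActiveRecord::Base
  # Opt out of the composite query constraints and associate on the
  # plain primary key instead.
  belongs_to :order, primary_key: :id, foreign_key: :order_id
end
```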
Co-authored-by: Nikita Vasilevsky <nikita.vasilevsky@shopify.com>
Support assignment of belongs_to associations with query constraints.
This improves error messages for composite key mismatches, and some
edge cases with query constraint lists.
Co-authored-by: Nikita Vasilevsky <nikita.vasilevsky@shopify.com>
Catches more bugs in CPK associations by validating the shape of keys on
both ends of the association.
Co-authored-by: Nikita Vasilevsky <nikita.vasilevsky@shopify.com>
The check for `Base.connected?` was added in
4ecdf24bde.
At the time this change was made, we were calling
`Base.connection.reset_runtime` so it made sense to check if the
database was connected. Since
dd61a817de
we have stored that information in a thread instead, negating the need
to check `Base.connected?`.
The problem with checking `Base.connected?` is that in a multi-db
application this will return `false` for any model running a query not
inheriting from `Base`, leaving the `db_runtime` payload `nil`. Since
the information we need is stored in a thread rather than on the
connection directly we can remove the `Base.connected?` check.
Fixes#48687
Fix: https://github.com/rails/rails/issues/45017
Ref: https://github.com/rails/rails/pull/29333
Ref: https://github.com/ruby/timeout/pull/30
Historically only raised errors would trigger a rollback, but in Ruby `2.3`, the `timeout` library
started using `throw` to interrupt execution, which had the adverse effect of committing open transactions.
To solve this, in Active Record 6.1 the behavior was changed to instead roll back the transaction, as that was safer
than potentially committing an incomplete transaction.
Using `return`, `break` or `throw` inside a `transaction` block was essentially deprecated from Rails 6.1 onwards.
However with the release of `timeout 0.4.0`, `Timeout.timeout` now raises an error again, and Active Record is able
to return to its original, less surprising, behavior.
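A small illustration of the caller-visible behavior that `timeout 0.4.0` restores: an expired `Timeout.timeout` surfaces as a raised `Timeout::Error`, which a surrounding `transaction` block observes as an ordinary exception and can roll back on.

```ruby
require "timeout"

# With timeout >= 0.4.0 an expiring block raises instead of throwing,
# so wrapping code (such as Active Record's `transaction`) sees a
# regular exception propagate out of the block.
error = begin
  Timeout.timeout(0.01) { sleep }
  nil
rescue Timeout::Error => e
  e
end
```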
ActiveRecord::Base has a dedicated ActiveSupport load hook. This adds an
additional hook for ActiveModel::Model, so that when ActiveModel is
being used without ActiveRecord, it can still be modified.
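For example, a library could hook into `ActiveModel::Model` the same way it already can for `ActiveRecord::Base` (the included module here is hypothetical, and the hook name assumes the one added by this change):

```ruby
ActiveSupport.on_load(:active_model) do
  # `self` is ActiveModel::Model here; mix in custom behavior without
  # forcing ActiveModel to load eagerly.
  include MyModelExtensions
end
```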
The previous examples were causing failures in CI, after ruby/bigdecimal#264 was merged.
For example:
```
Failure:
NumericExtFormattingTest#test_default_to_fs [/rails/activesupport/test/core_ext/numeric_ext_test.rb:446]:
--- expected
+++ actual
@@ -1 +1,3 @@
-"10000 10.0"
+# encoding: US-ASCII
+# valid: true
+"10 00010.0"
```
The recommendation was to adjust the test to deal with the upstream change more flexibly.
Previously, when merging persisted and in-memory records, we were using
`record[name]` to get the value of the attribute. For records with a
composite primary key, the full CPK would be returned, which was
incompatible with `#_write_attribute`: we'd attempt to assign an Array
to the `id` column, which would result in the column being set to nil.
When dealing with a record with a composite primary key, we need to
write each part of the primary key individually via `#_write_attribute`.
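A simplified, hypothetical sketch of the fix: when the primary key is composite, write each column through `_write_attribute` individually instead of assigning the whole array at once.

```ruby
# Hypothetical helper mirroring the fix: composite keys are written
# column by column; single-column keys keep the old behavior.
def write_primary_key(record, primary_key, value)
  if primary_key.is_a?(Array)
    primary_key.zip(Array(value)) do |attribute, part|
      record._write_attribute(attribute, part)
    end
  else
    record._write_attribute(primary_key, value)
  end
end
```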
The `name` argument is not useful as `remove_connection` should be called
on the class that established the connection. Allowing `name` to be
passed here doesn't match how any of the other methods behave on the
connection classes. While this behavior has been around for a long time,
I'm not sure anyone is using it, as it's not documented when to use
`name`, nor are there any tests.
If anyone inspects a cipher in the console it will show the secret of the
encryptor.
By overriding the `inspect` method to only show the class name, we can
avoid accidentally outputting sensitive information.
Before:
```ruby
ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
"#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038 ... @secret=\"\\xAF\\bFh]LV}q\\nl\\xB2U\\xB3 ... >"
```
After:
```ruby
ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
"#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038>"
```
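A minimal standalone sketch of the technique (not the actual Rails implementation): override `inspect` so the default instance-variable dump never reaches the console.

```ruby
class SecretBox
  def initialize(secret)
    @secret = secret
  end

  # Only report the class name; never the instance variables.
  def inspect
    "#<#{self.class.name}>"
  end
end
```

`SecretBox.new("s3cr3t").inspect` returns `"#<SecretBox>"`, with no trace of the secret.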
This helps treat cache entries as expendable.
Because Marshal will happily serialize almost anything, it's not uncommon to
inadvertently cache a class that's not particularly stable, and cause deserialization
errors on rollout when the implementation changes.
E.g. https://github.com/sporkmonger/addressable/pull/508
With this change, in case of such an event, the hit rate will suffer for a
bit, but the application won't return 500 responses.
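A sketch of the idea (simplified; the real cache code differs): rescue deserialization failures and report a miss instead of raising.

```ruby
# If the cached class changed or disappeared since the entry was
# written, Marshal raises; treat that entry as a cache miss instead.
def read_cached(payload)
  Marshal.load(payload)
rescue ArgumentError, TypeError, NameError
  nil
end
```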
Allows building records from an association with a `has_one` through a
singular association with an inverse. For `belongs_to` through associations,
linking the foreign key to the primary key model isn't needed.
For `has_one`, we cannot build records because the association isn't mutable.
The existing process attempted to detect duplicates by storing added records in a `Set`.
When a record is added to a `Set`, its identity is calculated using `ActiveRecord::Core#hash`, but this value changes before and after saving, so duplicates were not detected across a save even for the same object.
This PR fixes the problem by using the `#object_id` of the record to detect duplicates.
Note that when storing a new object obtained by `ActiveRecord::Base.find` etc., duplicates are not eliminated because the `#object_id` is different. This is the same behavior as the current `ActiveRecord::Associations::CollectionProxy#<<`.
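A standalone illustration of the underlying issue, using a stand-in for `ActiveRecord::Core#hash` (real records hash on the class and the id):

```ruby
require "set"

# Stand-in record whose #hash changes once it gets an id, like a new
# ActiveRecord object being saved.
class FakeRecord
  attr_accessor :id

  def hash
    [self.class, id].hash
  end

  def eql?(other)
    other.is_a?(self.class) && other.id == id
  end
end

records = Set.new
record = FakeRecord.new
records << record
record.id = 1      # "saving" the record changes its hash
records << record  # not detected as a duplicate: the Set grows to 2

object_ids = Set.new
object_ids << record.object_id
object_ids << record.object_id # stable identity: still deduplicated
```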