Implementing the full `supports_insert_returning?` contract means the SQLite3 adapter supports auto-populated columns (#48241) as well as custom primary keys.
Previously, this default function would produce the static string `"'Ruby ' || 'on ' || 'Rails'"`.
Now, the adapter will appropriately receive and use `"Ruby on Rails"`.
```ruby
change_column_default "test_models", "ruby_on_rails", -> { "('Ruby ' || 'on ' || 'Rails')" }
```
Following up on https://github.com/rails/rails/pull/49192, this commit
adds the transaction `outcome` to the payload, helpful for collecting
stats on how many transactions commit, rollback, restart, or (perhaps
most interestingly) are incomplete because of an error.
The one quirk here is that we have to modify the payload on finish. It's
not the only place this sort of thing happens (instrument mutates the
payload with exceptions, for example), but it does mean we need to dup
the payload we initialize with to avoid mutating it for other tracking.
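As a sketch of the kind of stats this enables, tallying outcomes is a simple count over event payloads. The payloads below are hand-written stand-ins for what a `transaction.active_record` subscriber would receive (the subscription plumbing itself is omitted):

```ruby
# Simulated payloads, as a subscriber to "transaction.active_record"
# might see them; in a real app these come from
# ActiveSupport::Notifications, not a literal array.
payloads = [
  { outcome: :commit },
  { outcome: :rollback },
  { outcome: :commit },
  { outcome: :incomplete }, # e.g. the block raised before commit/rollback
]

stats = Hash.new(0)
payloads.each { |payload| stats[payload[:outcome]] += 1 }

stats[:commit]     # => 2
stats[:incomplete] # => 1
```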
Co-authored-by: Ian Candy <ipc103@github.com>
Tracking Active Record-managed transactions seems to be a common need,
but there's currently not a great way to do it. Here's a few examples
I've seen:
* GitHub has custom transaction tracking that monkey patches the Active
Record `TransactionManager` and `RealTransaction`. We use the tracking
to prevent opening a transaction to one database cluster inside a
transaction to a different database cluster, and to report slow
transactions (we get slow transaction data directly from MySQL as well,
but it's still helpful to report from the application with backtraces to
help track them down).
* https://github.com/palkan/isolator tracks transactions to prevent
non-atomic interactions like external network calls inside a
transaction. The gem works by subscribing to `sql.active_record`, then
piecing together the transactions by looking for `BEGIN`, `COMMIT`,
`SAVEPOINT`, etc., but this is unreliable:
- https://github.com/palkan/isolator/issues/65
- https://github.com/palkan/isolator/issues/64
* It looks like GitLab patches `TransactionManager` and `RealTransaction`
to track nested savepoints. See https://github.com/palkan/isolator/issues/46
This commit adds a new `transaction.active_record` event that should
provide a more reliable solution for these various use cases. It
includes the connection in the payload (useful, for example, in
differentiating transactions to different databases), but if this change
gets merged we're also planning to add details about what type of
transaction it is (savepoint or real) and what the outcome is (commit,
rollback, restarted, errored).
This instrumentation needs to start and finish at fairly specific times:
- start on materialize
- finish after committing or rolling back, but before the after_commit
or after_rollback callbacks
- finish and start again when the transaction restarts (at least for
real transactions—we've done it for savepoints as well but I'm not
certain we should)
- ensure it finishes if commit and rollback fail (e.g. if the
connection goes away)
To make all that work, this commit uses the lower-level `#build_handle`
API instead of `#instrument`.
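The difference can be sketched with a minimal stand-in handle object (hypothetical class, not the actual Active Support implementation): a handle decouples start and finish from block scope, so finish can be called from commit, rollback, or an ensure path, and safely more than once:

```ruby
# Minimal sketch of the start/finish "handle" pattern. A block-based
# #instrument ties the event's lifetime to a block; a handle lets the
# caller start and finish the event at arbitrary points.
class EventHandle
  attr_reader :payload, :duration

  def initialize(name, payload)
    @name = name
    @payload = payload
    @finished = false
  end

  def start
    @started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end

  def finish
    return if @finished # idempotent: safe to call again from an ensure block

    @finished = true
    @duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - @started_at
  end
end

handle = EventHandle.new("transaction.active_record", { connection: :some_connection })
handle.start  # e.g. when the transaction materializes
# ... transaction runs ...
handle.finish # e.g. after commit, before after_commit callbacks
handle.finish # no-op on a second call
```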
Co-authored-by: Ian Candy <ipc103@github.com>
In #49105, `where` examples were added to the `normalizes` documentation
to demonstrate that normalizations are also applied for `where`.
However, as with the `exists?` examples, we should also demonstrate that
normalizations are only applied to keyword arguments, not positional
arguments. We can also address the original source of the confusion by
changing the wording of "finder methods" to "query methods".
This commit also removes the tests added in #49105. `normalizes` works
at the level of attribute types, so there is no need to test every query
method. Testing `find_by` is sufficient. (And, in point of fact,
`find_by` is implemented in terms of `where`.)
The `add_check_constraint` method now accepts an `if_not_exists` option. If set to true an error won't be raised
if the check constraint already exists. In addition, `if_exists` and `if_not_exists` options are transposed
if set when reversing `remove_check_constraint` and `add_check_constraint`. This enables simple creation
of idempotent, non-transactional migrations.
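A sketch of how this might look in an idempotent, non-transactional migration (table, expression, and constraint name are illustrative):

```ruby
class AddPriceCheckToProducts < ActiveRecord::Migration[7.1]
  disable_ddl_transaction!

  def change
    # Safe to re-run: no error if the constraint already exists, and
    # reversing transposes the option so the removal won't fail either.
    add_check_constraint :products, "price > 0",
      name: "products_price_check", if_not_exists: true
  end
end
```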
Follow-up to [#47420][]
With the changes made in [#47420][], `has_secure_token` declarations can
be configured to execute in an `after_initialize` callback. This commit
proposed a new Rails 7.1 default: generate all `has_secure_token` values
when their corresponding models are initialized.
To preserve pre-7.1 behavior, applications can set
`config.active_record.generate_secure_token_on = :create`.
By default, generate the value when the model is initialized:
```ruby
class User < ApplicationRecord
has_secure_token
end
record = User.new
record.token # => "fwZcXX6SkJBJRogzMdciS7wf"
```
With `config.active_record.generate_secure_token_on = :create`, generate
the value when the model is created:
```ruby
# config/application.rb
config.active_record.generate_secure_token_on = :create
# app/models/user.rb
class User < ApplicationRecord
has_secure_token on: :create
end
record = User.new
record.token # => nil
record.save!
record.token # => "fwZcXX6SkJBJRogzMdciS7wf"
```
[#47420]: https://github.com/rails/rails/pull/47420
Co-authored-by: Hartley McGuire <skipkayhil@gmail.com>
There were a few 6.1 migration compatibility fixes in [previous][1]
[commits][2]. Most importantly, those commits reorganized some of the
compatibility tests to ensure that the tests would run against every
Migration version. To continue the effort of improving test coverage for
Migration compatibility, this commit converts tests for create_table and
change_column setting the correct precision on datetime columns.
While the create_table tests all pass, the change_column test did not
pass for 7.0 versioned Migrations on SQLite. This was due to the SQLite
adapter not using new_column_definition to set the options on the new
column (new_column_definition is where precision: 6 gets set if no
precision is specified). This happens because columns can't be modified
in place in SQLite and instead the whole table must be recreated and the
data copied. Before this commit, change_column would use the options
of the existing column as a base and merge in the exact options (and
type) passed to change_column.
This commit changes the change_column method to replace the existing
column without using the existing options. This ensures that precision:
6 is set consistently across adapters when change_column is used to
create a datetime column.
[1]: c2f838e80c
[2]: 9b07b2d6ca
`8a5cf4cf4415ae1cdad7feecfb27149c151b0b10` made changes to `to_key` in order
to support composite identifiers. This commit adds CHANGELOG entries for those.
This backfills a changelog entry for PR #48876. This is potentially something Rails users
should be aware of, as in very specific situations it can be a change in behavior.
This commit deprecates `read_attribute(:id)` returning the primary key
if the model's primary key is not the id column. Starting in Rails 7.2,
`read_attribute(:id)` will always return the value of the id column.
This commit also changes `read_attribute(:id)` for composite primary
key models to return the value of the id column, not the composite
primary key.
While working on #48969, I found that some of the Compatibility test
cases were not working correctly. The tests removed in this commit were
never running the `change_table` migration and so were not actually
testing that `change_table` works correctly. The issue is that the two
migrations created in these tests both have `nil` versions, and so the
Migrator only runs the first one.
This commit refactors the tests so that it's easier to test the behavior
of each Migration class version (and I think the rest of the tests
should be updated to use this strategy as well). Additionally, since the
tests are fixed it exposed that `t.change` in a `change_table` is not
behaving as expected so that is fixed as well.
This allows access to the raw id column value on records for which an
id column exists but is not the primary key. This is common amongst
models with composite primary keys.
Despite the inconvenience of double-backslashing, the backslash was chosen
because it's a very common char to use in escaping across multiple programming
languages. A developer, without looking at documentation, may intuitively try
to use it to achieve the desired results in this scenario.
Fixes #37779
The callback on which the value is generated. When called with `on:
:initialize`, the value is generated in an `after_initialize` callback;
otherwise the value is generated in a `before_` callback. It defaults
to `:create`.
Fix when multiple `belongs_to` associations map to the same counter_cache column.
In such situations, `inverse_which_updates_counter_cache` may find the
wrong relation, which leads to an invalid increment of the
counter_cache.
This is done by relying on the `inverse_of` property of the relation
as well as comparing the models the association points to.
Note however that this second check doesn't work for polymorphic
associations.
Fixes #41250
Co-Authored-By: Jean Boussier <jean.boussier@gmail.com>
Fix: https://github.com/rails/rails/issues/45017
Ref: https://github.com/rails/rails/pull/29333
Ref: https://github.com/ruby/timeout/pull/30
Historically only raised errors would trigger a rollback, but in Ruby `2.3`, the `timeout` library
started using `throw` to interrupt execution, which had the adverse effect of committing open transactions.
To solve this, in Active Record 6.1 the behavior was changed to instead rollback the transaction as it was safer
than to potentially commit an incomplete transaction.
Using `return`, `break` or `throw` inside a `transaction` block was essentially deprecated from Rails 6.1 onwards.
However with the release of `timeout 0.4.0`, `Timeout.timeout` now raises an error again, and Active Record is able
to return to its original, less surprising, behavior.
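The mechanism can be seen in plain Ruby (a toy stand-in for the transaction wrapper, not Rails' actual code): `throw` bypasses `rescue`, so a rescue-based rollback path never runs, while `ensure` still does:

```ruby
# Toy transaction wrapper: rescue models the rollback path,
# ensure models unconditional cleanup.
def with_fake_transaction(events)
  yield
  events << :commit
rescue Exception
  events << :rollback
  raise
ensure
  events << :cleanup
end

# A raised error is seen by rescue, so the rollback path runs:
raise_events = []
begin
  with_fake_transaction(raise_events) { raise "boom" }
rescue RuntimeError
end
raise_events # => [:rollback, :cleanup]

# `throw` skips both the commit line and the rescue clause:
throw_events = []
catch(:timeout) do
  with_fake_transaction(throw_events) { throw :timeout }
end
throw_events # => [:cleanup]
```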
The `name` argument is not useful as `remove_connection` should be called
on the class that established the connection. Allowing `name` to be
passed here doesn't match how any of the other methods behave on the
connection classes. While this behavior has been around for a long time,
I'm not sure anyone is using this as it's not documented when to use
name, nor are there any tests.
If anyone inspects a cipher in the console it will show the secret of the
encryptor.
By overriding the `inspect` method to only show the class name we can
avoid accidentally outputting sensitive information.
Before:
```ruby
ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
"#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038 ... @secret=\"\\xAF\\bFh]LV}q\\nl\\xB2U\\xB3 ... >"
```
After:
```ruby
ActiveRecord::Encryption::Cipher::Aes256Gcm.new(secret).inspect
"#<ActiveRecord::Encryption::Cipher::Aes256Gcm:0x0000000104888038>"
```
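The technique itself is plain Ruby and easy to demonstrate with a simplified stand-in class (not the real `Aes256Gcm`; the real output also keeps the object address):

```ruby
# Override #inspect so sensitive state can't leak into console
# output, logs, or exception messages that embed #inspect.
class Cipher
  def initialize(secret)
    @secret = secret
  end

  def inspect
    "#<#{self.class.name}>" # omit instance variables entirely
  end
end

cipher = Cipher.new("super-secret")
cipher.inspect # => "#<Cipher>"
```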
Allows building of records from an association with a has_one through a
singular association with inverse. For belongs_to through associations,
linking the foreign key to the primary key model isn't needed.
For has_one, we cannot build records due to the association not being mutable.
Fixes#48398
Prepared Statements and Query Logs are incompatible features due to query logs making every query unique.
Co-authored-by: Jean Boussier <jean.boussier@gmail.com>
This reverts commit 6264c1de76, reversing
changes made to 6c80bcdd20.
Reason: There is still discussion about the feature. We want to make it opt-in
but we need to better understand why people would want to opt-in to
this behavior.
* Make sure active record encryption configuration happens after initializers have run
Co-authored-by: Cadu Ribeiro <mail@cadu.dev>
* Add a new option to support previous data encrypted non-deterministically with a hash digest of SHA1
There is currently a problem with Active Record encryption for users updating from 7.0 to 7.1. Before
#44873, data encrypted with non-deterministic encryption was always using SHA-1. The reason is that
`ActiveSupport::KeyGenerator.hash_digest_class` is set in an after_initialize block in the railtie config,
but encryption config was running before that, so it was effectively using the previous default SHA1. That
means that existing users are using SHA256 for non-deterministic encryption, and SHA1 for deterministic
encryption.
This adds a new option `use_sha1_digest_for_non_deterministic_data` that
users can enable to support both SHA1 and SHA256 when decrypting existing data.
* Set a default value of true for `support_sha1_for_non_deterministic_encryption` and proper initializer values.
We want to enable the flag for existing versions (< 7.1), and we want it to be false by
default moving forward.
* Make sure the system to auto-filter params supports different initialization orders
This reworks the system to auto-filter params so that it works when encrypted
attributes are declared before the encryption configuration logic runs.
Co-authored-by: Cadu Ribeiro <mail@cadu.dev>
---------
Co-authored-by: Cadu Ribeiro <mail@cadu.dev>
This commit stops issuing the
"Active Record does not support composite primary key" warning
and allows `ActiveRecord::Base#primary_key` to be derived as an `Array`
When find_each/find_in_batches/in_batches are performed on a table with composite primary keys, ascending or descending order can be selected for each key.
```ruby
Person.find_each(order: [:desc, :asc]) do |person|
person.party_all_night!
end
```
Anytime an exception is raised from an adapter we now provide the
`connection_pool` for the application to further debug what went
wrong. This is an important feature when running a multi-database Rails
application.
We chose to provide the `connection_pool` as it has relevant context
like connection, role and shard. We wanted to avoid providing the
`connection` directly as it might accidentally be used after it's
returned to the pool and been handed to another thread.
The `ConnectionAdapters::PoolConfig` would also have been a reasonable
option except it's `:nodoc:`.
While we had hoped to turn prepared statements on for Rails 7.2, the bug
that's preventing us from doing that is still present. See #43005.
Until this bug is fixed we should not be encouraging applications
running MySQL to change the `prepared_statements` in the config to
`true`. In addition to this bug being present, Trilogy does not yet
support `prepared_statements` (although work is in progress).
It will be better to implement this deprecation when mysql2 and trilogy
can both handle `prepared_statements` without major bugs.
This commit extends Active Record creation logic to allow database
auto-populated attributes to be assigned on object creation.
Given a `Post` model represented by the following schema:
```ruby
create_table :posts, id: false do |t|
t.integer :sequential_number, auto_increment: true
t.string :title, primary_key: true
t.string :ruby_on_rails, default: -> { "concat('R', 'o', 'R')" }
end
```
where `title` is being used as a primary key, the table has an
integer `sequential_number` column populated by a sequence and
`ruby_on_rails` column has a default function - creation of
`Post` records should populate the `sequential_number` and
`ruby_on_rails` attributes:
```ruby
new_post = Post.create(title: 'My first post')
new_post.sequential_number # => 1
new_post.ruby_on_rails # => 'RoR'
```
* At the moment, the MySQL and SQLite adapters are limited to a single
auto-populated column, which must be the `auto_increment` column,
while the PostgreSQL adapter supports any number of auto-populated
columns through the `RETURNING` clause.
If an application is using sharding, they may not want to use `default`
as the `default_shard`. Unfortunately Rails expects there to be a shard
named `default` for certain actions internally. This leads to some
errors on boot and the application is left manually setting
`default_shard=` in their model or updating their shards in
`connects_to` to name `shard_one` to `default`. Neither are a great
solution, especially if Rails can do this for you. Changes to Active
Record are:
* Simplify `connects_to` by merging `database` into `shards` kwarg so we
can do a single loop through provided options.
* Set the `self.default_shard` to the first key in the shards kwarg.
* Add a test for this behavior
* Update existing test that wasn't testing this to use `default`. I
could have left this test but it really messes with connections in the
other tests and since this isn't testing shard behavior specifically, I
updated it to use `default` as the default shard name.
This is a slight change in behavior from existing applications but
arguably this is fixing a bug because without this an application won't
boot. I originally thought that this would require a huge refactoring to
fix but realized that it makes a lot of sense to take the first shard as
the default. They should all have the same schema, so we can assume it's
fine to take the first one.
Fixes: #45390
SelectManager#with currently accepts As and TableAlias nodes.
Neither of these support materialization hints for the query
planner. Both Postgres and SQLite support such hints.
This commit adds a Cte node that does support materialization
hints. It continues to support As and TableAlias nodes by
translating them into Cte nodes.
This is basically a multi-db aware version of `ActiveRecord::Base.connection.disconnect!`.
It also avoids connecting to the database if we weren't already connected.
This can be useful to reset state after `establish_connection` has been used.
This reverts commit 6adaeb8649, reversing
changes made to a792a62080.
We're going to forward fix this in our application rather than keep this
revert in. Reverting other changes has turned out to be too difficult to
get back to a state where our application is retrying queries.
Partially fixes #48164
Today, connections are discarded in `within_transaction` if rolling back
fails after the call to `yield` raises. This is done to prevent a
connection from being left in a transaction if the rollback actually
failed.
This change causes connections to be discarded in the following
additional cases where the connection may be left in a transaction:
- If beginning the transaction fails.
- If rolling back the transaction fails.
- If committing the transaction fails, then rolling back fails.
This is accomplished by rescuing all exceptions raised in
`within_transaction` and discarding the connection if the transaction
has not been both initialized and completed.
This reverts commit 663df3aa09, reversing
changes made to 9b4fff264e.
The changes here forced our code through a different codepath,
circumventing the patch on `execute` to retry queries. Since we also
patch `execute` from Semian it's not clear the correct path forward and
we're going to revert for now.
I don't know how prevalent this really is, but I've heard several times
about users having memory exhaustion issues caused by the query cache
when dealing with long running jobs.
Overall it seems sensible for this cache not to be entirely unbounded.
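A bound could be as simple as an LRU eviction policy. A minimal sketch (illustrative only, not the query cache's actual implementation):

```ruby
# Minimal LRU cache: Ruby hashes preserve insertion order, so the
# first key is always the least recently used one.
class LruCache
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  def fetch(key)
    if @store.key?(key)
      # Touch: delete + reinsert moves the entry to the "newest" end.
      @store[key] = @store.delete(key)
    else
      @store[key] = yield
      @store.delete(@store.first[0]) if @store.size > @max_size
    end
    @store[key]
  end

  def size
    @store.size
  end
end

cache = LruCache.new(2)
cache.fetch(:a) { 1 }
cache.fetch(:b) { 2 }
cache.fetch(:c) { 3 } # evicts :a, the least recently used entry
cache.size # => 2
```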
`check_pending!` takes a connection that defaults to `Base.connection`
(or migration_connection but right now that's always Base.connection).
This means that we aren't able to loop through all the configs for an
environment because this is a public API that accepts a single
connection. To fix this I've deprecated `check_pending!` in favor of
`check_all_pending!` which will loop through the configs and check for
pending migrations on all the connections.
Example results:
```
Migrations are pending. To resolve this issue, run:
bin/rails db:migrate
You have 3 pending migrations:
db/migrate/20221213152217_create_posts.rb
db/migrate/20230503150812_add_active_column_to_posts.rb
db/secondary_migrate/20230503173111_create_dogs.rb
```
Before this change, only migrations in `db/migrate` or
`db/secondary_migrate` would be output by `ActiveRecord::Migration.check_pending!`.
I chose not to accept a connection or db_config argument for this new
method because it's not super useful. It's more useful to know all
pending migrations. If it becomes problematic, we can reimplement the
connection option on this method (or reintroduce `check_pending!`).
This deals with a problem introduced in #7743ab95b8e15581f432206245c691434a3993d1a751b9d451170956d59457a9R8
that was preventing querying by `Class` serialized attributes. Duplicating the original
`Class` argument generates an anonymous class that can't be serialized as YAML.
This change makes query attributes hashable based on their frozen casted values
to prevent the problem.
This solution is based on an idea by @matthewd from https://github.com/rails/rails/issues/47338#issuecomment-1424402777.
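A simplified illustration of hashing by casted value (hypothetical class; the real `QueryAttribute` carries more state):

```ruby
# Two attribute objects with the same name and casted value should be
# interchangeable as hash keys, even if the original type object
# (e.g. a duplicated anonymous Class) differs between them.
class QueryAttribute
  attr_reader :name, :value_for_database

  def initialize(name, value_for_database)
    @name = name
    @value_for_database = value_for_database.freeze # hash on the frozen casted value
  end

  def hash
    [self.class, name, value_for_database].hash
  end

  def eql?(other)
    self.class == other.class &&
      name == other.name &&
      value_for_database == other.value_for_database
  end
  alias_method :==, :eql?
end

a = QueryAttribute.new("title", "Hello")
b = QueryAttribute.new("title", "Hello")
a.eql?(b)         # => true
{ a => :stmt }[b] # => :stmt -- equal attributes share a cache slot
```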
Ruby 3.1 added `intersect?`, which is equivalent to `(a & b).any?`. Rubocop added a corresponding cop, `Style/ArrayIntersect`, which transforms the old style to use `intersect?`. Unfortunately, as `intersect?` is not delegated on `CollectionProxy`, this leads to false positives that need to be disabled for no good reason other than the fact that the method isn't delegated.
This PR adds delegation of `intersect?` to `Relation`, which fixes this.
`deferrable: true` is deprecated in favor of `deferrable: :immediate`, and
will be removed in Rails 7.2.
Because `deferrable: true` and `deferrable: :deferred` are hard to tell apart:
both `true` and `:deferred` are truthy values.
This behavior is the same as the `deferrable` option of the `add_unique_key` method, added in #46192.
*Hiroyuki Ishii*
It's a public API and we can't assume the query is read-only,
so we should clear the cache.
To perform read only queries, `select` and `select_all` can be used.
Adds `:using_index` option to use an existing index when defining a unique constraint.
If you want to change an existing unique index to deferrable, you can use `:using_index` to create deferrable unique constraints.
```ruby
add_unique_key :users, deferrable: :immediate, using_index: 'unique_index_name'
```
A unique constraint internally constructs a unique index.
If an existing unique index has already been created, the unique constraint
can be created much faster, since there is no need to create the unique index
when generating the constraint.
The [Trilogy database client][trilogy-client] and corresponding
[Active Record adapter][ar-adapter] were both open sourced by GitHub last year.
Shopify has recently taken the plunge and successfully adopted Trilogy in their Rails monolith.
With two major Rails applications running Trilogy successfully, we'd like to propose upstreaming the adapter
to Rails as a MySQL-compatible alternative to Mysql2Adapter.
[trilogy-client]: https://github.com/github/trilogy
[ar-adapter]: https://github.com/github/activerecord-trilogy-adapter
Co-authored-by: Aaron Patterson <tenderlove@github.com>
Co-authored-by: Adam Roben <adam@roben.org>
Co-authored-by: Ali Ibrahim <aibrahim2k2@gmail.com>
Co-authored-by: Aman Gupta <aman@tmm1.net>
Co-authored-by: Arthur Nogueira Neves <github@arthurnn.com>
Co-authored-by: Arthur Schreiber <arthurschreiber@github.com>
Co-authored-by: Ashe Connor <kivikakk@github.com>
Co-authored-by: Brandon Keepers <brandon@opensoul.org>
Co-authored-by: Brian Lopez <seniorlopez@gmail.com>
Co-authored-by: Brooke Kuhlmann <brooke@testdouble.com>
Co-authored-by: Bryana Knight <bryanaknight@github.com>
Co-authored-by: Carl Brasic <brasic@github.com>
Co-authored-by: Chris Bloom <chrisbloom7@github.com>
Co-authored-by: Cliff Pruitt <cliff.pruitt@cliffpruitt.com>
Co-authored-by: Daniel Colson <composerinteralia@github.com>
Co-authored-by: David Calavera <david.calavera@gmail.com>
Co-authored-by: David Celis <davidcelis@github.com>
Co-authored-by: David Ratajczak <david@mockra.com>
Co-authored-by: Dirkjan Bussink <d.bussink@gmail.com>
Co-authored-by: Eileen Uchitelle <eileencodes@gmail.com>
Co-authored-by: Enrique Gonzalez <enriikke@gmail.com>
Co-authored-by: Garrett Bjerkhoel <garrett@github.com>
Co-authored-by: Georgi Knox <georgicodes@github.com>
Co-authored-by: HParker <HParker@github.com>
Co-authored-by: Hailey Somerville <hailey@hailey.lol>
Co-authored-by: James Dennes <jdennes@gmail.com>
Co-authored-by: Jane Sternbach <janester@github.com>
Co-authored-by: Jess Bees <toomanybees@github.com>
Co-authored-by: Jesse Toth <jesse.toth@github.com>
Co-authored-by: Joel Hawksley <joelhawksley@github.com>
Co-authored-by: John Barnette <jbarnette@github.com>
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Co-authored-by: John Hawthorn <john@hawthorn.email>
Co-authored-by: John Nunemaker <nunemaker@gmail.com>
Co-authored-by: Jonathan Hoyt <hoyt@github.com>
Co-authored-by: Katrina Owen <kytrinyx@github.com>
Co-authored-by: Keeran Raj Hawoldar <keeran@gmail.com>
Co-authored-by: Kevin Solorio <soloriok@gmail.com>
Co-authored-by: Leo Correa <lcorr005@gmail.com>
Co-authored-by: Lizz Hale <lizzhale@github.com>
Co-authored-by: Lorin Thwaits <lorint@gmail.com>
Co-authored-by: Matt Jones <al2o3cr@gmail.com>
Co-authored-by: Matthew Draper <matthewd@github.com>
Co-authored-by: Max Veytsman <mveytsman@github.com>
Co-authored-by: Nathan Witmer <nathan@zerowidth.com>
Co-authored-by: Nick Holden <nick.r.holden@gmail.com>
Co-authored-by: Paarth Madan <paarth.madan@shopify.com>
Co-authored-by: Patrick Reynolds <patrick.reynolds@github.com>
Co-authored-by: Rob Sanheim <rsanheim@gmail.com>
Co-authored-by: Rocio Delgado <rocio@github.com>
Co-authored-by: Sam Lambert <sam.lambert@github.com>
Co-authored-by: Shay Frendt <shay@github.com>
Co-authored-by: Shlomi Noach <shlomi-noach@github.com>
Co-authored-by: Sophie Haskins <sophaskins@github.com>
Co-authored-by: Thomas Maurer <tma@github.com>
Co-authored-by: Tim Pease <tim.pease@gmail.com>
Co-authored-by: Yossef Mendelssohn <ymendel@pobox.com>
Co-authored-by: Zack Koppert <zkoppert@github.com>
Co-authored-by: Zhongying Qiao <cryptoque@users.noreply.github.com>
* Infer `foreign_key` when `inverse_of` is present
Automatically infer `foreign_key` on `has_one` and `has_many` associations when `inverse_of` is present.
When `inverse_of` is present, Rails has all the info it needs to figure out what the `foreign_key` on the associated model should be. I can't imagine this breaking anything
* Update test models to remove redundant foreign_keys
* add changelog entry
* fix changelog grammar
Co-authored-by: Rafael Mendonça França <rafael@franca.dev>
This updates the index name generation to always create a valid index name if one is not passed by the user.
Set the limit to 62 bytes to ensure it works for the default configurations of SQLite, MySQL, and PostgreSQL:
MySQL: 64
PostgreSQL: 63
SQLite: 62
When over the limit, we fallback to a "short format" that includes a hash to guarantee uniqueness in the generated index name.
Fix: https://github.com/rails/rails/issues/47704
Supersedes: https://github.com/rails/rails/pull/47722
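The fallback can be sketched as follows (illustrative naming scheme, not Rails' exact format):

```ruby
require "digest"

MAX_INDEX_NAME_BYTES = 62 # smallest default limit (SQLite) of the three

def index_name(table, columns)
  name = "index_#{table}_on_#{columns.join('_and_')}"
  return name if name.bytesize <= MAX_INDEX_NAME_BYTES

  # Short format: recognizable prefix plus a digest of the long name
  # to guarantee uniqueness across similarly named columns.
  "idx_#{Digest::SHA256.hexdigest(name)[0, 10]}"
end

index_name("users", ["email"])
# => "index_users_on_email"
index_name("users", ["a_very_long_column_name"] * 4).bytesize # under the 62-byte limit
```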
While the instance variable ordering bug will be fixed in Ruby 3.2.2,
it's not great that we're depending on such a brittle implementation detail.
Additionally, marshalling Active Record instances is currently very inefficient:
the payload includes lots of redundant data that shouldn't make it into the cache.
In this new format the serialized payload only contains basic Ruby core or stdlib objects,
reducing the risk of changes in the internal representation of Rails classes.
With the introduction of composite primary keys, a common use case is querying for records with tuples representing the composite key. This change introduces new syntax to the `where` clause that allows specifying an array of columns mapped to a list of corresponding tuples. It converts this to an OR-joined list of separate queries, similar to previous implementations that rely on grouping queries.
In https://github.com/rails/rails/pull/46690 the `db_warnings_action` and `db_warnings_ignore` configs were added. The
`db_warnings_ignore` config can take a list of warning messages to match against.
At GitHub we have a subscriber that does something like this but also filters out error codes. There might also be
other applications that filter via error codes and this could be something they can use instead of just the explicit
messages.
This also refactors the adapter tests so that the mysql2 and postgresql adapters share the same helper when setting
the `db_warnings_action` and `db_warnings_ignore` configs.
By default, exclude constraints in PostgreSQL are checked after each statement.
This works for most use cases, but becomes a major limitation when replacing
records with overlapping ranges by using multiple statements.
```ruby
exclusion_constraint :users, "daterange(valid_from, valid_to) WITH &&", deferrable: :immediate
```
Passing `deferrable: :immediate` checks the constraint after each statement,
but allows manually deferring the check using `SET CONSTRAINTS ALL DEFERRED`
within a transaction. This will cause the exclusion constraints to be checked after the transaction.
It's also possible to change the default behavior from an immediate check
(after the statement), to a deferred check (after the transaction):
```ruby
exclusion_constraint :users, "daterange(valid_from, valid_to) WITH &&", deferrable: :deferred
```
*Hiroyuki Ishii*
```ruby
add_unique_key :sections, [:position], deferrable: :deferred, name: "unique_section_position"
remove_unique_key :sections, name: "unique_section_position"
```
See PostgreSQL's [Unique Constraints](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS) documentation for more on unique constraints.
By default, unique constraints in PostgreSQL are checked after each statement.
This works for most use cases, but becomes a major limitation when replacing
records with unique column by using multiple statements.
An example of swapping unique columns between records:
```ruby
old_item = Item.create!(position: 1)
new_item = Item.create!(position: 2)
Item.transaction do
old_item.update!(position: 2)
new_item.update!(position: 1)
end
```
Using the default behavior, the transaction would fail when executing the
first `UPDATE` statement.
By passing the `:deferrable` option to the `add_unique_key` statement in
migrations, it's possible to defer this check.
```ruby
add_unique_key :items, [:position], deferrable: :immediate
```
Passing `deferrable: :immediate` does not change the behavior of the previous example,
but allows manually deferring the check using `SET CONSTRAINTS ALL DEFERRED` within a transaction.
This will cause the unique constraints to be checked after the transaction.
It's also possible to adjust the default behavior from an immediate
check (after the statement), to a deferred check (after the transaction):
```ruby
add_unique_key :items, [:position], deferrable: :deferred
```
PostgreSQL allows users to create a unique constraint on top of a unique
index that cannot be deferred. In this case, even if a user creates a deferrable
unique constraint, the existing unique index does not allow uniqueness to be violated
within the transaction. If you want to change an existing unique index to deferrable,
you need to execute `remove_index` before creating the deferrable unique constraint.
*Hiroyuki Ishii*
Now that we support a way to register custom configurations we need to
allow applications to find those configurations. This change adds a
`config_key` option to `configs_for` to find db configs where the
configuration_hash contains a particular key.
I have also removed the deprecation for `include_replicas` while I was
in here to make the method signature cleaner. I've updated the upgrade
guide with the removal.
Previously, applications could only have two types of database
configuration objects, `HashConfig` and `UrlConfig`. This meant that if
you wanted your config to implement custom methods you had to monkey
patch `DatabaseConfigurations` to take a custom class into account. This
PR allows applications to register a custom db_config handler so that
custom configs can respond to needed methods. This is especially useful
for tools like Vitess where we may want to indicate it's sharded, but
not give Rails direct access to that knowledge.
Using the following database.yml as an example:
```yaml
development:
  primary:
    database: my_db
  animals:
    database: my_animals_db
    vitess:
      sharded: 1
```
We can register a custom handler that will generate `VitessConfig`
objects instead of a `HashConfig` object in an initializer:
```ruby
ActiveRecord::DatabaseConfigurations.register_db_config_handler do |env_name, name, url, config|
  next unless config.key?(:vitess)
  VitessConfig.new(env_name, name, config)
end
```
and create the `VitessConfig` class:
```ruby
class VitessConfig < ActiveRecord::DatabaseConfigurations::UrlConfig
  def sharded?
    vitess_config.fetch("sharded", false)
  end

  private
    def vitess_config
      configuration_hash.fetch(:vitess)
    end
end
```
Now when the application is booted, the config with the `vitess` key
will generate a `VitessConfig` object where all others will generate a
`HashConfig`.
Things to keep in mind:
1) It is recommended but not required that these custom configs inherit
from Rails so you don't need to reimplement all the existing methods.
2) Applications must implement the condition under which their config
should be used; otherwise the first matching handler wins (so all of
their configs would be the custom one).
3) The procs must support 4 arguments to accommodate `UrlConfig`. I am
thinking of deprecating this and forcing the URL parsing to happen in
the `UrlConfig` directly.
4) There is one tiny behavior change: when there is a nil url key in
the config hash, we no longer merge it back into the configuration hash.
We also end up with a `HashConfig` instead of a `UrlConfig`. I think
this is fine because a `nil` URL is...useless.
Co-authored-by: John Crepezzi <john.crepezzi@gmail.com>
Add support for including non-key columns in
btree indexes for PostgreSQL with the INCLUDE
parameter.
Example:
```ruby
def change
  add_index :users, :email, include: [:id, :created_at]
end
```
Will result in:
```
CREATE INDEX index_users_on_email USING btree (email) INCLUDE (id, created_at)
```
The INCLUDE parameter is described in the PostgreSQL docs:
https://www.postgresql.org/docs/current/sql-createindex.html
One particularly annoying thing with the YAMLColumn type restriction
is that it is only checked on load.
This means that if your code inserts data with unsupported types, the
insert will work, but you'll then be unable to read the record, which
makes it hard to fix.
That's the reason why I implemented `YAML.safe_dump` (https://github.com/ruby/psych/pull/495).
It applies exactly the same restrictions as `safe_load`, which means
if you attempt to store non-permitted fields, it will fail on insertion
rather than on later reads, so you won't create an invalid record in your
database.
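A minimal plain-Ruby illustration of the `safe_dump` behavior (requires Psych 4+ / Ruby 3.1+; independent of Active Record):
```ruby
require "yaml"

# Permitted types dump fine.
YAML.safe_dump({ "title" => "Hello" }) # => "---\ntitle: Hello\n"

# A non-permitted type (a Symbol here) fails at dump time, mirroring the
# error safe_load would raise at read time.
begin
  YAML.safe_dump({ "status" => :draft })
rescue Psych::DisallowedClass => error
  error.class # => Psych::DisallowedClass
end
```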
It can sometimes happen that `sql` is encoded in UTF-8 but contains
some invalid binary data of some sort.
When this happens, `strip` ends up raising an EncodingError.
Overall this strip is quite wasteful, so we might as well
just skip it.
For databases and adapters which support them (currently PostgreSQL
and MySQL), options can be passed to `explain` to provide more
detailed query plan analysis.
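For instance (hypothetical `User` model; the available options depend on the database, PostgreSQL shown here):
```ruby
# Forwards the options to the database's EXPLAIN, e.g.
# EXPLAIN (ANALYZE, VERBOSE) SELECT ... on PostgreSQL.
User.where(id: 1).explain(:analyze, :verbose)
```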
`ActiveRecord.db_warnings_action` can be used to configure the
action to take when a query produces a warning. The warning can be
logged, raised, or trigger custom behaviour provided via a proc.
`ActiveRecord.db_warnings_ignore` allows applications to set an
allowlist of SQL warnings that should always be ignored, regardless
of the configured action.
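A hypothetical application configuration sketch (the option names come from this entry; the specific values and the proc's argument shape are assumptions):
```ruby
# config/application.rb (sketch)
# Raise in development so warnings surface early.
config.active_record.db_warnings_action = :raise

# Or trigger custom behaviour via a proc instead:
# config.active_record.db_warnings_action = ->(warning) do
#   Rails.logger.warn("db warning: #{warning.message}")
# end

# Warnings matching these entries are always ignored.
config.active_record.db_warnings_ignore = [/example warning to skip/]
```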
Co-authored-by: Paarth Madan <paarth.madan@shopify.com>
This patch tries to solve Heroku's new [PostgreSQL extension policy](https://devcenter.heroku.com/changelog-items/2446)
while keeping the migration and schema code idiomatic.
The PostgreSQL adapter method `enable_extension` now allows including a schema in the
extension name, for extensions installed in another schema.
Usage:
`enable_extension('other_schema.hstore')`
`enable_extension` can only work with a schema if that schema
already exists in the database.
`ActiveRecord::Base::normalizes` declares a normalization for one or
more attributes. The normalization is applied when the attribute is
assigned or updated, and the normalized value will be persisted to the
database. The normalization is also applied to the corresponding
keyword argument of finder methods. This allows a record to be created
and later queried using unnormalized values. For example:
```ruby
class User < ActiveRecord::Base
  normalizes :email, with: -> email { email.strip.downcase }
end

user = User.create(email: " CRUISE-CONTROL@EXAMPLE.COM\n")
user.email                  # => "cruise-control@example.com"

user = User.find_by(email: "\tCRUISE-CONTROL@EXAMPLE.COM ")
user.email                  # => "cruise-control@example.com"
user.email_before_type_cast # => "cruise-control@example.com"

User.exists?(email: "\tCRUISE-CONTROL@EXAMPLE.COM ")         # => true
User.exists?(["email = ?", "\tCRUISE-CONTROL@EXAMPLE.COM "]) # => false
```
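The normalization itself is an ordinary callable; as a plain-Ruby illustration of what the lambda above does to both stored and queried values:
```ruby
# The same lambda as in the example above, runnable without Rails.
normalize_email = -> email { email.strip.downcase }

normalize_email.call(" CRUISE-CONTROL@EXAMPLE.COM\n") # => "cruise-control@example.com"
normalize_email.call("\tCRUISE-CONTROL@EXAMPLE.COM ") # => "cruise-control@example.com"
```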
As of https://github.com/rails/rails/pull/46525, the behaviour around
before_committed! callbacks has changed: callbacks are run on every
enrolled record in a transaction, even multiple copies of the same record.
This is a significant change that apps should be able to opt into in order
to avoid unexpected issues.
Currently if you do this:
```ruby
config.active_record.query_log_tags = [:namespaced_controller]
```
A request that's processed by the `NameSpaced::UsersController` will log as `namespaced_controller='NameSpaced%3A%3AUsersController'`.
By contrast if you set the tag to `:controller` it would log as `controller='user'`, much nicer.
This PR makes the `:namespaced_controller` formatting more similar to `:controller` - it will now log as `namespaced_controller='name_spaced/users'`.
* Use storage/ instead of db/ for sqlite3 db files
db/ should be for configuration only, not data. This will make it easier to mount a single volume into a container for testing, development, and even sqlite3 in production.
When an index is present, using `lower()` prevents the index from
being used.
The index is typically present for columns with a uniqueness validation, and
`lower()` is added for `validates_uniqueness_of ..., case_sensitive: false`.
Conversely, if the index is defined with `lower()`, a query without
`lower()` wouldn't use the index either.
Setup:
```
CREATE EXTENSION citext;
CREATE TABLE citexts (cival citext);
INSERT INTO citexts (SELECT MD5(random()::text) FROM generate_series(1,1000000));
```
Without index:
```
EXPLAIN ANALYZE SELECT * from citexts WHERE cival = 'f00';
Gather (cost=1000.00..14542.43 rows=1 width=33) (actual time=165.923..169.065 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on citexts (cost=0.00..13542.33 rows=1 width=33) (actual time=158.218..158.218 rows=0 loops=3)
Filter: (cival = 'f00'::citext)
Rows Removed by Filter: 333333
Planning Time: 0.070 ms
Execution Time: 169.089 ms
Time: 169.466 ms
EXPLAIN ANALYZE SELECT * from citexts WHERE lower(cival) = lower('f00');
Gather (cost=1000.00..16084.00 rows=5000 width=33) (actual time=166.896..169.881 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on citexts (cost=0.00..14584.00 rows=2083 width=33) (actual time=157.348..157.349 rows=0 loops=3)
Filter: (lower((cival)::text) = 'f00'::text)
Rows Removed by Filter: 333333
Planning Time: 0.084 ms
Execution Time: 169.905 ms
Time: 170.338 ms
```
With index:
```
CREATE INDEX val_citexts ON citexts (cival);
EXPLAIN ANALYZE SELECT * from citexts WHERE cival = 'f00';
Index Only Scan using val_citexts on citexts (cost=0.42..4.44 rows=1 width=33) (actual time=0.051..0.052 rows=0 loops=1)
Index Cond: (cival = 'f00'::citext)
Heap Fetches: 0
Planning Time: 0.118 ms
Execution Time: 0.082 ms
Time: 0.616 ms
EXPLAIN ANALYZE SELECT * from citexts WHERE lower(cival) = lower('f00');
Gather (cost=1000.00..16084.00 rows=5000 width=33) (actual time=167.029..170.401 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on citexts (cost=0.00..14584.00 rows=2083 width=33) (actual time=157.180..157.181 rows=0 loops=3)
Filter: (lower((cival)::text) = 'f00'::text)
Rows Removed by Filter: 333333
Planning Time: 0.132 ms
Execution Time: 170.427 ms
Time: 170.946 ms
DROP INDEX val_citexts;
```
An index with `lower()` has the reverse effect: the query with
`lower()` performs better:
```
CREATE INDEX val_citexts ON citexts (lower(cival));
EXPLAIN ANALYZE SELECT * from citexts WHERE cival = 'f00';
Gather (cost=1000.00..14542.43 rows=1 width=33) (actual time=174.138..177.311 rows=0 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on citexts (cost=0.00..13542.33 rows=1 width=33) (actual time=165.983..165.984 rows=0 loops=3)
Filter: (cival = 'f00'::citext)
Rows Removed by Filter: 333333
Planning Time: 0.080 ms
Execution Time: 177.333 ms
Time: 177.701 ms
EXPLAIN ANALYZE SELECT * from citexts WHERE lower(cival) = lower('f00');
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on citexts (cost=187.18..7809.06 rows=5000 width=33) (actual time=0.021..0.022 rows=0 loops=1)
Recheck Cond: (lower((cival)::text) = 'f00'::text)
-> Bitmap Index Scan on lower_val_on_citexts (cost=0.00..185.93 rows=5000 width=0) (actual time=0.018..0.018 rows=0 loops=1)
Index Cond: (lower((cival)::text) = 'f00'::text)
Planning Time: 0.102 ms
Execution Time: 0.048 ms
(6 rows)
Time: 0.491 ms
```
This enables subclasses to sync database timezone changes without overriding `#raw_execute`,
which removes the need to redefine `#raw_execute` in the `Mysql2Adapter` and other
adapters subclassing `AbstractMysqlAdapter`.
Co-authored-by: Paarth Madan <paarth.madan@shopify.com>
composed_of values should be automatically frozen by Active Record.
This worked correctly when assigning a new value object via the writer,
but objects instantiated based on database columns were NOT frozen. The
fix consists of calling #dup and then #freeze on the cached value
object when it's added to the aggregation cache in #reader_method.
Additionally, values assigned via the accessor are duplicated and then
frozen to avoid caller confusion.
Previously, assignment would succeed but silently not write to the
database.
The changes to counter_cache are necessary because incrementing the
counter cache for a column calls []=. I investigated an approach to use
_write_attribute instead, however counter caches are expected to resolve
attribute aliases so write_attribute/[]= seems more correct.
Similarly, []= was replaced with _write_attribute in merge_target_lists
to skip the overridden []= and the primary key check. attribute_names
will already return custom primary keys, so the primary_key check in
write_attribute is not needed.
Co-authored-by: Alex Ghiculescu <alex@tanda.co>
Previously, encrypted attributes could be added to an application's
filter_parameters which would filter the attribute values from logs.
This commit makes the add_to_filter_parameters additionally add
encrypted attributes to records' filter_attributes, which allows them
to be filtered when models are inspected (such as in the console).
Adds the ability to unscope preload and eager_load associations.
This is the same functionality as `unscope(:includes)`, which
may choose to use eager loading or preloading based on the
overall query complexity. In cases where the type of association
loading is explicit (eager_load and preload), this allows for
unscoping the explicit association loading. This is helpful when
taking an existing query and performing an aggregate when
has_many associations have explicitly asked for eager_load or
preload. In the example below, unscoping the two has_many
associations removes the extra preload query and eager_load
join.
```ruby
query.eager_load!(:has_many_association1)
query.preload!(:has_many_association2)
query.unscope(:eager_load, :preload).group(:id).select(:id)
```
There are also cases where, depending on the aggregation
and select, the explicit preload fails when the select does not
include the preload association key. This issue led me to
write this fix.
Provides a wrapper for `new`, to provide feature parity with `create`'s
ability to create multiple records from an array of hashes, using the
same notation as the `build` method on associations.
- Associations can create multiple objects from an array of hashes
- Associations can build multiple objects from an array of hashes
- Classes can create multiple objects from an array of hashes
- Classes can build multiple objects from an array of hashes (<- You are here)
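A plain-Ruby sketch (not Active Record's implementation) of the wrapper's contract: a single attributes hash builds one object, an array of hashes builds many.
```ruby
# Hypothetical stand-in class illustrating the `build` shape.
class Model
  attr_reader :attributes

  def initialize(attributes = {})
    @attributes = attributes
  end

  def self.build(attributes = {}, &block)
    if attributes.is_a?(Array)
      # An array of hashes maps to an array of new instances.
      attributes.map { |attrs| build(attrs, &block) }
    else
      new(attributes).tap { |record| block&.call(record) }
    end
  end
end

users = Model.build([{ name: "Jamie" }, { name: "Morgan" }])
users.map { |user| user.attributes[:name] } # => ["Jamie", "Morgan"]
```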
When duplicating records, we usually want to create a new record and
don't want to keep the original lock_version. Just like timestamp
columns are not copied.
Co-authored-by: Seonggi Yang <seonggi.yang@gmail.com>
Co-authored-by: Ryohei UEDA <ueda@anipos.co.jp>
Renames `_primary_key_constraints_hash` private method to avoid implying
that it strictly represents a constraint based on the model primary key
Allows columns list used in query constraints to be configurable using
`ActiveRecord::Base.query_constraints` macro
If a user is calling `#execute` directly with a SQL string, they can
also specify whether they'd like the query to be retried in the case
of a connection-related exception. By nature, this means that applications
looking to opt into retries across all queries can patch `#execute` to
call super with `allow_retry: true` instead of needing to reimplement the
method entirely.
Forgot to update the changelog when we changed the default values for
`config.active_record.query_log_tags_format` from `:legacy` to
`:sqlcommenter`.
This change is a followup to: https://github.com/rails/rails/issues/46179
Prior to this commit, `ciphertext_for` returned the cleartext of values
that had not yet been encrypted, such as with an unpersisted record:
```ruby
Post.encrypts :body
post = Post.create!(body: "Hello")
post.ciphertext_for(:body)
# => "{\"p\":\"abc..."
post.body = "World"
post.ciphertext_for(:body)
# => "World"
```
This commit fixes `ciphertext_for` to always return the ciphertext of
encrypted attributes:
```ruby
Post.encrypts :body
post = Post.create!(body: "Hello")
post.ciphertext_for(:body)
# => "{\"p\":\"abc..."
post.body = "World"
post.ciphertext_for(:body)
# => "{\"p\":\"xyz..."
```
Fixed a bug where "group by" alias names could exceed the maximum
identifier length; when truncated, two different aliases could end up
sharing the same first half and collide.
Fixes #46285
Co-authored-by: Yusaku ONO <yono@users.noreply.github.com>
Prior to this commit, encrypted attributes that used column default
values appeared to be encrypted on create, but were not:
```ruby
Book.encrypts :name
book = Book.create!
book.name
# => "<untitled>"
book.name_before_type_cast
# => "{\"p\":\"abc..."
book.reload.name_before_type_cast
# => "<untitled>"
```
This commit ensures attributes with column default values are encrypted:
```ruby
Book.encrypts :name
book = Book.create!
book.name
# => "<untitled>"
book.name_before_type_cast
# => "{\"p\":\"abc..."
book.reload.name_before_type_cast
# => "{\"p\":\"abc..."
```
The existing "support encrypted attributes defined on columns with
default values" test in `encryptable_record_test.rb` shows the intended
behavior, but it was not failing without a `reload`.
In a multi-db world, delegating from `Base` to the handler doesn't make
much sense. Applications should know when they are dealing with a single
connection (Base.connection) or the handler which deals with multiple
connections. Delegating to the connection handler from Base breaks this
contract and makes behavior confusing. I think eventually a new object
will replace the handler altogether but for now I'd like to just
separate concerns to avoid confusion.
This commit adds a step to validate the options that are used when managing columns and tables in migrations.
The intention is to only validate options for new migrations that are added. Invalid options used in old migrations are silently ignored like they always have been.
Fixes #33284. Fixes #39230.
Co-authored-by: George Wambold <georgewambold@gmail.com>
Commit 37d1429ab1 introduced the DummyERB to avoid loading the environment when
running `rake -T`.
The DummyCompiler simply replaced all output from `<%=` with a fixed string and
removed everything else. This worked okay when it was used for YAML values.
When using `<%=` within a YAML key, it caused an error in the YAML parser,
making it impossible to use ERB as you would expect. For example a
`database.yml` file containing the following should be possible:
```yaml
development:
<% 5.times do |i| %>
  shard_<%= i %>:
    database: db/development_shard_<%= i %>.sqlite3
    adapter: sqlite3
<% end %>
```
Instead of using a broken ERB compiler we can temporarily use a
`Rails.application.config` that does not raise an error when configurations are
accessed which have not been set as described in #35468.
This change removes the `DummyCompiler` and uses the standard `ERB::Compiler`.
It introduces the `DummyConfig` which delegates all known configurations to the
real `Rails::Application::Configuration` instance and returns a dummy string for
everything else. This restores the full ERB capabilities without compromising on
speed when generating the rake tasks for multiple databases.
Deprecates `config.active_record.suppress_multiple_database_warning`.
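As a plain-Ruby sketch of the restored behavior, the standard ERB compiler can expand templates where ERB generates the YAML keys themselves (a smaller loop than the example above, for brevity):
```ruby
require "erb"
require "yaml"

# A database.yml-style template in which `<%= %>` appears inside a YAML
# key, which the old DummyCompiler could not handle.
template = <<~YAML
  development:
  <% 2.times do |i| %>
    shard_<%= i %>:
      database: db/development_shard_<%= i %>.sqlite3
      adapter: sqlite3
  <% end %>
YAML

config = YAML.safe_load(ERB.new(template).result)
config["development"].keys # => ["shard_0", "shard_1"]
```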
Prior to this change
```ruby
t.virtual :column_name, type: :datetime
```
would erroneously produce the same result as
```ruby
t.virtual :column_name, type: :datetime, precision: nil
```
This is because the code path for `virtual` skipped the default lookup:
- `t.datetime` is delegated to the `column` method
- `column` sees `type == :datetime` and sets a default `precision` if none was given
- `column` calls `new_column_definition` to add the result to `@columns_hash`
- `t.virtual` is delegated to the `column` method
- `column` sees `type == :virtual`, not `:datetime`, so skips the default lookup
- `column` calls `new_column_definition` to add the result to `@columns_hash`
- `new_column_definition` sees `type == :virtual`, so sets `type = options[:type]`
By moving the default lookup, we get consistent code paths:
- `t.datetime` is delegated to the `column` method
- `column` calls `new_column_definition` to add the result to `@columns_hash`
- `new_column_definition` sees `type == :datetime` and sets a default `precision` if none was given
- `t.virtual` is delegated to the `column` method
- `column` calls `new_column_definition` to add the result to `@columns_hash`
- `new_column_definition` sees `type == :virtual`, so sets `type = options[:type]`
- `new_column_definition` sees `type == :datetime` and sets a default `precision` if none was given
Problem:
Though the `MessageVerifier` that powers `signed_id` supports both
`expires_in` and `expires_at`, `signed_id` only supports `expires_in`.
Because of this, generating a signed id that expires at a certain time is
somewhat tedious. Imagine issuing a coupon that is valid only for a
day.
Solution:
Add `expires_at` option to `signed_id` to generate signed ids that
expire at the given time.
Building on the work done in #44576 and #44591, we extend the logic that automatically
reconnects broken db connections to take into account a timeout limit. This ensures
that retries + reconnects are slow-query aware, and that we don't retry queries
if a given amount of time has already passed since the query was first tried.
This value will default to 5 seconds, but can be adjusted via the `connection_retry_timeout`
config.
This avoids problems when complex data structures are mutated _after_
being handed to Active Record for processing, for example false hits in
the query cache.
Fixes #46044
According to the MySQL documentation, database connections default to
ssl-mode=PREFERRED. But PREFERRED doesn't verify the server's identity:
The default setting, --ssl-mode=PREFERRED, produces an encrypted
connection if the other default settings are unchanged. However, to
help prevent sophisticated man-in-the-middle attacks, it is
important for the client to verify the server’s identity. The
settings --ssl-mode=VERIFY_CA and --ssl-mode=VERIFY_IDENTITY are a
better choice than the default setting to help prevent this type of
attack. VERIFY_CA makes the client check that the server’s
certificate is valid. VERIFY_IDENTITY makes the client check that
the server’s certificate is valid, and also makes the client check
that the host name the client is using matches the identity in the
server’s certificate.
https://dev.mysql.com/doc/refman/8.0/en/using-encrypted-connections.html
However both the Rails::DBConsole command and the MySQLDatabaseTasks
ignore the ssl-mode option, making the connection fallback to PREFERRED.
Adding ssl-mode to the forwarded options makes sure the expected mode is
passed to the connection.
Followup to #45908 to match the same behavior as SchemaMigration
Previously, InternalMetadata inherited from ActiveRecord::Base. This is
problematic for multiple databases and resulted in building the code in
AbstractAdapter that was previously there. Rather than hacking around
the fact that InternalMetadata inherits from Base, this PR makes
InternalMetadata an independent object. Then each connection can get its
own InternalMetadata object. This change required defining the methods
that InternalMetadata was depending on ActiveRecord::Base for (e.g.
create!). I reimplemented only the methods called by the framework, as
this class is no-doc'ed, so it doesn't need to implement anything beyond
that. Now each connection gets its own InternalMetadata object which
stores the connection.
This change also required adding a NullInternalMetadata class for cases
when we don't have a connection yet but still need to copy migrations
from the MigrationContext. Ultimately I think this is a little weird -
we need to do so much work to pick up a set of files? Maybe something to
explore in the future.
Aside from removing the hack we added back in #36439 this change will
enable my work to stop clobbering and depending directly on
Base.connection in the rake tasks. While working on this I discovered
that we always have a ActiveRecord::InternalMetadata because the
connection is always on Base in the rake tasks. This will free us up
to do less hacky stuff in the migrations and tasks.
Both schema migration and internal metadata are blockers to removing
`Base.connection` and `Base.establish_connection` from rake tasks, work
that is required to drop the reliance on `Base.connection` which will
enable more robust (and correct) sharding behavior in Rails.
The issue fixed by the commit that introduced that entry only existed
in the main branch, so it isn't really a released change worthy of a
CHANGELOG entry.
Previously, SchemaMigration inherited from ActiveRecord::Base. This is
problematic for multiple databases and resulted in building the code in
AbstractAdapter that was previously there. Rather than hacking around
the fact that SchemaMigration inherits from Base, this PR makes
SchemaMigration an independent object. Then each connection can get its
own SchemaMigration object. This change required defining the methods
that SchemaMigration was depending on ActiveRecord::Base for (e.g.
create!). I reimplemented only the methods called by the framework, as
this class is no-doc'ed, so it doesn't need to implement anything beyond
that. Now each connection gets its own SchemaMigration object which
stores the connection. I also decided to update the method names (create
-> create_version, delete_by -> delete_version, delete_all ->
delete_all_versions) to be more explicit.
This change also required adding a NullSchemaMigration class for cases
when we don't have a connection yet but still need to copy migrations
from the MigrationContext. Ultimately I think this is a little weird -
we need to do so much work to pick up a set of files? Maybe something to
explore in the future.
Aside from removing the hack we added back in #36439 this change will
enable my work to stop clobbering and depending directly on
Base.connection in the rake tasks. While working on this I discovered
that we always have a `ActiveRecord::SchemaMigration` because the
connection is always on `Base` in the rake tasks. This will free us up
to do less hacky stuff in the migrations and tasks.
Following on #45924 I realized that `all_connection_pools` and
`connection_pool_list` don't make much sense as separate methods and
should follow the same deprecation as the other methods on the handler
here. So this PR deprecates `all_connection_pools` in favor of
`connection_pool_list` with an explicit argument of the role or `:all`.
Passing `nil` will throw a deprecation warning to get applications to
be explicit about behavior they expect.
Previously when I implemented multiple database roles in Rails there
were two handlers so it made sense for the methods
`active_connections?`, `clear_active_connections!`,
`clear_reloadable_connections!`, `clear_all_connections!`, and
`flush_idle_connections!` to only operate on the current (or passed)
role and not all pools regardless of role. When I removed this and moved
all the pools to the handler maintained by a pool manager, I left these
methods as-is to preserve the original behavior.
This made sense because I thought these methods were only called by
applications and not called by Rails. I realized yesterday that some of
these methods (`flush_idle_connections!`, `clear_active_connections!`,
and `clear_reloadable_connections!`) are called on boot by the
Active Record railtie.
Unfortunately this means that applications using multiple databases
aren't getting connections flushed or cleared on boot for any connection
but the writing ones.
The change here continues existing behavior if a role like reading is
passed in directly. Otherwise, if the role is `nil` (which is the new
default) we fall back to all connections and issue a deprecation
warning. This will be the new default behavior in the future. In order
to easily allow turning off the deprecation warning I've added an `:all`
argument that will use all pools but no warning. The deprecation warning
will only fire if there is more than one role in the pool manager,
otherwise we assume prior behavior.
This bug would have only affected applications with more than one role
and only when these methods are called outside the context of a
`connected_to` block. These methods no longer consider the set
`current_role` and applications need to be explicit if they don't want
these methods to operate on all pools.
Add the ability to use a hash with columns and aliases inside the #select method.
```ruby
Post
  .joins(:comments)
  .select(
    posts: { id: :post_id, title: :post_title },
    comments: { id: :comment_id, body: :comment_body }
  )
```
instead of
```ruby
Post
  .joins(:comments)
  .select(
    "posts.id as post_id, posts.title as post_title,
     comments.id as comment_id, comments.body as comment_body"
  )
```
Co-authored-by: Josef Šimánek <193936+simi@users.noreply.github.com>
Co-authored-by: Jean byroot Boussier <19192189+casperisfine@users.noreply.github.com>
When source and target classes have a different set of attributes adapts
attributes such that the extra attributes from target are added.
Fixes#41195
Co-authored-by: SampsonCrowley <sampsonsprojects@gmail.com>
Co-authored-by: Jonathan Hefner <jonathan@hefner.pro>
The `remove_check_constraint` method now accepts an `if_exists` option. If set
to true an error won't be raised if the check constraint doesn't exist.
This commit is a combination of PR #45726 and #45718 with some
additional changes to improve wording, testing, and implementation.
Usage:
```ruby
remove_check_constraint :products, name: "price_check", if_exists: true
```
Fixes#45634
Co-authored-by: Margaret Parsa <mparsa@actbluetech.com>
Co-authored-by: Aditya Bhutani <adi_bhutani16@yahoo.in>
Using `create_or_find_by` in codepaths where most of the time
the record already exists is wasteful on several accounts.
`create_or_find_by` should be the method to use when most of the
time the record doesn't already exist, not a race-condition-safe
version of `find_or_create_by`.
To make `find_or_create_by` race-condition free, we can search for
the record again if the creation failed because of a uniqueness
constraint.
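A toy in-memory stand-in (not Active Record's code) showing the strategy: find first, create on a miss, and if creation loses the race to a uniqueness constraint, look the record up again.
```ruby
class UniqueViolation < StandardError; end

class Table
  def initialize
    @rows = {}
  end

  def find_by(key)
    @rows[key]
  end

  def create!(key, value)
    raise UniqueViolation if @rows.key?(key)
    @rows[key] = value
  end

  def find_or_create_by(key, value)
    find_by(key) || create!(key, value)
  rescue UniqueViolation
    # Another writer inserted the row between our find and our create;
    # its row is the one we want.
    find_by(key)
  end
end

table = Table.new
table.find_or_create_by(:email, "a@example.com") # => "a@example.com"
table.find_or_create_by(:email, "other")         # => "a@example.com"
```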
Co-Authored-By: Alex Kitchens <alexcameron98@gmail.com>
Fix: https://github.com/rails/rails/issues/45585
There's no benefit in serializing it as HWIA; it requires
allowing that type for YAML `safe_load` and takes more space.
We can cast it back to a regular hash before serialization.
https://github.com/rails/rails/pull/41395 added support for the `timestamptz` type on the Postgres adapter.
As we found [here](https://github.com/rails/rails/pull/41084#issuecomment-1056430921) this causes issues because in some scenarios the new type is not considered a time zone aware attribute, meaning values of this type in the DB are presented as a `Time`, not an `ActiveSupport::TimeWithZone`.
This PR fixes that by ensuring that `timestamptz` is always a time zone aware type, for Postgres users.