This fixes a bug where the `foreign_key` and the `foreign_type` were
separated into different table conditions if a polymorphic association
has a scope that joins other tables.
The CI failure for `test_errors_for_bigint_fks_on_integer_pk_table` is
due to a poor regex that extracts all ``` `(\w+)` ``` like parts from
the message (`:foreign_key` should be `"old_car_id"`, but was
`"engines"`):
https://travis-ci.org/rails/rails/jobs/494123455#L1703
I've made the regex stricter and exercised the mismatched foreign key
tests more.
Fixes #35294.
This deprecates using class level querying methods if the receiver
scope is regarded as leaked, since #32380 and #35186 may silently leak
information when people upgrade their apps.
We need a deprecation first before making those changes.
This reverts commit b67d5c6ded, reversing
changes made to 2e018361c7.
Reason: #35186 may silently leak information when people upgrade the
app.
We need a deprecation first before making this change.
In chat, Sam Saffron asked how to use the setter now that
configurations is no longer a hash and you can no longer do
`AR::Base.configurations["test"]=`.
Technically you can do `ActiveRecord::Base.configurations = { the hash
}`, but I realized the old way throws an error and is unintuitive.
To aid in the transition from hashes to objects, this PR makes a few
changes:
1) Re-adds a deprecated hash setter `[]=` that will add a new hash to
the configurations list OR replace an existing hash if that environment
is already present. This won't be supported in future Rails versions,
but a good error is important in the meantime.
2) Changes the methods we decided to support for hash conversion to
throw deprecation warnings, and the methods we don't support to raise.
3) Refactors the setter/getter hash deprecation warning messages and
rewrites them.
Getter message:
```
DEPRECATION WARNING: `ActiveRecord::Base.configurations` no longer
returns a hash. Methods that act on the hash like `values` are
deprecated and will be removed in Rails 6.1. Use the `configs_for`
method to collect and iterate over the database configurations.
```
Setter message:
```
DEPRECATION WARNING: Setting `ActiveRecord::Base.configurations` with
`[]=` is deprecated. Use `ActiveRecord::Base.configurations=` directly
to set the configurations instead.
```
4) Rewrites the legacy configurations test file to test all the public
methods in the `DatabaseConfigurations` class.
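For illustration, the two paths look like this (a minimal sketch; the
config hash contents are assumed):

```ruby
# Deprecated: per-environment hash assignment, kept only so a good
# error/deprecation can be shown
ActiveRecord::Base.configurations["test"] = {
  "adapter" => "sqlite3", "database" => "db/test.sqlite3"
}

# Supported: assign the whole configurations hash directly
ActiveRecord::Base.configurations = {
  "test" => { "adapter" => "sqlite3", "database" => "db/test.sqlite3" }
}
```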
This reverts commit eec3e28a1a, reversing
changes made to 5588fb4802.
Reason: Marking as loaded without actually loading is too greedy an
optimization. See #35239 for more context.
Closes #35239.
[Edouard CHIN & Ryuta Kamizono]
Currently custom attributes are always wrongly qualified by the table
name in the generated SQL, even if the table doesn't have the named
column, which causes an invalid SQL error.
Custom attributes should only be qualified if the table has a column
with the same name.
The implicit delegation in the migration class gets logged. That is not
intended in the migration compatibility layer, so it should be avoided.
Fixes #35224.
I implemented foreign key creation in `create_table` for SQLite3 in
#24743. This follows up #24743 to implement `add_foreign_key` and
`remove_foreign_key`.
Unfortunately SQLite3 has one limitation:
`PRAGMA foreign_key_list(table-name)` doesn't include the constraint
name, so we can't implement finding/removing a foreign key by name for
now.
Fixes #35207.
Closes #31343.
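A minimal usage sketch (table names assumed for illustration):

```ruby
# Now supported on SQLite3 in a migration:
add_foreign_key :comments, :posts

# Removal works by table/column; removing by constraint name isn't
# possible because PRAGMA foreign_key_list has no name column.
remove_foreign_key :comments, :posts
```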
The docs for `ActiveRecord::Associations::CollectionProxy` describe
`ActiveRecord::Associations::Association`. I've moved them to
`Association` and rewritten `CollectionProxy`'s docs to be more
applicable to what the class actually does.
[ci skip]
`Association#target=` invokes `loaded!`, so we no longer need to call
`loaded!` explicitly.
Since `Preloader` is private API, we don't guarantee that it behaves
like older versions when used directly. But this refactoring
fortunately also fixes the `Preloader` compatibility issue #35195.
Closes #35195.
I've found a few places in the Rails code base where I think it makes
sense to calculate elapsed time more precisely by using
`Concurrent.monotonic_time`:
- Fix calculation of elapsed time in `ActiveSupport::Cache::MemoryStore#prune`
- Fix calculation of elapsed time in
`ActiveRecord::ConnectionAdapters::ConnectionPool::Queue#wait_poll`
- Fix calculation of elapsed time in
`ActiveRecord::ConnectionAdapters::ConnectionPool#attempt_to_checkout_all_existing_connections`
- Fix calculation of elapsed time in `ActiveRecord::ConnectionAdapters::Mysql2Adapter#explain`
See
https://docs.ruby-lang.org/en/2.5.0/Process.html#method-c-clock_gettime
https://blog.dnsimple.com/2018/03/elapsed-time-with-ruby-the-right-way
Related to 7c4542146f
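The pattern being applied looks roughly like this (a sketch, not the
exact Rails diff):

```ruby
require "concurrent"

# Time.now can jump around due to NTP adjustments; the monotonic clock
# only moves forward, so it's the right tool for measuring elapsed time.
started = Concurrent.monotonic_time
sleep 0.1 # stand-in for the timed operation
elapsed = Concurrent.monotonic_time - started
```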
The `distinct` affects (reduces) the rows of the result, so it is an
important part of the query when both `distinct` and `offset` are
given.
Replacing the SELECT clause with `1 AS one` and removing `distinct` and
`order` is just an optimization for `exists?`; we should not apply the
optimization in that case.
Fixes #35191.
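For example (a hypothetical relation; any column with duplicate values
shows the same effect):

```ruby
# DISTINCT changes how many rows the OFFSET skips, so it must be kept:
Post.select(:author_id).distinct.offset(10).exists?
# Checks whether there is an 11th *distinct* author_id, not an 11th row.
```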
`relation.create` populates scope attributes to the new record via
`scoping`; that is necessary to assign the scope attributes to the
record and to find the STI subclass from the scope attributes.
But the effect of `scoping` is class global, which caused the undesired
behavior of polluting all class level querying methods in the
initialization block and callbacks (`after_initialize`,
`before_validation`, `before_save`, etc.), which are user provided
code.
To avoid the leaking scope issue, restore the original current scope
before the initialization block and callbacks are invoked.
Fixes #9894.
Fixes #17577.
Closes #31526.
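A sketch of the leak (hypothetical model and callback):

```ruby
class Topic < ActiveRecord::Base
  after_initialize do
    # Before this fix, the class level query below was silently scoped
    # by the creating relation's `approved: true` condition.
    self.position ||= Topic.count + 1
  end
end

Topic.where(approved: true).create!(title: "Hello")
```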
This is more consistent with `Resolver`, which has `build` called on
it. This allows using a Proc instead of a class, which could be nice if
you need to vary the switching logic based on the request in a more
ad-hoc way (i.e. check if it is an API request).
This commit speeds up rendering partials by caching the variable name
calculation on the template. The variable name is based on the "virtual
path" used for looking up the template. The same virtual path
information lives on the template, so we can just ask the cached
template object for the variable.
This benchmark takes a couple of files, so I'll cat them below:
```
[aaron@TC ~/g/r/actionview (speed-up-partials)]$ cat render_benchmark.rb
require "benchmark/ips"
require "action_view"
require "action_pack"
require "action_controller"
class TestController < ActionController::Base
end
TestController.view_paths = [File.expand_path("test/benchmarks")]
controller_view = TestController.new.view_context
result = Benchmark.ips do |x|
  x.report("render") do
    controller_view.render("many_partials")
  end
end
[aaron@TC ~/g/r/actionview (speed-up-partials)]$ cat test/benchmarks/test/_many_partials.html.erb
Looping:
<ul>
  <% 100.times do |i| %>
    <%= render partial: "list_item", locals: { i: i } %>
  <% end %>
</ul>
[aaron@TC ~/g/r/actionview (speed-up-partials)]$ cat test/benchmarks/test/_list_item.html.erb
<li>Number: <%= i %></li>
```
Benchmark results (master):
```
[aaron@TC ~/g/r/actionview (master)]$ be ruby render_benchmark.rb
Warming up --------------------------------------
render 41.000 i/100ms
Calculating -------------------------------------
render 424.269 (± 3.5%) i/s - 2.132k in 5.031455s
```
Benchmark results (this branch):
```
[aaron@TC ~/g/r/actionview (speed-up-partials)]$ be ruby render_benchmark.rb
Warming up --------------------------------------
render 50.000 i/100ms
Calculating -------------------------------------
render 521.862 (± 3.8%) i/s - 2.650k in 5.085885s
```
Active Record uses `scoping` to delegate to named scopes from relations
in order to propagate the chaining source scope. That was needed to
restore the source scope in named scopes, but it caused the undesired
behavior of polluting all class level querying methods.
Example:
```ruby
class Topic < ActiveRecord::Base
  scope :toplevel, -> { where(parent_id: nil) }
  scope :children, -> { where.not(parent_id: nil) }
  scope :has_children, -> { where(id: Topic.children.select(:parent_id)) }
end
# Works as expected.
Topic.toplevel.where(id: Topic.children.select(:parent_id))
# Doesn't work due to leaking `toplevel` to `Topic.children`.
Topic.toplevel.has_children
```
Since #29301, the receiver in named scopes has changed from the model
class to the chaining source scope, so polluting class level querying
methods is no longer required for that purpose.
Fixes #14003.
With this benchmark:
require "bundler/setup"
require "active_record"
require "benchmark/ips"
# This connection will do for database-independent bug reports.
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
ActiveRecord::Schema.define do
create_table :posts, force: true do |t|
end
end
class Post < ActiveRecord::Base
end
new_post = Post.new
Benchmark.ips do |b|
b.report("present?") do
new_post.present?
end
b.report("blank?") do
new_post.blank?
end
end
Before:
```
Warming up --------------------------------------
            present?    52.147k i/100ms
              blank?    53.077k i/100ms
Calculating -------------------------------------
            present?    580.184k (±21.8%) i/s - 2.555M in 5.427085s
              blank?    601.537k (± 9.2%) i/s - 2.972M in 5.003503s
```
After:
```
Warming up --------------------------------------
            present?   378.235k i/100ms
              blank?   375.476k i/100ms
Calculating -------------------------------------
            present?     17.381M (± 7.5%) i/s - 86.238M in 5.001815s
              blank?     17.877M (± 6.4%) i/s - 88.988M in 5.004634s
```
This improvement is mostly because those methods were hitting
`method_missing` at a lot of levels in order to return the value.
To avoid all this stack walking, we short-circuit those methods.
Closes #35059.
This change ensures that all query caches are cleared across all
connections per handler for the current thread, so if you write on one
connection, the read will have its query cache cleared.
When I wrote the `connected_to` and `connects_to` APIs, I wrote them
with the idea in mind that it didn't really matter what the
handlers/roles were called, as long as those connecting to the roles
knew which one wrote and which one read.
With the introduction of the middleware, Rails begins to assume it's
`writing` and `reading`, and there's no room for other roles. At GitHub
we've been using this method for a long time, so we have a ton of
legacy code that uses the different handler names `default` and
`readonly`. We could rename all our code, but I think this is better
for a few reasons:
- Legacy apps that have been using multiple databases for a long time
can have an easier time switching.
- If we later find this to cause more issues than it's worth we can
easily deprecate.
- We won't force old apps to rewrite the resolver middleware just to use
a different handler.
Adding the writing_role/reading_role required that I move the code that
creates the first handler for writing to the railtie. If I didn't move
this, the core class would assign the handler before I was able to
assign a new one in my configuration, and I'd end up with 3 handlers
instead of 2.
Right now we only support one option, the delay. However, I can see us
supporting other options in the future.
This PR refactors the options to get passed into the resolver, so
whether you're using the middleware or the config options, you can pass
options to the resolver. This will also make it easy to add new options
in the future.
We sometimes display simple examples of additional parameters that can
be supplied to table-wise methods like these, and I found it
particularly difficult to figure out which options `t.foreign_key`
accepts without drilling very deep into the specific SchemaStatements
docs.
Since it's relatively common to create foreign keys with custom column
names or primary keys, it seems like this should help quite a few
people.
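For instance, the options being documented cover cases like this
(schema assumed for illustration):

```ruby
create_table :comments do |t|
  t.references :author
  # Point the foreign key at a custom column and primary key.
  t.foreign_key :users, column: :author_id, primary_key: :id
end
```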
[ci skip]
We need to update using the timestamp from the end of the request, not
the start. For example, if a request spends 5+ seconds writing, we
still want to wait another 5 seconds for replication lag.
Since we now run the update after we yield, we need to use `ensure` to
make sure we update the timestamp even if there is an exception.
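Structurally, that is (a minimal sketch; the method name and session key
are hypothetical):

```ruby
def write_to_primary(session)
  yield
ensure
  # Stamp the time at the *end* of the request, even if the block raised.
  session[:last_write] = Time.now.to_i
end
```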
The following PR adds behavior to Rails to allow an application to
automatically switch its connection from the primary to the replica.
A request will be sent to the replica if:
* The request is a read request (`GET` or `HEAD`)
* AND it's been 2 seconds since the last write to the database (because
we don't want to send a user to a replica if the write hasn't made it
to the replica yet)
A request will be sent to the primary if:
* It's not a GET/HEAD request (i.e. it is a POST, PATCH, etc.)
* It's been less than 2 seconds since the last write to the database
The implementation that decides when to switch reads (the 2 seconds) is
"safe" to use in production, but not recommended without adequate
testing with your infrastructure. At GitHub, in addition to a 5 second
delay, we have a circuit breaker that checks the replication delay and
will send the query to a replica before the 5 seconds have passed.
This is specific to our application and therefore not something Rails
should be doing for you. You'll need to test and implement more robust
handling of when to switch based on your infrastructure. The auto
switcher in Rails is meant to be a basic implementation / API that acts
as a guide for how to implement autoswitching.
The implementation here is meant to be strict enough that you know how
to implement your own resolver and operations classes, but flexible
enough that we're not telling you how to do it.
The middleware is not included automatically and can be installed in
your application with the classes you want to use for the resolver and
operations passed in. If you don't pass any classes into the
middleware, the Rails default Resolver and Session classes will be
used.
The Resolver decides what parameters define when to switch; Operations
sets timestamps for the Resolver to read from. For example, you may
want to use cookies instead of a session, so you'd implement a
Resolver::Cookies class and pass that into the middleware via
configuration options.
```
config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver = MyResolver
config.active_record.database_operations = MyResolver::MyCookies
```
Your classes can inherit from the existing classes and reimplement the
methods (or implement more methods) that you need to do the switching.
You only need to implement methods that you want to change. For
example, if you wanted to set the session token for the last read from
a replica, you would reimplement the `read_from_replica` method in your
resolver class and implement a method that updates a new timestamp in
your operations class.
Previously, if the `url` key in a config hash was nil, we'd ignore the
configuration as invalid. This can happen when you're relying on a
`DATABASE_URL` in the env and that is not set in the environment.
```
production:
  <<: *default
  url: <%= ENV['DATABASE_URL'] %>
```
This PR fixes that case by checking if there is a `url` key in the
config, instead of checking if the `url` is not nil in the config.
In addition to changing the conditional, we then need to build a url
hash to merge with the original hash in the `UrlConfig` object.
Fixes #35091.
The transaction used to restore fixtures is an implementation detail
that should be abstracted away. Ideally a test should behave the same
whether or not transactional fixtures are enabled.
However, since transactions have been made lazy, the fixture
transaction started leaking into test cases. e.g. consider the
following (oversimplified) test:
```ruby
class SQLSubscriber
  attr_accessor :sql

  def initialize
    @sql = []
  end

  def call(*, event)
    sql << event[:sql]
  end
end

subscriber = SQLSubscriber.new
ActiveSupport::Notifications.subscribe("sql.active_record", subscriber)

User.connection.execute('SELECT 1', 'Generic name')

assert_equal ['SELECT 1'], subscriber.sql
```
On Rails 6 this starts to break because the `sql` array will be
`['BEGIN', 'SELECT 1']`.
Several things are wrong here:
- That transaction is not generated by the tested code, so it shouldn't
be visible.
- The transaction is not even closed yet, which again doesn't reflect
reality.
In MySQL, the text column size is 65,535 bytes by default (1 GiB in
PostgreSQL). That is sometimes too short when people want to use a text
column, so they change the text size to mediumtext (16 MiB) or longtext
(4 GiB) by giving the `limit` option.
Unlike MySQL, PostgreSQL doesn't allow the `limit` option for a text
column (it raises ERROR: type modifier is not allowed for type "text"),
so `limit: 4294967295` (longtext) couldn't be used in Action Text.
I've allowed changing text and blob size without giving the `limit`
option, which prevents that migration failure on PostgreSQL.
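Usage would look something like this (a sketch with a hypothetical
table; the `size` option replaces the byte-level `limit`):

```ruby
create_table :rich_texts do |t|
  # MEDIUMTEXT / LONGBLOB on MySQL; plain TEXT / binary types on other
  # adapters, where a byte limit is not allowed.
  t.text :body, size: :medium
  t.binary :raw, size: :long
end
```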
While working on another feature for multiple databases
(auto-switching), I observed that in development the first request
won't autoload the application record connection for the primary
database and may not yet know about the replica connection.
In my test application, this caused the application to throw an error
if I tried to send the first request to the replica before the replica
was connected. This wouldn't be an issue in production because the
application is preloaded.
In order to fix this, I decided to leave the original error message and
delete the new error message. I updated the original error message to
include the `role` to make it a bit clearer that the connection isn't
established for that particular role.
The error now reads:
```
No connection pool with 'primary' found for the 'reading' role.
```
A single database application will continue using the original error
message:
```
No connection pool with 'primary' found.
```
Allows aliasing, predications, ordering, and various other functions on `And` and `Case` nodes. This brings them in line with other nodes like `Binary` and `Unary`.
Currently `conn.column_exists?("testings", "created_at", "datetime")`
returns false even if the table has the `created_at` column.
The reason is that `column.type` is a symbol but the given `type` is
not normalized to a symbol, unlike `column_name`; that is surprising
behavior to me.
I've improved this to normalize the value before comparison.
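After the fix, both spellings behave the same (table name from the
example above):

```ruby
conn.column_exists?("testings", "created_at", :datetime)  # => true
conn.column_exists?("testings", "created_at", "datetime") # => true (was false)
```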
Since #31230, `change_column` is executed as a bulk statement.
That caused the column default to be type cast incorrectly, by looking
up the type before the change instead of the type after the change.
In a bulk statement, we can't use `change_column_default_for_alter` if
the statement changes the column type.
This fixes the type casting to use the constructed target sql_type.
Fixes #34938.
Since 31ffbf8d, finder methods no longer raise `RangeError`. So
`StatementCache#execute` is the only place that raises the exception
for finder queries.
`StatementCache` is used for simple equality queries in the codebase.
This means that if `StatementCache#execute` raises `RangeError`, the
result can always be regarded as empty.
So `StatementCache#execute` now just returns nil in that range error
case, and the caller side treats that as empty; then we can avoid
catching the exception in many places.
Currently several queries cannot return the correct result due to
incorrect `RangeError` handling.
First example:
```ruby
assert_equal true, Topic.where(id: [1, 9223372036854775808]).exists?
assert_equal true, Topic.where.not(id: 9223372036854775808).exists?
```
The first example should obviously be true, but currently it returns
false.
Second example:
```ruby
assert_equal topics(:first), Topic.where(id: 1..9223372036854775808).find(1)
```
The second example should also return the object, but currently it
raises `RecordNotFound`.
As the examples show, assuming an empty result for queries that include
a large number is not always correct.
Therefore, this change handles `RangeError` by generating executable
SQL, instead of raising `RangeError` to users, so that the correct
result is always returned. With this change, `RangeError` is no longer
raised to users.
When we added support for multiple databases through a 3-tiered config
and configuration objects, this error message got a bit convoluted.
Previously, if you had an application with a missing configuration and
multiple databases, the error message would look like this:
```
'doesnexist' database is not configured. Available: development,
development, test, test, production, production
(ActiveRecord::AdapterNotSpecified)
```
That's not very descriptive, since it duplicates the environments
(because there are multiple databases per environment for this
application).
To fix this, I've constructed a more readable error message, which now
reads like this if you have a multi db app:
```
The `doesntexist` database is not configured for the `production`
environment. (ActiveRecord::AdapterNotSpecified)
Available databases configurations are:
development: primary, primary_readonly
test: primary, primary_readonly
production: primary, primary_readonly
```
And like this if you have a single db app:
```
The `doesntexist` database is not configured for the `production`
environment. (ActiveRecord::AdapterNotSpecified)
Available databases configurations are:
development
test
```
This makes the error message more readable and presents the user with
all available options for the database connections.
We need this in order to be able to add this migration for users that
use Active Storage while updating their apps from Rails 5.2 to Rails
6.0.
Related to #33405.
`rake app:update` should update Active Storage: it should execute
`rake active_storage:update` if Active Storage is used in the app that
is being updated, adding the new Active Storage migrations to users'
apps as they update Rails.
Context: https://github.com/rails/rails/pull/33405#discussion_r204239399
Also, see a related discussion in the Campfire:
https://3.basecamp.com/3076981/buckets/24956/chats/12416418@1236713081
This PR addresses the issue described in #28025. With the `dependent:
:nullify` strategy, only the foreign key of the relation is nullified.
However, on polymorphic associations the `*_type` column is not
nullified, leaving the record with a NULL `*_id` while the `*_type`
column is still present.
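A sketch of the scenario with hypothetical models:

```ruby
class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  has_many :comments, as: :commentable, dependent: :nullify
end

post = Post.create!
post.comments.create!
post.destroy
# Before: comments kept commentable_type = "Post" with a NULL commentable_id.
# After:  both commentable_id and commentable_type are nullified.
```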
This attr writer was introduced in 7db90aa, but its usage was already
removed in bd2f5c0, before v3.2.0.rc1 was released.
If we'd like to customize the visitor in the connection,
`arel_visitor`, which is implemented in all adapters (mysql2,
postgresql, sqlite3, oracle-enhanced, sqlserver), could be used for
that purpose. #23515
This commit adds support for endless ranges, e.g. `(1..)`, that were
added in Ruby 2.6. They're functionally equivalent to explicitly
specifying `Float::INFINITY` as the end of the range.
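So, for example, these two generate the same query (model assumed from
the other examples in this log):

```ruby
Topic.where(id: 1..)                # Ruby 2.6 endless range
Topic.where(id: 1..Float::INFINITY) # equivalent explicit form
```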
* Enable the `Lint/UselessAssignment` cop to avoid unused variable
warnings, since we've frequently addressed the "assigned but unused
variable" warning.
370537de053040446cec5ed618e19276ebafe594
Also, the cop found the unused args in c1b14ad, which raise no Ruby
warnings; that shows the value of the cop.
Since Ruby 2.6.0, `NilClass#to_d` returns `BigDecimal` 0.0; this breaks
`average`'s compatibility with prior Ruby versions. This patch makes
`average` return `nil` in all Ruby versions when there are no rows.
`nil`, `Numeric`, and `String` are the most basic objects passed to
`type_cast`. But now each `when *types_which_need_no_typecasting`
evaluation allocates two extra arrays, which makes `type_cast` slower.
`types_which_need_no_typecasting` was introduced in #15351, but the
method isn't useful (it has never been used by any adapter) since all
adapters (sqlite3, mysql2, postgresql, oracle-enhanced, sqlserver)
still override `_type_cast`.
Just expanding the method makes `type_cast` 2x faster.
```ruby
require "benchmark/ips"

module ActiveRecord
  module TypeCastFast
    def type_cast_fast(value, column = nil)
      value = id_value_for_database(value) if value.is_a?(Base)

      if column
        value = type_cast_from_column(column, value)
      end

      _type_cast_fast(value)
    rescue TypeError
      to_type = column ? " to #{column.type}" : ""
      raise TypeError, "can't cast #{value.class}#{to_type}"
    end

    private
      def _type_cast_fast(value)
        case value
        when Symbol, ActiveSupport::Multibyte::Chars, Type::Binary::Data
          value.to_s
        when true then unquoted_true
        when false then unquoted_false
        # BigDecimals need to be put in a non-normalized form and quoted.
        when BigDecimal then value.to_s("F")
        when nil, Numeric, String then value
        when Type::Time::Value then quoted_time(value)
        when Date, Time then quoted_date(value)
        else raise TypeError
        end
      end
  end
end

conn = ActiveRecord::Base.connection
conn.extend ActiveRecord::TypeCastFast

Benchmark.ips do |x|
  x.report("type_cast") { conn.type_cast("foo") }
  x.report("type_cast_fast") { conn.type_cast_fast("foo") }
  x.compare!
end
```
```
Warming up --------------------------------------
type_cast 58.733k i/100ms
type_cast_fast 101.364k i/100ms
Calculating -------------------------------------
type_cast 708.066k (± 5.9%) i/s - 3.583M in 5.080866s
type_cast_fast 1.424M (± 2.3%) i/s - 7.197M in 5.055860s
Comparison:
type_cast_fast: 1424240.0 i/s
type_cast: 708066.0 i/s - 2.01x slower
```
In an application that has a primary and replica database, the data
inserted on the primary connection will not be readable by the replica
connection.
In a test like this:
```
test "creating a home and then reading it" do
home = Home.create!(owner: "eileencodes")
ActiveRecord::Base.connected_to(role: :default) do
assert_equal 3, Home.count
end
ActiveRecord::Base.connected_to(role: :readonly) do
assert_equal 3, Home.count
end
end
```
The home inserted at the beginning of the test can't be read by the
replica database, because when the test is started a transaction is
opened by `setup_fixtures`. That transaction remains open for the
remainder of the test, until we are done and run `teardown_fixtures`.
Because the data isn't actually committed to the database, the replica
database cannot see the data insertion.
I considered a couple of ways to fix this. I could have written a
database-cleaner-like class that would allow the data to be committed
and then clean up that data afterwards. But database cleaners can make
the database slow, and the point of the fixtures is to be fast.
At GitHub we solve this by sharing the connection pool for the replicas
with the primary (writing) connection. This is a bit hacky, but it
works. Additionally, since we define `replica? || preventing_writes?`
as the code that blocks writes to the database, this will still prevent
writing on the replica / readonly connection. So we get all the
behavior of multiple connections for the same database without slowing
down the database.
In this PR the code loops through the handlers. If the handler doesn't
match the default handler then it retrieves the connection pool from the
default / writing handler and assigns the reading handler's connections
to that pool.
Then, in `enlist_fixture_connections`, it maps all the connections for
the default handler, because all the connections are now available on
that handler, so we don't need to loop through them again.
The test uses a temporary connection pool so we can test this with
sqlite3_mem. This adapter doesn't behave the same as the others, and
after looking over how the query cache test works, I think this is the
most correct approach. The issue comes when calling `connects_to`,
because that establishes new connections and confuses the sqlite3_mem
adapter. I'm not entirely sure why, but I wanted to make sure we tested
all adapters for this change, and I checked that it wasn't the shared
connection code that was causing issues - it was the `connects_to`
code.
That ability was introduced in #11898 as `Relation#update` without
giving ids, so the ability at the class level is not documented and not
tested.
c83e30d, which fixes #33470, lost two undocumented abilities.
One was fixed in 5c65688, but I missed the ability at the class level.
Removing any feature should not happen suddenly in a stable version,
even if it is not documented.
I've restored the ability and added a test case to avoid any regression
in the future.
Fixes #34743.
This reverts 27c6c07, since `arel_attr.to_s` is not the right way to
avoid the type error.
That `to_s` returns `"#<struct Arel::Attributes::Attribute ...>"`;
there is no reason to match the regex against the inspect form.
Also, the regex path is not covered by our test cases. I've tweaked the
redundant part of the regex and added assertions for the regex path.
Since `migration_context` has `migrations_paths` itself and provides
methods that return values from parsed migration files, there is no
reason to use the low level `parse_migration_filename` API directly.
If you try to call `connected_to` with a role that doesn't have an
established connection, you used to get an error that said:
```
>> ActiveRecord::Base.connected_to(role: :i_dont_exist) { Home.first }
ActiveRecord::ConnectionNotEstablished Exception: No connection pool
with 'primary' found.
```
This is confusing, because the connection could be established but we
spelled the role wrong.
I've changed this to raise if the `role` used in `connected_to` doesn't
have an associated handler. Users who encounter this should either
check that the role is spelled correctly (writin -> writing), establish
a connection to that role in the model with `connects_to`, or use the
`database` keyword for the `role`.
I think this will provide a less confusing error message for those
starting out with multiple databases.
Thanks to ko1, passing a block parameter to another method is
significantly optimized in Ruby 2.5.
https://bugs.ruby-lang.org/issues/14045
Thus we no longer need to keep this ugly hack.
Since MySQL 5.7.9, the `innodb_default_row_format` option defines the
default row format for InnoDB tables. The default setting is `DYNAMIC`.
The row format is required for indexing `varchar(255)` columns with
`utf8mb4`.
As long as we use MySQL 5.6, CI won't pass even if the MySQL server is
properly configured the same as MySQL 5.7 (`innodb_file_per_table = 1`,
`innodb_file_format = 'Barracuda'`, and `innodb_large_prefix = 1`),
since InnoDB tables are created with the `COMPACT` row format by
default on MySQL 5.6, so indexing strings with `utf8mb4` columns
doesn't succeed.
Making `ROW_FORMAT=DYNAMIC` the default create table option for legacy
MySQL versions mitigates the indexing issue on the user side, and it
makes CI pass on MySQL 5.6 when it is configured properly.
A BEGIN transaction ends with a COMMIT or ROLLBACK, so unless COMMIT
and ROLLBACK are treated the same as BEGIN (i.e. not as write queries),
a `ReadOnlyError` would be raised.
I originally named this `StatementInvalid` because that's what we do at
GitHub, but `@tenderlove` pointed out that this means apps can't test
for or explicitly rescue this error. `StatementInvalid` is pretty
broad, so I've renamed this to `ReadOnlyError`.
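That renaming is what makes a targeted rescue possible (a sketch with a
hypothetical model):

```ruby
begin
  Dog.create! # attempted write on a connection where writes are blocked
rescue ActiveRecord::ReadOnlyError
  # apps can now test for or explicitly rescue the rejected write
end
```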
Unlike `Relation#delete_all`, `delete_all` on a collection proxy
doesn't return the affected count. Since `CollectionProxy` is a
subclass of `Relation`, this inconsistency is probably not intended, so
it should return the count consistently.
This PR adds the ability to prevent writes to a database even if the
database user is able to write (i.e. the database is a primary and not
a replica).
This is useful for a few reasons: 1) when converting your database from
a single db to a primary/replica setup, you can fix all the writes on
reads early on; 2) when we implement automatic database switching, or
when an app is manually switching connections, this feature can be used
to ensure reads are reading and writes are writing - we want to make
sure we raise if we ever try to write in read mode, regardless of
database type; and 3) for local development, if you don't want to set
up multiple databases but do want to support rw/ro queries.
This should be used in conjunction with `connected_to` in write mode.
For example:
```
ActiveRecord::Base.connected_to(role: :writing) do
  Dog.connection.while_preventing_writes do
    Dog.create! # will raise because we're preventing writes
  end
end

ActiveRecord::Base.connected_to(role: :reading) do
  Dog.connection.while_preventing_writes do
    Dog.first # will not raise because we're not writing
  end
end
```
Follow up to #33394.
#33394 only fixes the case of scoping with klass methods in the scope
block, which invoke `klass.all`.
Query methods in the scope block also need to invoke `klass.all` to be
affected by the scoping.
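A sketch of the distinction (model assumed from the earlier scoping
examples):

```ruby
Topic.where(approved: true).scoping do
  Topic.count        # klass method: goes through klass.all, already scoped
  Topic.where(id: 1) # query method: now also starts from klass.all, so scoped
end
```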