* Use utf8mb4 character set by default
The `utf8mb4` character set supports supplementary characters, including emoji.
The `utf8` character set, with its 3-byte encoding, is not enough to support them.
A 4-byte character set had a downside on MySQL 5.5 and 5.6:
"ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes"
for the Rails string data type, which is mapped to varchar(255).
MySQL 5.7 supports a 3072-byte key prefix length by default.
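As a sketch (table and column names hypothetical), a migration on MySQL 5.7 can use utf8mb4 without hitting the old key-length error:
```ruby
# Hypothetical migration. On MySQL 5.7, the default 3072-byte index key
# prefix is enough for a varchar(255) utf8mb4 column (255 * 4 = 1020 bytes),
# which would have exceeded the 767-byte limit on MySQL 5.5/5.6.
create_table :messages, options: "DEFAULT CHARSET=utf8mb4" do |t|
  t.string :body   # Rails string type, mapped to varchar(255)
  t.index :body    # indexable thanks to the larger key prefix
end
```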
* Remove `DEFAULT COLLATE` from Active Record unit test databases
There should be no "one size fits all" collation in MySQL 5.7.
Let MySQL server choose the default collation for Active Record
unit test databases.
Users can choose the best collation for their databases
by setting `options[:collation]` based on their requirements.
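For illustration only (database name and collation are placeholders), the MySQL adapter's `create_database` accepts the collation through its options:
```ruby
# Hypothetical: choose a collation per database rather than relying on a
# single "one size fits all" default.
ActiveRecord::Base.connection.create_database(
  "my_app_development",
  charset: "utf8mb4",
  collation: "utf8mb4_unicode_ci"
)
```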
* InnoDB FULLTEXT indexes are supported since MySQL 5.6
Rails no longer has to use the MyISAM storage engine, whose maximum key length is 1000 bytes.
Using the MyISAM storage engine with the utf8mb4 character set would cause
"Specified key was too long; max key length is 1000 bytes"
https://dev.mysql.com/doc/refman/5.6/en/innodb-fulltext-index.html
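A minimal sketch (table and column names hypothetical) of a FULLTEXT index on an InnoDB table:
```ruby
# Hypothetical migration: since MySQL 5.6, FULLTEXT works on InnoDB, so
# MyISAM and its 1000-byte key limit are no longer needed.
create_table :articles, options: "ENGINE=InnoDB DEFAULT CHARSET=utf8mb4" do |t|
  t.text :body
  t.index :body, type: :fulltext
end
```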
* References
"10.9.1 The utf8mb4 Character Set (4-Byte UTF-8 Unicode Encoding)"
https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8mb4.html
"10.9.2 The utf8mb3 Character Set (3-Byte UTF-8 Unicode Encoding)"
https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-utf8.html
"14.8.1.7 Limits on InnoDB Tables"
https://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html
> If innodb_large_prefix is enabled (the default), the index key prefix limit is 3072 bytes
> for InnoDB tables that use DYNAMIC or COMPRESSED row format.
* CI against MySQL 5.7
Followed these instructions and changed the root password to an empty string.
https://docs.travis-ci.com/user/database-setup/#MySQL-57
* The recommended minimum version of MySQL is 5.7.9,
to support the utf8mb4 character set and `innodb_default_row_format`.
MySQL 5.7.9 introduces `innodb_default_row_format` to support 3072-byte index keys by default.
Users do not have to change MySQL database configuration to support Rails string type.
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_default_row_format
https://dev.mysql.com/doc/refman/5.7/en/innodb-restrictions.html
> If innodb_large_prefix is enabled (the default),
> the index key prefix limit is 3072 bytes for InnoDB tables that use DYNAMIC or COMPRESSED row format.
* The recommended minimum version of MariaDB is 10.2.2
MariaDB 10.2.2 is the first version of MariaDB supporting `innodb_default_row_format`.
Also MariaDB says "MySQL 5.7 is compatible with MariaDB 10.2".
- innodb_default_row_format
https://mariadb.com/kb/en/library/xtradbinnodb-server-system-variables/#innodb_default_row_format
- "MariaDB versus MySQL - Compatibility"
https://mariadb.com/kb/en/library/mariadb-vs-mysql-compatibility/
> MySQL 5.7 is compatible with MariaDB 10.2
- "Supported Character Sets and Collations"
https://mariadb.com/kb/en/library/supported-character-sets-and-collations/
Since 213796f, bind params are used for IN clauses when prepared
statements are enabled.
Unfortunately, most adapters have a limit on the number of bind
params (mysql2: 65535, pg: 65535, sqlite3: 250000). So when eager
loading a large number of records at once, the query could not be
sent to the database.
Since eager loading/preloading queries are auto-generated by Active
Record itself, they should work regardless of the number of records,
as before.
Fixes #33702.
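A sketch of the failure mode (model names hypothetical, limits as listed above):
```ruby
# With prepared statements enabled, each id becomes a bind param.
ids = (1..70_000).to_a

# Before: more than 65535 binds on mysql2/pg, so the query failed.
# After: auto-generated eager loading queries avoid the adapter limit.
Post.includes(:comments).where(id: ids).to_a
```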
Use a variable local to the `save_collection_association` method in
`activerecord/lib/active_record/autosave_association.rb`, instead of an
instance variable.
Prior to this PR, when there was a circular series of `autosave: true`
associations, the callback for a `has_many` association was run while
another instance of the same callback on the same association hadn't
finished running. When control returned to the first instance of the
callback, the instance variable had changed, and subsequent associated
records weren't saved correctly. Specifically, the ID field for the
`belongs_to` corresponding to the `has_many` was `nil`.
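A sketch of the kind of circular setup that triggered the bug (model names hypothetical):
```ruby
class Author < ActiveRecord::Base
  has_many :books, autosave: true
end

class Book < ActiveRecord::Base
  belongs_to :author, autosave: true
end

# Saving an author with unsaved books re-enters the has_many callback.
# With an instance variable, the nested run clobbered the outer run's
# bookkeeping and some books were left with a nil author_id.
```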
Remove unnecessary test and comments.
Fixes #28080.
The TIME, DATETIME, and TIMESTAMP types [have supported](https://mariadb.com/kb/en/library/microseconds-in-mariadb/)
a fractional seconds precision from 0 to 6.
Default values for time columns with a specified precision are read
as `current_timestamp(n)` from the information schema.
Before this change, rake `db:schema:dump` produced `schema.rb` **without** default values for time columns with a specified precision:
t.datetime "last_message_at", precision: 6, null: false
After this change, rake `db:schema:dump` produces `schema.rb` **with** default values for time columns with a specified precision:
t.datetime "last_message_at", precision: 6, default: -> { "current_timestamp(6)" }, null: false
`touch` option was added to `increment!` (#27660) and `update_counters`
(#26995). But that option behaves inconsistently with the
`Persistence#touch` method.
If the `touch` option is passed attribute names, it won't update
updated_at/on attributes, unlike the `Persistence#touch` method.
Due to the change from `Persistence#touch` to `increment!` with the
`touch` option, #31405 introduced a regression: `counter_cache` with a
`touch` option that is passed attribute names won't update
updated_at/on attributes.
I think that inconsistency is not intended. To restore consistency,
ensure that the `touch` option updates updated_at/on attributes.
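For example (model and column names hypothetical), after this change:
```ruby
post = Post.first

# Passing attribute names to :touch now also bumps updated_at/on,
# matching Persistence#touch:
post.increment!(:comments_count, touch: :last_commented_at)
post.last_commented_at # touched
post.updated_at        # also touched, as with post.touch(:last_commented_at)
```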
This fixes a bug with building an object that has multiple
`has_many :through` associations through the same object.
Previously, when building the object via .new, the intermediate
object would be created instead of just being built.
Here's an example:
Given a GameBoard that has_one Owner and Collection through Game.
The following line would cause a game object to be created in the
database.
GameBoard.new(owner: some_owner, collection: some_collection)
Whereas passing only one of those associations into `.new` would
cause the Game object to be built, not created in the database.
Now the above code will only build the Game object, and not save it.
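A sketch of the associations from the example above (definitions assumed, not taken from the commit):
```ruby
class GameBoard < ActiveRecord::Base
  belongs_to :game
  has_one :owner, through: :game
  has_one :collection, through: :game
end

# Builds the intermediate Game in memory; nothing is persisted until save:
board = GameBoard.new(owner: some_owner, collection: some_collection)
```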
Add a prefix option to ActiveRecord::Store.store_accessor and
ActiveRecord::Store.store. This option allows stores to have identical keys
with different accessors.
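A sketch of the option (store and key names hypothetical):
```ruby
class User < ActiveRecord::Base
  store :settings, accessors: [:privileges], prefix: true
  store :parent_settings, accessors: [:privileges], prefix: :parent
end

user = User.new
user.settings_privileges = "admin" # reads/writes settings[:privileges]
user.parent_privileges = "none"    # reads/writes parent_settings[:privileges]
```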
This is an alternative to #31877 to fix #31876, caused by #28808.
The issue was caused by a combination of several loose implementations:
* finding the automatic inverse association of a polymorphic association without context (caused by #28808)
* returning `klass` even if `polymorphic?` (existed before #28808)
* loose verification by `valid_inverse_reflection?` (existed before #28808)
This makes `klass` raise if `polymorphic?`, so that it cannot be misused.
The issue will not happen unless a polymorphic `klass` is misused.
Fixes #31876.
Closes #31877.
When removing a record from a has many through association, the counter
cache was being updated even if the through record halted the callback
chain and prevented itself from being destroyed.
Since 5c71000, it has not been possible to unscope `default_scope` in
STI associations. This change uses `.empty_scope?` instead of
`.values.empty?` to treat a scope as empty when it only has a
`type_condition`.
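A sketch of the kind of case this addresses (models hypothetical):
```ruby
class Post < ActiveRecord::Base
  default_scope { where(published: true) }
end
class SpecialPost < Post; end # STI subclass adds only a type condition

class Author < ActiveRecord::Base
  has_many :special_posts
end

# The association scope contains only the STI type_condition, so it is
# now regarded as empty and the default_scope can be unscoped again:
author.special_posts.unscope(where: :published)
```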
`ActiveRecord::Persistence#touch` does not work well when optimistic
locking is enabled and `locking_column`, without a default value, is
null in the database.
The `id` column in `subscribers` was added as a primary key so that it
could be ignored in INSERTs. But it caused `NotNullViolation` for the
oracle-enhanced adapter.
https://github.com/rsim/oracle-enhanced/issues/1357
I changed the column to be nullable to address the issue.
Currently the methods of `AttributeMethods::PrimaryKey` are overwritten
by `define_attribute_methods`. This breaks when a table with a
customized primary key has a non-primary-key `id` column.
The methods should not be overwritten if a table has any primary key.
Fixes #29350.
Also, explicitly apply the order: generate_subscripts is unlikely to
start returning values out of order, but we should still be clear about
what we want.
The native timestamp type in MySQL is different from the datetime type.
The internal representation of the timestamp type is UNIX time. This
means that timestamp columns are affected by time zone:
```
> SET time_zone = '+00:00';
Query OK, 0 rows affected (0.00 sec)
> INSERT INTO time_with_zone(ts,dt) VALUES (NOW(),NOW());
Query OK, 1 row affected (0.02 sec)
> SELECT * FROM time_with_zone;
+---------------------+---------------------+
| ts | dt |
+---------------------+---------------------+
| 2016-02-07 22:11:44 | 2016-02-07 22:11:44 |
+---------------------+---------------------+
1 row in set (0.00 sec)
> SET time_zone = '-08:00';
Query OK, 0 rows affected (0.00 sec)
> SELECT * FROM time_with_zone;
+---------------------+---------------------+
| ts | dt |
+---------------------+---------------------+
| 2016-02-07 14:11:44 | 2016-02-07 22:11:44 |
+---------------------+---------------------+
1 row in set (0.00 sec)
```
Follow up to #26266.
The default type of `primary_key` and `references` was changed to
`bigint` in #26266. But legacy migrations and the sqlite3 adapter
should keep their previous behavior.
This issue was introduced with d849f42 to solve #19782. However, we can
solve #19782 without causing the issue. It is enough to save only when
necessary.
Fixes #27338.
Since 9.4, PostgreSQL recommends using `pgcrypto`'s `gen_random_uuid()`
to generate version 4 UUIDs instead of the functions in the `uuid-ossp`
extension.
These changes use the appropriate UUID function depending on the
underlying PostgreSQL server's version, while maintaining
`uuid_generate_v4()` in older migrations.
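A minimal sketch (table name hypothetical):
```ruby
# On PostgreSQL 9.4+ the adapter can lean on pgcrypto:
enable_extension "pgcrypto"

create_table :documents, id: :uuid do |t|
  t.string :title
end
# The id column defaults to gen_random_uuid() here, while older
# migrations keep uuid_generate_v4() from uuid-ossp.
```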
* Fix undefined method `owners' for NullPreloader:Class
Fixing undefined method `owners' for
ActiveRecord::Associations::Preloader::NullPreloader:Class
* Use Ruby 1.9 hash format
#24192
[Rafael Mendonça França + Ladislav Smola]
habtm join tables commonly have two id columns and it's OK to make those
two id columns a primary key. This commit eliminates the warnings for
join tables that have this setup.
ManageIQ/manageiq#6713
This resolves a bug where, if the primary key used is not `id` (e.g.
`uuid`) and the model has a `validates_uniqueness_of` on it, a
uniqueness error would be raised. This is a partial revert of commit
`119b9181ece399c67213543fb5227b82688b536f`, which introduced this behavior.
This fixes incorrect assumptions made by e991c7b that we can assume the
DB is already casting the value for us. The enum type needs additional
information to perform casting, and needs a subtype.
I've opted not to call `super` in `cast`, as we have a known set of
types which we accept there, and the subtype likely doesn't accept them
(symbol -> integer doesn't make sense)
Closes #23190
This reverts commit 224eddfc0e, reversing
changes made to 9d681fc74c.
When merging the pull request, I misunderstood `has_secure_token` as declaring a model
has a token from birth and through the rest of its lifetime.
Therefore, supporting conditional creation doesn't make sense. You should never mark a
model as having a secure token if there's a time when it shouldn't have it on creation.
Fix the NoMethodErrors introduced in 224eddf, when adding conditional token creation.
The model declarations were in place, but the column wasn't added to the schema.
In PostgreSQL and SQLite3, `:text` and `:binary` have variable,
unlimited length. But in MySQL, each of these types has a limited
length (ref #21591, #21619).
This change adds short-hand methods for each text and blob type.
Example:
create_table :foos do |t|
  t.tinyblob   :tiny_blob
  t.mediumblob :medium_blob
  t.longblob   :long_blob
  t.tinytext   :tiny_text
  t.mediumtext :medium_text
  t.longtext   :long_text
end
This solves the following error:
ActiveRecord::StatementInvalid: Could not find table 'guitars'
It seems that the table structure of the `Guitar` model has not been
necessary until now. Due to the wrong table name the model was not
correctly linked to the table.
If the argument of `build_record` has a key whose value is the same as
the database default, we should also exclude that key from
`create_scope` in `initialize_attributes`.
This is because `build_record` first initializes the record object with
its argument, then assigns attributes derived from the Association's
scope. In this case `record.changed` does not include the key whose
value equals the database default, so we should add the key to the
exclusion list.
Fixes #21893.
Currently `tinyblob` is dumped as `t.binary "tiny_blob", limit: 255`.
But `t.binary ..., limit: 255` generates `varchar(255)` SQL, which is
incorrect. This commit fixes the problem.
`inverse_of` on through associations was accidentally caused to stop
working in commit f8d2899, which was part of a refactoring of
`ThroughReflection`.
To fix this, we moved `inverse_of` and `check_validity_of_inverse!` to the
`AbstractReflection` so it's available to the `ThroughReflection`
without having to dup any methods. We then need to delegate `inverse_name`
method in `ThroughReflection`. `inverse_name` can't be moved to
`AbstractReflection` without moving methods that set the instance
variable `@automatic_inverse_of`.
This adds a test that ensures that `inverse_of` on a `ThroughReflection`
returns the correct class name, and the correct record for the inverse
relationship.
Fixes #21692
And we are passing them as separate types in the query, which means 0
precision is still not supported by older versions of MySQL. I also
missed a handful of other cases where they need to be conditionally
applied.
Specifically, versions of MySQL prior to 5.6 do not support this, which
is what's used on Travis by default. The method `mysql_56?` appeared to
only ever be used to conditionally apply subsecond precision, so I've
generalized it and used it more liberally.
This should fix the test failures caused by #20317
A timestamp column can have less precision than a Ruby timestamp,
limiting how big a fraction of a second can be stored in the
database.
m = Model.create!
m.created_at.usec == m.reload.created_at.usec
# => false
# due to different seconds precision in Time.now and database column
If the precision is low enough (the MySQL default is 0, so it is always
low enough by default), the value changes when the model is reloaded
from the database. This patch fixes that issue by ensuring that any
timestamp assigned as an attribute is converted to column precision
under the attribute.
Since 87d1aba3c, `dependent: :destroy` callbacks on has_one
associations run *after* destroy; hence it is possible that a
nullification is attempted on an already destroyed target:
class Car < ActiveRecord::Base
has_one :engine, dependent: :nullify
end
class Engine < ActiveRecord::Base
belongs_to :car, dependent: :destroy
end
> car = Car.create!
> engine = Engine.create!(car: car)
> engine.destroy! # => ActiveRecord::ActiveRecordError: cannot update a
> destroyed record
In the above case, `engine.destroy!` deletes `engine` and *then* triggers the
deletion of `car`, which in turn triggers a nullification of `engine.car_id`.
However, `engine` is already destroyed at that point.
Fixes #21223.
Also removes a false positive test that depends on the fixed bug:
At this time, counter_cache does not work with polymorphic relationships
(which is a bug). The test was added to make sure that no
StaleObjectError is raised when the car is destroyed. No such error is
currently raised because the lock version is not incremented by
appending a wheel to the car.
Furthermore, `assert_difference` succeeds because `car.wheels.count`
does not check the counter cache, but the collection size. The test will
fail if it is replaced with `car.wheels_count || 0`.
This code is so fucked. Things that cause this bug not to replicate:
- Defining the validation before the association (we end up calling
`uniq!` on the errors in the autosave validation)
- Adding `accepts_nested_attributes_for` (I have no clue why. The only
thing it does that should affect this is adds `autosave: true` to the
inverse reflection, and doing that manually doesn't fix this).
This solution is a hack, and I'm almost certain there's a better way to
go about it, but this shouldn't cause a huge hit on validation times,
and is the simplest way to get it done.
Fixes #20874.
If the subtype provides custom schema dumping behavior, we need to defer
to it. We purposely choose not to handle any values other than an array
(which technically should only ever be `nil`, but I'd rather code
defensively here).
Fixes #20515.
`has_many` can now take `index_errors: true` as an
option. When this is enabled, errors for nested models will be
returned alongside an index, as opposed to just the nested model name.
This option can also be enabled (or disabled) globally through
`ActiveRecord::Base.index_nested_attribute_errors`
E.g.:
```ruby
class Guitar < ActiveRecord::Base
has_many :tuning_pegs
accepts_nested_attributes_for :tuning_pegs
end
class TuningPeg < ActiveRecord::Base
belongs_to :guitar
validates_numericality_of :pitch
end
```
- Old style
- `guitar.errors["tuning_pegs.pitch"] = ["is not a number"]`
- New style (if enabled globally, or set on the has_many relationship)
- `guitar.errors["tuning_pegs[1].pitch"] = ["is not a number"]`
[Michael Probber, Terence Sun]
As described in https://github.com/rails/rails/issues/19420: when
using the PostgreSQL BigInt[] field type, the bigint limit was not
being carried into schema.rb. This caused the field to become just a
regular integer field when building off of schema.rb. This fix
addresses the issue by delegating the limit from the subtype to the
Array type.
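A sketch of the dump before and after the fix (table and column names hypothetical):
```ruby
# Hypothetical column: a PostgreSQL bigint[] is declared as
create_table :measurements do |t|
  t.integer :big_numbers, limit: 8, array: true
end
# Before the fix, schema.rb dumped this without limit: 8, so it degraded
# to a plain integer[] when rebuilding the database from schema.rb.
```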
If there was a polymorphic hm:t association with a scope AND a second,
non-scoped hm:t association on a model, the polymorphic scope would
leak through into the call for the non-polymorphic hm:t association.
This would only break if `hotel.drink_designers` was called before
`hotel.recipes`. If `hotel.recipes` was called first there would be
no problem with the SQL.
Before (employable_type should not be here):
```
SELECT COUNT(*) FROM "drink_designers" INNER JOIN "chefs" ON
"drink_designers"."id" = "chefs"."employable_id" INNER JOIN
"departments" ON "chefs"."department_id" = "departments"."id" WHERE
"departments"."hotel_id" = ? AND "chefs"."employable_type" = ?
[["hotel_id", 1], ["employable_type", "DrinkDesigner"]]
```
After:
```
SELECT COUNT(*) FROM "recipes" INNER JOIN "chefs" ON "recipes"."chef_id"
= "chefs"."id" INNER JOIN "departments" ON "chefs"."department_id" =
"departments"."id" WHERE "departments"."hotel_id" = ? [["hotel_id", 1]]
```
From the SQL you can see that `employable_type` was leaking through when
calling recipes. The solution is to dup the chain of the polymorphic
association so it doesn't get cached. Additionally, this follows
`scope_chain` which dup's the `source_reflection`'s `scope_chain`.
This required another model/table/relationship because the leak only
happens on a hm:t polymorphic that's called before another hm:t on the
same model.
I am specifically testing the SQL here instead of the number of records
because the test could pass if there was 1 drink designer recipe for the
drink designer chef even though the `employable_type` was leaking through.
This needs to specifically check that `employable_type` is not in the SQL
statement.
These callbacks will already have been defined when the association was
built. The check against `reflection.autosave` happens at call time, not
at define time, so simply modifying the reflection is sufficient.
Fixes #18704
Prior to this commit if you defined a bidirectional relationship
between two models with destroy dependencies on both sides, a call to
`destroy` would result in an infinite callback loop.
Take the following relationship.
class Content < ActiveRecord::Base
has_one :content_position, dependent: :destroy
end
class ContentPosition < ActiveRecord::Base
belongs_to :content, dependent: :destroy
end
Calling `Content#destroy` or `ContentPosition#destroy` would result in
an infinite callback loop.
This commit changes the behaviour of `ActiveRecord::Callbacks#destroy`
so that it guards against subsequent callbacks.
Thanks to @zetter for demonstrating the issue with failing tests [1].
[1] rails#13609
- Add a check to avoid deleting previously created fixtures, to support STI fixtures from multiple files
- Added fixtures and a fixtures test to verify the same
- Fixed wrong fixtures that duplicated data insertion in the same table
To make it possible to use a custom column name to save/read the
polymorphic associated type in a has_many or has_one polymorphic
association, users can now use the option :foreign_type to specify
the column in which the associated object's type will be saved.
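A sketch of the option (model names and the type column name hypothetical):
```ruby
class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true, foreign_type: :commentable_kind
end

class Post < ActiveRecord::Base
  # Read/save the associated type from commentable_kind instead of the
  # default commentable_type column:
  has_many :comments, as: :commentable, foreign_type: :commentable_kind
end
```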
In practical terms, this allows serialized columns and tz aware columns
to be used in wheres that go through joins, where they previously would
not behave correctly. Internally, this removes 1/3 of the cases where we
rely on Arel to perform type casting for us.
There were two non-obvious changes required for this. `update_all` on
relation was merging its bind values with arel's in the wrong order.
Additionally, through associations were assuming there would be no bind
parameters in the preloader (presumably because the where would always
be part of a join)
[Melanie Gilman & Sean Griffin]
There are behaviors mentioned in #17161 that:
1. are not documented properly, and
2. don't have specs
This commit addresses the spec absence. For has_many collections (see
the sketch after this list):
1. addition (<<) should update the associated object's updated_at (if any)
2. .clear, depending on options[:dependent], calls delete_all, destroy_all, or nullifies the associated object(s)' foreign key.
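A sketch of the behaviors being specced (model names hypothetical):
```ruby
post.comments << comment # bumps comment.updated_at when the column exists

post.comments.clear
# Depending on the association's :dependent option, this calls
# delete_all, destroy_all, or nullifies the comments' post_id.
```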
This fixes <"SQLite3::SQLException: no such column: legacy_things.person_id: SELECT \"legacy_things\".* FROM \"legacy_things\" WHERE \"legacy_things\".\"person_id\" = ?">
in OptimisticLockingTest#test_lock_destroy
with dynamic conditions.
Fixes #16128
This bug was introduced in c35e438620
so it's present from 4.1.2-rc1 and after.
c35e438620
merges any relation scopes passed as proc objects to the relation,
but does *not* take into account the arity of the lambda.
To reproduce: https://gist.github.com/Agis-/5f1f0d664d2cd08dfb9b
As per discussion, this changes the model generators to specify
`null: false` for timestamp columns. A warning is now emitted if
`timestamps` is called without a `null` option specified, so we can
safely change the behavior when no option is specified in Rails 5.
This was previously fixed in e84799d but broken in 3f596f8. This commit
reintroduced the conditional that prevents the foreign keys from being
added to MySQL databases.
The name of the foreign key is not relevant from a user's perspective.
Using random names resolves the urge to rename the foreign key when the
respective table or column is renamed.
Reliant on https://github.com/rails/rails/pull/15747 but pulled to a
separate PR to reduce noise. `has_many :through` associations have the
undocumented behavior of automatically detecting counter caches.
However, the way in which it does so is inconsistent with counter caches
everywhere else, and doesn't actually work consistently.
As with normal `has_many` associations, the user should specify the
counter cache on the `belongs_to`, if they'd like it updated.
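For example (model names hypothetical), the explicit declaration is the same as for a plain `has_many`:
```ruby
class Comment < ActiveRecord::Base
  # Declare the counter cache on the belongs_to, as everywhere else;
  # Post is then expected to have a comments_count column.
  belongs_to :post, counter_cache: true
end
```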
Now the following case will work fine
class Tag < ActiveRecord::Base
end
class Publisher::Article < ActiveRecord::Base
has_and_belongs_to_many :tags
end
Fixes #15761
As a result of all of the refactoring that's been done, it's now
possible for us to define a public API to allow users to specify
behavior. This is an initial implementation so that I can work off of it
in smaller pieces for additional features/refactorings.
The current behavior will continue to stay the same, though I'd like to
refactor towards the automatic schema detection being built off of this
API, and add the ability to opt out of automatic schema detection.
Use cases:
- We can deprecate a lot of the edge cases around types, now that there
is an alternate path for users who wish to maintain the same behavior.
- I intend to refactor serialized columns to be built on top of this
API.
- Gem and library maintainers are able to interact with `ActiveRecord`
at a slightly lower level in a more stable way.
- Interesting ability to reverse the workflow of adding to the schema.
Model can become the single source of truth for the structure. We can
compare that to what the database says the schema is, diff them, and
generate a migration.
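A sketch of what such a user-specified attribute could look like; this mirrors the `attribute` macro as it later shipped, so treat the exact signature as an assumption rather than this commit's API:
```ruby
class StoreListing < ActiveRecord::Base
  # Override the schema-detected type with a user-specified one:
  attribute :price_in_cents, :integer
end

StoreListing.new(price_in_cents: "10").price_in_cents # => 10
```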
* * *
This bug can be triggered when serializing record R (the instance) of type C
(the class), provided that the following conditions are met:
1. The name of one or more columns/attributes on C/R matches an existing private
method on C (e.g. those defined by `Kernel`, such as `format`).
2. The attribute methods have not yet been generated on C.
In this case, the matching private methods will be called by the serialization
code (with no arguments) and their return values will be serialized instead. If
the method requires one or more arguments, it will result in an `ArgumentError`.
This regression is introduced in d1316bb.
* * *
Attribute methods (e.g. `#name` and `#format`, assuming the class has columns
named `name` and `format` in its database table) are lazily defined. Instead of
defining them when the class is defined (e.g. in the `inherited` hook on
`ActiveRecord::Base`), this operation is deferred until they are first accessed.
The reason behind this is that defining those methods requires knowing what
columns are defined on the database table, which usually requires a round-trip
to the database. Deferring their definition until the last possible moment helps
reduce unnecessary work, especially in development mode where classes are
redefined and thrown away between requests.
Typically, when an attribute is first accessed (e.g. `a_book.format`), it will
fire the `method_missing` hook on the class, which triggers the definition of
the attribute methods. This even works for methods like `format`, because
calling a private method with an explicit receiver will also trigger that hook.
Unfortunately, `read_attribute_for_serialization` is simply an alias to `send`,
which does not respect method visibility. As a result, when serializing a record
with those conflicting attributes, `method_missing` is not fired, and the
attribute methods are not defined as one would expect.
Before d1316bb, this is negated by the fact that calling the `run_callbacks`
method will also trigger a call to `respond_to?`, which is another trigger point
for the class to define its attribute methods. Therefore, when Active Record
tries to run the `after_find` callbacks, it will also define all the attribute
methods thus masking the problem.
* * *
The proper fix for this problem is probably to restrict `read_attribute_for_serialization`
to call public methods only (i.e. alias `read_attribute_for_serialization` to
`public_send` instead of `send`). This however would be quite risky to change
in a patch release and would probably require a full deprecation cycle.
Another approach would be to override `read_attribute_for_serialization` inside
Active Record to force the definition of attribute methods:
def read_attribute_for_serialization(attribute)
  self.class.define_attribute_methods
  send(attribute)
end
Unfortunately, this is quite likely going to cause a performance degradation.
This patch therefore restores the behaviour from the 4-0-stable branch by
explicitly forcing the class to define its attribute methods in a similar spot
(when records are initialized). This should not cause any extra roundtrips to
the database because the `@columns` should already be cached on the class.
Fixes #15188.
This reverts commit 9a1abedcde, reversing
changes made to c72d6c91a7.
Conflicts:
activerecord/CHANGELOG.md
activerecord/test/models/comment.rb
This change breaks integration with activerecord-deprecated_finders, so
I'm reverting until we find a way to make it work with that gem.
The foreign_key could be `String` and just doing `owners_map[owner_key]`
could return `nil`.
To prevent this bug, we should `to_s` both keys if their types are
different.
Fixes #14734.
Changed the call to a scope block to be evaluated with instance_eval.
As a result, ScopeRegistry can use the actual class instead of the
base_class when caching scopes, so queries made by classes with a
common ancestor won't leak scopes.
There is no reason for the PG adapter to have a default limit of 255 on :string
columns. See this snippet from the PG docs:
> Tip: There is no performance difference among these three types, apart
> from increased storage space when using the blank-padded type, and a
> few extra CPU cycles to check the length when storing into a
> length-constrained column. While character(n) has performance
> advantages in some other database systems, there is no such advantage
> in PostgreSQL; in fact character(n) is usually the slowest of the
> three because of its additional storage costs. In most situations text
> or character varying should be used instead.
It was causing an error when using `with_options` and passing a lambda
as its last argument.
class User < ActiveRecord::Base
with_options dependent: :destroy do |assoc|
assoc.has_many :profiles, -> { where(active: true) }
end
end
It was happening because the `option_merger` was taking the last
argument and checking if it was a Hash. This breaks the has_many usage,
because its last argument can be a Hash or a Proc.
Per the behavior described in this test:
https://github.com/rails/rails/blob/master/activesupport/test/option_merger_test.rb#L69
the method will only accept the lambda; this way it keeps the expected behavior. See 9eaa0a34.
Fixed a problem where attribute methods were not generated when the model has a
conflicting private method defined on its ancestors.
The problem is that `method_defined_within?(name, klass, superklass)`
only works correctly when `klass` and `superklass` are both `Class`es.
If `klass` and `superklass` are both `Class`es, they share the
same inheritance chain, so if a method is defined on `klass` but not
`superklass`, this method must be introduced at some point between
`klass` and `superklass`.
This does not work when `superklass` is a `Module`. A `Module`'s
inheritance chain contains just itself. So if a method is defined on
`klass` but not on `superklass`, the method could still be defined
somewhere upstream, e.g. in `Object`.
This fix works by avoiding calling `method_defined_within?` with a
module while still fulfilling the requirement (checking that the
method is defined within `superclass` but is not a generated
attribute method).
4d8ee288 is likely an attempted partial fix for this problem. This
unrolls that fix and properly checks the `superclass` as intended.
Fixes #11569.
The `subclass_from_attrs` method is called even if the column specified by
the `inheritance_column` setting doesn't exist. This prevents setting associations
via the attributes hash if the association name clashes with the value of the setting,
typically `:type`. This worked previously in Rails 3.2.
Renaming the test according to its behaviour
Adding 'Fixes' statement to changelog
Improving tests legibility & changelog
Undoing mistakenly removed empty line & further improving changelog
Previously, this would give an `ArgumentError`:
class Issue < ActiveRecord::Base
enum :status, [:open, :finished]
end
Issue.open.build # => ArgumentError: '0' is not a valid status
Issue.open.create # => ArgumentError: '0' is not a valid status
PR #13542 muted the error, but the issue remains. This commit fixes
the issue by allowing the enum value to be written directly via the
setter:
Issue.new.status = 0 # This now sets status to :open
Assigning a value directly via the setter like this is not part of the
documented public API, so users should not rely on this behavior.
Closes #13530.
Previously, the writer methods would simply check whether the passed
argument was the symbol representing the integer value of an enum field.
Therefore, it was not possible to specify the numeric value itself, but
the dynamically defined scopes generate where clauses relying on this
kind of value, so a chained call to a method like `find_or_initialize_by`
would trigger an `ArgumentError`.
Reference #13530
The dynamic finder was creating the method signature with the parameter
names, which may be reserved words, thereby creating invalid Ruby code.
Closes #13261
Example:
# Before
Dog.find_by_alias('dog name')
# Was creating this method
def self.find_by_alias(alias, options = {})
# After
Dog.find_by_alias('dog name')
# Will create this method
def self.find_by_alias(_alias, options = {})
Currently `scope_chain` uses the same array for building different
`scope_chain`s for different associations. During processing,
these arrays are sometimes mutated, and because of the in-place
mutation the changed `scope_chain` impacts other reflections.
The fix is to dup the value before adding it to the `scope_chain`.
Fixes #3882.