clarify `sys::unix::fd::FileDesc::drop` comment
Closes #66876
This simply clarifies some resource-relevant details of the `close` syscall, to reduce the amount of searching needed elsewhere on the web.
Mark format! with must_use hint
Uses unstable feature https://github.com/rust-lang/rust/issues/94745
Part of #126475
First contribution to Rust; please let me know if the blessing of the tests is correct.
Thanks `@bjorn3` for the help
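A minimal sketch of the intended effect (hypothetical user code, not the std change itself): once `format!` carries the `must_use` hint, discarding its result triggers the `unused_must_use` lint.

```rust
fn main() {
    let name = "world";
    // With the hint applied, this line warns that the `String`
    // returned by `format!` is unused.
    format!("Hello, {name}!");

    // Fine: the result is used.
    let greeting = format!("Hello, {name}!");
    println!("{greeting}");
}
```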
Make casts of pointers to trait objects stricter
This is an attempt to fix https://github.com/rust-lang/rust/issues/120222 and https://github.com/rust-lang/rust/issues/120217.
This is done by adding restrictions on casting pointers to trait objects.
Before this PR the rules were as follows:
> When casting `*const X<dyn A>` -> `*const Y<dyn B>`, principal traits in `A` and `B` must refer to the same trait definition (or no trait).
With this PR the rules are changed to
> When casting `*const X<dyn Src>` -> `*const Y<dyn Dst>`
> - if `Dst` has a principal trait `DstP`,
> - `Src` must have a principal trait `SrcP`
> - `dyn SrcP` and `dyn DstP` must be the same type (modulo the trait object lifetime, `dyn T+'a` -> `dyn T+'b` is allowed)
> - Auto traits in `Dst` must be a subset of auto traits in `Src`
> - Not adhering to this is currently an FCW (warn-by-default + `FutureReleaseErrorReportInDeps`), instead of a hard error
> - if `Src` has a principal trait `Dst` must as well
> - this restriction will be removed in a follow up PR
This ensures that
1. Principal trait's generic arguments match (no `*const dyn Tr<A>` -> `*const dyn Tr<B>` casts, which are a problem for [#120222](https://github.com/rust-lang/rust/issues/120222))
2. Principal trait's lifetime arguments match (no `*const dyn Tr<'a>` -> `*const dyn Tr<'b>` casts, which are a problem for [#120217](https://github.com/rust-lang/rust/issues/120217))
3. No auto traits can be _added_ (this is a problem for arbitrary self types, see [this comment](https://github.com/rust-lang/rust/pull/120248#discussion_r1463835350))
Some notes:
- We only care about the metadata/last field, so you can still cast `*const dyn T` to `*const WithHeader<dyn T>`, etc
- The lifetime of the trait object itself (`dyn A + 'lt`) is not checked, so you can still cast `*mut (dyn FnOnce() + '_)` to `*mut (dyn FnOnce() + 'static)`, etc
- This feels fishy, but I couldn't come up with a reason it must be checked
The diagnostics are currently not great, to say the least, but as far as I can tell this correctly fixes the issues.
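A hedged sketch of how the new rules play out (illustrative; not taken from the PR's test suite):

```rust
trait Tr<T> {}

// An arbitrary wrapper whose last field is the dyn type.
struct WithHeader<T: ?Sized>(usize, T);

// Still allowed: only the wrapper changes; the principal trait and its
// generic arguments are identical.
fn wrap(p: *const dyn Tr<u8>) -> *const WithHeader<dyn Tr<u8>> {
    p as _
}

// Still allowed: auto traits may be removed (the subset rule).
fn drop_send(p: *const (dyn Tr<u8> + Send)) -> *const dyn Tr<u8> {
    p as _
}

// No longer allowed: the principal trait's generic arguments differ, so
// the vtable would not match the pointee (the problem behind #120222).
// fn change_arg(p: *const dyn Tr<u8>) -> *const dyn Tr<u16> {
//     p as _
// }
```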
cc `@oli-obk` `@compiler-errors` `@lcnr`
Run alloc sync tests
I was browsing the code and this struck me as weird. We skip some doc tests because, according to the comment, they deadlock on Windows builders. That should absolutely not happen, at least with our current implementation, and if it does happen I'd like to know.
Just to be sure though I'll do some try builds.
try-job: x86_64-msvc
try-job: i686-msvc
try-job: i686-mingw
try-job: x86_64-mingw
Rollup of 8 pull requests
Successful merges:
- #127179 (Print `TypeId` as hex for debugging)
- #127189 (LinkedList's Cursor: method to get a ref to the cursor's list)
- #127236 (doc: update config file path in platform-support/wasm32-wasip1-threads.md)
- #127297 (Improve std::Path's Hash quality by avoiding prefix collisions)
- #127308 (Attribute cleanups)
- #127354 (Describe Sized requirements for mem::offset_of)
- #127409 (Emit a wrap expr span_bug only if context is not tainted)
- #127447 (once_lock: make test not take as long in Miri)
r? `@ghost`
`@rustbot` modify labels: rollup
once_lock: make test not take as long in Miri
Allocating 1000 list elements takes a while (`@zachs18` reported >5min), so let's reduce the iteration count when running in Miri. Unfortunately, due to this clever `while let i @ 0..LEN =` construct, the count needs to be a constant, and constants cannot be shadowed, so we need another trick to hide the `cfg!(miri)` from the docs. (I think this loop condition may be a bit too clever; it took me a while to decipher. Ideally it would be `while let i = ... && i < LEN`, but that is not stable yet.)
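A minimal sketch (with assumed names and values) of the resulting shape: `cfg!(miri)` is usable in a constant initializer, so the count can branch without shadowing.

```rust
// Hypothetical constant; the real test uses its own name and counts.
const LEN: usize = if cfg!(miri) { 50 } else { 1000 };

fn main() {
    // The loop bound shrinks under Miri without shadowing a constant.
    for _ in 0..LEN {}
}
```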
Describe Sized requirements for mem::offset_of
The container doesn't have to be sized, but the field must be sized (at least until https://github.com/rust-lang/rust/issues/126151 is stable).
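A hedged illustration of the distinction: the container below is `!Sized`, but querying a `Sized` field still works.

```rust
use std::mem::offset_of;

struct WithTail {
    len: usize, // Sized field: its offset is known at compile time
    tail: [u8], // unsized tail: querying *its* offset is what #126151 tracks
}

fn main() {
    println!("offset of len: {}", offset_of!(WithTail, len));
}
```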
Improve std::Path's Hash quality by avoiding prefix collisions
This adds a bit rotation to the already existing state so that the same sequence of characters chunked at different offsets into separate path components results in different hashes.
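A hedged conceptual sketch of the idea (not std's actual implementation): fold each component into the state, then mix in a rotation at the component boundary so that the same bytes split as `fo|obar` versus `foo|bar` diverge.

```rust
fn mix_component(mut state: u64, component: &[u8]) -> u64 {
    for &b in component {
        // FNV-style byte mixing stands in for the real hasher here.
        state = state.wrapping_mul(0x100000001b3).wrapping_add(b as u64);
    }
    // The rotation at each component boundary is what breaks prefix collisions.
    state.rotate_left(1)
}

fn main() {
    let a = mix_component(mix_component(0, b"fo"), b"obar");
    let b = mix_component(mix_component(0, b"foo"), b"bar");
    // Without the rotation these would be identical; with it they differ.
    println!("{a:#x} vs {b:#x}");
}
```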
The tests are from #127255. Closes #127254.
LinkedList's Cursor: method to get a ref to the cursor's list
We're already providing `.back()` & `.front()`, for which we hold onto a reference to the parent list, so why not share it? Useful for when you have `LinkedList` -> `CursorMut` -> `Cursor` and cannot take another reference to the list, even though you should be able to. This seems to be completely safe & sound.
The name is, of course, bikesheddable.
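A hedged usage sketch, assuming the method lands as `as_list` (the name is, per the above, bikesheddable):

```rust
#![feature(linked_list_cursors)]
use std::collections::LinkedList;

fn main() {
    let list: LinkedList<i32> = (1..=3).collect();
    let cursor = list.cursor_front();
    // Borrow the parent list back through the cursor.
    assert_eq!(cursor.as_list().len(), 3);
}
```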
Print `TypeId` as hex for debugging
In <https://github.com/rust-lang/rust/pull/127134>, the `Debug` impl for `TypeId` was changed to print a single integer rather than a tuple. Change this again to print as hex for more concise and consistent formatting, as was suggested.
Result:

```
TypeId(0x1378bb1c0a0202683eb65e7c11f2e4d7)
```
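For reference, a minimal way to observe the new output (the exact value differs per compiler version):

```rust
use std::any::TypeId;

fn main() {
    // Prints something like: TypeId(0x1378bb1c0a0202683eb65e7c11f2e4d7)
    println!("{:?}", TypeId::of::<u32>());
}
```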
Don't check the capacity on every element when extending from a `TrustedLen` iterator (this also applies to `Extend` for tuples, since that is how `unzip()` is implemented).
I did this with an unsafe method on `Extend` that doesn't check for growth (`extend_one_unchecked()`). I've marked it as perma-unstable currently, although we may want to expose it in the future so collections outside of std can benefit from it. Then specialize `Extend for (A, B)` for `TrustedLen` to call it.
It may seem that an alternative way of implementing this is to have a semi-public trait (`#[doc(hidden)]` public, so collections outside of core can implement it) for `extend()` inside tuples, and specialize it from collections. However, it is impossible due to limitations of `min_specialization`.
A concern that may arise with the current approach is that correctly implementing `extend_one_unchecked()` also requires implementing `extend_reserve()`, otherwise you can have UB. This is a somewhat non-local safety invariant. However, I believe this is fine, since to get actual UB you must have unsafe code inside your `extend_one_unchecked()` that makes an incorrect assumption, *and* not implement `extend_reserve()`. I've also documented this requirement.
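A hedged sketch of the shape of this hook (the names follow the description above; the exact signatures in std may differ):

```rust
// Illustrative stand-in for the additions to `Extend`; not std's code.
pub trait ExtendSketch<T> {
    /// Reserve capacity for at least `additional` more elements.
    fn extend_reserve(&mut self, additional: usize);

    /// # Safety
    /// The caller must have reserved sufficient capacity (e.g. via
    /// `extend_reserve`) beforehand; implementations may skip the
    /// growth check entirely.
    unsafe fn extend_one_unchecked(&mut self, item: T);
}
```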
offset_from, offset: clearly separate safety requirements the user needs to prove from corollaries that automatically follow
By landing https://github.com/rust-lang/rust/pull/116675 we decided that objects larger than `isize::MAX` cannot exist in the address space of a Rust program, which lets us simplify these rules.
For `offset_from`, we can even state that the *absolute* distance fits into an `isize`, and therefore exclude `isize::MIN`. This PR also changes Miri to treat an `isize::MIN` difference like the other isize-overflowing cases.
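A small example of the corollary in action: both distances fit in `isize`, and the negative case can never be `isize::MIN`.

```rust
fn main() {
    let arr = [0u8; 8];
    let a = &arr[1] as *const u8;
    let b = &arr[6] as *const u8;
    // SAFETY: both pointers are in bounds of the same allocated object,
    // which is at most isize::MAX bytes large.
    assert_eq!(unsafe { b.offset_from(a) }, 5);
    assert_eq!(unsafe { a.offset_from(b) }, -5);
}
```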
Add `new_range_api` for RFC 3550
Initial implementation for #125687
This includes a `From<legacy::RangeInclusive> for RangeInclusive` impl for convenience, instead of the `TryFrom` impl from the RFC. Having `From` is highly convenient and the debug assert should find almost all misuses.
This includes re-exports of all existing `Range` types under `core::range`, plus the range-related traits (`RangeBounds`, `Step`, `OneSidedRange`) and the `Bound` enum.
Currently the iterators are just wrappers around the old range types, and most other trait impls delegate to the old range types as well. This also includes an `.iter()` shorthand for `.clone().into_iter()` (see the usage sketch after the tracking issues below).
Tracking issues:
- https://github.com/rust-lang/rust/issues/123741
- https://github.com/rust-lang/rust/issues/125687
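A usage sketch of the unstable API (feature gate and paths as currently landed; details may change):

```rust
#![feature(new_range_api)]
use core::range::Range;

fn main() {
    let r = Range { start: 0u32, end: 5 };
    // `.iter()` is the shorthand for `.clone().into_iter()`; the iterator
    // currently wraps the legacy range type.
    let sum: u32 = r.iter().sum();
    assert_eq!(sum, 10);
}
```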
Improve readability of some fmt code examples
Some indentation was weird, and some examples were too long (overall it's better to keep lines to a maximum of 80 columns, but I only changed the most egregious ones).
r? `@Amanieu`
Improve dead code analysis
Fixes #120770
1. Check impl items later when the self type is private even though the trait method is public, because the type must be used before its impl items can be considered used.
2. Mark an ADT live if it appears in a pattern, just as when it appears as a generic argument, since this implies a use of the ADT (see the sketch below).
3. Based on the above, we can handle the case where private ADTs implement `Default`, so we no longer need to add `rustc_trivial_field_reads` on `Default`, nor the logic in `should_ignore_item`.
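A hedged illustration of point 2: `Foo` below is only ever mentioned in a pattern (and the type that carries it), which should count as a use rather than leaving the type dead.

```rust
struct Foo(u32);

fn first(opt: Option<Foo>) -> u32 {
    match opt {
        // Appearing in a pattern implies the ADT is used.
        Some(Foo(n)) => n,
        None => 0,
    }
}
```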
r? `@pnkfelix`
Update windows-bindgen to 0.58.0
This also switches from the bespoke `std` generated bindings to the normal `sys` ones everyone else uses.
This makes almost no difference, except that the `sys` bindings use the `windows_targets::links!` macro for FFI imports, which we implement manually. This does make the diff look much larger than it really is, but the bulk of the changes are contained in the generated code.
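A hedged, simplified sketch of what a manual `link!`-style macro for FFI imports can look like (illustrative only; the macro actually implemented in std differs in its attributes and details):

```rust
macro_rules! link {
    ($library:literal $abi:literal fn $($function:tt)*) => {
        #[link(name = $library)]
        extern $abi {
            pub fn $($function)*;
        }
    };
}

// Declares `GetLastError` as an import from kernel32 (Windows-only).
link!("kernel32" "system" fn GetLastError() -> u32);

fn main() {
    // SAFETY: GetLastError takes no arguments and has no preconditions.
    println!("last error: {}", unsafe { GetLastError() });
}
```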
The rules for casting `*mut X<dyn A>` -> `*mut Y<dyn B>` are as follows:
- If `B` has a principal
- `A` must have exactly the same principal (including generics)
- Auto traits of `B` must be a subset of the auto traits in `A`

Note that `X<_>` and `Y<_>` can be the identity, or arbitrary structs whose last field is the dyn type.
The lifetime of the trait object itself (`dyn ... + 'a`) is not checked.
This prevents a few soundness issues with `#![feature(arbitrary_self_types)]` and trait upcasting.
Namely, these checks make sure that the vtable is always valid for the pointee.
Remove unqualified form import of io::Error in process_vxworks.rs and fallback on remove_dir_impl for vxworks
Hi all,
This is to address issue #127084. On inspection, it was found that the `io::Error` references were all in qualified form, so there was no need to add an unqualified import. Also, to successfully build Rust for VxWorks, we need to fall back on the `remove_dir_impl` implementations.
Thank you.
Optimize SipHash by reordering compress instructions
This PR optimizes hashing by changing the order of instructions in the sip.rs `compress` macro so the CPU can parallelize it better. The new order is taken directly from Fig 2.1 in [the SipHash paper](https://eprint.iacr.org/2012/351.pdf) (but with the xors moved which makes it a little faster). I attempted to optimize it some more after this, but I think this might be the optimal instruction order. Note that this shouldn't change the behavior of hashing at all, only statements that don't depend on each other were reordered.
It appears that the current order hasn't changed since its [original implementation from 2012](fada46c421 (diff-b751133c229259d7099bbbc7835324e5504b91ab1aded9464f0c48cd22e5e420R35)), which doesn't look like it was written with data dependencies in mind.
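A simplified sketch of the idea (the real macro lives in `sip.rs`; this is a plain SipRound with the statements for the independent lane pairs interleaved so the CPU can execute them in parallel):

```rust
macro_rules! compress {
    ($v0:ident, $v1:ident, $v2:ident, $v3:ident) => {{
        // (v0, v1) and (v2, v3) are independent in the first half; the
        // second half cross-mixes the lanes, but its two statement chains
        // are again mutually independent.
        $v0 = $v0.wrapping_add($v1);  $v2 = $v2.wrapping_add($v3);
        $v1 = $v1.rotate_left(13);    $v3 = $v3.rotate_left(16);
        $v1 ^= $v0;                   $v3 ^= $v2;
        $v0 = $v0.rotate_left(32);
        $v2 = $v2.wrapping_add($v1);  $v0 = $v0.wrapping_add($v3);
        $v1 = $v1.rotate_left(17);    $v3 = $v3.rotate_left(21);
        $v1 ^= $v2;                   $v3 ^= $v0;
        $v2 = $v2.rotate_left(32);
    }};
}

fn sip_round(mut v0: u64, mut v1: u64, mut v2: u64, mut v3: u64) -> (u64, u64, u64, u64) {
    compress!(v0, v1, v2, v3);
    (v0, v1, v2, v3)
}
```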
Running `./x bench library/core --stage 0 --test-args hash` before and after this change shows the following results:
Before:
```
benchmarks:
hash::sip::bench_bytes_4 7.20/iter +/- 0.70
hash::sip::bench_bytes_7 9.01/iter +/- 0.35
hash::sip::bench_bytes_8 8.12/iter +/- 0.10
hash::sip::bench_bytes_a_16 10.07/iter +/- 0.44
hash::sip::bench_bytes_b_32 13.46/iter +/- 0.71
hash::sip::bench_bytes_c_128 37.75/iter +/- 0.48
hash::sip::bench_long_str 121.18/iter +/- 3.01
hash::sip::bench_str_of_8_bytes 11.20/iter +/- 0.25
hash::sip::bench_str_over_8_bytes 11.20/iter +/- 0.26
hash::sip::bench_str_under_8_bytes 9.89/iter +/- 0.59
hash::sip::bench_u32 9.57/iter +/- 0.44
hash::sip::bench_u32_keyed 6.97/iter +/- 0.10
hash::sip::bench_u64 8.63/iter +/- 0.07
```
After:
```
benchmarks:
hash::sip::bench_bytes_4 6.64/iter +/- 0.14
hash::sip::bench_bytes_7 8.19/iter +/- 0.07
hash::sip::bench_bytes_8 8.59/iter +/- 0.68
hash::sip::bench_bytes_a_16 9.73/iter +/- 0.49
hash::sip::bench_bytes_b_32 12.70/iter +/- 0.06
hash::sip::bench_bytes_c_128 32.38/iter +/- 0.20
hash::sip::bench_long_str 102.99/iter +/- 0.82
hash::sip::bench_str_of_8_bytes 10.71/iter +/- 0.21
hash::sip::bench_str_over_8_bytes 11.73/iter +/- 0.17
hash::sip::bench_str_under_8_bytes 10.33/iter +/- 0.41
hash::sip::bench_u32 10.41/iter +/- 0.29
hash::sip::bench_u32_keyed 9.50/iter +/- 0.30
hash::sip::bench_u64 8.44/iter +/- 1.09
```
I ran this on my computer so there's some noise, but you can tell at least `bench_long_str` is significantly faster (~18%).
Also, I noticed that the same compress function from the library is used in the compiler as well, so I took the liberty of copy-pasting this change there too.
Thanks `@semisol` for porting SipHash for another project which led me to notice this issue in Rust, and for helping investigate. <3