This PR adds `PopupCloseBehavior` to improve the state of
<https://github.com/emilk/egui/issues/4607>.
`PopupCloseBehavior` determines when a popup will be closed:
- `CloseOnClick`: the popup is closed if a click happens anywhere, even
inside the popup's body.
- `CloseOnClickAway`: the popup is closed only if a click happens
outside the popup's body.
It also adds a test to the demo app containing several popup examples.
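A minimal sketch of how the new behavior can be used with the `popup_above_or_below_widget` helper; the exact position of the `close_behavior` parameter in that function's signature is an assumption here:

```rust
let response = ui.button("Open popup");
let popup_id = ui.make_persistent_id("my_popup");
if response.clicked() {
    ui.memory_mut(|mem| mem.toggle_popup(popup_id));
}
egui::popup_above_or_below_widget(
    ui,
    popup_id,
    &response,
    egui::AboveOrBelow::Below,
    // With `CloseOnClickAway`, clicks inside the popup keep it open:
    egui::PopupCloseBehavior::CloseOnClickAway,
    |ui| {
        ui.label("This popup closes only when you click outside of it.");
    },
);
```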
---
My idea for <https://github.com/emilk/egui/issues/4607> is to make
every tooltip and popup a menu. This would provide more control over
popups and tooltips (you would be able to close a popup by calling
something similar to `ui.close_menu` when needed), you would no longer
need to handle opening them manually, and it would allow multiple popups
to be open at once, meaning you could have a popup inside a popup. It
would also make popups easier to create. (Should we create a tracking
issue for this? It looks like a large amount of changes to me.)
---
- Improvements on <https://github.com/emilk/egui/issues/4607>
This example does not use eframe's default features, in order to avoid
accesskit, which panics when run from multiple threads; it therefore has
to manually enable the other default features to compile correctly on
Linux.
* Closes <https://github.com/emilk/egui/issues/4682>
* Closes #4534
This PR:
- Introduces `Ui::stack()`, which returns the `UiStack` structure
providing information on the current `Ui` hierarchy (see the sketch
after this list).
- **BREAKING**: `Ui::new()` now takes a `UiStackInfo` argument, which is
used to populate some of this `Ui`'s `UiStack`'s fields.
- **BREAKING**: `Ui::child_ui()` and `Ui::child_ui_with_id_source()` now
take an `Option<UiStackInfo>` argument, which is used to populate some
of the child `Ui`'s `UiStack` fields.
- New `Area::kind()` builder function, to set the `UiStackKind` value of
the `Area`'s `Ui`.
- Adds a (minimalistic) demo to the egui demo (in the "Misc Demos" window).
- Adds a more thorough `test_ui_stack` test/playground demo.
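A rough sketch of how the new API could be used to inspect the hierarchy; the `iter()` and `kind()` accessors on `UiStack` are assumptions based on this description:

```rust
// Walk the `Ui` hierarchy from the current `Ui` up to the root.
fn print_ui_hierarchy(ui: &egui::Ui) {
    for (depth, node) in ui.stack().iter().enumerate() {
        // Each node describes one ancestor `Ui`, e.g. the enclosing
        // `Area`, window, or panel.
        println!("depth {depth}: kind = {:?}", node.kind());
    }
}
```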
TODO:
- [x] benchmarks
- [x] add example to demo
Future work:
- Add `UiStackKind` and related support for more containers (e.g.
`CollapsingHeader`).
- Add a tag/property system that would allow adding arbitrary data to a
stack node. This data could then be queried by nested `Ui`s. Probably
needed for #3284.
- Add support to track columnar layouts.
---------
Co-authored-by: Emil Ernerfeldt <emil.ernerfeldt@gmail.com>
Fix: example `custom_window_frame`
On Windows, sending `ViewportCommand::StartDrag` when the secondary
button is clicked disables dragging, and dragging with the primary
button is no longer possible afterwards.
The `custom_window_frame` example has therefore been modified to use
`dragged_by(PointerButton::Primary)` instead of
`is_pointer_button_down_on()`.
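A minimal sketch of the fixed title-bar handling (the rect and id here are placeholders; the real example computes the actual title-bar rect):

```rust
let title_bar_rect = ui.max_rect(); // placeholder for the real title-bar rect
let title_bar_response = ui.interact(
    title_bar_rect,
    egui::Id::new("title_bar"),
    egui::Sense::click_and_drag(),
);
// Only start an OS window drag for the primary button, so a secondary-button
// click no longer breaks dragging on Windows:
if title_bar_response.dragged_by(egui::PointerButton::Primary) {
    ui.ctx().send_viewport_cmd(egui::ViewportCommand::StartDrag);
}
```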
The closure passed to `eframe::run_native` now returns a `Result`,
allowing you to return an error during app creation, which will be
returned to the caller of `run_native`.
This means you need to wrap your `Box::new(MyApp::new(…))` in an
`Ok(…)`.
* Closes https://github.com/emilk/egui/issues/4474
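A minimal sketch of the new pattern (`MyApp` is a placeholder app):

```rust
use eframe::egui;

struct MyApp;

impl MyApp {
    fn new(_cc: &eframe::CreationContext<'_>) -> Self {
        Self
    }
}

impl eframe::App for MyApp {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.label("Hello");
        });
    }
}

fn main() -> eframe::Result<()> {
    eframe::run_native(
        "My app",
        eframe::NativeOptions::default(),
        // The creator closure now returns a `Result`, so wrap the app in `Ok(…)`.
        // Returning an `Err` here aborts startup and is returned from `run_native`.
        Box::new(|cc| Ok(Box::new(MyApp::new(cc)))),
    )
}
```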
Motivation: I want to replace `cargo-cranky` with workspace lints, first
available in Rust 1.74.
However, `cargo doc` would hang on `wgpu` and `wgpu-core` on 1.74 and
1.75… so now we're on 1.76.
I think this is fine - when 1.78 is released next week, we'll still be
two versions behind the bleeding edge.
…and the branch name is just wrong 🤦
* Closes https://github.com/emilk/egui/issues/3444
* Closes https://github.com/emilk/egui/issues/865
On a touch screen, if you press down on a widget and hold for 0.6
seconds (`MAX_CLICK_DURATION`), it will now trigger a secondary click,
i.e. `Response::secondary_clicked` will be `true`. This means you can
now open context menus on touch screens.
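For example, the usual context-menu code now also works from a long-press (a sketch, not code from this PR):

```rust
let response = ui.button("Hold me (or right-click)");
// `secondary_clicked` is now also true after a 0.6 s press-and-hold on touch:
if response.secondary_clicked() {
    // React to the secondary click directly, if desired.
}
response.context_menu(|ui| {
    if ui.button("Do something").clicked() {
        ui.close_menu();
    }
});
```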
# What's New
* eframe: Added `App::raw_input_hook`, which allows for the manipulation
or filtering of raw input events. It is a filter applied to raw input
before `App::update`, so input events can be manipulated or filtered
before egui processes them. This can be used to exclude specific
keyboard shortcuts, mouse events, etc., or to add custom keyboard or
mouse events generated by a virtual keyboard (see the sketch below).
* examples: Added an example that demonstrates how to implement a custom
virtual keyboard.
[eframe-custom-keypad.webm](https://github.com/emilk/egui/assets/1274171/a9dc8e34-2c35-4172-b7ef-41010b794fb8)
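A minimal sketch of the hook in an eframe app; `MyApp` and the filtered event (Tab key presses) are just illustrations:

```rust
use eframe::egui;

struct MyApp;

impl eframe::App for MyApp {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.label("Tab key presses never reach egui");
        });
    }

    // Called with the raw input before `update`; events can be removed here,
    // or extra events (e.g. from a virtual keyboard) can be pushed.
    fn raw_input_hook(&mut self, _ctx: &egui::Context, raw_input: &mut egui::RawInput) {
        raw_input.events.retain(|event| {
            !matches!(event, egui::Event::Key { key: egui::Key::Tab, .. })
        });
    }
}
```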
Raw mouse movement is unaccelerated and unclamped by screen boundaries,
and does not relate to any position on the screen.
It is useful in certain situations such as draggable values and 3D
cameras, where screen position does not matter.
https://github.com/emilk/egui/assets/1700581/1400e6a6-0573-41b9-99a1-a9cd305aa1a3
Added `Event::MouseMoved` for integrations to supply raw mouse movement.
Added `Response::drag_motion` to get the raw mouse movement, falling
back to the pointer delta if the integration does not supply it.
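A rough sketch of using it for a camera-like control (`yaw`/`pitch` are placeholder state):

```rust
fn camera_drag(ui: &mut egui::Ui, yaw: &mut f32, pitch: &mut f32) {
    let response = ui.allocate_response(ui.available_size(), egui::Sense::drag());
    if response.dragged() {
        // Raw, unaccelerated motion if the integration sends `Event::MouseMoved`,
        // otherwise the regular pointer delta:
        let motion = response.drag_motion();
        *yaw += motion.x * 0.01;
        *pitch += motion.y * 0.01;
    }
}
```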
Nothing should be breaking, but third-party integrations that can send
`Event::MouseMoved` should be updated to do so.
Based on #1614 but updated to the current version, and with better
fallback behaviour.
* Closes #1611
* Supersedes #1614
* Closes https://github.com/emilk/egui/issues/3936
* Closes https://github.com/emilk/egui/issues/3923
* Closes https://github.com/emilk/egui/pull/4058
The interaction code is now done at the start of the frame, using stored
`WidgetRect`s from the previous frame.
The intention is that the new interaction code should be more accurate,
making it easier to hit widgets, and better respecting the rules of
overlapping widgets.
There is a new `style::Interaction::interact_radius` controlling how far
away from a widget the cursor can be and still hit it. This helps big
fat fingers hit small widgets on touch screens.
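A small sketch of tweaking it (the value is only an illustration):

```rust
fn make_touch_friendly(ctx: &egui::Context) {
    let mut style = (*ctx.style()).clone();
    // How far away from a widget the pointer can be and still hit it:
    style.interaction.interact_radius = 8.0;
    ctx.set_style(style);
}
```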
This PR adds a new `Context::read_response` which lets you read the
`Response` of a `Widget` _before_ you create the widget. This can be
used for styling, or for reading the result of an interaction early (to
prevent frame-delay) for a widget you add late (so it is on top of other
widgets).
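A rough sketch, assuming the next widget uses the id returned by `ui.next_auto_id()`:

```rust
fn hover_aware_button(ui: &mut egui::Ui) {
    // Read last frame's response for the widget we are about to add,
    // so the styling has no frame of delay:
    let next_id = ui.next_auto_id();
    let was_hovered = ui
        .ctx()
        .read_response(next_id)
        .map_or(false, |response| response.hovered());
    let label = if was_hovered { "Hovered!" } else { "Hover me" };
    ui.button(label);
}
```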
# ⚠️ BREAKING CHANGES
`Memory::dragged_id`, `Memory::set_dragged_id`, etc. have been moved to
`Context`.
The semantics of `Context::dragged_id` are slightly different: a widget
is not considered dragged until egui is sure this is not a
click-in-progress. For a widget that is only sensitive to drags, that is
right away; for widgets sensitive to both clicks and drags, it is not
until the mouse has moved a certain distance.
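A small sketch of the moved API (`my_drag_handle` is a placeholder id):

```rust
fn drag_state(ctx: &egui::Context) {
    let my_id = egui::Id::new("my_drag_handle");
    if ctx.dragged_id() == Some(my_id) {
        // Only true once egui has decided this is a drag,
        // not a click-in-progress.
    }
}
```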
# TODO
* [x] Fix panel resizing
* [x] Fix scroll hover weirdness
* [x] Fix Resize widget
* [x] Fix drag-and-drop
* [x] Test all of egui_demo_app
* [x] Change `is_dragging` API
* [x] Consistent naming of start/stop or begin/end drag
* [x] Test `egui_tiles`
* [x] Test Rerun
* [x] Document
* [x] Document breaking changes in PR description
* [x] Test one final time
# Saving for a later PR
* [ ] Fix https://github.com/emilk/egui/issues/4047
* [ ] Specify what the response order for e.g. `ui.horizontal` is
I think both of these can be fixed if each `Ui` registers itself as a
`WidgetRect`, with the possibility to interact with it later, as if the
interaction was under all widgets on top of it.
⚠️ Removes `Context::translate_layer`, replacing it with a sticky
`set_transform_layer`
Adds the capability to scale layers.
Allows interaction with scaled and transformed widgets inside
transformed layers.
I've also added a demo of how to have zooming and panning in a window
(see the video below).
This probably closes #1811. Having a panning and zooming container would
just mean creating a new `Area` with a new id and applying zooming and
panning with `ctx.set_transform_layer`.
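A rough sketch of panning/zooming an `Area` via a layer transform; the `TSTransform` composition order and the exact helper names are assumptions, and the demo in this PR is the authoritative version:

```rust
use egui::emath::TSTransform;

fn pan_zoom_area(ctx: &egui::Context, pan: egui::Vec2, zoom: f32) {
    let area = egui::Area::new(egui::Id::new("pan_zoom_area")).show(ctx, |ui| {
        ui.label("Content that gets panned and zoomed");
    });
    // Scale the layer, then offset it:
    let transform = TSTransform::from_translation(pan) * TSTransform::from_scaling(zoom);
    ctx.set_transform_layer(area.response.layer_id, transform);
}
```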
I've run the GitHub workflow scripts in my repository, so hopefully the
formatting and `cargo cranky` checks are satisfied.
I'm not sure if all call sites where transforms would be relevant have
been handled. This might also be missing the transformation of clipping
rects, but I'm not sure where / how to accomplish that. In the demo, the
clipping rect is transformed to match, which seems to work.
https://github.com/emilk/egui/assets/70821802/77e7e743-cdfe-402f-86e3-7744b3ee7b0f
---------
Co-authored-by: tweoss <fchua@puffer5.stanford.edu>
Co-authored-by: Emil Ernerfeldt <emil.ernerfeldt@gmail.com>
* Closes https://github.com/emilk/egui/issues/3941
Workspace dependencies can be annoying.
If you don't set them to `default-features=false`, then you cannot opt
out of their default features anywhere else, and get warnings if you
try.
So you set `default-features=false`, and then you need to manually opt
in to the default features everywhere else.
Or, as in my case, don't.
I don't have the energy to do this tonight, so I'll just revert.
- Added methods to zoom the plot programmatically, to match the
previously added `translate_bounds()`.
- Added an example of how these methods can be used to customize plot
navigation.
Closes #1164.
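A hypothetical sketch of programmatic zooming; the method name (`zoom_bounds_around_hovered`) and the separate `egui_plot` crate are assumptions here, mirroring the existing `translate_bounds()`:

```rust
let zoom_in = ui.button("Zoom in").clicked();
egui_plot::Plot::new("my_plot").show(ui, |plot_ui| {
    if zoom_in {
        // Zoom both axes by 10% around the hovered position.
        plot_ui.zoom_bounds_around_hovered(egui::vec2(1.1, 1.1));
    }
});
```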
It required some ugly code in `egui-winit` in order to fix this without
breaking semver (I want this to land in a 0.24.1 patch release). I'll
clean it up in time for 0.25.
There is still a font rendering bug when using immediate viewports
across multiple viewports, but that's harder to fix.
* Closes https://github.com/emilk/egui/issues/3602
You can now zoom any egui app by pressing Cmd+Plus, Cmd+Minus or Cmd+0,
just like in a browser. This will change the current `zoom_factor`
(default 1.0) which is persisted in the egui memory, and is the same for
all viewports.
You can turn off the keyboard shortcuts with `ctx.options_mut(|o|
o.zoom_with_keyboard = false);`
`zoom_factor` can also be explicitly read/written with
`ctx.zoom_factor()` and `ctx.set_zoom_factor()`.
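A minimal sketch of controlling the zoom from code:

```rust
fn zoom_controls(ui: &mut egui::Ui) {
    let ctx = ui.ctx().clone();
    if ui.button("Zoom in").clicked() {
        ctx.set_zoom_factor(ctx.zoom_factor() * 1.1);
    }
    if ui.button("Reset zoom").clicked() {
        ctx.set_zoom_factor(1.0);
    }
    // Disable the built-in Cmd+Plus / Cmd+Minus / Cmd+0 handling if you
    // want to handle zooming yourself:
    ctx.options_mut(|o| o.zoom_with_keyboard = false);
}
```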
This redefines `pixels_per_point` as `zoom_factor *
native_pixels_per_point`, where `native_pixels_per_point` is whatever is
the native scale factor for the monitor that the current viewport is in.
This adds some complexity to the interaction with winit, since we need
to know the current `zoom_factor` in a lot of places, because all egui
IO is done in ui points. I'm pretty sure this PR also fixes a bunch of
subtle bugs that used to be in this code.
`egui::gui_zoom::zoom_with_keyboard_shortcuts` is now gone, and is no
longer needed, as this is now the default behavior.
`Context::set_pixels_per_point` is still there, but it is recommended
you use `Context::set_zoom_factor` instead.