Commit Graph

45 Commits

Author SHA1 Message Date
Damien Elmes 28fa74feb3 Update crate name 2023-09-05 19:08:51 +10:00
Damien Elmes 0c6e3eaa93
Integrate the FSRS optimizer (#2633)
* Support searching for deck configs by name

* Integrate FSRS optimizer into Anki

* Hack in a rough implementation of evaluate_weights()

* Interrupt calculation if user closes dialog

* Fix interrupted error check

* log_loss/rmse

* Update to latest fsrs commit; add progress info to weight evaluation

* Fix progress not appearing when pretrain takes a while

* Update to latest commit
2023-09-05 18:45:05 +10:00
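The interrupt and progress handling mentioned in the bullets above comes down to polling a shared flag and reporting step counts back to the dialog. A generic sketch of that plumbing, with a stand-in optimize() function rather than the actual fsrs crate API:

```rust
// Generic sketch only: the real work happens in the fsrs crate, whose API differs.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn optimize(
    interrupted: Arc<AtomicBool>,
    mut on_progress: impl FnMut(u32, u32),
) -> Result<Vec<f32>, String> {
    let total = 100;
    for step in 0..total {
        // Bail out promptly if the user has closed the dialog.
        if interrupted.load(Ordering::Relaxed) {
            return Err("interrupted by user".into());
        }
        on_progress(step + 1, total); // surfaced to the UI as a progress bar
    }
    Ok(vec![0.4, 0.6, 2.4]) // placeholder weights
}

fn main() {
    let interrupted = Arc::new(AtomicBool::new(false));
    let weights = optimize(interrupted.clone(), |done, total| {
        eprintln!("progress: {done}/{total}");
    });
    println!("{weights:?}");
}
```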
Damien Elmes 3b7c5d71ab Update Rust deps 2023-08-23 11:44:33 +10:00
Damien Elmes a35c1a058d Extract inline images as part of media check
We also need to get to the bottom of what's causing this:
https://forums.ankiweb.net/t/anki-browse-extremely-laggy/32533
2023-07-31 12:23:16 +10:00
Damien Elmes 7a34f83d40 Update regex crate for .extract()
Also update yanked hermit-abi
2023-07-31 12:20:07 +10:00
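For context, `Captures::extract()` landed in regex 1.9 and lets capture groups be destructured into a fixed-size array. A small usage sketch:

```rust
use regex::Regex;

fn main() {
    // extract() (regex >= 1.9) returns the full match plus an array of groups.
    let re = Regex::new(r"(\d{4})-(\d{2})-(\d{2})").unwrap();
    if let Some((_, [year, month, day])) =
        re.captures("released on 2023-07-31").map(|caps| caps.extract())
    {
        println!("{year}/{month}/{day}");
    }
}
```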
Damien Elmes 13572a86b2 Move i18n helpers into ftl/, with a single main.rs
Clap gives us a nice help message and better arg parsing
2023-07-04 10:47:15 +10:00
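A minimal sketch of what a single clap-driven main.rs looks like; the subcommand names below are hypothetical, not the actual ftl helper commands:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(about = "i18n helper tools")]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Write a JSON file describing the strings the desktop app uses
    WriteJson { target: String },
    /// Remove translations that are no longer referenced
    GarbageCollect { ftl_root: String },
}

fn main() {
    // Clap generates --help output and argument validation from the types above.
    match Cli::parse().command {
        Command::WriteJson { target } => println!("would write usage info to {target}"),
        Command::GarbageCollect { ftl_root } => println!("would garbage-collect under {ftl_root}"),
    }
}
```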
Damien Elmes cb8007ce30 Move .py i18n method generation to Rust 2023-07-03 15:58:46 +10:00
Damien Elmes 8c712cd118 Support creating a standalone sync server 2023-07-02 18:22:44 +10:00
Damien Elmes c6f429ab17 Add option to use LTO in release builds
Shrinks rslib.so from about 40MB to about 26MB, at the cost of considerably
higher build time in a release build.
2023-07-02 18:22:44 +10:00
Damien Elmes d1a4aa352a Switch runner to release build 2023-07-02 10:31:07 +10:00
Damien Elmes 85c2769f80
Update Rust and Python deps (#2567)
* Update Python deps

* Update semver-compat Rust deps

* Update most crates to latest semver

* Update to latest axum-client-ip
2023-07-01 18:26:43 +10:00
Damien Elmes b2e7ab522b Remove some unused Rust dependencies 2023-06-24 19:30:29 +10:00
Damien Elmes e9415b43f4 Migrate archive tool into runner
Also fix minilints declaring a stamp it wasn't creating. The same
approach is necessary with archives now too, as it no longer executes
under a standard "runner run".

For now, rustls is hard-coded - we could pass the desired TLS impl in
from the ./ninja script, but the runner is not recompiled frequently
anyway.
2023-06-23 17:41:31 +10:00
Damien Elmes 40e1520acb Drop workspace-hack in favor of workspace deps
Workspace deps were introduced in Rust 1.64. They don't cover all the
cases that Hakari did unfortunately, but they are simpler to maintain,
and they avoid a couple of issues that Hakari had:

- It sometimes made updating dependencies harder due to the locked versions,
so you had to disable Hakari, do the updates, and then re-generate (
e.g. 943dddf28f)
- The current Hakari config was breaking AnkiDroid's build, as it was
stopping a cross-compile from functioning correctly.
2023-06-23 17:41:31 +10:00
Damien Elmes dd95f6f749 Check for stale licenses.json in minilints
+ Add an anki_process library with some helpers for command running.
2023-06-17 14:01:27 +10:00
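A rough sketch of the kind of command-running helper such a library provides; the function name and error handling here are assumptions, not the actual anki_process API:

```rust
// Sketch only: illustrates attaching context to process failures.
use std::process::Command;

fn run_checked(program: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(program)
        .args(args)
        .output()
        .map_err(|e| format!("failed to launch {program}: {e}"))?;
    if !output.status.success() {
        return Err(format!("{program} exited with {}", output.status));
    }
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

fn main() {
    match run_checked("cargo", &["--version"]) {
        Ok(out) => print!("{out}"),
        Err(err) => eprintln!("{err}"),
    }
}
```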
Damien Elmes d380f3034c Split io.rs into separate crate, and use it in proto build
Will be handy to use it in our other scripts in the future too - thanks
Rumo!

Results of benchmarking ./run before and after these crate splits:

- Touching a proto file leads to a slight increase: about +90ms
- Touching an rslib file leads to a bigger decrease, as there's less to
recompile: about -700ms

And ./ninja test is even better: about +200ms and -3800ms.
2023-06-12 15:47:51 +10:00
Damien Elmes bac05039a7 Move protobuf generation into a separate crate; write .py interface in Rust
A couple of motivations for this:

- genbackend.py was somewhat messy, and difficult to change with the
lack of types. The mobile clients used it as a base for their generation,
so improving it will make life easier for them too, once they're ported.
- It will make it easier to write a .ts generator in the future
- We currently implement a bunch of helper methods on protobuf types
which don't allow us to compile the protobuf types until we compile
the Anki crate. If we change this in the future, we will be able to
do more of the compilation up-front.

We no longer need to record the services in the proto file, as we can
extract the service order from the compiled protos. Support for map types
has also been added.
2023-06-12 09:52:00 +10:00
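Extracting the service order from compiled protos can be done by decoding a FileDescriptorSet with prost-types and walking its services; a hedged sketch (descriptor path and output format are assumptions):

```rust
// Hedged sketch: only the prost/prost-types calls are real APIs.
use prost::Message;
use prost_types::FileDescriptorSet;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A descriptor set produced during the proto build step (path is hypothetical).
    let bytes = std::fs::read("descriptors.bin")?;
    let set = FileDescriptorSet::decode(bytes.as_slice())?;
    for file in &set.file {
        for service in &file.service {
            println!("# service {}", service.name());
            for method in &service.method {
                // Emit a stub per RPC; the real generator writes snake_case
                // methods with docstrings into the .py interface file.
                println!("def {}(self, message): ...", method.name().to_lowercase());
            }
        }
    }
    Ok(())
}
```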
Damien Elmes 15dcb09036
Detect incorrect usage of triple slash in TypeScript (#2524)
* Migrate check_copyright to Rust

* Add a new lint to check accidental usages of /// in ts/svelte comments

* Fix a bunch of incorrect jdoc comments

* Move contributor check into minilints

Will allow users to detect the issue locally with './ninja check'
before pushing to CI.

* Make Cargo.toml consistent with other crates
2023-05-26 12:49:44 +10:00
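A minimal sketch of the triple-slash check: it flags any leading `///` in the given files, while the real lint presumably also accounts for legitimate TypeScript triple-slash directives:

```rust
use std::fs;
use std::path::Path;

fn check_file(path: &Path) -> Vec<usize> {
    let Ok(text) = fs::read_to_string(path) else {
        return Vec::new();
    };
    text.lines()
        .enumerate()
        .filter(|(_, line)| line.trim_start().starts_with("///"))
        .map(|(idx, _)| idx + 1) // report 1-based line numbers
        .collect()
}

fn main() {
    for arg in std::env::args().skip(1) {
        for line in check_file(Path::new(&arg)) {
            eprintln!("{arg}:{line}: use // rather than /// in .ts/.svelte comments");
        }
    }
}
```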
Damien Elmes cf45cbf429
Rework syncing code, and replace local sync server (#2329)
This PR replaces the existing Python-driven sync server with a new one in Rust.
The new server supports both collection and media syncing, and is compatible
with both the new protocol mentioned below, and older clients. A setting has
been added to the preferences screen to point Anki to a local server, and a
similar setting is likely to come to AnkiMobile soon.

Documentation is available here: <https://docs.ankiweb.net/sync-server.html>

In addition to the new server and refactoring, this PR also makes changes to the
sync protocol. The existing sync protocol places payloads and metadata inside a
multipart POST body, which causes a few headaches:

- Legacy clients build the request in a non-deterministic order, meaning the
entire request needs to be scanned to extract the metadata.
- Reqwest's multipart API directly writes the multipart body, without exposing
the resulting stream to us, making it harder to track the progress of the
transfer. We've been relying on a patched version of reqwest for timeouts,
which is a pain to keep up to date.

To address these issues, the metadata is now sent in an HTTP header, with the
data payload sent directly in the body. Instead of the slower gzip, we now
use zstd. The old timeout handling code has been replaced with a new implementation
that wraps the request and response body streams to track progress, allowing us
to drop the git dependencies for reqwest, hyper-timeout and tokio-io-timeout.

The main other change to the protocol is that one-way syncs no longer need to
downgrade the collection to schema 11 prior to sending.
2023-01-18 12:43:46 +10:00
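A hedged sketch of the new request shape: metadata in an HTTP header and a zstd-compressed payload as the body. The header name, endpoint, and metadata fields below are assumptions, not the actual protocol constants:

```rust
use reqwest::blocking::Client;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let payload = b"collection data...".to_vec();
    // zstd replaces the slower gzip used by the old protocol.
    let compressed = zstd::encode_all(payload.as_slice(), 0)?;
    let meta = r#"{"v":11,"k":"hostKey"}"#; // hypothetical metadata JSON
    let resp = Client::new()
        .post("http://localhost:8080/sync/meta") // hypothetical local server endpoint
        .header("anki-sync", meta)               // metadata travels in a header
        .body(compressed)                        // payload is sent directly in the body
        .send()?;
    println!("status: {}", resp.status());
    Ok(())
}
```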
Damien Elmes 37151213cd Move more of the graph processing into the backend
The existing architecture serializes all cards and revlog entries in
the search range into a protobuf message, which the web frontend needs
to decode and then process. The thinking at the time was that this would
make it easier for add-ons to add extra graphs, but in the ~2.5 years
since the new graphs were introduced, no add-ons appear to have taken
advantage of it.

The cards and revlog entries can grow quite large on large collections -
on a collection I tested with approximately 2.5M reviews, the serialized
data is about 110MB, which is a lot to have to deserialize in JavaScript.

This commit shifts the preliminary processing of the data to the Rust end,
which means the data can be processed faster, and less needs to
be sent to the frontend. On the test collection above, this reduces the
serialized data from about 110MB to about 160KB, resulting in a more
than 2x performance improvement, and reducing frontend memory usage from
about 400MB to about 40MB.

This also makes #2043 more feasible - while it is still about 50-100%
slower than protobufjs, with the much smaller message size, the difference
is only about 10ms.
2022-12-16 21:42:17 +10:00
Damien Elmes 5e0a761b87
Move away from Bazel (#2202)
(for upgrading users, please see the notes at the bottom)

Bazel brought a lot of nice things to the table, such as rebuilds based on
content changes instead of modification times, caching of build products,
detection of incorrect build rules via a sandbox, and so on. Rewriting the build
in Bazel was also an opportunity to improve on the Makefile-based build we had
prior, which was pretty poor: most dependencies were external or not pinned, and
the build graph was poorly defined and mostly serialized. It was not uncommon
for fresh checkouts to fail due to floating dependencies, or for things to break
when trying to switch to an older commit.

For day-to-day development, I think Bazel served us reasonably well - we could
generally switch between branches while being confident that builds would be
correct and reasonably fast, and not require full rebuilds (except on Windows,
where the lack of a sandbox and the TS rules would cause build breakages when TS
files were renamed/removed).

Bazel achieves that reliability by defining rules for each programming language
that define how source files should be turned into outputs. For the rules to
work with Bazel's sandboxing approach, they often have to reimplement or
partially bypass the standard tools that each programming language provides. The
Rust rules call Rust's compiler directly for example, instead of using Cargo,
and the Python rules extract each PyPi package into a separate folder that gets
added to sys.path.

These separate language rules allow proper declaration of inputs and outputs,
and offer some advantages such as caching of build products and fine-grained
dependency installation. But they also bring some downsides:

- The rules don't always support use-cases/platforms that the standard language
tools do, meaning they need to be patched to be used. I've had to contribute a
number of patches to the Rust, Python and JS rules to unblock various issues.
- The dependencies we use with each language sometimes make assumptions that do
not hold in Bazel, meaning they either need to be pinned or patched, or the
language rules need to be adjusted to accommodate them.

I was hopeful that after the initial setup work, things would be relatively
smooth-sailing. Unfortunately, that has not proved to be the case. Things
frequently broke when dependencies or the language rules were updated, and I
began to get frustrated at the amount of Anki development time I was instead
spending on build system upkeep. It's now about 2 years since switching to
Bazel, and I think it's time to cut losses, and switch to something else that's
a better fit.

The new build system is based on a small build tool called Ninja, and some
custom Rust code in build/. This means that to build Anki, Bazel is no longer
required, but Ninja and Rust need to be installed on your system. Python and
Node toolchains are automatically downloaded like in Bazel.

This new build system should result in faster builds in some cases:

- Because we're using cargo to build now, Rust builds are able to take advantage
of pipelining and incremental debug builds, which we didn't have with Bazel.
It's also easier to override the default linker on Linux/macOS, which can
further improve speeds.
- External Rust crates are now built with opt=1, which improves performance
of debug builds.
- Esbuild is now used to transpile TypeScript, instead of invoking the TypeScript
compiler. This results in faster builds, by deferring typechecking to test/check
time, and by allowing more work to happen in parallel.

As an example of the differences, when testing with the mold linker on Linux,
adding a new message to tags.proto (which triggers a recompile of the bulk of
the Rust and TypeScript code) results in a compile that goes from about 22s on
Bazel to about 7s in the new system. With the standard linker, it's about 9s.

Some other changes of note:

- Our Rust workspace now uses cargo-hakari to ensure all packages agree on
available features, preventing unnecessary rebuilds.
- pylib/anki is now a PEP420 implicit namespace, avoiding the need to merge
source files and generated files into a single folder for running. By telling
VSCode about the extra search path, code completion now works with generated
files without needing to symlink them into the source folder.
- qt/aqt can't use PEP420 as it's difficult to get rid of aqt/__init__.py.
Instead, the generated files are now placed in a separate _aqt package that's
added to the path.
- ts/lib is now exposed as @tslib, so the source code and generated code can be
provided under the same namespace without a merging step.
- MyPy and PyLint are now invoked once for the entire codebase.
- dprint will be used to format TypeScript/json files in the future instead of
the slower prettier (currently turned off to avoid causing conflicts). It can
automatically defer to prettier when formatting Svelte files.
- svelte-check is now used for typechecking our Svelte code, which revealed a
few typing issues that went undetected with the old system.
- The Jest unit tests now work on Windows as well.

If you're upgrading from Bazel, updated usage instructions are in docs/development.md and docs/build.md. A summary of the changes:

- please remove node_modules and .bazel
- install rustup (https://rustup.rs/)
- install rsync if not already installed (on Windows, use pacman - see docs/windows.md)
- install Ninja (unzip from https://github.com/ninja-build/ninja/releases/tag/v1.11.1 and
  place on your path, or from your distro/homebrew if it's 1.10+)
- update .vscode/settings.json from .vscode.dist
2022-11-27 15:24:20 +10:00
Damien Elmes 063623af3c Format .toml files with dprint 2022-11-09 20:03:49 +10:00
RumovZ c521753057
Refactor error handling (#2136)
* Add crate snafu

* Replace all inline structs in AnkiError

* Derive Snafu on AnkiError

* Use snafu for card type errors

* Use snafu's Whatever error for InvalidInput

* Use snafu for NotFoundError and improve message

* Use snafu for FileIoError to attach context

Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.

* Add more context-attaching io helpers

* Add message, context and backtrace to new snafus

* Utilize error context and backtrace on frontend

* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.

* Rename localized(_description) -> message

* Remove accidentally committed experimental trait

* invalid_input_context -> ok_or_invalid

* ensure_valid_input! -> require!

* Always return `Err` from `invalid_input!`

Instead of a Result to unwrap, the macro accepts a source error now.

* new_tempfile_in_parent -> new_tempfile_in_parent_of

* ok_or_not_found -> or_not_found

* ok_or_invalid -> or_invalid

* Add crate convert_case

* Use unqualified lowercase type name

* Remove uses of snafu::ensure

* Allow public construction of InvalidInputErrors (dae)

Needed to port the AnkiDroid changes.

* Make into_protobuf() public (dae)

Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
2022-10-21 18:02:12 +10:00
Damien Elmes fe9b0aecf7 Add missing sim lines in build files
Latest rules_rust now requires them. Can't use the 0.16 release
due to not-yet-merged https://github.com/google/cargo-raze/pull/524
2022-09-25 12:55:55 +10:00
Damien Elmes cfb309e6b3 Update Rust deps 2022-09-24 13:22:46 +10:00
RumovZ 42cbe42f06
Plaintext import/export (#1850)
* Add crate csv

* Add start of csv importing on backend

* Add Mnemosyne serializer

* Add csv and json importing on backend

* Add plaintext importing on frontend

* Add csv metadata extraction on backend

* Add csv importing with GUI

* Fix missing dfa file in build

Added compile_data_attr, then re-ran cargo/update.py.

* Don't use doubly buffered reader in csv

* Escape HTML entities if CSV is not HTML

Also use name 'is_html' consistently.

* Use decimal number as foreign ease (like '2.5')

* ForeignCard.ivl → ForeignCard.interval

* Only allow fixed set of CSV delimiters

* Map timestamp of ForeignCard to native due time

* Don't trim CSV records

* Document use of empty strings for defaults

* Avoid creating CardGenContexts for every note

This requires CardGenContext to be generic, so it works both with an
owned and borrowed notetype.

* Show all accepted file types in import file picker

* Add import_json_file()

* factor → ease_factor

* delimter_from_value → delimiter_from_value

* Map columns to fields, not the other way around

* Fall back to current config for csv metadata

* Add start of new import csv screen

* Temporary fix for compilation issue on Linux/Mac

* Disable jest bazel action for import-csv

Jest fails with an error code if no tests are available, but this would
not be noticeable on Windows as Jest is not run there.

* Fix field mapping issue

* Revert "Temporary fix for compilation issue on Linux/Mac"

This reverts commit 21f8a26140.

* Add HtmlSwitch and move Switch to components

* Fix spacing and make selectors consistent

* Fix shortcut tooltip

* Place import button at the top with path

* Fix meta column indices

* Remove NotetypeForString

* Fix queue and type of foreign cards

* Support different dupe resolution strategies

* Allow dupe resolution selection when importing CSV

* Test import of unnormalized text

Closes #1863.

* Fix logging of foreign notes

* Implement CSV exports

* Use db_scalar() in notes_table_len()

* Rework CSV metadata

- Notetypes and decks are either defined by a global id or by a column.
- If a notetype id is provided, its field map must also be specified.
- If a notetype column is provided, fields are now mapped by index
instead of name at import time. So the first non-meta column is used for
the first field of every note, regardless of notetype. This makes
importing easier and should improve compatibility with files without a
notetype column.
- Ensure first field can be mapped to a column.
- Meta columns must be defined as `#[meta name]:[column index]` instead
of in the `#columns` tag.
- Column labels contain the raw names defined by the file and must be
prettified by the frontend.

* Adjust frontend to new backend column mapping

* Add force flags for is_html and delimiter

* Detect if CSV is HTML by field content

* Update dupe resolution labels

* Simplify selectors

* Fix coalescence of oneofs in TS

* Disable meta columns from selection

Plus a lot of refactoring.

* Make import button stick to the bottom

* Write delimiter and html flag into csv

* Refetch field map after notetype change

* Fix log labels for csv import

* Log notes whose deck/notetype was missing

* Fix hiding of empty log queues

* Implement adding tags to all notes of a csv

* Fix dupe resolution not being set in log

* Implement adding tags to updated notes of a csv

* Check first note field is not empty

* Temporary fix for build on Linux/Mac

* Fix inverted html check (dae)

* Remove unused ftl string

* Delimiter → Separator

* Remove commented-out line

* Don't accept .json files

* Tweak tag ftl strings

* Remove redundant blur call

* Strip sound and add spaces in csv export

* Export HTML by default

* Fix unset deck in Mnemosyne import

Also accept both numbers and strings for notetypes and decks in JSON.

* Make DupeResolution::Update the default

* Fix missing dot in extension

* Make column indices 1-based

* Remove StickContainer from TagEditor

Fixes line breaking, border and z index on ImportCsvPage.

* Assign different key combos to tag editors

* Log all updated duplicates

Add a log field for the true number of found notes.

* Show identical notes as skipped

* Split tag-editor into separate ts module (dae)

* Add progress for CSV export

* Add progress for text import

* Tidy-ups after tag-editor split (dae)

- import-csv no longer depends on editor
- remove some commented lines
2022-06-01 20:26:16 +10:00
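A hedged sketch of reading delimited text the way the importer does, using the csv crate added in this PR: configurable separator and no header row. Metadata parsing and field-to-notetype mapping are omitted:

```rust
use csv::ReaderBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let data = "front one\tback one\nfront two\tback two\n";
    let mut reader = ReaderBuilder::new()
        .delimiter(b'\t')   // one of the fixed set of accepted separators
        .has_headers(false) // rows map straight onto note fields
        .flexible(true)     // allow rows with differing field counts
        .from_reader(data.as_bytes());
    for record in reader.records() {
        let record = record?;
        let fields: Vec<&str> = record.iter().collect();
        println!("note fields: {fields:?}");
    }
    Ok(())
}
```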
Damien Elmes 95dbf30fb9 updates to the build process and binary bundles
All platforms:

- rename scripts/ to tools/: Bazelisk expects to find its wrapper script
(used by the Mac changes below) in tools/. Rather than have a separate
scripts/ and tools/, it's simpler to just move everything into tools/.
- wheel outputs and binary bundles now go into .bazel/out/dist. While
not technically Bazel build products, doing it this way ensures they get
cleaned up when 'bazel clean' is run, and it keeps them out of the source
folder.
- update to the latest Bazel

Windows changes:

- bazel.bat has been removed, and tools\setup-env.bat has been added.
Other scripts like .\run.bat will automatically call it to set up the
environment.
- because Bazel is now on the path, you can 'bazel test ...' from any
folder, instead of having to do \anki\bazel.
- the bat files can handle being called from any working directory,
so things like running "\anki\tools\python" from c:\ will work.
- build installer as part of bundling process

Mac changes:

- `arch -arch x86_64 bazel ...` will now automatically use a different
build root, so that it is cheap to switch back and forth between archs
on a new Mac.
- tools/run-qt* will now automatically use Rosetta
- disable jemalloc in Mac x86 build for now, as it won't build under
Rosetta (perhaps due to its build scripts using $host_cpu instead of
$target_cpu)
- create app bundle as part of bundling process

Linux changes:

- remove arm64 orjson workaround in Linux bundle, as without a
readily-available, relatively distro-agnostic PyQt/Qt build
we can use, the arm64 Linux bundle is of very limited usefulness.
- update Docker files for release build
- include fcitx5 in both the qt5 and qt6 bundles
- create tarballs as part of the bundling process
2022-02-10 19:23:07 +10:00
Damien Elmes 4d431fb7af move linkchecker into separate crate
The feature-based approach didn't work with cargo-raze, leading
to ./update.py in cargo/ breaking.
2021-12-20 17:27:43 +10:00
RumovZ 0efa3f944f
Garbage collect unused Fluent strings (#1482)
* Canonify import of i18n module

Should always be imported as `tr`, or `tr2` if there is a name collision
(Svelte).

* Add helper for garbage collecting ftl strings

Also add a serializer for ftl ASTs.

* Add helper for filter-mapping `DirEntry`s

* Fix `i18n_helpers/BUILD.bazel`

* run cargo-raze

* Refactor `garbage_collection.rs`

- Improve helper for file iterating
- Remove unused terms as well
- Fix issue with checking for nested messages by switching to a regex-
based approach (which runs before deleting)
- Some more refactorings and lint fixes

* Fix lints in `serialize.rs`

* Write json pretty and sorted

* Update `serialize.rs` and fix header

* Fix doc and remove `dbg!`

* Add binaries for ftl garbage collection

Also relax type constraints and strip debug tests.

* add rust_binary targets for i18n helpers (dae)

* add scripts to update desktop usage/garbage collect (dae)

Since we've already diverged from 2.1.49, we won't gain anything
from generating a stable json just yet. But once 2.1.50 is released,
we should run 'ftl/update-desktop-usage.sh stable'.

* add keys from AnkiMobile (dae)

* Mention caveats in `remove-unused.sh`
2021-11-12 18:19:01 +10:00
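A rough sketch of the regex-based garbage collection described above: collect the message keys defined in an .ftl file and report any that no client references. The sample content and used-key set are stand-ins for the real files and JSON:

```rust
use regex::Regex;
use std::collections::HashSet;

fn main() {
    let ftl = "importing-file = Importing file...\nimporting-done = Done\n";
    // Message definitions look like `some-key = value` at the start of a line.
    let def_re = Regex::new(r"(?m)^([a-z][a-z0-9-]*)\s*=").unwrap();
    let defined: HashSet<&str> = def_re
        .captures_iter(ftl)
        .map(|c| c.get(1).unwrap().as_str())
        .collect();

    // Keys recorded as used by the desktop/mobile clients (normally read from JSON).
    let used: HashSet<&str> = ["importing-file"].into_iter().collect();

    for key in defined.difference(&used) {
        println!("unused message: {key}");
    }
}
```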
Damien Elmes 5a8e064a7d updated package scripts 2021-10-28 18:46:45 +10:00
Damien Elmes 7797f88553 add aarch64-apple to Rust targets 2021-10-16 18:07:39 +10:00
Damien Elmes 944b064e54 update Rust deps 2021-10-02 20:42:03 +10:00
Damien Elmes 379694915e add linkcheck to Bazel 2021-07-23 20:22:32 +10:00
Damien Elmes c944dd048e strip invalid Unicode chars in media check 2021-07-17 18:30:19 +10:00
Damien Elmes 0fb745cdb9 add a valid, empty file so the check action works in Rust Analyzer 2021-05-05 15:53:27 +10:00
Damien Elmes 89d249b3b6 update to the latest rules_rust + security framework update 2021-03-27 19:28:19 +10:00
Damien Elmes 9aece2a7b8 rework translation handling
Instead of generating a fluent.proto file with a giant enum, create
a .json file representing the translations that downstream consumers
can use for code generation.

This enables the generation of a separate method for each translation,
with a docstring that shows the actual text, and any required arguments
listed in the function signature.

The codebase is still using the old enum for now; updating it will need
to come in future commits, and the old enum will need to be kept
around, as add-ons are referencing it.

Other changes:

- move translation code into a separate crate
- store the translations on a per-file/module basis, which will allow
us to avoid sending 1000+ strings on each JS page load in the future
- drop the undocumented support for external .ftl files, that we weren't
using
- duplicate strings in translation files are now checked for at build
time
- fix i18n test failing when run outside Bazel
- drop slog dependency in i18n module
2021-03-26 09:41:32 +10:00
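A hedged sketch of consuming such a translations .json to emit one documented method per string; the JSON shape and generated output below are assumptions:

```rust
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical per-module translation description.
    let json = r#"{"importing": {"importing-file": {
        "text": "Importing {$file}...", "variables": ["file"]}}}"#;
    let modules: Value = serde_json::from_str(json)?;
    for (module, strings) in modules.as_object().unwrap() {
        for (key, info) in strings.as_object().unwrap() {
            let args: String = info["variables"]
                .as_array()
                .unwrap()
                .iter()
                .map(|v| format!(", {}", v.as_str().unwrap()))
                .collect();
            // One generated method per translation, docstring showing the text.
            println!("def {}(self{}):", key.replace('-', "_"), args);
            println!("    \"\"\"{} ({})\"\"\"", info["text"].as_str().unwrap(), module);
            println!("    ...");
        }
    }
    Ok(())
}
```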
Damien Elmes f434cff36f remember last input for 'set due'; add string config; nest config types 2021-02-08 14:10:05 +10:00
Damien Elmes 3f3f4b5c36 add aarch64 Linux to cargo; update deps 2020-12-30 13:33:16 +10:00
Damien Elmes 91a9307c39 update to cargo-raze 0.8.0 release 2020-12-18 11:56:56 +10:00
Damien Elmes d49416649c update to latest cargo-raze 2020-12-15 20:28:10 +10:00
Damien Elmes 85b7e1c623 drop unused i686 references
https://forums.ankiweb.net/t/changing-ankis-build-system-to-bazel/4737/9
2020-12-09 15:45:01 +10:00
Damien Elmes 78125aa974 use compile_data with cargo raze
requires https://github.com/ankitects/cargo-raze/releases/tag/anki-2020-12-01
2020-12-01 16:48:45 +10:00
Damien Elmes 9a1ddec553 make pyo/ring data arguments more specific 2020-12-01 16:48:45 +10:00
Damien Elmes c9b3ed1eae switch to workspace for Rust code 2020-11-24 18:41:03 +10:00