This commit has two major goals:
- fix the caching of the QPY files for both the `main` and `stable/*`
branches
- increase the number of compatibility tests between the different
symengine versions that might be involved in the generation and
loading of the QPY files.
Achieving both of these goals also makes it sensible to move the job to
GitHub Actions at the same time, since the expanded job would otherwise
put more pressure on the Azure machine concurrency we use.
Caching
-------
The previous QPY tests attempted to cache the generated files for each
historical version of Qiskit, but this was unreliable. The cache never
seemed to hit on backport branches, which was a huge slowdown in the
critical path to getting releases out. The cache restore keys were also
a bit lax: changing what we wanted to test could logically invalidate
files in the cache without the restore keys changing, so stale files
could still have been restored.
The cache files would fail to restore as a side-effect of ed79d42
(gh-11526); QPY was moved to be on the tail end of the lint run, rather
than in a test run. This meant that it was no longer run as part of the
push event when updating `main` or one of the `stable/*` branches. In
Azure (and GitHub Actions), the "cache" action accesses a _scoped_
cache, not a universal one for the repository [^1][^2]. Approximately,
base branches each have their own scope, and PR events open a new scope
that is a child of the target branch, the default branch, and the source
branch, if appropriate. A cache task can read from any of its parent
scopes, but write events go to the most local scope. This means that we
haven't been writing to long-standing caches for some time now. PRs
would typically miss the cache on their first attempt, hit their own
cache on subsequent updates, then miss again once entering the merge queue.
The fix for this is to run the QPY job on branch-update events as well.
The post-job cache action will then write out to a reachable cache for
all following events.
Cross-symengine tests
---------------------
Previously, we were running just a single test with differing versions of
symengine between the loading and generation of the QPY files. This
refactors the QPY `run_tests.sh` script to run a full pairwise matrix of
compatibility tests, to increase the coverage.
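As a rough sketch of the pairwise idea (in Python rather than the shell of
`run_tests.sh`, and with a hypothetical version list):

```python
import itertools

# Hypothetical symengine versions that might appear on either side of a QPY
# round-trip; the real list lives in the test scripts.
SYMENGINE_VERSIONS = ["0.9.2", "0.11.0", "0.13.0"]

# Full pairwise matrix: every (generate, load) combination gets its own
# compatibility run, including the matching-version pairs.
for generate_with, load_with in itertools.product(SYMENGINE_VERSIONS, repeat=2):
    print(f"generate with symengine=={generate_with}, load with symengine=={load_with}")
```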
[^1]: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache
[^2]: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops#cache-isolation-and-security
* Skip uninstallable tags in QPY backwards compatibility tests
When a tag has been made, but the package has not yet landed on PyPI,
the QPY job fails in the environment-building step. This is not
actually a failure of the QPY backwards-compatibility guarantees, and
it isn't the job of the QPY tests to detect a bad tag anyway.
* Query PyPI to find versions to test
The logic we actually want for versions to test is "find the versions
that _should_ be installable", rather than "try it and see"; the latter
is susceptible to silently suppressing errors. The logic now queries
PyPI to find which versions of Qiskit have binary distributions
available for this platform, and filters based on that.
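Roughly the shape of that query, as a hedged Python sketch (the real script's
name and filtering details differ; the `packaging` library is assumed to be
available):

```python
import json
import urllib.request

from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename
from packaging.version import Version

# Ask the PyPI JSON API which versions ship a wheel installable on the
# current interpreter and platform.
with urllib.request.urlopen("https://pypi.org/pypi/qiskit/json") as response:
    releases = json.load(response)["releases"]

supported_tags = set(sys_tags())
installable = []
for version, files in releases.items():
    for file in files:
        if file["packagetype"] != "bdist_wheel":
            continue
        _, _, _, wheel_tags = parse_wheel_filename(file["filename"])
        if wheel_tags & supported_tags:
            installable.append(version)
            break

print(sorted(installable, key=Version))
```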
* Include release candidates in testing
We needed this back when `setuptools` first introduced the new editable
installations, but at this point it should work more correctly without
it; our non-CI configurations haven't included it for some time.
The macOS 11 runners are deprecated pending removal. While macOS 14 is
available, it's still marked as in preview on Azure, so macOS 13 is the
current "latest" stable version.
* Add script to report numpy env
* Rename to more descriptive name
* Report Numpy runtime status in CI
* Add threadpoolctl for BLAS information
---------
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Reconfigure CI pipelines for PR and merge queue
This commit refactors the pipeline organisation of our CI to reduce the
total number of jobs run on any given PR merge, and to reduce the
critical path length of merging. In particular:
- the lint, docs and QPY tests are combined into one job for a single CI
worker. Together, these three jobs add up to less than a single test
run, and moving QPY from the first-stage PR test run into this combined
job rebalances the (now) two jobs in that stage to be more equal in runtime.
- a new pipeline is added specifically for the merge queue. Previously
this reused the same two-stage PR pipeline. The two-stage system
unnecessarily lengthened the critical path, as when a PR enters the
merge queue, it has already passed PR CI, and so is highly likely to
pass all jobs. In addition to flattening to a single stage, one each
of the macOS and Windows jobs is removed to lower the number of VMs
needed and reduce the chances of a timeout (these OSes are more likely
to get a dodgy VM and time out than Linux). The only way a new test
failure should appear (other than a flaky test) is by a logical merge
conflict, which would be quite unlikely to only affect a particular
Python version.
To make it easier to run the lint and docs jobs together, and ensure
that both run even if there's a failure in one, the full lint
configuration is merged back to `tox.ini`, and that's used to do the
linting. This makes it more consistent for developers, as well.
* Allow cargo as an external in 'tox'
* Add display name to lint job
* Use editable install for tox lint job
The lint job will fail unless the Rust components are installed into the
source directory, since that's where `pylint` will look for them to
resolve the imports.
* Fix typo in merge-queue stage condition
* Replace `qiskit` metapackage with `qiskit-terra`
This commit completely removes the concept of the `qiskit-terra` package
from Qiskit main. The hitherto metapackage `qiskit` is promoted to be
the concrete code package.
This is a completely breaking change for packaging purposes for users,
as there is no clean upgrade path from `qiskit-terra` to `qiskit`; PyPI
and pip do not give us the tools to mark that the latter obsoletes and
supersedes the former. We intend to follow this PR with a technical
blog post somewhere explaining the changes, how users should adapt
("to install Qiskit 1.0, you must start a new venv"), and why we needed
to do this.
The "why", in part, is:
- the metapackage legacy causes awkward upgrade paths on every release;
some packages still depend on `qiskit-terra`, some on `qiskit`, and
pip will happily let those two get out of sync when upgrading a
transitive dependency, with only a warning that users are used to
ignoring.
- with the 1.0 release, we (believe we) will have some leeway from users
to make such a breaking change.
- having the metapackage split makes it difficult for downstream
projects and developers to test against `main` - they always must
install both `qiskit-terra` and `qiskit`, and the latter has no
meaning in editable installs, so needs re-installing after each
version bump. Problems surrounding this have already broken CI for
Qiskit, for at least one internal IBM repo, and at least my dev
install of Qiskit. This could be improved a bit with more education,
but it's still always going to increase the barrier to entry and make
it much harder to do the right thing.
We will not publish any 1.0 or above version of `qiskit-terra`. All
dependents on Qiskit should switch their requirements to `qiskit`.
* Add manual uninstall for Neko
* Fix Windows paths
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Refer to stdlib documentation for odd shells
---------
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Ensure metapackage is installed during CI and tox
This ensures that the local version of the metapackage is also built and
installed on all CI runs (and in `tox`, where it's overridden) so that
dependencies on the metapackage in our optionals (e.g. Aer) will not
cause the older released version of Terra to be installed.
`tox` does not like having two local packages under test simultaneously
through its default configuration, so this fakes things out by putting
the two packages in the run dependencies and setting `skip_install`.
* Fix sdist build
* Use regular installs for metapackage
* Simplify build requirements install
* only publish images if image tests run
* fix indent in bash command
* fix condition syntax
* set value in the image test
* test runner stopping correctly on fail
* revert purposeful failure
* tidy up trailing whitespace
* Pivot to gha for wheel builds and pypi trusted publishers
This commit pivots the wheel publishing jobs to PyPI trusted publishers.
This mechanism authorizes the repository action directly so that user
creds or tokens are not needed to push the wheels anymore. This
mechanism will be more robust as the github repository is linked
directly to the pypi project and not dependent on a single user account.
The one tradeoff required for this is that we must use github actions to
leverage this PyPI feature. So this commit migrates the wheel publishing
jobs from azure pipelines to github actions (which we were already using
for non-x86 linux wheels).
* Fix macOS image used in macOS jobs
* Fix nits in metapackage job
* Combine tier 1 platform builds into single upload job
* Fail gracefully on bad `SabreDAG` construction
This is a private, internal Rust type, but it doesn't cost us anything
(meaningful) to bounds-check the accessors and ensure we fail gracefully
rather than panicking if we ever make a mistake in Python space. This
more faithfully fulfills the return value of `SabreDAG::new`, which
previously was an effectively infallible `Result`.
* Run `cargo fmt`
* Move metapackage shim for combined releases
Now that Qiskit 0.44.0 has been released, the Qiskit project is what
was formerly qiskit-terra. However, because Python packaging lacks a
clean mechanism to let one package supersede another, we're not able
to stop shipping a qiskit-terra package that owns the qiskit python
namespace without introducing a lot of potential user friction. So,
moving forward, the qiskit project will release two packages: an inner
qiskit-terra, which still contains all the library code, and a
public-facing qiskit package, which only installs that inner package.
To enable this workflow, this commit migrates the metapackage setup.py
into the terra repository and sets up build automation to publish a
qiskit package in addition to the inner terra package on each release tag.
* some follow up on https://github.com/Qiskit/qiskit-terra/pull/10530 (#19)
* some follow up on https://github.com/Qiskit/qiskit-terra/pull/10530
* extend some badges
* This Qiskit contains the building blocks for creating and working with quantum circuits, programs, and algorithms. -> This framework allows for building, transforming, and visualizing quantum circuits.
* The explanation of the Bell state is moving to the IBM Quantum learning platform
* I think examples/ should eventually be replaced by proper tutorials
* Add StackOverflow as a forum
* Remove link to https://github.com/Qiskit/qiskit-tutorials
* Update README.md
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Update README.md
* Update README.md
* broken lines in badges
* doi
---------
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Expand lint checks to entire qiskit_pkg dir
* Unify extras in terra setup.py
* Update CI lint job
* Remove unused json imports
* Cleanup manifest file
* Finish comment
---------
Co-authored-by: Luciano Bello <bel@zurich.ibm.com>
* Add GitHub Actions documentation-deployment pipeline
This brings a documentation-deployment pipeline into the Qiskit/Terra
repository, allowing it to fully deploy its documentation to
`qiskit.org`, a task previously only the metapackage could perform.
This does not fully unify the documentation with files from the
metapackage, it just adds a pipeline to do the final deployment.
This includes a revitalised translatable-strings pipeline, which was
previously broken on the metapackage for the last month or two. The old
pipeline had also accumulated a fair amount of legacy weight that was no
longer relevant.
* Add missing secret insertions
* Improve logic for deployments
This changes the logic for the deployments so that pushes to 'stable/*'
no longer trigger any deployment to qiskit.org. Instead, tag events
trigger a deployment to the relevant stable branch, and a tag event of
the _latest_ tag triggers a deployment to the documentation root.
The translatables logic is modified to push only the latest full-release
tag.
* Add infrastructure for building tutorials
This first commit is a rebase of Eric's initial PR as of db1ce6254 onto
`main`, fixing up some changes caused by the CI infrastructure changing
a bit since the PR was first opened.
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Harden tutorials Azure job
This moves much of the fetch- and process-related code into separate
scripts that assert far more about the directory structure, and fail if
they do not match the assumptions. We don't want to accidentally get
out-of-sync while we're changing things and end up with a tutorials job
that isn't really doing its job without us noticing.
The tutorials-fetching script can now also be re-used in a separate
GitHub Actions workflow that will handle the full tutorials-included
documentation build and deployment in the future.
The notebook-conversion step is moved into Python space, using
`nbconvert` as a library in order to parallelise the build process for
the tutorials, and to allow CI and developers calling `tox` directly to
specify the output directories for the built tutorials.
* Retarget tutorial-conversion utility as executor
This reorganises the tutorial "conversion" utility to make it clearer
that what it's actually doing is just executing the tutorials. The
script itself is changed to default to editing the files in place, while
the `tox` job is updated to write the files into a special directory,
making it easier to clean up a dirty build directory and making it so
subsequent local executions will not pick up the converted files.
* Allow configuration of tutorials execution
There was a worry that not being able to configure these would make it
more unpleasant to use `tox` for the jobs locally.
---------
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Add no-optionals and full-optionals test runs
This restructures the CI slightly to perform a complete "no-optionals"
run, and a complete "all optionals" run (for the optionals that are
readily accessible as Python packages, without complex additional
setup). Previously, we only did a partial test with some of the oldest
optional components, which could have allowed for behaviour to
accidentally require optional components without us noticing.
This splits the `requirements-dev.txt` file into two; lines that remain
in the file are what is actually _required_ to run the test suite,
run the style checks, and do the documentation build. The rest (and
everything that was missing) is added to a new
`requirements-optional.txt` file, which can be used to pull in (almost)
all of the packages that Terra can use to provide additional /
accelerated functionality.
Several tests needed to gain additional skips to account for this
change. There is a good chance that some tests are missing skips for
some libraries that are not the first point of failure, but it's hard to
test explicitly for these in one go.
* Fix typo in coverage workflow
* Try relaxing ipython constraints
* Squash newly exposed lint failures
* Fix typo in tutorials pipeline
* Update the 'also update' comments
* Remove unneeded qiskit-toqm dependency
* Section requirements-optional.txt
* Test all optionals on min not max Python version
Optionals are generally more likely to have been made available on the
older Pythons, and some may take excessively long to provide wheels for
the latest versions of Python.
* Add missing test skip
* Fix optional call
* Use correct boolean syntax
* Fix tests relying on Jupyter
* Install ipykernel in tutorials
* Remove HAS_PDFLATEX skip from quantum_info tests
For simple LaTeX tests, IPython/Jupyter can handle the compilation
internally using MathJax, and doesn't actually need a `pdflatex`
installation. We only need that when we're depending on LaTeX libraries
that are beyond what MathJax can handle natively.
* Include additional tutorials dependencies
* Install all of Terra's optionals in tutorial run
* Do not install all optionals in docs build
* Use class-level skips where appropriate
* Do not install ibmq-provider in tutorials run
* Include matplotlib in docs requirements
* Remove unnecessary whitespace
* Split long pip line
* Only install graphviz when installOptionals
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* Install visualization extras in docs build
* Don't `--upgrade` when installing optionals
This is to prevent any optionals that have a dependency on Terra from
potentially upgrading it to some PyPI version during their installation.
This shouldn't happen with the current development model of having only
one supported stable branch and the main branch, but the `-U` is
unnecessary, and not having it is safer for the future.
* Update secondary installation of Terra in docs build
* Install all optionals during the docs build
I yoyo-ed on this, not wanting it to be too onerous to build the
documentation, but in the end, we need to have the optional features
installed in order to build most of the documentation of those features,
and some unrelated parts of the documentation may use these features to
enhance their own output.
* Fix test setup job
* Remove duplication of matplotlib in requirements files
* Update image test installation command
* Restore editable build
* Move `pip check` to `pip` section
* Remove redundant "post-install" description
* Expand comment on first-stage choices
* pytohn lol
---------
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* move and enable visual tests
* change directory for linux visual
* diff visual tests and assert immediately
* use autoformatting
* fix lint errors
* reformat after lint fix
* remove unused var lint err
* fix test image reference paths
* archive and publish image tests on failure
* add results to archive on failure
* fix format issue
* fix graph result naming
* add new source of truth for image tests
* update state city graph reference
* feedback cleanup
* remove unused import
* update state city ref
* visual tests run as separate job
* add visual test job to azure pipelines
* add virtual env to visual tests
* add bad ref for testing
* fix barrier reference
* add bad graph ref for testing
* fix graph references
* rounding error
* update state city ref
* unify failure dir, force failure
* refactor basedon review
* formatting, update test discovery
* import order per linting
* add docstring to vis utilities
* add dev requirements to image tests
* update references
* Remove unnecessary Azure default parameter
* Remove out-of-date references to Binder
* Remove executable bit from Python script
---------
Co-authored-by: Luciano Bello <bel@zurich.ibm.com>
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Use stable Python C API for building Rust extension
This commit tweaks the rust extension code to start using the PyO3 abi3
flag to build binaries that are compatible with all python versions, not
just a single release. Previously, we were building against the version
specific C API and that resulted in needing a binary file for each
supported python version on each supported platform/architecture. By
using the abi3 feature flag and marking the wheels as being built with
the limited api we can reduce our packaging overhead to just having one
wheel file per supported platform/architecture.
The only real code change needed here was to update the memory
marginalization function. PyO3's abi3 feature is incompatible with
returning a big int object from rust (the C API they use for that
conversion isn't part of the stable C API). So this commit updates the
function to create a python int manually using the PyO3 api where that
conversion was being done before.
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Set minimum version on abi3 flag to Python 3.8
* Fix lint
* Use py_limited_api="auto" on RustExtension
According to the docs for the setuptools-rust RustExtension class:
https://setuptools-rust.readthedocs.io/en/latest/reference.html#setuptools_rust.RustExtension
The best setting to use for the py_limited_api argument is `"auto"` as
this will use the setting in the PyO3 module to determine the correct
value to set. This commit updates the setup.py to follow the
recommendation in the docs.
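As a sketch, the relevant fragment of a `setup.py` following that
recommendation (the module name and manifest path here are illustrative):

```python
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
    rust_extensions=[
        RustExtension(
            "qiskit._accelerate",
            "Cargo.toml",
            binding=Binding.PyO3,
            # "auto" defers to PyO3's build configuration: if the abi3
            # feature is enabled there, the wheel is tagged as limited-API.
            py_limited_api="auto",
        )
    ],
)
```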
* Update handling of phase input to expval rust calls
The pauli_expval module in Rust that Statevector and DensityMatrix
leverage when computing expectation values defines the input type of the
phase argument as
Complex64. Previously, the quantum info code in the Statevector and
DensityMatrix classes were passing in a 1 element ndarray for this
parameter. When using the version specific Python C API in pyo3 it
would convert the single element array to a scalar value. However when
using abi3 this handling was not done (or was not done correctly) and
this caused the tests to fail. This commit updates the quantum info
module to pass the phase as a complex value instead of a 1 element numpy
array to bypass this behavior change in PyO3 when using abi3.
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
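A minimal sketch of the change in the Python-side calling convention (the
real call sites live in `qiskit.quantum_info`; the names below are
illustrative):

```python
import numpy as np

phase = np.exp(1j * 0.25)

# Before: a 1-element ndarray.  The version-specific C API happened to coerce
# this to a scalar for the Rust Complex64 argument; the abi3 build does not.
phase_arg_old = np.array([phase])

# After: pass a plain complex scalar, so no implicit conversion is needed.
phase_arg_new = complex(phase)
```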
* Set py_limited_api explicitly to True
* DNM: Test cibuildwheel works with abi3
* Add abi3audit to cibuildwheel repair step
* Force setuptools to use abi3 tag
* Add wheel to sdist build
* Workaround abi3audit not moving wheels and windows not having a default repair command
* Add source of setup.py hack
* Add comment about pending pyo3 abi3 bigint support
* Revert "DNM: Test cibuildwheel works with abi3"
This reverts commit 8ca24cf1e4.
* Add release note
* Simplify setting abi3 tag in built wheels
* Update releasenotes/notes/use-abi3-4a935e0557d3833b.yaml
Co-authored-by: Jake Lishman <jake@binhbar.com>
* Update release note
* Update releasenotes/notes/use-abi3-4a935e0557d3833b.yaml
---------
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
Co-authored-by: Jake Lishman <jake@binhbar.com>
* Add ruff to local tests and CI
This adds linting using ruff to the relevant configuration files. Only
a few rules are enabled and none of them trigger an error in the current
state of the repo.
* Add comments on running black separately from tox
* Simplify and remove potentially bug causing instructions in CONTRIBUTING
* Update pyproject.toml
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
---------
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
On some OSes and configurations (usually Windows), there can be problems
when a binary attempts to replace one that is in use, especially itself. For
this reason, it is more reliable to use `python -m pip install ...` than
`pip install`. We have seen some CI failures on Windows due to `pip`
failing to update itself because of the direct-executable `pip install`
form.
Up-to-date Python packages should not require this step; however, there
are several packages, especially those that are optional for Terra
functionality, that do not yet contain `pyproject.toml` files when
building them from source. In these cases, `pip` will begin erroring
out from version 23.1 if `wheel` is not installed.
This commit proactively ensures that the minimum build dependencies for
legacy Python packages are prepared and up-to-date before attempting
installations; this includes ensuring that these are updated _inside_
any created venvs as well as outside them.
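For illustration, a hedged Python sketch of the idea (the actual CI steps are
shell; the venv name and package list below are assumptions):

```python
import subprocess
import sys
import venv


def ensure_build_deps(python=sys.executable):
    # Invoke pip via the interpreter rather than as a direct executable so
    # that pip can reliably replace itself, even on Windows.
    subprocess.check_call(
        [python, "-m", "pip", "install", "-U", "pip", "setuptools", "wheel"]
    )


# Outside any venv (the interpreter running this script)...
ensure_build_deps()

# ...and inside a freshly created venv as well ("Scripts/python.exe" on Windows).
venv.create("build-venv", with_pip=True)
ensure_build_deps(python="build-venv/bin/python")
```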
* Remove unnecessary `pip install` in CI
* Consolidate `pip install` calls in CI
* Add back `-U`
* Go back to editable install so Rust is in debug mode
* Add RUST_DEBUG env var so we can avoid editable install
* Remove SETUPTOOLS_ENABLE_FEATURES and also Aer from lint job
* See if editable install fixes Pylint
* Give up on not using editable installs in CI
I don't know why they're causing issues with loading the Rust extension. But it's not a priority
to figure that out. This PR is still some forward progress.
* Fix `tox -e docs` for macOS
Co-authored-by: Jake Lishman <jake@binhbar.com>
* Fix two issues. Duh, thanks Jake
* Update Azure job to set RUST_DEBUG
Not strictly necessary since Tox already sets it. But makes things consistent and avoids accidentally
removing this in the future
---------
Co-authored-by: Jake Lishman <jake@binhbar.com>
This commit fixes an issue in the recently merged #9584 which added a
rustup toolchain file to the qiskit repo to default to our MSRV when
using rustup. In that commit the intent was for wheel jobs to use the
latest stable version of Rust to publish precompiled wheels.
However, the mechanism by which that was done in the PR won't work
for linux environments. The linux cibuildwheel jobs leverage a docker
container to build the binaries in, which has a known manylinux-compatible
environment, so that the wheels we ship are compatible with the packaging
specification. #9584 was setting the rust version override in the host
environment via `rustup override`, which wouldn't propagate to inside the
docker container. To address this issue this commit updates the
cibuildwheel config to set the RUST_TOOLCHAIN environment variable for
the cibuildwheel process. This will set the environment variable
throughout the wheel build, ensuring that rustup will use stable
inside the container when we're building our release artifacts.
* Add `rust-toolchain.toml` for a consistent Rust development version
* Add components
* Simplify CI to not set Rust version
It will now use the rust-toolchain.toml file. This is possible because Rustup is already on the PATH, evidenced by us previously running `rustup default`
* Build Azure wheels with stable toolchain
* Use Stable for the sdist test build
---------
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
The most recent tox release, 4.4.0, changed the behavior of how tox
interacts with constraints files. It is raising an error because it
doesn't like the format of our constraints file in qiskit. This commit
works around this by pinning our tox version. Given that we're preparing
to release 0.23.0, pinning is the fastest solution in the short term; we
can adjust things after the release is out.
* Run QPY backwards compatibility tests in parallel
The QPY backwards compatibility testing is now taking a significant
amount of time. The tests are designed to ensure that the backwards
compatibility guarantees the QPY format makes are upheld and validates
that a QPY payload generated from any earlier release of Qiskit (since
terra 0.18.0 when QPY was first introduced) can be loaded as an
equivalent circuit with the current development branch of Qiskit. When
we first started running these tests there were only a handful of
versions; we're now validating 19 earlier versions of Qiskit, and that
number will only grow over time. This portion of the test jobs in CI is
taking an increasing amount of time. To help offset this in the short
term, this commit migrates the qpy tests to run in parallel by
leveraging GNU Parallel. This includes building the virtualenvs and
running the qpy generators (which typically are much faster than the
installation time). Locally this results in a huge speedup, although
this will be more modest in CI because of limited resources available
to the CI worker. Longer term when the QPY portion of the job gets too
slow we'll likely have to migrate the qpy tests to a dedicated CI job
that runs in parallel to the unit tests. But hopefully, running the
tests in parallel will provide a sufficient speedup to offset another
job for some time.
* Add caching of qpy files to CI
This commit optimizes the QPY compatibility tests further in CI. The
majority of the time spent during the compatibility tests is building
the virtual environments with historical versions of qiskit installed to
generate the QPY files. To avoid this overhead (and also preserve qpy
files beyond the point at which an old version is no longer installable)
this commit starts caching the qpy files between CI jobs.
* Only print venv building log message when building a venv
* Add python file to cache key
* Localize qpy steps
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Relax constraints on jupyter-core and ipywidgets
These were originally added in #9105 and #9272 respectively, but the
original problem package `seaborn` has had a release since then, which may
have fixed things.
* Fix suppressions for Jupyter warnings
This removes some now-unnecessary suppressions from image-related
packages, and adds the new suppression for the pyzmq problem, which is
Jupyter's domain to handle. The extra environment variable in the
images test run is to eagerly move to new default behaviour starting in
jupyter-core 6; there is no need for us to pin the package too low,
since this warning is just encouraging people to proactively test the
new behaviour, and it doesn't cause our suite any problems.
* Use correct YAML syntax
One day I'll remember this when writing environment variables in YAML
files, but it's not this day.
Right now the lint CI job runs a number of scripts and commands which do
not print any output unless there is an error, while other commands
always print to stdout even on success. This can be confusing if a
command towards the end prints error output after normal stdout from
earlier commands. For example, this was the output from a recent failed
CI run:
------------------------------------
Your code has been rated at 10.00/10
Checking cfg-if v1.0.0
Checking scopeguard v1.1.0
Checking once_cell v1.16.0
Checking either v1.8.0
Checking smallvec v1.10.0
Checking ppv-lite86 v0.2.16
Checking rawpointer v0.2.1
Checking unindent v0.1.10
Checking fixedbitset v0.4.2
Checking matrixmultiply v0.3.2
Checking libc v0.2.137
Checking crossbeam-utils v0.8.12
Checking libm v0.2.5
Checking memoffset v0.6.5
Checking lock_api v0.4.9
Checking crossbeam-channel v0.5.6
Checking getrandom v0.2.8
Checking num_cpus v1.13.1
Checking parking_lot_core v0.9.4
Checking crossbeam-epoch v0.9.11
Checking num-traits v0.2.15
Compiling pyo3-build-config v0.17.3
Checking ahash v0.7.6
Checking rand_core v0.6.4
Checking ahash v0.8.0
Checking parking_lot v0.12.1
Checking crossbeam-deque v0.8.2
Checking rand_chacha v0.3.1
Checking rand_pcg v0.3.1
Checking num-integer v0.1.45
Checking num-complex v0.4.2
Checking rayon-core v1.9.3
Checking rand v0.8.5
Checking num-bigint v0.4.3
Checking rayon v1.5.3
Compiling pyo3-ffi v0.17.3
Compiling pyo3 v0.17.3
Checking rand_distr v0.4.3
Checking hashbrown v0.12.3
Checking ndarray v0.15.6
Checking hashbrown v0.11.2
Checking indexmap v1.9.1
Checking petgraph v0.6.2
Checking numpy v0.17.2
Checking retworkx-core v0.11.0
Checking qiskit-terra v0.23.0 (/home/vsts/work/1/s)
Finished dev [unoptimized + debuginfo] target(s) in 13.10s
ERROR: scipy.stats is imported via sklearn.utils.fixes
The error there comes from the find optional import script, but the
output looks like it's potentially related to the cargo clippy output.
To help make the output more clear for debugging this commit adds a
bunch of echo statements to the ci script that explains what is being
run. This should hopefully make it easier for people not as well versed
in the CI lint job to understand what is going on during a failure.
Using bash trace mode was also considered, but typically it's better to
avoid that in CI jobs just in case a secret is used in a command (to
prevent leaking it). Also in this case it likely would have been less
useful because it just prints the command being run which may not be as
obvious as a text description.
* Add support for Python 3.11
Python 3.11.0 was released on 10-24-2022, this commit marks the start of
support for Python 3.11 in qiskit. It adds the supported Python version in
the package metadata and updates the CI configuration to run test jobs
on Python 3.11 and build Python 3.11 wheels on release.
* Fix inspect.Parameter usage for API change in 3.11
Per the Python 3.11.0 release notes inspect.Parameter now raises a
ValueError if the name argument is a Python keyword. This was causing a
test failure in one case where a parameter named `lambda` was used.
This commit adjusts the parameter name in the tests to be lam to avoid
this issue.
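A minimal reproduction of the behaviour change, separate from the actual
Qiskit test:

```python
import inspect

# On Python 3.11+ a keyword is rejected as a parameter name; on older
# versions this constructor call succeeded.
try:
    inspect.Parameter("lambda", inspect.Parameter.POSITIONAL_OR_KEYWORD)
except ValueError as exc:
    print(exc)

# The affected test now uses a non-keyword name instead.
inspect.Parameter("lam", inspect.Parameter.POSITIONAL_OR_KEYWORD)
```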
* Set a version cap on the jax dev requirement
Currently jax doesn't publish Python 3.11 wheels which is blocking test
runs with python 3.11. Since jax is an optional package only used for
the gradient package we can just skip it, as it isn't a full blocker for
using python 3.11. This commit sets an environment marker on the jax
dev requirements to only try to install it on Python < 3.11.
* Set python version cap on cplex in CI
* DNM: Test wheel builds work
* Skip tests on i686/win32 wheel builds with python 3.11
* Revert "DNM: Test wheel builds work"
This reverts commit 725c21b465.
* Run QPY backwards compat tests on trailing edge Python version
This commit moves the qpy backwards compatibility testing from the
leading edge python version, which in this PR branch is Python 3.11, to
the trailing edge Python version which is currently 3.7. Trying to add
support for a new Python version has demonstrated that we can't use the
leading edge version as historical versions of Qiskit used to generate
old QPY payloads are not going to be generally installable with newer
Python versions. So by using the trailing edge version instead we can
install all the older versions of Qiskit as there is Python
compatibility for those Qiskit versions. Eventually we will need to
raise the minimum Qiskit version we use in the QPY tests, when Python
3.9 goes EoL in October 2025 and Qiskit Terra 0.18.0 no longer has any
supported versions of Python it was released for. We probably could
get by another year until Python 3.10 goes EoL in 2026; it just means
we're building 0.18.x and 0.19.x from source for the testing, but when
Python 3.11 becomes our oldest supported version we'll likely have to
bump the minimum version.
This does go a bit counter to the intent of the test matrix to make the
first stage return fast and do a more thorough check in the second stage.
But, in this case the extra runtime is worth the longer term stability
in the tests.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Remove deprecated networkx dag converter functions
This commit removes the deprecated DAGCircuit and DAGDependency
networkx converter functions. These functions were deprecated
in #7927 as part of the 0.21.0 release. Since the minimum
deprecation window has elapsed we can now remove these functions and
the last optional usage of networkx in qiskit terra.
* Remove unused imports
* Move release note to the correct location
* Add networkx to tutorials job environment
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
In #7658 we disabled multiprocessing as part of unittest runs in CI
because the multiple layers of parallelism (subprocess from stestr,
multiprocessing from qiskit, and multithreading from rust in qiskit)
were triggering a latent bug in python's multiprocessing implementation
that was blocking CI. To counter the lost coverage from disabling
multiprocessing in that PR we added a script verify_parallel_map which
force-enabled multiprocessing and validated that transpile() run in
parallel executed correctly. However, in #8952 we introduced a model for
selectively enabling multiprocessing just for a single test method. This
should allow us to avoid stochastically triggering the deadlock in
python's multiprocessing by overloading parallelism, while still testing
in isolation that parallel_map() works.
This commit builds on the test class introduced in #8952 and adds
identical test cases to what was previously in verify_parallel_map.py to
move that coverage into the unit test suite. Then the
verify_parallel_map script is removed and all callers are updated to
just run unit tests instead of also executing that script.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
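Roughly the kind of check being folded into the suite, as a hedged sketch
(the real tests use the force-parallel test class from #8952, and the
`parallel_map` import path has moved between Qiskit versions):

```python
from qiskit import QuantumCircuit
from qiskit.compiler import transpile
from qiskit.tools.parallel import parallel_map


def _transpile_one(circuit):
    return transpile(circuit, basis_gates=["u", "cx"], seed_transpiler=42)


def check_parallel_transpile():
    circuits = []
    for width in range(2, 6):
        qc = QuantumCircuit(width)
        qc.h(0)
        for qubit in range(width - 1):
            qc.cx(qubit, qubit + 1)
        qc.measure_all()
        circuits.append(qc)
    # With multiprocessing force-enabled, parallel_map should give results
    # identical to a plain serial map.
    assert parallel_map(_transpile_one, circuits) == [_transpile_one(qc) for qc in circuits]


if __name__ == "__main__":
    check_parallel_transpile()
```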
* Flush pip cache in azure pipelines
This commit changes the root pip cache key in azure pipelines to flush
the cache and start it over again. Right now the pip cache has grown to
be > 4GB and we're seeing a high rate of the step bogging down while
downloading the data. In an attempt to increase the reliability of the
cache this commit changes the root cache key which will make all the
existing caches miss and it will have to be created again. This should
hopefully shrink the cache size back down to something more manageable
by azure pipelines and increase the reliability of fetching the cache at
the start of each job.
* Remove pip cache step from azure jobs
After running in CI with a cache miss the runtime of the install step
was sufficiently quick that the benefits of the cache are not worth it
when weighed against the potential instability we've seen with the cache
action. This commit opts to just remove the pip caching step completely
from all the azure jobs to improve the reliability and runtime. This
does expose us to a potential risk of a network issue between azure and
pypi's CDN but given the rate of issues we've had just pulling from the
cache storage server, it's unlikely that this will be significantly worse
than what we've seen from using a local cache.
* Drop pip cache from tutorials job too
* Revert "Pin setuptools in CI (#8526)"
With the release of setuptools 64.0.1 the issues previously blocking CI
and editable installs more generally should be fixed now. This commit
reverts the pins previously introduced to unblock CI and work around the
broken release.
This reverts commit 82e38d1de0.
* Add back SETUPTOOLS_ENABLE_FEATURES env var for legacy editable install
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Remove stale dev dependencies from requirements-dev.txt
In looking at the requirements-dev.txt list as part of #8498 there were
a few entries which were not needed and/or are not being used anymore.
This commit cleans those up and also removes an unused script for
reporting ci failures. The report script has been completely superseded by native
comment actions in azure pipelines and github actions so we no longer
need to manually call the github api to report CI failures on periodic
jobs.
* Use pip to install extension for lint
* Update CI config to avoid explicit setup.py usage where not needed
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
The macOS 10.15 image we're using in azure pipelines has been deprecated
for some time (as Apple has dropped support for the version) and they've
started periodic brownouts on the version to accelerate the transition
off of the image. We have stayed pinned at the older version because we
had compatibility issues with the newer releases in the past. But, since
this is no longer an option this commit bumps us one version from 10.15
to 11. This doesn't go straight to 12 as Apple proactively disables
support for older platforms in newer OS releases and in Qiskit we try to
maximize platform support, even for those using older Apple hardware, so the
minimal version update is made.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Pin setuptools in CI
The recently released setuptools 64.0.0 release introduced a regression
that prevents editable installs from working (see pypa/setuptools#3498).
This is blocking CI as we use editable installs to build and install
terra for testing. When there is an upstream release fixing this issue
we can remove the pins.
* Remove pip/setuptools/wheel manual install step
* Try venv instead of virtualenv
* Revert "Try venv instead of virtualenv"
This reverts commit 3ada819330.
* Revert "Remove pip/setuptools/wheel manual install step"
This reverts commit 831bc6e0db.
* Pin in constraints.txt too
* Lower version further
* Pin setuptools-rust too
* Set editable install to legacy mode via env var
* Set env variable correctly everywhere we build terra
* Add missing env variable setting for image tests
* Move misplaced release note and add script to block this
In #8201 I added a release note as part of the PR which documented the
change in behavior. However, I accidentally committed this file in the
wrong location (by running reno new outside of the repo root). This
meant the file was never actually included in the release notes for the
0.21.0 release. This commit corrects this oversight and moves it back to
the proper location.
However, since this isn't my first time making this mistake and I can
expect that others will make it too in the future, this commit also adds
a new script to detect this and raise an error when release notes are
present outside of the proper location. By running this as part of lint
jobs we'll block this mistake from happening again.
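A sketch of what such a check can look like (the script location and the
exact reno layout assumptions below are illustrative):

```python
#!/usr/bin/env python3
"""Error if reno release notes exist anywhere except <repo root>/releasenotes/notes."""
import pathlib
import sys

# Assumes this script lives one directory below the repository root (e.g. tools/).
REPO_ROOT = pathlib.Path(__file__).resolve().parent.parent
CANONICAL = REPO_ROOT / "releasenotes" / "notes"


def main():
    misplaced = [
        path
        for path in REPO_ROOT.rglob("releasenotes/notes/*.yaml")
        if CANONICAL not in path.parents
    ]
    if misplaced:
        print("Release notes found outside releasenotes/notes/:", file=sys.stderr)
        for path in misplaced:
            print(f"  {path.relative_to(REPO_ROOT)}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```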
* Fix lint
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Explicitly set MSRV for rust extension
This commit explicitly sets the MSRV (minimum supported rust version)
for Qiskit Terra to 1.56 which was released in October 2021 (with 1.56.1
which fixed two CVEs being released Nov. 1st 2021). Previously we had
avoided setting a hard MSRV and opted to try and just loosely support the
past 6 months of rust releases. However, this has proven tricky to
manage in practice. This commit sets a hard version for MSRV
and modifies a test job to validate we're able to compile with the MSRV.
This should ensure we're able to avoid breaking compatibility for that
version.
* Explicitly set MSRV to 1.56.1 instead of loosely 1.56
* Explicitly set version as variable in CI config
* Assign MSRV to azure variable
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Overhaul Azure Pipelines configuration
This restructures our monolithic `azure-pipelines.yml` file into a
modular setup that separates the logic of specifying "what to run" from
"how to run". For example, the different processes for running the
Python tests on Linux, Mac and Windows are separated out into three
template files, which are then called by the "branch push" and "PR sync"
pipeline files in a few different stages. This is also to centralise
the logic for "how to run"; previously, it was duplicated in a few
different stages.
There are also major changes to content of "what to run".
Branch push:
This now only runs a single Linux test, just as a sanity check.
Previously it ran the full PR CI suite, including lint, docs,
tutorials, and the entire Python testing matrix. This was
unnecessary, as the branch protection rules and PR merge
strategy ensured that the code must already have been up-to-date
and passed the PR CI before it could be pushed to the branch, so
was simply burning CI resources for no benefit. To the best of
my knowledge, the branch-push CI had never caught a bug.
PR sync:
This remains split into two stages, but now only the oldest and
newest Python versions are tested, rather than the entire
matrix. It is exceedingly rare (I'm not sure it's ever
happened) that a commit breaks an intermediate Python version
_only_, so instead we add a new "nightly" CI run to test the
full matrix, which should be sufficient to catch these. The
"preliminary" stage is reduced to lint, docs and a single test
run; about half of PR commits cause a CI failure, and over 90%
of these would be caught by the new structure. The second stage
contains the tutorials and the rest of the
`{oldest,newest},{windows,mac,linux}` matrix. This is to reduce
total CI load, with the intent that failing runs should almost
universally fail just as quickly as they did before.
Nightly:
Entirely new. This is tests of the full matrix of Python
versions and OSes for the `main` and `stable` branches, if a
given branch was updated since the last run. It should be very
unlikely that this run catches errors.
Tag push:
The deployment process is unchanged, just refactored into a
format for easier management.
* Refactor split pipelines into single entry point
This maintains the same logical splits as the previous commit, but
rather than using four separate pipeline files, it uses template
conditional compilation to put them in a single large file. This lets
us keep continuity with our current Azure setup, and lets us easily
share a series of configuration variables (including a general YAML
object) between the different stages.
This commit swaps from using a `strategy: matrix` in the test jobs, in
favour of using templated loops over the variables from the main
`azure-pipelines.yml` file. This is because passing strings like
`"3.10"` contained within YAML arrays from one template file to another
appears to trigger some sort of implicit conversion somewhere, and
`"3.10"` can be interpreted as `3.1`.
* Leave comment on nightly testing failure
* Reinstate autocancel
* Quote string parameters
Co-authored-by: Kevin Hartman <kevin@hart.mn>
* Split long environment variable
Co-authored-by: Kevin Hartman <kevin@hart.mn>
* Add more comments on YAML syntax
* Add option to install from sdist in Linux tests
Co-authored-by: Kevin Hartman <kevin@hart.mn>