A special note is required for remsh, which is used internally under a
different name (with one underscore instead of two).
These macros are not used anywhere in RPM, nor in Fedora as of
today.
References: https://github.com/rpm-software-management/rpm/issues/1211
Signed-off-by: Igor Raits <i.gnatenko.brain@gmail.com>
Adds rpm version as a new expression value type, denoted by v"" (similar
to Python's u"", b"" etc.); such values are compared using the rpm version
comparison algorithm rather than regular string comparison.
For example in specs:
%if v"%{python_version}" < v"3.9"
%endif
...but also command lines, arbitrary macros etc:
rpm --eval '%[v"1:1.2" < v"2.0"]'
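For illustration, the segment-wise comparison behind the v"" type can be sketched in Python (a simplified sketch only; epoch, tilde/caret and several corner cases of the real rpmvercmp are omitted):

```python
import re

def rpmvercmp(a, b):
    """Simplified sketch of rpm's version comparison: split each string
    into numeric and alphabetic segments and compare pairwise."""
    segs_a = re.findall(r"[0-9]+|[a-zA-Z]+", a)
    segs_b = re.findall(r"[0-9]+|[a-zA-Z]+", b)
    for x, y in zip(segs_a, segs_b):
        if x.isdigit() and y.isdigit():
            # numeric segments compare as numbers, so 1.10 > 1.9
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # a numeric segment sorts higher than an alphabetic one
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    # tiebreak on segment count (simplification of the real rule)
    return (len(segs_a) > len(segs_b)) - (len(segs_a) < len(segs_b))
```

This is only meant to show why v"3.9" < v"3.10" holds while a plain string comparison would say otherwise.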
Fixes: #1217
Use the newly added version converter function for parsing labelCompare()
arguments, gaining automatic access to all formats that we support
in rpm.ver() constructor. Currently this means (E,V,R) tuples which
labelCompare() always used, plus plain old strings. In the future, it
might be something more.
The low-level rpmvercmp() was always the wrong thing to export to Lua,
but it's what people have used because it's there, and it's used for
comparing full versions. So rather than adding a separate API for the higher
level comparison, let's just change rpm.vercmp() to do the right thing instead.
Fixes: #897
It's more than a little hysterical that rpm hasn't had a meaningful
public API for parsing and comparing version strings. The API may seem
kinda overkill for some things but then this does give us a place to
check for invalid version strings, which is a thing we never had before
(although the current code doesn't do much in the way of checking)
Fixes: #561
Epoch promotion was a thing around and before the turn of the millennium;
we really shouldn't be carrying that cruft around anymore.
Remove all traces that we can without too much guilt about breaking
ABI/API and without bumping the soname, which we don't want to do for this
stupid old thing: all the symbols are left in place, they just don't
work anymore. Nobody should notice, as nobody can have been using this
stuff in the last 15+ years.
No functional changes, just lose the extra baggage that was needed to
initialize and append rpm.vercmp() from the librpm side when the rest is in
librpmio. We might need to bring it back some day, but given the history
here it doesn't seem all that likely.
Adding a new header just for this seems a bit much but we'll be adding
stuff there shortly.
No functional changes as such, this is prerequisite for supporting
version comparison in expressions.
Note that the code is completely unchanged except for the indentation
under the new if __name__ == "__main__":
Note that this change is necessary, but not sufficient to use the
RpmVersion class.
The init of the RpmVersion class will fail when called from an outside
script, because the `parse_version()` function is lazily imported by
the code outside the class. However, adding the import of
parse_version() to the RpmVersion class is not done right now because,
while we would import it from `pkg_resources`, other scripts might want to
rely instead on the lightweight `packaging` module for the import. Thus
I'm leaving this conundrum to be addressed in the future.
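For illustration, the deferred import could be resolved with a fallback chain (a hypothetical helper, not code from this change; the naive fallback at the end exists only to keep the sketch self-contained):

```python
def get_parse_version():
    # Prefer the lightweight `packaging` module, fall back to the
    # heavier `pkg_resources`, as debated above.
    try:
        from packaging.version import parse
        return parse
    except ImportError:
        pass
    try:
        from pkg_resources import parse_version
        return parse_version
    except ImportError:
        # Last resort: naive dotted-numeric tuple comparison (illustration only).
        return lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
```

Whichever branch wins, callers get a parser where parsed "1.10" sorts after parsed "1.9".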
--normalized-names-format FORMAT
FORMAT of normalized names can be `pep503` [default] or `legacy-dots` (dots allowed)
--normalized-names-provide-both
Provide both `pep503` and `legacy-dots` formats of normalized names (useful for a transition period)
Notes from an attempted rewrite from pkg_resources to importlib.metadata in 2020:
1. While pkg_resources can open metadata at a specified path
(Distribution.from_location()), importlib provides access only to
"installed package metadata", i.e. the dist-info or egg-info directory
must be "discoverable", i.e. on the sys.path.
- Thankfully only the dist/egg-info directory must exist, the
corresponding Python module does not have to be present.
- The problems this causes:
(a) You have to manipulate the sys.path to add the specific location of
the site-packages directory inside the buildroot
(b) If you have package "foo" in this newly added directory on sys.path
and there is some problem and its dist/egg-info metadata are not found,
importlib.metadata continues searching the sys.path and may discover a
package with the same name (possibly same version) outside the
buildroot.
To get around this, you can manipulate the sys.path to remove all
other "site-packages" directories. But you have to leave the
standard library there, because importlib may import other modules
(in my testing: base64, quopri, random, socket, calendar, uu)
(c) I have not tested how well it works if you're inspecting metadata of
different Python versions than the one you run the script with
(especially Python 2 vs Python 3). This might also cause problems with
dependency specifiers (i.e. python_version != "3.4")
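Point (a) above can be sketched as follows (a hypothetical helper, not rpm's actual generator code; it deliberately does not guard against caveat (b)):

```python
import sys
import importlib.metadata

def buildroot_distribution(sitedir, name):
    # Temporarily put the buildroot's site-packages directory first on
    # sys.path so importlib.metadata can discover the dist-info there.
    # NOTE: on a lookup miss, discovery falls through to the rest of
    # sys.path -- exactly the caveat described in (b).
    sys.path.insert(0, sitedir)
    try:
        return importlib.metadata.distribution(name)
    finally:
        sys.path.remove(sitedir)
```
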
2. Handling of dependencies (requires) is problematic in importlib.metadata
- pkg_resources provides a way to separately list standard requires and a
requires for each "extras" category. importlib does not provide this, it
only spits out a list of strings, each string in the format:
- 'packaging>=14',
- 'towncrier>=18.5.0; extra == "docs"', or
- 'psutil<6,>=5.6.1; (python_version != "3.4") and extra == "testing"'
you can either parse these with a regex (fragile) or use the external
`packaging` Python module. `packaging`, however, also doesn't have great
support for figuring out extra dependencies; it provides the Marker API:
- <Marker(\'python_version != "3.4" and extra == "testing"\')>
You can use the Marker API to evaluate the condition, but not to parse it.
For parsing you can access the private API Marker._markers:
- marker._markers=[[(<Variable('python_version')>, <Op('!=')>, \
<Value('3.4')>)], 'and', (<Variable('extra')>, <Op('==')>, \
<Value('testing')>)]
which beyond the problem of being private is also not very useful for
parsing due to its structure.
- pkg_resources also provides version parsing, which importlib does not;
`packaging` needs to be used instead
- importlib is part of the standard library, but packaging and its
2 runtime dependencies (pyparsing and six) are not, and therefore we
would go from 1 dependency to 3
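The fragile-regex route mentioned above could look like this (a hypothetical splitter for the string formats shown, not code from rpm; it handles only simple cases):

```python
import re

# Splits 'name specifier; marker' strings such as
#   'towncrier>=18.5.0; extra == "docs"'
# into their three parts. Fragile by design: no URL requirements,
# no extras-in-brackets, no quoting edge cases.
REQ_RE = re.compile(
    r"""\s*(?P<name>[A-Za-z0-9._-]+)"""   # project name
    r"""\s*(?P<spec>[^;]*?)"""            # version specifier (may be empty)
    r"""\s*(?:;\s*(?P<marker>.+))?$"""    # optional environment marker
)

def split_requirement(req):
    m = REQ_RE.match(req)
    return m.group("name"), m.group("spec"), m.group("marker")
```
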
3. A few minor issues, more in the next section about equivalents.
importlib.metadata.distribution equivalents of pkg_resources.Distribution attributes:
- pkg_resources: dist.py_version
importlib: # not implemented (but can be guessed from the /usr/lib/pythonXX.YY/ path)
- pkg_resources: dist.project_name
importlib: dist.metadata['name']
- pkg_resources: dist.key
importlib: # not implemented
- pkg_resources: dist.version
importlib: dist.version
- pkg_resources: dist.requires()
importlib: dist.requires # but returns strings with almost no parsing done, and also lists extras
- pkg_resources: dist.requires(extras=dist.extras)
importlib: # not implemented, has to be parsed from dist.requires
- pkg_resources: dist.get_entry_map('console_scripts')
importlib: [ep for ep in importlib.metadata.entry_points()['console_scripts'] if ep.name == pkg][0]
# I have not found a better way to get the console_scripts
- pkg_resources: dist.get_entry_map('gui_scripts')
importlib: # Presumably same as console_scripts, but untested
Upstreaming from Fedora (BZ#1791530)
That is, we add new provides that replace dots with a dash.
A package that used to provide python3dist(zope.component) and python3.8dist(zope.component)
now also provides python3dist(zope-component) and python3.8dist(zope-component).
A package that used to provide python3dist(a.-.-.-.a) now provides python3dist(a-a) as well.
This is consistent with pip behavior, `pip install zope-component` installs zope.component.
Historically, we have always used dist.key (safe_name) from setuptools,
but that is a non-standardized convention -- whether or not it replaces dots
with dashes is not even documented.
We say we use "canonical name" or "normalized name" everywhere, yet we didn't.
We really need to follow the standard (PEP 503):
https://www.python.org/dev/peps/pep-0503/#normalized-names
The proper function here would be packaging.utils.canonicalize_name
https://packaging.pypa.io/en/latest/utils/#packaging.utils.canonicalize_name
-- we reimplement it here to avoid an external dependency.
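The reimplementation amounts to a one-liner mirroring packaging.utils.canonicalize_name (sketch of the PEP 503 rule):

```python
import re

def normalize_name(name):
    # PEP 503: runs of '.', '-' and '_' collapse to a single dash, lowercased.
    return re.sub(r"[-_.]+", "-", name).lower()
```

This reproduces the examples above: zope.component normalizes to zope-component, and a.-.-.-.a to a-a.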
This is the first required step needed if we want to change our requirements later.
If we decide we don't, for whatever reason, this doesn't break anything.
size_t is unsigned, so returning -1 is not going to have the expected
behavior. Fix it to return ssize_t.
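The failure mode is easy to demonstrate from Python via ctypes (illustration only, not rpm code):

```python
import ctypes

# Pushing -1 through an unsigned size_t wraps to SIZE_MAX, so a caller's
# "< 0" error check never fires.
wrapped = ctypes.c_size_t(-1).value
print(wrapped)  # SIZE_MAX, e.g. 18446744073709551615 on a 64-bit system
assert wrapped > 0  # the "error" value is a huge positive number
```
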
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Fixes regressions from commit 75ec16e660e784d7897b37cac1a2b9b135825f25:
the newly added provides of to-be-built packages were being used for
dependency resolution, such as a spec satisfying its own buildrequires,
and matched against conflicts in installed packages.
Source packages cannot obsolete anything or provide capabilities or files
to transactions, don't add them to rpmal at all. Explicitly skip checks
against source provides, similarly to what we already did with obsoletes.
Fixes: #1189
Binary packages come in different sizes and so their build time can vary
greatly. Dynamic scheduling, which we currently use for parallel
building, is a good strategy to combat such differences and load-balance
the available CPU cores.
That said, knowing that the build time of a package is proportional to
its size, we can reduce the overall time even further by cleverly
ordering the task queue.
As an example, consider a set of 5 packages, 4 of which take 1 unit of
time to build and one takes 4 units. If we were to build these on a
dual-core system, one possible unit distribution would look like this:
TIME --->
CPU 1 * * * * * * # package 1, 3 and 5
CPU 2 * * # package 2 and 4
Now, compare that to a different distribution where the largest package
5 gets built early on:
TIME --->
CPU 1 * * * * # package 5
CPU 2 * * * * # package 1, 2, 3 and 4
It's obvious that processing the largest packages first gives better
results when dealing with such a mix of small and large packages
(typically a regular package and its debuginfo counterpart,
respectively).
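The two distributions above can be reproduced with a small greedy-scheduling simulation (an illustrative sketch, not rpm's OpenMP code):

```python
import heapq

def makespan(tasks, workers=2):
    # Greedy dynamic scheduling: whichever worker frees up first
    # takes the next task from the queue.
    busy_until = [0] * workers
    heapq.heapify(busy_until)
    for t in tasks:
        start = heapq.heappop(busy_until)
        heapq.heappush(busy_until, start + t)
    return max(busy_until)

units = [1, 1, 1, 1, 4]                       # packages 1-4 small, package 5 large
print(makespan(units))                        # 6 units: large package scheduled last
print(makespan(sorted(units, reverse=True)))  # 4 units: large package first
```

Sorting large tasks first is the classic longest-processing-time heuristic; the priority clause described below is the hint mechanism for expressing it to the OpenMP runtime.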
Now, with dynamic scheduling in OpenMP, we cannot directly control the
task queue; we can only generate the tasks and let the runtime system do
its work. What we can do, however, is to provide a hint to the runtime
system for the desired ordering, using the "priority" clause.
So, in this commit, we use the clause to assign a priority value to each
build task based on the respective package size (the bigger the size,
the higher the priority), to help achieve an optimal execution order.
Indeed, in my testing, the priorities were followed to the letter (but
remember, that's not guaranteed by the specification). Interestingly,
even without the use of priorities, simply generating the tasks in the
desired order resulted in the same execution order for me, but that's,
again, just an implementation detail.
Also note that OpenMP is allowed to stop the thread generating the tasks
at any time, and make it execute some of the tasks instead. If the
chosen task happens to be a long-duration one, we might hit a starvation
scenario where the other threads have exhausted the task queue and
there's nobody to generate new tasks. To counter that, this commit also
adds the "untied" clause which allows other threads to pick up where the
generating thread left off, and continue generating new tasks.
Resolves: #1045
Issue a warning if buildtree macros (%_sourcedir etc) contain undefined
macro(s) after expansion, such as things only defined during spec parse.
This always was a murky case that doesn't work in all scenarios, so
a warning seems appropriate. Actual behavior doesn't change here though.
The NSS library often changes in ways that somehow breaks rpm,
and these days upstream does not care about consumers of NSS other
than itself. This inflicts untold amounts of suffering on users
of rpm in distributions where rpm is linked to NSS.
Now that we have a couple of good, well-supported options, there is
no reason to keep supporting NSS as an option.
So now, we are deprecating it for later removal.
In today's "look ma what crawled from under the bed" episode, we
encounter a subpackage whose name is not derived from the main package
name, and a manually specified debuginfo package on top of that. This particular
combo manages to evade all our checks for duplicate package names, and
in the right phase of the moon actually creates corrupt packages due to
two threads ending up writing to the same output file simultaneously. Which
is what happened in https://pagure.io/fedora-infrastructure/issue/8816
Catch the case and use the spec-defined variant (because getting rid
of it would be harder) but issue a warning because most likely this
is not what they really wanted.
This is normally unneeded because an index sync is already done
after each package is added to the database. But rebuilding an
empty database leads to no index sync being done and a mismatch
in the generation count. This in turn will trigger an
index regeneration.
Found by using ndb when running the test suite.
Somehow I was under the impression that mapped regions are unmapped when
the corresponding file descriptor is closed, but that's not the case
for POSIX systems.
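A quick Python illustration of that POSIX behavior (sketch): the mapping remains valid after the descriptor it was created from is closed.

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
m = mmap.mmap(fd, 5)   # map the file's first 5 bytes
os.close(fd)           # per POSIX, the mapping survives the close
assert m[:5] == b"hello"
m.close()
os.unlink(path)
```
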