This restructures how Clang handles the flags and attributes that control
inlining: -fno-inline, -fno-inline-functions, -O0, and optnone.
These were really, really tangled together:
- We used the noinline LLVM attribute for -fno-inline
- But not for -fno-inline-functions (breaking LTO)
- But we did use it for -finline-hint-functions (yay, LTO is happy!)
- But we didn't for -O0 (LTO is sad yet again...)
- We had weird structuring of CodeGenOpts, with both an inlining
  enumeration and a boolean. They interacted in weird and needless ways.
- A *lot* of set smashing went on when computing these, and it got worse
  when we considered optnone and other inlining-affecting attributes.
- A bunch of inlining-affecting attributes were managed in a completely
  different place from -fno-inline.
- Even with -fno-inline we failed to put the LLVM noinline attribute
onto many generated function definitions because they didn't show up
as AST-level functions.
- If you passed -O0 but also -finline-functions, we would run LLVM's normal
  inliner pass as part of the otherwise-minimal O0 pipeline, which really
  doesn't make much sense.
- Lastly, we used things like '-fno-inline' to manipulate the pass
  pipeline, which forced the pass pipeline to be much more parameterizable
  than it really needs to be. Instead we can *just* use the optimization
  level to select a pipeline and control the rest via attributes (a sketch
  of that model follows below).
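
To make that model concrete, here is a minimal sketch of the kind of check
it implies. This is not an existing test; the function names, mangled names,
and CHECK patterns are illustrative. At -O0 (as in Clang today) a plain
definition carries noinline and optnone as IR attributes, while an
always_inline function keeps alwaysinline so it is still folded in even
at -O0:

  // Hypothetical Clang codegen test -- names and CHECK lines are examples.
  // RUN: %clang_cc1 -disable-llvm-passes -emit-llvm %s -o - | FileCheck %s

  __attribute__((always_inline)) inline int fold_me(int x) { return x + 1; }

  int keep_me(int x) { return fold_me(x); }

  // CHECK: define {{.*}}@_Z7keep_mei({{.*}}) #[[PLAIN:[0-9]+]]
  // CHECK: attributes #[[PLAIN]] = {{.*}}noinline{{.*}}optnone
  // CHECK: attributes #{{[0-9]+}} = {{.*}}alwaysinline

With that in place, flags like -fno-inline only need to change which
attributes get emitted; the pipeline itself stays a pure function of the
optimization level.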
Sadly, this causes a bunch of churn in tests, because the tests don't run
the optimizer and instead check the contents of attribute sets directly. It
would be awesome if attribute sets were a bit more FileCheck friendly, but
oh well.
I think this is a significant improvement: by relying completely on
attributes, it removes any semantic need to change which inliner pass we run
in order to honor the requested inlining behavior. It also cleans up the
optnone and related handling a bit.
One unfortunate aspect of this is that when generating alwaysinline
routines, like those used for OpenMP, we end up removing noinline and then
adding alwaysinline. I tried a bunch of other approaches, but because we
recompute function attributes from scratch and don't have a declaration at
that point, I couldn't find anything substantially cleaner than this.
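
For illustration, here is a self-contained sketch of that attribute flip
using the LLVM C++ API. This is not the actual Clang code path, and the
helper name and linkage are invented; the point is just the ordering, since
the IR verifier rejects a function that is both noinline and alwaysinline.

  // Illustrative only -- not Clang's actual code. Shows the order of
  // operations described above using the LLVM C++ API.
  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"

  llvm::Function *makeAlwaysInlineHelper(llvm::Module &M) {
    llvm::LLVMContext &Ctx = M.getContext();
    llvm::FunctionType *FTy =
        llvm::FunctionType::get(llvm::Type::getVoidTy(Ctx), /*isVarArg=*/false);
    llvm::Function *Fn = llvm::Function::Create(
        FTy, llvm::GlobalValue::InternalLinkage, "omp.outlined.sketch", &M);

    // Pretend the generic attribute computation already ran and marked the
    // definition noinline, as it now does for -O0 and -fno-inline.
    Fn->addFnAttr(llvm::Attribute::NoInline);

    // The flip described above: drop noinline first, then add alwaysinline.
    // The verifier rejects a function that carries both at once.
    Fn->removeFnAttr(llvm::Attribute::NoInline);
    Fn->addFnAttr(llvm::Attribute::AlwaysInline);
    return Fn;
  }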
Differential Revision: https://reviews.llvm.org/D28053
llvm-svn: 290398
Much to my surprise, '-disable-llvm-optzns', which I thought was the magical
flag for getting at the raw LLVM IR coming out of Clang, doesn't do that. It
still runs some passes over the IR. I don't want that; I really want the
*raw* IR coming out of Clang, and I strongly suspect everyone else using the
flag is in the same camp.
There is actually a flag that does what I want, '-disable-llvm-passes', but
I didn't know about it and I suspect many others don't either. It both does
what I want and is much simpler.
This removes the confusing behavior and makes the '-disable-llvm-optzns'
spelling an alias for '-disable-llvm-passes'. I've also moved everything in Clang
to use the 'passes' spelling as it seems both more accurate (*all* LLVM
passes are disabled, not just optimizations) and much easier to remember
and spell correctly.
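
As a hedged illustration (the file, names, and CHECK lines below are
invented, not an existing test), this is the sort of thing the surviving
spelling is for: with -disable-llvm-passes, -emit-llvm prints exactly the IR
Clang generated, so even at -O2 a trivially inlinable call is still visible.

  // Illustrative sketch only. With -disable-llvm-passes no LLVM pass runs,
  // so the call to the inline function survives into the emitted IR even
  // at -O2.
  // RUN: %clang_cc1 -O2 -disable-llvm-passes -emit-llvm %s -o - | FileCheck %s

  inline int twice(int x) { return 2 * x; }

  int user(int x) { return twice(x); }
  // CHECK-LABEL: define {{.*}}@_Z4useri(
  // CHECK: call {{.*}}@_Z5twicei(

From the driver, the same behavior is reachable by passing
'-Xclang -disable-llvm-passes'.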
This is part of simplifying how Clang drives LLVM to make it cleaner to
wire up to the new pass manager.
Differential Revision: https://reviews.llvm.org/D28047
llvm-svn: 290392
Several of these tests (the two deleted, and the one with the removal edit)
were relying on the optimizer to collapse things in order to test some
frontend feature. The tests were really old and the features seemed amply
covered by other parts of the test suite, so I just removed them.
If anyone thinks they're valuable enough to keep/fix, we can play around
with that, for sure.
(inspired by r252872)
llvm-svn: 253114