This patch adds the safe stack instrumentation pass to LLVM, which separates
the program stack into a safe stack, which stores return addresses, register
spills, and local variables that are statically verified to be accessed
in a safe way, and an unsafe stack, which stores everything else. Such
separation makes it much harder for an attacker to corrupt objects on the
safe stack, including function pointers stored in spilled registers and
return addresses. You can find more information about the safe stack, as
well as the other parts of our control-flow hijack protection technique, in our
OSDI paper on code-pointer integrity (http://dslab.epfl.ch/pubs/cpi.pdf)
and our project website (http://levee.epfl.ch).
The overhead of our implementation of the safe stack is very close to zero
(0.01% on the Phoronix benchmarks). This is lower than the overhead of
stack cookies, which are supported by LLVM and are commonly used today,
yet the security guarantees of the safe stack are strictly stronger than
those of stack cookies. In some cases, the safe stack improves performance due to
better cache locality.
Our current implementation of the safe stack is stable and robust; we
used it to recompile multiple projects on Linux, including Chromium, and
we also recompiled the entire FreeBSD user-space system and more than 100
packages. We ran unit tests on the FreeBSD system and many of the packages
and observed no errors caused by the safe stack. The safe stack is also fully
binary compatible with non-instrumented code and can be applied to parts of
a program selectively.
This patch is our implementation of the safe stack on top of LLVM. The
patches make the following changes:
- Add the safestack function attribute, similar to the ssp, sspstrong and
sspreq attributes (a brief usage sketch follows this list).
- Add the SafeStack instrumentation pass that applies the safe stack to all
functions that have the safestack attribute. This pass moves all unsafe local
variables to the unsafe stack with a separate stack pointer, whereas all
safe variables remain on the regular stack that is managed by LLVM as usual.
- Invoke the pass as the last stage before code generation (at the same time
the existing cookie-based stack protector pass is invoked).
- Add unit tests for the safe stack.
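For illustration, here is a minimal sketch of how code might mark a function and how a pass might gate on the attribute through the C++ API; it assumes the attribute is exposed as Attribute::SafeStack and is not quoted from this patch's diff:
------------------------
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Request instrumentation for a function (e.g. from a front end).
void requestSafeStack(Function &F) {
  F.addFnAttr(Attribute::SafeStack);
}

// A pass would typically gate its work on the attribute, roughly like this.
bool shouldInstrument(const Function &F) {
  return !F.isDeclaration() && F.hasFnAttribute(Attribute::SafeStack);
}
-------------------------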
Original patch by Volodymyr Kuznetsov and others at the Dependable Systems
Lab at EPFL; updates and upstreaming by myself.
Differential Revision: http://reviews.llvm.org/D6094
llvm-svn: 239761
This commit connects the machine function analysis pass (which creates machine
functions) to the MIR parser, which will initialize the machine functions
with the state from the MIR file and reconstruct the machine IR.
This commit introduces a new interface called 'MachineFunctionInitializer',
which can be used to provide custom initialization for the machine functions.
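As a sketch of how a client might plug in such an initializer (the hook name, its bool-on-error convention, and the header location are assumptions based on the description above, not quoted from the patch):
------------------------
#include "llvm/CodeGen/MachineFunction.h"

using namespace llvm;

// Hypothetical initializer that mirrors what the MIR parser is described as
// doing: it is handed each newly created MachineFunction and fills it in.
class MIRFileInitializer : public MachineFunctionInitializer {
public:
  bool initializeMachineFunction(MachineFunction &MF) override {
    // Look up the serialized state for MF.getName() in the parsed MIR file
    // and reconstruct the machine IR here.
    return false; // false == success, following the usual LLVM convention
  }
};
-------------------------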
This commit also introduces a new diagnostic class called
'DiagnosticInfoMIRParser' which is used for MIR parsing errors.
This commit modifies the default diagnostic handling in LLVMContext: the
diagnostics are now printed directly to llvm::errs() so that the MIR parsing
errors can be printed with colours.
Reviewers: Justin Bogner
Differential Revision: http://reviews.llvm.org/D9928
llvm-svn: 239753
Summary:
NFC: no one uses AnalyzeBranchPredicate yet.
Add TargetInstrInfo::AnalyzeBranchPredicate and implement for x86. A
later change adding support for page-fault based implicit null checks
depends on this.
Reviewers: reames, ab, atrick
Reviewed By: atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10200
llvm-svn: 239742
Summary:
Rename TargetInstrInfo::getLdStBaseRegImmOfs to
TargetInstrInfo::getMemOpBaseRegImmOfs and implement it for x86. The
implementation only handles a few easy cases now and will be made more
sophisticated in the future.
This is NFCI: the only user of `getLdStBaseRegImmOfs` (now
`getMemOpBaseRegImmOfs`) is `LoadClusterMotion`, and `LoadClusterMotion`
is disabled for x86.
Reviewers: reames, ab, MatzeB, atrick
Reviewed By: MatzeB, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10199
llvm-svn: 239741
Summary:
This instruction encodes a loading operation that may fault, and a label
to branch to if the load page-faults. The locations of potentially
faulting loads and their "handler" destinations are recorded in a
FaultMap section, meant to be consumed by LLVM's clients.
Nothing generates FAULTING_LOAD_OP instructions yet, but they will be
used in a future change.
The documentation (FaultMaps.rst) needs improvement and I will update
this diff with a more expanded version shortly.
Depends on D10196
Reviewers: rnk, reames, AndyAyers, ab, atrick, pgavlin
Reviewed By: atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10197
llvm-svn: 239740
LLVM targeting AArch64 doesn't correctly produce aligned accesses for non-aligned
data at -O0/fast-isel (-mno-unaligned-access).
The root cause seems to be that fast-isel does not handle unaligned accesses
correctly when -mno-unaligned-access is given.
The patch just aborts fast-isel for loads and stores when -mno-unaligned-access is
present.
The regression test is updated to check this new test case (-mno-unaligned-access
together with fast-isel).
Differential Revision: http://reviews.llvm.org/D10360
llvm-svn: 239732
Summary:
This affects other tools so the previous C++ API has been retained as a
deprecated function for the moment. Clang has been updated with a trivial
patch (not covered by the pre-commit review) to avoid breaking -Werror builds.
Other in-tree tools will be fixed with similar trivial patches.
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rengolin
Reviewed By: rengolin
Subscribers: llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D10366
llvm-svn: 239721
This patch fixes a compilation time issue, when MachineSink faces PHIs
with a huge number of operands. This can happen for example in goto table
based interpreters, where some basic blocks can have several of those PHIs,
each with several hundred operands. MachineSink was spending a
significant amount of time re-building and re-sorting the list of successors of
the current MachineBasicBlock. The computed and sorted successor list of the
current MachineBasicBlock is now cached.
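As an illustration of the caching strategy (names are hypothetical; this is not the actual MachineSink code):
------------------------
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include <algorithm>

using namespace llvm;

namespace {
// Rebuild and sort the successor list only when the current block changes,
// not once per PHI operand.
struct SortedSuccessorCache {
  const MachineBasicBlock *CachedMBB = nullptr;
  SmallVector<MachineBasicBlock *, 8> Succs;

  ArrayRef<MachineBasicBlock *> get(MachineBasicBlock *MBB) {
    if (MBB != CachedMBB) {
      CachedMBB = MBB;
      Succs.assign(MBB->succ_begin(), MBB->succ_end());
      // Keep the list sorted so membership tests can use binary search.
      std::sort(Succs.begin(), Succs.end(),
                [](const MachineBasicBlock *A, const MachineBasicBlock *B) {
                  return A->getNumber() < B->getNumber();
                });
    }
    return Succs;
  }
};
} // end anonymous namespace
-------------------------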
llvm-svn: 239720
Summary:
ValueTracking used to overwrite the analysis results computed from
assumes and dominating conditions. This patch fixes this issue.
Test Plan: test/Analysis/ValueTracking/assume.ll
Reviewers: hfinkel, majnemer
Reviewed By: majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10283
llvm-svn: 239718
Re-commit after adding "-aarch64-neon-syntax=generic" to fix the failure on OS X.
This patch was first committed in r239514, then reverted in r239544 because of a syntax incompatibility failure on OS X.
llvm-svn: 239711
- Who defines ${LLVM_SOURCE_DIR}?
- Would windows_version_resource.rc be available in an *installed* llvm tree?
I suggest it may be installed in ${PREFIX}/share.
llvm-svn: 239703
As noted in Errc.h:
// * std::errc is just marked with is_error_condition_enum. This means that
// common patterns like AnErrorCode == errc::no_such_file_or_directory take
// 4 virtual calls instead of two comparisons.
And on some libstdc++ implementations those virtual functions conclude that
------------------------
#include <system_error>

int main() {
  std::error_code foo =
      std::make_error_code(std::errc::no_such_file_or_directory);
  // The comparison is true, so a conforming implementation returns 1.
  return foo == std::errc::no_such_file_or_directory;
}
-------------------------
should exit with 0.
llvm-svn: 239683
StringSaver now always saves to a BumpPtrAllocator.
The only reason for having the virtual saveImpl is so that lld can have a
thread-safe version.
The reason for the distinct BumpPtrStringSaver class is to avoid the
virtual destructor.
llvm-svn: 239669
Summary:
I spent some time trying to get the LIT test suite passing. Here are the changes that I needed to make on my machine.
I made the following changes for the following reasons.
1. google-test.py: The Google test format now looks for "[ PASSED ] 1 test." to determine whether a test passed.
2. discovery.py: The output appears in a different order on my machine than it did in the test.
3. unittest-adaptor.py: The output appears in a different order on my machine than it did in the test.
4. The classname is now formed differently in `getJUnitXML(...)`.
I'm not sure what is causing the output order to differ in discovery.py and unittest-adaptor.py. Does anybody have any thoughts?
Reviewers: ddunbar, danalbert, jroelofs
Reviewed By: jroelofs
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D9864
llvm-svn: 239663
Now the library names in the Makefiles match the library names in
LLVMBuild.txt.
This should hopefully fix the remaining bot failures.
llvm-svn: 239661
r213101 changed the behaviour of this method to affect not only the
PostMachineScheduler but also the PostRAScheduler; renaming it should make
this fact clear. Also document that the preferred
way is to specify this in the scheduling model instead of overriding
this method.
Differential Revision: http://reviews.llvm.org/D10427
llvm-svn: 239659
This will use Itineraries if available, but will also work if just an
MCSchedModel is available.
Differential Revision: http://reviews.llvm.org/D10428
llvm-svn: 239658
On error, the temporary output stream wouldn't be flushed and therefore the
caller would see an empty error message.
Patch by Antoine Pitrou
Differential Revision: http://reviews.llvm.org/D10241
llvm-svn: 239646
Summary:
Scalarizer has two data structures that hold information about changes
to the function, Gathered and Scattered. These are cleared in finish()
at the end of runOnFunction() if finish() detects any changes to the
function.
However, finish() was checking for changes by only checking if
Gathered was non-empty. The function visitStore() only modifies
Scattered without touching Gathered. As a result, Scattered could have
ended up having stale data if Scalarizer only scalarized store
instructions. Since the data in Scattered is used during the execution
of the pass, this introduced dangling pointer errors.
The fix is to check whether both Scattered and Gathered are empty
before deciding what to do in finish().
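The shape of the fix can be sketched as follows, using stand-in container types rather than the pass's real data structures:
------------------------
#include <map>
#include <vector>

// Stand-in types for the pass's per-function state; the real Scalarizer
// keeps its own gathered/scattered value maps.
using GatherList = std::vector<int>;
using ScatterMap = std::map<int, int>;

// Treat the function as changed, and clear the per-function state, if
// *either* container is non-empty.
bool finishAndClear(GatherList &Gathered, ScatterMap &Scattered) {
  if (Gathered.empty() && Scattered.empty())
    return false;
  // ... apply the recorded rewrites here ...
  Gathered.clear();
  Scattered.clear();
  return true;
}
-------------------------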
Reviewers: srhines
Reviewed By: srhines
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10422
llvm-svn: 239644
into partitions. Also, add an option to clone stub definitions (not just decls)
into partitions: these definitions could be inlined in some places to avoid the
overhead of calling via the stub.
Found by inspection - no test case yet, although I plan to add a unit test for
this once the CompileOnDemand layer refactoring settles down.
llvm-svn: 239640
- Add glc, slc, and tfe operands to flat instructions
- Add missing flat instructions
- Fix the encoding of flat_load_dwordx3 and flat_store_dwordx3.
llvm-svn: 239637
Based on ArchType, Clang's driver can select a non-Clang compiler.
String parsing in Clang would have sufficed if it were only that;
however, this change anticipates true LLVM support.
Differential Revision: http://reviews.llvm.org/D10413
llvm-svn: 239631
In the glorious future of opaque pointer types, it won't be possible to
retrieve the pointee type of a pointer type which is what's being done
in this GEP loop - but the first iteration is always a pointer type and
the loop doesn't care about that case, except whether or not the index
is a constant.
So pull that special case out before the loop and start at the second
iteration (index 1) instead.
Originally committed in r236670 and reverted with a test case in
r239015. This change keeps the test case working while also avoiding
depending on pointee types.
llvm-svn: 239629
For hung off uses, we need a Use* to tell us where the operands are.
This was User::OperandList, but we want to remove that to save space
in all the subclasses which aren't making use of 'hung off uses'.
Hung off uses now allocate their own 'OperandList' Use* in the
User::new which they call.
getOperandList() now uses the hung off uses bit to work out where the
Use* for the OperandList lives. If a User has hung off uses, then this
bit tells them to go back a single Use* from the User* and use that
value as the OperandList.
If a User has no hung off uses, then we get the first operand by
subtracting (NumOperands * sizeof(Use)) from the User's this pointer.
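The layout can be pictured with a simplified standalone sketch (the real llvm::User and llvm::Use carry more state; the names and the method shown here are illustrative only):
------------------------
// Simplified model of the layout described above.
struct Use { void *Val; };

struct User {
  unsigned NumOperands;
  bool HasHungOffUses;

  Use *getOperandList() {
    if (HasHungOffUses)
      // A single Use* slot sits immediately before the object and points at
      // the separately allocated ("hung off") operand array.
      return *(reinterpret_cast<Use **>(this) - 1);
    // Co-allocated case: the Use array sits immediately before the object,
    // so the first operand is NumOperands slots back from 'this'.
    return reinterpret_cast<Use *>(this) - NumOperands;
  }
};
-------------------------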
This saves a pointer from User and all its subclasses. Given that the average
size of a subclass of User is 112 or 128 bytes, this saves around 7% of space.
With malloc tending to align to 16 bytes, the real saving is typically more like 3.5%.
On 'opt -O2 verify-uselistorder.lto.bc', peak memory usage prior to this change
is 149MB and after is 143MB, so the savings are around 2.5% of peak.
Looking at some passes which allocate many Instructions and Values, parseIR drops
from 54.25MB to 52.21MB, while the Inliner's calls to Instruction::clone() drop
from 28.20MB to 27.05MB.
Reviewed by Duncan Exon Smith.
llvm-svn: 239623
There are now 2 versions of User::new. The first takes a size_t and is the current
implementation for subclasses which need 0 or more Uses allocated for their operands.
The new version takes no extra arguments and indicates that the subclass needs 'hung off uses'.
The HungOffUses bool is now set in this version of User::new and we can assert in
allocHungOffUses that we are allowed to have hung off uses.
This ensures we call the correct version of User::new for subclasses which need hung off uses.
A future commit will then allocate space for a single Use* which will be used
in place of User::OperandList once that field has been removed.
Reviewed by Duncan Exon Smith.
llvm-svn: 239622
This is to try to make it very clear that subclasses shouldn't be changing
the value directly. Now that OperandList for normal instructions is computed
using NumOperands, it's critical that NumOperands is accurate or we
could compute the wrong offset to the first operand.
I looked over all the places which update NumOperands and they are all safe.
Hung off use Users don't use NumOperands to compute the OperandList, so it is
safe for them to continue to manipulate it. The only other User which changed it
was GlobalVariable, which has an optional init list but always allocates space
for a single Use. It was correctly setting NumOperands to 1 before setting an
initializer, and setting it to 0 after clearing the init list, so the order was safe.
Added some comments to that code to make sure that this isn't changed in future
without being aware of this constraint.
Reviewed by Duncan Exon Smith.
llvm-svn: 239621
We don't want anyone to access OperandList directly, as it's going to be removed
and computed instead. This uses getters and setters instead, so that we
can later change the underlying implementation of OperandList.
Reviewed by Duncan Exon Smith.
llvm-svn: 239620
The underlying issue is that this code can't really know whether an OS-specific or
processor-specific section number should return true or false.
One option would be to assert or return an error, but that looks like
over-engineering, since extensions are not that common.
It seems better to have these be direct implementations of the ELF spec so that
they read naturally to someone familiar with ELF.
Code that does have to handle OS/Architecture specific values can do it at
a higher level.
llvm-svn: 239618
The CFLAA code currently calls ConstantExpr::getAsInstruction, which creates an instruction from a constant expression.
We then pass that instruction to the InstVisitor to analyze it.
It's not necessary to create these instructions, as we can just cast from Constant to Operator in the visitor. This is how other InstVisitors, such as SelectionDAGBuilder, handle ConstantExpr.
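A hypothetical sketch of that pattern (not the CFLAA code itself):
------------------------
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Operator.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// Inspect a constant expression through the Operator view instead of
// materializing a temporary Instruction with ConstantExpr::getAsInstruction().
static void analyzeConstant(const Constant *C) {
  if (const auto *Op = dyn_cast<Operator>(C)) {
    switch (Op->getOpcode()) {
    case Instruction::GetElementPtr:
      // ... handle GEP constant expressions here ...
      break;
    case Instruction::BitCast:
      // ... handle bitcast constant expressions here ...
      break;
    default:
      break;
    }
  }
}
-------------------------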
llvm-svn: 239616
This reinstates my commits r238740/r238741 which I reverted due to a failure
in the clang-cl selfhost tests on Windows. I've now fixed the issue in
clang-cl that caused the failure so hopefully all should be well now.
llvm-svn: 239612
The alignment is not required, so we can just remove it for now.
The old code is a hack, as it depends on the buffer management to find
the current column.
If the alignment is really desirable, the proper way to do it is
to pass in a formatted_raw_ostream that knows the current column.
llvm-svn: 239603
ARMTargetParser::getFPUFeatures should disable fp16 whenever it
disables vfp4, as otherwise something like -mcpu=cortex-a7 -mfpu=none
leaves us with fp16 enabled (though the only effect that will have is
a wrong build attribute).
Differential Revision: http://reviews.llvm.org/D10397
llvm-svn: 239599
It is valid for globals to be unnamed, but aliases must have a name. To avoid
creating invalid IR, we need to assign names to any aliases we create that
point to unnamed objects that have been moved into combined globals.
llvm-svn: 239590
DebugLoc::getFnDebugLoc() should soon be removed. Also,
getDISubprogram() might become more effective soon and wouldn't need to
scan debug locations at all if function-level metadata were emitted
by Clang.
llvm-svn: 239586
Summary:
A side effect of this change is that IRBuilder now automatically
creates debug locations for new instructions, which are the same as the
debug location of the insertion point. This is fine for the
functions in question (GetStoreValueForLoad and
GetMemInstValueForLoad), as they are used in two situations:
* GVN::processLoad, which tries to eliminate a load. In this case
new instructions would have the same debug location as the load they
eventually replace;
* MaterializeAdjustedValue, which adds new instructions to the end
of the basic blocks, which could later be used to replace the load
definition. In this case we don't yet know the way the load would
be eventually replaced (either by assembling the precomputed values
via PHI, or by using them directly), so just using the basic block
strategy seems to be reasonable. There is also a special case
in the code that *would* adjust the location of the last
instruction replacing the load definition to the location of the
load.
Test Plan: regression test suite
Reviewers: echristo, dberlin, dblaikie
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10405
llvm-svn: 239585
We were putting them in the filter field, which is correct for 64-bit
but wrong for 32-bit.
Also switch the order of scope table entry emission so outermost entries
are emitted first, and fix an obvious state assignment bug.
llvm-svn: 239574
Remove the EFLAGS from the stackmap live-out mask. The EFLAGS register is not
supposed to be part of that set, because the X86 calling conventions mark the
register as NOT preserved.
Also remove the IP registers, since spilling and restoring those doesn't really
make any sense.
Related to rdar://problem/21019635.
llvm-svn: 239568
This intrinsic is like framerecover plus a load. It recovers the EH
registration stack allocation from the parent frame and loads the
exception information field out of it, giving back a pointer to an
EXCEPTION_POINTERS struct. It's designed for clang to use in SEH filter
expressions instead of accessing the EXCEPTION_POINTERS parameter that
is available on x64.
This required a minor change to MC to allow defining a label variable to
another absolute framerecover label variable.
llvm-svn: 239567
Static local initialization isn't thread-safe with MSVC, and a race was
reported in PR23817. We can't use std::atomic because it's not trivially
constructible, so instead do some lame volatile global integer
manipulation.
llvm-svn: 239566
We cannot prepend __imp_ in the IR mangler because a function reference may
be emitted unmangled in a constant initializer. The linker is expected to
resolve such references to thunks. This is covered by the new test case.
Strictly speaking we ought to emit two undefined symbols, one with __imp_ and
one without, as we cannot know which symbol the final object file will refer
to. However, this would require rather intrusive changes to IRObjectFile,
and lld works fine without it for now.
This reimplements r239437, which was reverted in r239502.
Differential Revision: http://reviews.llvm.org/D10400
llvm-svn: 239560
Summary:
For the moment, TargetMachine::getTargetTriple() still returns a StringRef.
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rengolin
Reviewed By: rengolin
Subscribers: ted, llvm-commits, rengolin, jholewinski
Differential Revision: http://reviews.llvm.org/D10362
llvm-svn: 239554
This makes emitAbsoluteSymbolDiff always succeed and moves logic from the asm
printer to it.
The object one now also works on ELF. If two symbols are in the same fragment,
we will never move them apart.
llvm-svn: 239552
This improves debug locations in passes that do a lot of basic block
transformations. An important case is the LoopUnroll pass; the test for correct
debug locations accompanies this change.
Test Plan: regression test suite
Reviewers: dblaikie, sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10367
llvm-svn: 239551
Use IRBuilder::Create(Cond)?Br instead of constructing instructions
manually with BranchInst::Create(). It's consistent with other
uses of IRBuilder in this pass, and has an additional important
benefit:
Using IRBuilder ensures that the new branch instruction gets
the same debug location as the original terminator instruction it will
eventually replace.
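Roughly the before/after shape of the change, as a sketch rather than the verbatim diff (the helper and its arguments are illustrative):
------------------------
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// 'OldTerm' is the terminator being replaced and 'Target' the destination
// block; the surrounding pass context is omitted.
static void replaceWithBranch(Instruction *OldTerm, BasicBlock *Target) {
  // Before: manual construction, which leaves the new branch without a
  // debug location.
  //   BranchInst::Create(Target, OldTerm);

  // After: an IRBuilder positioned at OldTerm copies OldTerm's debug
  // location onto every instruction it creates.
  IRBuilder<> Builder(OldTerm);
  Builder.CreateBr(Target);
}
-------------------------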
For now I'm not adding a testcase, as currently the original terminator
instruction also lacks a debug location, due to missing debug location
propagation in BasicBlock::splitBasicBlock. The testcase will accompany
the fix for the latter, which I'm going to mail soon.
llvm-svn: 239550
Revert "[AArch64] Match interleaved memory accesses into ldN/stN instructions."
Revert "Fixing MSVC 2013 build error."
The test/CodeGen/AArch64/aarch64-interleaved-accesses.ll test was failing on OS X.
llvm-svn: 239544
This only updates one of the uses. The other is used in cases
that may never touch memory, so I'm not sure why this is even
calling it. Perhaps there should be a new, similar hook for such
cases or pass -1 for unknown address space.
llvm-svn: 239540
Now actually stores the non-zero constant instead of 0.
I somehow forgot to include this part of r238108.
The test change was just an independent instruction order swap,
so just add another check line to satisfy CHECK-NEXT.
llvm-svn: 239539
Summary:
This continues the patch series to eliminate StringRef forms of GNU triples
from the internals of LLVM that began in r239036.
Reviewers: rengolin
Reviewed By: rengolin
Subscribers: llvm-commits, jfb, rengolin
Differential Revision: http://reviews.llvm.org/D10361
llvm-svn: 239538
On large goto table based interpreters, where phi nodes can have (very) large
fan-ins, isLiveOut exhibited poor performance: about 40% of the full
codegen time was spent in PHIElim, sorting MachineBasicBlock addresses.
This patch improves performance for such cases, and does not show
compile-time regressions on LNT, at bootstrap (llvm+clang+lldb), or on
any other benchmarks we have in-house.
llvm-svn: 239510
This patch ensures that SHL/SRL/SRA shifts for i8 and i16 vectors avoid scalarization. It builds on the existing i8 SHL vectorized implementation of moving the shift bits up to the sign bit position and separating the 4, 2 & 1 bit shifts with several improvements:
1 - SSE41 targets can use (v)pblendvb directly with the sign bit instead of performing a comparison to feed into a VSELECT node.
2 - pre-SSE41 targets were masking + comparing with a 0x80 constant - we avoid this by using the fact that a set sign bit means a negative integer, which can be compared against zero and then fed into VSELECT, avoiding the need for a constant mask (zero generation is much cheaper).
3 - SRA i8 needs to be unpacked to the upper byte of an i16 so that the i16 psraw instruction can be correctly used for sign extension - we have to do more work than for SHL/SRL, but perf tests indicate that this is still beneficial.
The i16 implementation is similar to, but simpler than, the i8 one - we have to do 8, 4, 2 & 1 bit shifts but less shift masking is involved. However, SSE41's use of (v)pblendvb requires that the i16 shift amount be splatted to both bytes.
Tested on SSE2, SSE41 and AVX machines.
Differential Revision: http://reviews.llvm.org/D9474
llvm-svn: 239509
This patch corresponds to review:
http://reviews.llvm.org/D10096
This is the back end portion of the patch related to D10095.
The patch adds the instructions and back end intrinsics for:
vbpermq
vgbbd
llvm-svn: 239505
This reverts commit r239437.
This broke clang-cl self-hosts. We'd end up calling the __imp_ symbol
directly instead of using it to do an indirect function call.
llvm-svn: 239502
It hasn't been used since r130964.
This also removes MachineModuleInfo::isUsedFunction and
MachineModuleInfo::AnalyzeModule, both of which were only
there to support UsedFunctions.
llvm-svn: 239501
PhiNode, SwitchInst, LandingPad and IndirectBr all had virtually identical
logic for growing the hung off uses.
Move it to User so that they can all call a single shared implementation.
Their destructors were all empty after this change and were deleted. They all
have virtual clone_impl methods which can be used as vtable anchors.
Reviewed by Duncan Exon Smith.
llvm-svn: 239492
Now that the subclasses which care about hung off uses let ~User clean it up,
there's no need for a separate method. Just inline it into ~User and delete it.
Reviewed by Duncan Exon Smith.
llvm-svn: 239491
Currently all of the logic for deleting hung off uses, which PHI/switch/etc use,
is in their classes.
This adds a bit to Value which tracks whether that user had hung off uses,
so that User can be responsible for clearing them instead of the subclasses.
Note: the bit used here was taken from NumOperands, which was 30 bits.
Given the reduction to 29 bits, and the average User being just over 100 bytes,
a single User with 29 bits' worth of operands would need around 50GB of RAM for itself,
so it's reasonable to assume that 29 bits are enough for now.
This is a step towards hiding all the hung off uses logic in the User.
Reviewed by Duncan Exon Smith.
llvm-svn: 239490
PhiNodes need to allocate space for an array of Use[N] and then BasicBlock*[N].
They had their own allocHungOffUses to handle all of this. This moves the logic
into User::allocHungOffUses, and PhiNode passes in a bool to indicate that
the BB* space should be allocated too.
Reviewed by Duncan Exon Smith.
llvm-svn: 239489
If the first argument to a function is a 'this' argument and the second
has the sret attribute, the ArgumentPromotion pass may promote the 'this'
argument to more than one argument, violating the IR constraint that 'sret'
may only be applied to the first or second argument.
Although this IR constraint is arguably unnecessary, it highlighted the fact
that ArgPromotion does not need to preserve this attribute. Dropping the
attribute reduces register pressure in the backend by avoiding the register
copy required by sret. Because sret implies noalias, we also replace the
former with the latter.
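A minimal sketch of the attribute rewrite described above (the helper is hypothetical; ArgumentPromotion rewrites whole argument lists rather than single arguments):
------------------------
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"

using namespace llvm;

// Drop 'sret' from an argument and keep the weaker 'noalias' property it
// implies.
static void replaceSRetWithNoAlias(Argument &Arg) {
  if (Arg.hasStructRetAttr()) {
    Arg.removeAttr(Attribute::StructRet);
    Arg.addAttr(Attribute::NoAlias);
  }
}
-------------------------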
Differential Revision: http://reviews.llvm.org/D10353
llvm-svn: 239488
This is a reimplementation of D9780 at the machine instruction level rather than the DAG.
Use the MachineCombiner pass to reassociate scalar single-precision AVX additions (just a
starting point; see the TODO comments) to increase ILP when it's safe to do so.
The code is closely based on the existing MachineCombiner optimization that is implemented
for AArch64.
This patch should not cause the kind of spilling tragedy that led to the reversion of r236031.
Differential Revision: http://reviews.llvm.org/D10321
llvm-svn: 239486