REQUIRES: x86
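
# Linking with no /failifmismatch directives at all succeeds.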
RUN: lld-link /entry:main /subsystem:console /out:%t.exe \
RUN: %p/Inputs/ret42.obj
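
# Directives with different keys never conflict with each other.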
RUN: lld-link /entry:main /subsystem:console /out:%t.exe \
RUN: %p/Inputs/ret42.obj /failifmismatch:k1=v1 /failifmismatch:k2=v1
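
# Repeating the same key is allowed as long as the values are identical.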
RUN: lld-link /entry:main /subsystem:console /out:%t.exe \
RUN: %p/Inputs/ret42.obj /failifmismatch:k1=v1 /failifmismatch:k1=v1
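
# Two different values for the same key must fail the link. LLD_IN_TEST=1
# makes lld run the link a single time, so the error is printed only once.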
RUN: env LLD_IN_TEST=1 not lld-link /entry:main /subsystem:console /out:%t.exe \
RUN: %p/Inputs/ret42.obj /failifmismatch:k1=v1 /failifmismatch:k1=v2 2>&1 | FileCheck %s
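
# The same mismatch is detected between directives embedded in two object
# files: compile two IR inputs carrying conflicting TEST values, then link
# them together.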
RUN: llc < %p/Inputs/failmismatch1.ll -mtriple x86_64-windows-msvc -filetype obj -o %t1.obj
RUN: llc < %p/Inputs/failmismatch2.ll -mtriple x86_64-windows-msvc -filetype obj -o %t2.obj
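
# For reference, such a directive typically reaches a COFF object through
# module-level linker options in the IR (a sketch; the exact contents of the
# Inputs files may differ):
#   !llvm.linker.options = !{!0}
#   !0 = !{!"/FAILIFMISMATCH:TEST=1"}
# llc emits these options into the .drectve section, which lld-link parses as
# additional command-line arguments.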
RUN: env LLD_IN_TEST=1 not lld-link %t1.obj %t2.obj 2>&1 | FileCheck %s -check-prefix OBJ
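
# The mismatch is also diagnosed when one of the objects is pulled in as an
# archive member rather than passed directly on the command line.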
RUN: llvm-lib %t1.obj /out:%t.lib
RUN: env LLD_IN_TEST=1 not lld-link %t.lib %t2.obj 2>&1 | FileCheck %s -check-prefix LIB
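
# Diagnostic for the command-line mismatch: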
CHECK: lld-link: error: /failifmismatch: mismatch detected for 'k1':
CHECK-NEXT: >>> cmd-line has value v1
CHECK-NEXT: >>> cmd-line has value v2
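
# Diagnostic for the object/object mismatch: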
OBJ: lld-link: error: /failifmismatch: mismatch detected for 'TEST':
OBJ-NEXT: >>> {{.*}}failifmismatch.test.tmp1.obj has value 1
OBJ-NEXT: >>> {{.*}}failifmismatch.test.tmp2.obj has value 2
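
# Diagnostic for the archive/object mismatch; the archive member is printed
# in lib(member) notation: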
LIB: lld-link: error: /failifmismatch: mismatch detected for 'TEST':
LIB-NEXT: >>> {{.*}}failifmismatch.test.tmp2.obj has value 2
LIB-NEXT: >>> failifmismatch.test.tmp.lib(failifmismatch.test.tmp1.obj) has value 1