[X86] Move the implicit enabling of sse2 for 64-bit mode from X86Subtarget::initSubtargetFeatures to X86_MC::ParseX86Triple.

ParseX86Triple already checks for 64-bit mode and produces a
static string. We can just add +sse2 to the end of that static
string. This avoids a potential reallocation when appending it
to the std::string at runtime.
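
For illustration, here is a minimal standalone sketch of the
reallocation point (invented names, not the LLVM code): folding
+sse2 into the string literal lets the std::string be sized once,
whereas appending it afterwards may force the buffer to grow.

#include <string>

// Before: build the base string, then append, possibly reallocating.
std::string featuresOld(bool Is64Bit) {
  std::string FS;
  if (Is64Bit)
    FS = "+64bit-mode,-32bit-mode,-16bit-mode"; // first allocation
  if (Is64Bit)
    FS += ",+sse2";                             // may reallocate to grow FS
  return FS;
}

// After: the literal already contains +sse2, so FS is sized once.
std::string featuresNew(bool Is64Bit) {
  std::string FS;
  if (Is64Bit)
    FS = "+64bit-mode,-32bit-mode,-16bit-mode,+sse2";
  return FS;
}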

This is a slight change in behavior for tools that only use
the MC layer, which weren't implicitly enabling sse2 before
but will now. I don't think we check for sse2 explicitly in
any MC-layer components, so this shouldn't matter in
practice. And if it did matter, the new behavior is more
correct.
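
As a rough sketch of how an MC-layer-only consumer could observe the
new default (this example is not part of the commit; it assumes the
2020-era headers llvm/Support/TargetRegistry.h and
llvm/Support/TargetSelect.h, the public TargetRegistry and
MCSubtargetInfo APIs, and that the X86 target is linked in):

#include "llvm/MC/MCSubtargetInfo.h"
#include "llvm/Support/TargetRegistry.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <string>

bool mcLayerDefaultsToSSE2() {
  llvm::InitializeAllTargetInfos();
  llvm::InitializeAllTargetMCs();

  std::string Error;
  const std::string TripleName = "x86_64-unknown-linux-gnu";
  const llvm::Target *T = llvm::TargetRegistry::lookupTarget(TripleName, Error);
  if (!T)
    return false;

  // Empty CPU and feature string: only the triple-derived defaults apply,
  // which is exactly what ParseX86Triple now controls.
  std::unique_ptr<llvm::MCSubtargetInfo> STI(
      T->createMCSubtargetInfo(TripleName, /*CPU=*/"", /*Features=*/""));

  // Before this change this was false for tools that only use the MC layer;
  // now it reports true.
  return STI && STI->checkFeatures("+sse2");
}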
Craig Topper 2020-07-24 11:10:28 -07:00
parent 809600d664
commit 945ed22f33
2 changed files with 4 additions and 7 deletions

llvm/lib/Target/X86/MCTargetDesc/X86MCTargetDesc.cpp

@@ -44,8 +44,10 @@ using namespace llvm;
 std::string X86_MC::ParseX86Triple(const Triple &TT) {
   std::string FS;
-  if (TT.getArch() == Triple::x86_64)
-    FS = "+64bit-mode,-32bit-mode,-16bit-mode";
+  // SSE2 should default to enabled in 64-bit mode, but can be turned off
+  // explicitly.
+  if (TT.isArch64Bit())
+    FS = "+64bit-mode,-32bit-mode,-16bit-mode,+sse2";
   else if (TT.getEnvironment() != Triple::CODE16)
     FS = "-64bit-mode,+32bit-mode,-16bit-mode";
   else
     FS = "-64bit-mode,-32bit-mode,+16bit-mode";

llvm/lib/Target/X86/X86Subtarget.cpp

@@ -234,11 +234,6 @@ void X86Subtarget::initSubtargetFeatures(StringRef CPU, StringRef FS) {
   std::string FullFS = X86_MC::ParseX86Triple(TargetTriple);
   assert(!FullFS.empty() && "Failed to parse X86 triple");
 
-  // SSE2 should default to enabled in 64-bit mode, but can be turned off
-  // explicitly.
-  if (TargetTriple.isArch64Bit())
-    FullFS += ",+sse2";
-
   if (!FS.empty())
     FullFS = (Twine(FullFS) + "," + FS).str();
 