commit da25f968a9
Profiling a basic internal real input read benchmark shows hot spots in the code that prepares input for decimal-to-binary conversion, which is of course where the time should be spent. The library that implements decimal to/from binary conversions has been optimized, but the code in the Fortran runtime that calls it has not, and some obvious lightweight changes are worth making here.

Move some member functions from *.cpp files into the class definitions of Descriptor and IoStatementState to enable inlining and specialization. Make GetNextInputBytes() the new basic input API within the runtime, replacing GetCurrentChar() -- which is rewritten in terms of GetNextInputBytes() -- so that input routines can acquire more than one input character at a time and amortize per-call overhead.

These changes reduce the time to read 1M random reals via internal I/O from a character array from 1.29s to 0.54s on my machine, which is on par with Intel Fortran and much faster than GNU Fortran.

Differential Revision: https://reviews.llvm.org/D113697
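To make the shape of that API change concrete, here is a minimal, self-contained sketch -- not the actual flang runtime code; the class layout, the signatures, and the HandleRelativePosition() helper shown here are illustrative assumptions -- of a single-character accessor layered over a multi-byte fetch, with both members defined in the class body so the compiler can inline them:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <optional>

// Hypothetical stand-in for the runtime's I/O statement state; the real
// class in the flang runtime has a different and richer interface.
class IoStatementState {
public:
  IoStatementState(const char *data, std::size_t length)
      : buffer_{data}, length_{length} {}

  // Basic input API: publish a pointer to the remaining buffered bytes and
  // return how many are available, so hot paths (e.g. real input editing)
  // can scan several characters per call and amortize per-call overhead.
  std::size_t GetNextInputBytes(const char *&p) {
    p = buffer_ + position_;
    return length_ - position_;
  }

  // Single-character accessor, now a thin wrapper over GetNextInputBytes().
  std::optional<char> GetCurrentChar() {
    const char *p{nullptr};
    if (GetNextInputBytes(p) > 0) {
      return *p;
    }
    return std::nullopt;
  }

  // Advance past bytes the caller has consumed (clamped to the buffer).
  void HandleRelativePosition(std::size_t bytes) {
    position_ += std::min(bytes, length_ - position_);
  }

private:
  const char *buffer_;
  std::size_t length_;
  std::size_t position_{0};
};

int main() {
  const char *record{"  3.14159  "};
  IoStatementState io{record, std::strlen(record)};
  // Skip leading blanks a chunk at a time instead of paying one
  // GetCurrentChar() call per byte.
  const char *p{nullptr};
  std::size_t got{io.GetNextInputBytes(p)};
  std::size_t skipped{0};
  while (skipped < got && p[skipped] == ' ') {
    ++skipped;
  }
  io.HandleRelativePosition(skipped);
  if (auto ch{io.GetCurrentChar()}) {
    std::printf("first significant character: %c\n", *ch);
  }
}
```

The benchmark-relevant point is the loop in main(): a caller that previously made one function call per input byte through GetCurrentChar() can now scan a whole buffered chunk per GetNextInputBytes() call, while GetCurrentChar() remains available as a thin wrapper for code that still wants one character at a time.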
Files changed:
- CMakeLists.txt
- big-radix-floating-point.h
- binary-to-decimal.cpp
- decimal-to-binary.cpp