License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or the file had no
licensing in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
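For illustration, the identifier added by this series is a single comment
line at the top of each file, and the script distinguishes the comment
style by file type (the paths below are hypothetical examples):

In a header, e.g. include/linux/example.h:
        /* SPDX-License-Identifier: GPL-2.0 */

In a C source, e.g. kernel/example.c:
        // SPDX-License-Identifier: GPL-2.0

In a */uapi/* header with no other license text:
        /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */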
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_JIFFIES_H
#define _LINUX_JIFFIES_H

#include <linux/cache.h>
#include <linux/math64.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/time.h>
#include <linux/timex.h>
#include <asm/param.h>			/* for HZ */
#include <generated/timeconst.h>
/*
 * The following defines establish the engineering parameters of the PLL
 * model. The HZ variable establishes the timer interrupt frequency, 100 Hz
 * for the SunOS kernel, 256 Hz for the Ultrix kernel and 1024 Hz for the
 * OSF/1 kernel. The SHIFT_HZ define expresses the same value as the
 * nearest power of two in order to avoid hardware multiply operations.
 */
#if HZ >= 12 && HZ < 24
# define SHIFT_HZ	4
#elif HZ >= 24 && HZ < 48
# define SHIFT_HZ	5
#elif HZ >= 48 && HZ < 96
# define SHIFT_HZ	6
#elif HZ >= 96 && HZ < 192
# define SHIFT_HZ	7
#elif HZ >= 192 && HZ < 384
# define SHIFT_HZ	8
#elif HZ >= 384 && HZ < 768
# define SHIFT_HZ	9
#elif HZ >= 768 && HZ < 1536
# define SHIFT_HZ	10
#elif HZ >= 1536 && HZ < 3072
# define SHIFT_HZ	11
#elif HZ >= 3072 && HZ < 6144
# define SHIFT_HZ	12
#elif HZ >= 6144 && HZ < 12288
# define SHIFT_HZ	13
#else
# error Invalid value of HZ.
#endif

/* Suppose we want to divide two numbers NOM and DEN: NOM/DEN, then we can
 * improve accuracy by shifting LSH bits, hence calculating:
 *     (NOM << LSH) / DEN
 * This however means trouble for large NOM, because (NOM << LSH) may no
 * longer fit in 32 bits. The following way of calculating this gives us
 * some slack, under the following conditions:
 *  - (NOM / DEN) fits in (32 - LSH) bits.
 *  - (NOM % DEN) fits in (32 - LSH) bits.
 */
#define SH_DIV(NOM,DEN,LSH) (   (((NOM) / (DEN)) << (LSH))              \
                             + ((((NOM) % (DEN)) << (LSH)) + (DEN) / 2) / (DEN))
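A small worked illustration of the SH_DIV() macro above (the values 1000,
3 and 4 are chosen arbitrarily):

/*
 * Illustration only: SH_DIV(1000, 3, 4) expands to
 *      ((1000 / 3) << 4) + (((1000 % 3) << 4) + 3 / 2) / 3
 *      = (333 << 4) + ((1 << 4) + 1) / 3
 *      = 5328 + 5 = 5333,
 * the same value as (1000 << 4) / 3, but computed without requiring the
 * shifted numerator to stay within 32 bits.
 */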

/* LATCH is used in the interval timer and ftape setup. */
#define LATCH ((CLOCK_TICK_RATE + HZ/2) / HZ)	/* For divider */

jiffies: Remove compile time assumptions about CLOCK_TICK_RATE
CLOCK_TICK_RATE is used to accurately calculate exactly how
long a tick will be at a given HZ.
This is useful, because while we'd expect NSEC_PER_SEC/HZ,
the underlying hardware will have some granularity limit,
so we won't be able to have exactly HZ ticks per second.
This slight error can cause timekeeping quality problems
when using the jiffies or other jiffies-driven clocksources.
Thus we currently use the compile time CLOCK_TICK_RATE value to
generate SHIFTED_HZ and NSEC_PER_JIFFIES, which we then use
to adjust the jiffies clocksource to correct this error.
Unfortunately though, since CLOCK_TICK_RATE is a compile
time value, and the jiffies clocksource is registered very
early during boot, there are a number of cases where there
are different possible hardware timers that have different
tick rates. This causes problems in cases like ARM where
there are numerous different types of hardware, each having
their own compile-time CLOCK_TICK_RATE, making it hard to
accurately support different hardware with a single kernel.
For the most part, this doesn't matter all that much, as not
too many systems actually utilize the jiffies or jiffies-driven
clocksource. Usually there are other highres clocksources
whose granularity error is negligible.
Even so, we have some complicated calculations that we do
everywhere to handle these edge cases.
This patch removes the compile time SHIFTED_HZ value, and
introduces a register_refined_jiffies() function. This results
in the default jiffies clock being assumed to be a perfect HZ
frequency, and allows architectures that care about jiffies accuracy
to call register_refined_jiffies() with the tick rate, specified
dynamically at boot.
This allows us, where necessary, to not have a compile time
CLOCK_TICK_RATE constant, simplifies the jiffies code, and
still provides a way to have an accurate jiffies clock.
NOTE: Since this patch does not add register_refined_jiffies()
calls for every arch, it may cause time quality regressions
in some cases. It's likely these will not be noticeable, but
if they are an issue, adding the following to the end of
setup_arch() should resolve the regression:
register_refined_jiffies(CLOCK_TICK_RATE)
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
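A minimal sketch of the arch-side call described in the note above
(assuming the architecture still defines a meaningful CLOCK_TICK_RATE; the
rest of setup_arch() is elided):

void __init setup_arch(char **cmdline_p)
{
	/* ... existing architecture setup ... */

	/* Let the jiffies clocksource refine itself with the real tick rate. */
	register_refined_jiffies(CLOCK_TICK_RATE);
}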

extern int register_refined_jiffies(long clock_tick_rate);

/* TICK_NSEC is the time between ticks in nsec assuming SHIFTED_HZ */
#define TICK_NSEC ((NSEC_PER_SEC+HZ/2)/HZ)

/* TICK_USEC is the time between ticks in usec assuming SHIFTED_HZ */
#define TICK_USEC ((USEC_PER_SEC + HZ/2) / HZ)

/* USER_TICK_USEC is the time between ticks in usec assuming fake USER_HZ */
#define USER_TICK_USEC ((1000000UL + USER_HZ/2) / USER_HZ)

#ifndef __jiffy_arch_data
#define __jiffy_arch_data
#endif

/*
 * The 64-bit value is not atomic - you MUST NOT read it
 * without sampling the sequence number in jiffies_lock.
 * get_jiffies_64() will do this for you as appropriate.
 */
extern u64 __cacheline_aligned_in_smp jiffies_64;
extern unsigned long volatile __cacheline_aligned_in_smp __jiffy_arch_data jiffies;

#if (BITS_PER_LONG < 64)
u64 get_jiffies_64(void);
#else
static inline u64 get_jiffies_64(void)
{
	return (u64)jiffies;
}
#endif
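
A brief illustrative note on the helper above: on 32-bit kernels the
64-bit counter cannot be read atomically, so callers snapshot it through
get_jiffies_64() (which samples the jiffies_lock sequence count) rather
than reading jiffies_64 directly.

	/* Illustration only: take a consistent snapshot of the 64-bit counter. */
	u64 now = get_jiffies_64();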

jiffies: Avoid undefined behavior from signed overflow
According to the C standard 3.4.3p3, overflow of a signed integer results
in undefined behavior. This commit therefore changes the definitions
of time_after(), time_after_eq(), time_after64(), and time_after_eq64()
to avoid this undefined behavior. The trick is that the subtraction
is done using unsigned arithmetic, which according to 6.2.5p9 cannot
overflow because it is defined as modulo arithmetic. This has the added
(though admittedly quite small) benefit of shortening four lines of code
by four characters each.
Note that the C standard considers the cast from unsigned to
signed to be implementation-defined, see 6.3.1.3p3. However, on a
two's-complement system, an implementation that defines anything other
than a reinterpretation of the bits is free to come to me, and I will be
happy to act as a witness for its being committed to an insane asylum.
(Although I have nothing against saturating arithmetic or signals in some
cases, these things really should not be the default when compiling an
operating-system kernel.)
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Kevin Easton <kevin@guarana.org>
[ paulmck: Included time_after64() and time_after_eq64(), as suggested
  by Eric Dumazet, also fixed commit message.]
Reviewed-by: Josh Triplett <josh@joshtriplett.org>

/*
 * These inlines deal with timer wrapping correctly. You are
 * strongly encouraged to use them
 * 1. Because people otherwise forget
 * 2. Because if the timer wrap changes in future you won't have to
 *    alter your driver code.
 *
 * time_after(a,b) returns true if the time a is after time b.
 *
 * Do this with "<0" and ">=0" to only test the sign of the result. A
 * good compiler would generate better code (and a really good compiler
 * wouldn't care). Gcc is currently neither.
 */
#define time_after(a,b)		\
	(typecheck(unsigned long, a) && \
	 typecheck(unsigned long, b) && \
	 ((long)((b) - (a)) < 0))
#define time_before(a,b)	time_after(b,a)

#define time_after_eq(a,b)	\
	(typecheck(unsigned long, a) && \
	 typecheck(unsigned long, b) && \
	 ((long)((a) - (b)) >= 0))
#define time_before_eq(a,b)	time_after_eq(b,a)
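
A hedged usage sketch of the wrap-safe comparisons above (the 200 ms
deadline, the polling loop and work_done() are invented for illustration;
msecs_to_jiffies() is declared later in this header):

	unsigned long timeout = jiffies + msecs_to_jiffies(200);

	while (!work_done()) {
		if (time_after(jiffies, timeout))
			return -ETIMEDOUT;	/* assumes <linux/errno.h> */
		cpu_relax();
	}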
optimize attribute timeouts for "noac" and "actimeo=0"
Hi.
I've been looking at a bugzilla which describes a problem where
a customer was advised to use either the "noac" or "actimeo=0"
mount options to solve a consistency problem that they were
seeing in the file attributes. It turned out that this solution
did not work reliably for them because sometimes, the local
attribute cache was believed to be valid and not timed out.
(With an attribute cache timeout of 0, the cache should always
appear to be timed out.)
In looking at this situation, it appears to me that the problem
is that the attribute cache timeout code has an off-by-one
error in it. It is assuming that the cache is valid in the
region, [read_cache_jiffies, read_cache_jiffies + attrtimeo]. The
cache should be considered valid only in the region,
[read_cache_jiffies, read_cache_jiffies + attrtimeo). With this
change, the options, "noac" and "actimeo=0", work as originally
expected.
This problem was previously addressed by special casing the
attrtimeo == 0 case. However, since the problem is only an off-
by-one error, the cleaner solution is to address the off-by-one
error and thus not require the special case.
Thanx...
ps
Signed-off-by: Peter Staubach <staubach@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
/*
 * Calculate whether a is in the range of [b, c].
 */
Re: [NFS] [PATCH] Attribute timeout handling and wrapping u32 jiffies
I would like to discuss the idea that the current checks for attribute
timeout using time_after are inadequate for 32bit architectures, since
time_after works correctly only when the two timestamps being compared
are within 2^31 jiffies of each other. The signed overflow caused by
comparing values more than 2^31 jiffies apart will flip the result,
causing incorrect assumptions of validity.
2^31 jiffies is a fairly large period of time (~25 days) when compared
to the lifetime of most kernel data structures, but for long lived NFS
mounts that can sit idle for months (think that for some reason autofs
cannot be used), it is easy to compare inode attribute timestamps with
very disparate or even bogus values (as in when jiffies have wrapped
many times, where the comparison doesn't even make sense).
Currently the code tests for attribute timeout by simply adding the
desired amount of jiffies to the stored timestamp and comparing that
with the current timestamp of obtained attribute data with time_after.
This is incorrect, as it returns true for the desired timeout period
and another full 2^31 range of jiffies.
In testing with artificial jumps (several small jumps, not one big
crank) of the jiffies I was able to reproduce a problem found in a
server with very long lived NFS mounts, where attributes would not be
refreshed even after touching files and directories in the server:
Initial uptime:
03:42:01 up 6 min, 0 users, load average: 0.01, 0.12, 0.07
NFS volume is mounted and time is advanced:
03:38:09 up 25 days, 2 min, 0 users, load average: 1.22, 1.05, 1.08
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Dec 17 03:38 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Nov 22 00:36 /nfs/A/foo/bar
# touch /local/A/foo/bar
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Dec 17 03:47 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Nov 22 00:36 /nfs/A/foo/bar
We can see the local mtime is updated, but the NFS mount still shows
the old value. The patch below makes it work:
Initial setup...
07:11:02 up 25 days, 1 min, 0 users, load average: 0.15, 0.03, 0.04
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:11 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:11 /nfs/A/foo/bar
# touch /local/A/foo/bar
# ls -l /local/A/foo/bar /nfs/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:14 /local/A/foo/bar
-rw-r--r-- 1 root root 0 Jan 11 07:14 /nfs/A/foo/bar
Signed-off-by: Fabio Olive Leite <fleite@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
#define time_in_range(a,b,c) \
	(time_after_eq(a,b) && \
	 time_before_eq(a,c))
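
A small illustration of the failure mode described in the report above
(assuming a 32-bit unsigned long; names are taken from the report):

/*
 * Illustration only: once "now" has advanced more than 2^31 ticks past
 * read_cache_jiffies, the signed difference tested by time_after() has
 * wrapped back to a non-negative value, so a check such as
 *	time_after(jiffies, read_cache_jiffies + attrtimeo)
 * stops reporting the cache as expired and stale attributes are trusted
 * again.  Bounding validity with
 *	time_in_range(jiffies, read_cache_jiffies, read_cache_jiffies + attrtimeo)
 * avoids trusting such disparate timestamps.
 */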
/*
 * Calculate whether a is in the range of [b, c).
 */
#define time_in_range_open(a,b,c) \
	(time_after_eq(a,b) && \
	 time_before(a,c))

/* Same as above, but does so with platform independent 64bit types.
 * These must be used when utilizing jiffies_64 (i.e. return value of
 * get_jiffies_64() */
#define time_after64(a,b)	\
	(typecheck(__u64, a) &&	\
	 typecheck(__u64, b) && \
	 ((__s64)((b) - (a)) < 0))
#define time_before64(a,b)	time_after64(b,a)

#define time_after_eq64(a,b)	\
	(typecheck(__u64, a) && \
	 typecheck(__u64, b) && \
	 ((__s64)((a) - (b)) >= 0))
#define time_before_eq64(a,b)	time_after_eq64(b,a)

#define time_in_range64(a, b, c) \
	(time_after_eq64(a, b) && \
	 time_before_eq64(a, c))

/*
 * These four macros compare jiffies and 'a' for convenience.
 */

/* time_is_before_jiffies(a) return true if a is before jiffies */
#define time_is_before_jiffies(a) time_after(jiffies, a)
#define time_is_before_jiffies64(a) time_after64(get_jiffies_64(), a)

/* time_is_after_jiffies(a) return true if a is after jiffies */
#define time_is_after_jiffies(a) time_before(jiffies, a)
#define time_is_after_jiffies64(a) time_before64(get_jiffies_64(), a)

/* time_is_before_eq_jiffies(a) return true if a is before or equal to jiffies*/
#define time_is_before_eq_jiffies(a) time_after_eq(jiffies, a)
#define time_is_before_eq_jiffies64(a) time_after_eq64(get_jiffies_64(), a)

/* time_is_after_eq_jiffies(a) return true if a is after or equal to jiffies*/
#define time_is_after_eq_jiffies(a) time_before_eq(jiffies, a)
#define time_is_after_eq_jiffies64(a) time_before_eq64(get_jiffies_64(), a)

/*
 * Have the 32 bit jiffies value wrap 5 minutes after boot
 * so jiffies wrap bugs show up earlier.
 */
#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ))

/*
 * Change timeval to jiffies, trying to avoid the
 * most obvious overflows..
 *
 * And some not so obvious.
 *
 * Note that we don't want to return LONG_MAX, because
 * for various timeout reasons we often end up having
 * to wait "jiffies+1" in order to guarantee that we wait
 * at _least_ "jiffies" - so "jiffies+1" had better still
 * be positive.
 */
#define MAX_JIFFY_OFFSET ((LONG_MAX >> 1)-1)

extern unsigned long preset_lpj;

/*
 * We want to do realistic conversions of time so we need to use the same
 * values the update wall clock code uses as the jiffies size. This value
 * is: TICK_NSEC (which is defined in timex.h). This
 * is a constant and is in nanoseconds. We will use scaled math
 * with a set of scales defined here as SEC_JIFFIE_SC, USEC_JIFFIE_SC and
 * NSEC_JIFFIE_SC. Note that these defines contain nothing but
 * constants and so are computed at compile time. SHIFT_HZ (computed in
 * timex.h) adjusts the scaling for different HZ values.
 *
 * Scaled math??? What is that?
 *
 * Scaled math is a way to do integer math on values that would,
 * otherwise, either overflow, underflow, or cause undesired div
 * instructions to appear in the execution path. In short, we "scale"
 * up the operands so they take more bits (more precision, less
 * underflow), do the desired operation and then "scale" the result back
 * by the same amount. If we do the scaling by shifting we avoid the
 * costly mpy and the dastardly div instructions.
 *
 * Suppose, for example, we want to convert from seconds to jiffies
 * where jiffies is defined in nanoseconds as NSEC_PER_JIFFIE. The
 * simple math is: jiff = (sec * NSEC_PER_SEC) / NSEC_PER_JIFFIE; We
 * observe that (NSEC_PER_SEC / NSEC_PER_JIFFIE) is a constant which we
 * might calculate at compile time, however, the result will only have
 * about 3-4 bits of precision (less for smaller values of HZ).
 *
 * So, we scale as follows:
 * jiff = (sec) * (NSEC_PER_SEC / NSEC_PER_JIFFIE);
 * jiff = ((sec) * ((NSEC_PER_SEC * SCALE)/ NSEC_PER_JIFFIE)) / SCALE;
 * Then we make SCALE a power of two so:
 * jiff = ((sec) * ((NSEC_PER_SEC << SCALE)/ NSEC_PER_JIFFIE)) >> SCALE;
 * Now we define:
 * #define SEC_CONV = ((NSEC_PER_SEC << SCALE)/ NSEC_PER_JIFFIE))
 * jiff = (sec * SEC_CONV) >> SCALE;
 *
 * Often the math we use will expand beyond 32-bits so we tell C how to
 * do this and pass the 64-bit result of the mpy through the ">> SCALE"
 * which should take the result back to 32-bits. We want this expansion
 * to capture as much precision as possible. At the same time we don't
 * want to overflow so we pick the SCALE to avoid this. In this file,
 * that means using a different scale for each range of HZ values (as
 * defined in timex.h).
 *
 * For those who want to know, gcc will give a 64-bit result from a "*"
 * operator if the result is a long long AND at least one of the
 * operands is cast to long long (usually just prior to the "*" so as
 * not to confuse it into thinking it really has a 64-bit operand,
 * which, by the way, it can do, but it takes more code and at least 2
 * mpys).
 *
 * We also need to be aware that one second in nanoseconds is only a
 * couple of bits away from overflowing a 32-bit word, so we MUST use
 * 64-bits to get the full range time in nanoseconds.
 *
 */

/*
 * Here are the scales we will use. One for seconds, nanoseconds and
 * microseconds.
 *
 * Within the limits of cpp we do a rough cut at the SEC_JIFFIE_SC and
 * check if the sign bit is set. If not, we bump the shift count by 1.
 * (Gets an extra bit of precision where we can use it.)
 * We know it is set for HZ = 1024 and HZ = 100 not for 1000.
 * Haven't tested others.
 *
 * Limits of cpp (for #if expressions) only long (no long long), but
 * then we only need the most significant bit.
 */

#define SEC_JIFFIE_SC (31 - SHIFT_HZ)
#if !((((NSEC_PER_SEC << 2) / TICK_NSEC) << (SEC_JIFFIE_SC - 2)) & 0x80000000)
#undef SEC_JIFFIE_SC
#define SEC_JIFFIE_SC (32 - SHIFT_HZ)
#endif
#define NSEC_JIFFIE_SC (SEC_JIFFIE_SC + 29)
#define SEC_CONVERSION ((unsigned long)((((u64)NSEC_PER_SEC << SEC_JIFFIE_SC) +\
                                TICK_NSEC -1) / (u64)TICK_NSEC))

#define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\
                                        TICK_NSEC -1) / (u64)TICK_NSEC))
/*
 * The maximum jiffie value is (MAX_INT >> 1). Here we translate that
 * into seconds. The 64-bit case will overflow if we are not careful,
 * so use the messy SH_DIV macro to do it. Still all constants.
 */
#if BITS_PER_LONG < 64
# define MAX_SEC_IN_JIFFIES \
	(long)((u64)((u64)MAX_JIFFY_OFFSET * TICK_NSEC) / NSEC_PER_SEC)
#else	/* take care of overflow on 64 bits machines */
# define MAX_SEC_IN_JIFFIES \
	(SH_DIV((MAX_JIFFY_OFFSET >> SEC_JIFFIE_SC) * TICK_NSEC, NSEC_PER_SEC, 1) - 1)
#endif

/*
 * Convert various time units to each other:
 */
extern unsigned int jiffies_to_msecs(const unsigned long j);
extern unsigned int jiffies_to_usecs(const unsigned long j);

static inline u64 jiffies_to_nsecs(const unsigned long j)
{
	return (u64)jiffies_to_usecs(j) * NSEC_PER_USEC;
}

extern u64 jiffies64_to_nsecs(u64 j);

extern unsigned long __msecs_to_jiffies(const unsigned int m);
#if HZ <= MSEC_PER_SEC && !(MSEC_PER_SEC % HZ)
/*
 * HZ is equal to or smaller than 1000, and 1000 is a nice round
 * multiple of HZ, divide with the factor between them, but round
 * upwards:
 */
static inline unsigned long _msecs_to_jiffies(const unsigned int m)
{
	return (m + (MSEC_PER_SEC / HZ) - 1) / (MSEC_PER_SEC / HZ);
}
#elif HZ > MSEC_PER_SEC && !(HZ % MSEC_PER_SEC)
/*
 * HZ is larger than 1000, and HZ is a nice round multiple of 1000 -
 * simply multiply with the factor between them.
 *
 * But first make sure the multiplication result cannot overflow:
 */
static inline unsigned long _msecs_to_jiffies(const unsigned int m)
{
	if (m > jiffies_to_msecs(MAX_JIFFY_OFFSET))
		return MAX_JIFFY_OFFSET;
	return m * (HZ / MSEC_PER_SEC);
}
#else
/*
 * Generic case - multiply, round and divide. But first check that if
 * we are doing a net multiplication, that we wouldn't overflow:
 */
static inline unsigned long _msecs_to_jiffies(const unsigned int m)
{
	if (HZ > MSEC_PER_SEC && m > jiffies_to_msecs(MAX_JIFFY_OFFSET))
		return MAX_JIFFY_OFFSET;

	return (MSEC_TO_HZ_MUL32 * m + MSEC_TO_HZ_ADJ32) >> MSEC_TO_HZ_SHR32;
}
#endif
/**
 * msecs_to_jiffies: - convert milliseconds to jiffies
 * @m:	time in milliseconds
 *
 * conversion is done as follows:
 *
 * - negative values mean 'infinite timeout' (MAX_JIFFY_OFFSET)
 *
 * - 'too large' values [that would result in larger than
 *   MAX_JIFFY_OFFSET values] mean 'infinite timeout' too.
 *
 * - all other values are converted to jiffies by either multiplying
 *   the input value by a factor or dividing it with a factor and
 *   handling any 32-bit overflows.
 *   for the details see __msecs_to_jiffies()
 *
 * msecs_to_jiffies() checks for the passed in value being a constant
 * via __builtin_constant_p() allowing gcc to eliminate most of the
 * code, __msecs_to_jiffies() is called if the value passed does not
 * allow constant folding and the actual conversion must be done at
 * runtime.
 * the HZ range specific helpers _msecs_to_jiffies() are called both
 * directly here and from __msecs_to_jiffies() in the case where
 * constant folding is not possible.
 */
static __always_inline unsigned long msecs_to_jiffies(const unsigned int m)
{
	if (__builtin_constant_p(m)) {
		if ((int)m < 0)
			return MAX_JIFFY_OFFSET;
		return _msecs_to_jiffies(m);
	} else {
		return __msecs_to_jiffies(m);
	}
}
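
A short usage sketch (the 100 ms value and the sleep call are only
illustrative): when the argument is a compile-time constant, the
__builtin_constant_p() branch above lets the whole conversion fold to a
constant at build time.

	/* Illustration only: express a relative timeout in jiffies. */
	unsigned long t = msecs_to_jiffies(100);

	schedule_timeout_interruptible(t);	/* assumes <linux/sched.h> */
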
extern unsigned long __usecs_to_jiffies(const unsigned int u);
#if !(USEC_PER_SEC % HZ)
static inline unsigned long _usecs_to_jiffies(const unsigned int u)
{
	return (u + (USEC_PER_SEC / HZ) - 1) / (USEC_PER_SEC / HZ);
}
#else
static inline unsigned long _usecs_to_jiffies(const unsigned int u)
{
	return (USEC_TO_HZ_MUL32 * u + USEC_TO_HZ_ADJ32)
		>> USEC_TO_HZ_SHR32;
}
#endif

/**
 * usecs_to_jiffies: - convert microseconds to jiffies
 * @u:	time in microseconds
 *
 * conversion is done as follows:
 *
 * - 'too large' values [that would result in larger than
 *   MAX_JIFFY_OFFSET values] mean 'infinite timeout' too.
 *
 * - all other values are converted to jiffies by either multiplying
 *   the input value by a factor or dividing it with a factor and
 *   handling any 32-bit overflows as for msecs_to_jiffies.
 *
 * usecs_to_jiffies() checks for the passed in value being a constant
 * via __builtin_constant_p() allowing gcc to eliminate most of the
 * code, __usecs_to_jiffies() is called if the value passed does not
 * allow constant folding and the actual conversion must be done at
 * runtime.
 * the HZ range specific helpers _usecs_to_jiffies() are called both
 * directly here and from __usecs_to_jiffies() in the case where
 * constant folding is not possible.
 */
static __always_inline unsigned long usecs_to_jiffies(const unsigned int u)
{
	if (__builtin_constant_p(u)) {
		if (u > jiffies_to_usecs(MAX_JIFFY_OFFSET))
			return MAX_JIFFY_OFFSET;
		return _usecs_to_jiffies(u);
	} else {
		return __usecs_to_jiffies(u);
	}
}

extern unsigned long timespec64_to_jiffies(const struct timespec64 *value);
extern void jiffies_to_timespec64(const unsigned long jiffies,
				  struct timespec64 *value);
static inline unsigned long timespec_to_jiffies(const struct timespec *value)
{
	struct timespec64 ts = timespec_to_timespec64(*value);

	return timespec64_to_jiffies(&ts);
}

static inline void jiffies_to_timespec(const unsigned long jiffies,
				       struct timespec *value)
{
	struct timespec64 ts;

	jiffies_to_timespec64(jiffies, &ts);
	*value = timespec64_to_timespec(ts);
}

extern unsigned long timeval_to_jiffies(const struct timeval *value);
extern void jiffies_to_timeval(const unsigned long jiffies,
			       struct timeval *value);

extern clock_t jiffies_to_clock_t(unsigned long x);
static inline clock_t jiffies_delta_to_clock_t(long delta)
{
	return jiffies_to_clock_t(max(0L, delta));
}

static inline unsigned int jiffies_delta_to_msecs(long delta)
{
	return jiffies_to_msecs(max(0L, delta));
}

extern unsigned long clock_t_to_jiffies(unsigned long x);
extern u64 jiffies_64_to_clock_t(u64 x);
extern u64 nsec_to_clock_t(u64 x);
extern u64 nsecs_to_jiffies64(u64 n);
extern unsigned long nsecs_to_jiffies(u64 n);

#define TIMESTAMP_SIZE	30

#endif