ACPI and power management updates for 3.9-rc1

Merge tag 'pm+acpi-3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:

 - Rework of the ACPI namespace scanning code from Rafael J. Wysocki
   with contributions from Bjorn Helgaas, Jiang Liu, Mika Westerberg,
   Toshi Kani, and Yinghai Lu.

 - ACPI power resources handling and ACPI device PM update from
   Rafael J. Wysocki.

 - ACPICA update to version 20130117 from Bob Moore and Lv Zheng with
   contributions from Aaron Lu, Chao Guan, Jesper Juhl, and Tim Gardner.

 - Support for Intel Lynxpoint LPSS from Mika Westerberg.

 - cpuidle update from Len Brown including Intel Haswell support, C1
   state for intel_idle, removal of global pm_idle.

 - cpuidle fixes and cleanups from Daniel Lezcano.

 - cpufreq fixes and cleanups from Viresh Kumar and Fabio Baltieri with
   contributions from Stratos Karafotis and Rickard Andersson.

 - Intel P-states driver for Sandy Bridge processors from Dirk Brandewie.

 - cpufreq driver for Marvell Kirkwood SoCs from Andrew Lunn.

 - cpufreq fixes related to ordering issues between acpi-cpufreq and
   powernow-k8 from Borislav Petkov and Matthew Garrett.

 - cpufreq support for Calxeda Highbank processors from Mark Langsdorf
   and Rob Herring.

 - cpufreq driver for the Freescale i.MX6Q SoC and cpufreq-cpu0 update
   from Shawn Guo.

 - cpufreq Exynos fixes and cleanups from Jonghwan Choi, Sachin Kamat,
   and Inderpal Singh.

 - Support for "lightweight suspend" from Zhang Rui.

 - Removal of the deprecated power trace API from Paul Gortmaker.

 - Assorted updates from Andreas Fleig, Colin Ian King, Davidlohr Bueso,
   Joseph Salisbury, Kees Cook, Li Fei, Nishanth Menon, ShuoX Liu,
   Srinivas Pandruvada, Tejun Heo, Thomas Renninger, and Yasuaki Ishimatsu.

* tag 'pm+acpi-3.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (267 commits)
  PM idle: remove global declaration of pm_idle
  unicore32 idle: delete stray pm_idle comment
  openrisc idle: delete pm_idle
  mn10300 idle: delete pm_idle
  microblaze idle: delete pm_idle
  m32r idle: delete pm_idle, and other dead idle code
  ia64 idle: delete pm_idle
  cris idle: delete idle and pm_idle
  ARM64 idle: delete pm_idle
  ARM idle: delete pm_idle
  blackfin idle: delete pm_idle
  sparc idle: rename pm_idle to sparc_idle
  sh idle: rename global pm_idle to static sh_idle
  x86 idle: rename global pm_idle to static x86_idle
  APM idle: register apm_cpu_idle via cpuidle
  cpufreq / intel_pstate: Add kernel command line option disable intel_pstate.
  cpufreq / intel_pstate: Change to disallow module build
  tools/power turbostat: display SMI count by default
  intel_idle: export both C1 and C1E
  ACPI / hotplug: Fix concurrency issues and memory leaks
  ...
commit 8793422fd9
@@ -0,0 +1,13 @@
What:		/sys/devices/.../power_resources_D0/
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../power_resources_D0/ directory is only
		present for device objects representing ACPI device nodes that
		use ACPI power resources for power management.

		If present, it contains symbolic links to device directories
		representing ACPI power resources that need to be turned on for
		the given device node to be in ACPI power state D0. The names
		of the links are the same as the names of the directories they
		point to.
@@ -0,0 +1,14 @@
What:		/sys/devices/.../power_resources_D1/
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../power_resources_D1/ directory is only
		present for device objects representing ACPI device nodes that
		use ACPI power resources for power management and support ACPI
		power state D1.

		If present, it contains symbolic links to device directories
		representing ACPI power resources that need to be turned on for
		the given device node to be in ACPI power state D1. The names
		of the links are the same as the names of the directories they
		point to.
@@ -0,0 +1,14 @@
What:		/sys/devices/.../power_resources_D2/
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../power_resources_D2/ directory is only
		present for device objects representing ACPI device nodes that
		use ACPI power resources for power management and support ACPI
		power state D2.

		If present, it contains symbolic links to device directories
		representing ACPI power resources that need to be turned on for
		the given device node to be in ACPI power state D2. The names
		of the links are the same as the names of the directories they
		point to.
@@ -0,0 +1,14 @@
What:		/sys/devices/.../power_resources_D3hot/
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../power_resources_D3hot/ directory is only
		present for device objects representing ACPI device nodes that
		use ACPI power resources for power management and support ACPI
		power state D3hot.

		If present, it contains symbolic links to device directories
		representing ACPI power resources that need to be turned on for
		the given device node to be in ACPI power state D3hot. The
		names of the links are the same as the names of the directories
		they point to.
@@ -0,0 +1,20 @@
What:		/sys/devices/.../power_state
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../power_state attribute is only present for
		device objects representing ACPI device nodes that provide power
		management methods.

		If present, it contains a string representing the current ACPI
		power state of the given device node. Its possible values,
		"D0", "D1", "D2", "D3hot", and "D3cold", reflect the power state
		names defined by the ACPI specification (ACPI 4 and above).

		If the device node uses shared ACPI power resources, this state
		determines a list of power resources required not to be turned
		off. However, some power resources needed by the device node in
		higher-power (lower-number) states may also be ON because of
		some other devices using them at the moment.

		This attribute is read-only.
@@ -0,0 +1,23 @@
What:		/sys/devices/.../real_power_state
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../real_power_state attribute is only present
		for device objects representing ACPI device nodes that provide
		power management methods and use ACPI power resources for power
		management.

		If present, it contains a string representing the real ACPI
		power state of the given device node as returned by the _PSC
		control method or inferred from the configuration of power
		resources. Its possible values, "D0", "D1", "D2", "D3hot", and
		"D3cold", reflect the power state names defined by the ACPI
		specification (ACPI 4 and above).

		In some situations the value of this attribute may be different
		from the value of the /sys/devices/.../power_state attribute for
		the same device object. If that happens, some shared power
		resources used by the device node are only ON because of some
		other devices using them at the moment.

		This attribute is read-only.
@@ -0,0 +1,12 @@
What:		/sys/devices/.../resource_in_use
Date:		January 2013
Contact:	Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Description:
		The /sys/devices/.../resource_in_use attribute is only present
		for device objects representing ACPI power resources.

		If present, it contains a number (0 or 1) representing the
		current status of the given power resource (0 means that the
		resource is not in use and therefore it has been turned off).

		This attribute is read-only.
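The attributes documented above are plain sysfs text files, so user space can
consume them with ordinary file I/O. A minimal sketch of reading power_state
follows; the device path is left as the "..." placeholder from the ABI entries
and is not a real path:

	#include <stdio.h>

	int main(void)
	{
		/* placeholder path: substitute the real ACPI device directory */
		const char *path = "/sys/devices/.../power_state";
		char state[16];
		FILE *f = fopen(path, "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		if (fgets(state, sizeof(state), f))
			printf("current ACPI power state: %s", state);
		fclose(f);
		return 0;
	}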
@@ -63,8 +63,8 @@ from ACPI tables.
 Currently the kernel is not able to automatically determine from which ACPI
 device it should make the corresponding platform device so we need to add
 the ACPI device explicitly to acpi_platform_device_ids list defined in
-drivers/acpi/scan.c. This limitation is only for the platform devices, SPI
-and I2C devices are created automatically as described below.
+drivers/acpi/acpi_platform.c. This limitation is only for the platform
+devices, SPI and I2C devices are created automatically as described below.

 SPI serial bus support
 ~~~~~~~~~~~~~~~~~~~~~~
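For illustration only, registering an extra device with that list amounts to
adding one more entry to the table in drivers/acpi/acpi_platform.c; the
"ABCD0001" ID below is made up and the existing entries are elided:

	static const struct acpi_device_id acpi_platform_device_ids[] = {
		/* ... existing ACPI/PNP IDs ... */
		{ "ABCD0001" },	/* hypothetical ID to be enumerated as a platform device */
		{ }
	};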
@@ -0,0 +1,77 @@
ACPI Scan Handlers

Copyright (C) 2012, Intel Corporation
Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

During system initialization and ACPI-based device hot-add, the ACPI namespace
is scanned in search of device objects that generally represent various pieces
of hardware. This causes a struct acpi_device object to be created and
registered with the driver core for every device object in the ACPI namespace
and the hierarchy of those struct acpi_device objects reflects the namespace
layout (i.e. parent device objects in the namespace are represented by parent
struct acpi_device objects and analogously for their children). Those struct
acpi_device objects are referred to as "device nodes" in what follows, but they
should not be confused with struct device_node objects used by the Device Trees
parsing code (although their role is analogous to the role of those objects).

During ACPI-based device hot-remove device nodes representing pieces of hardware
being removed are unregistered and deleted.

The core ACPI namespace scanning code in drivers/acpi/scan.c carries out basic
initialization of device nodes, such as retrieving common configuration
information from the device objects represented by them and populating them with
appropriate data, but some of them require additional handling after they have
been registered. For example, if the given device node represents a PCI host
bridge, its registration should cause the PCI bus under that bridge to be
enumerated and PCI devices on that bus to be registered with the driver core.
Similarly, if the device node represents a PCI interrupt link, it is necessary
to configure that link so that the kernel can use it.

Those additional configuration tasks usually depend on the type of the hardware
component represented by the given device node which can be determined on the
basis of the device node's hardware ID (HID). They are performed by objects
called ACPI scan handlers represented by the following structure:

struct acpi_scan_handler {
	const struct acpi_device_id *ids;
	struct list_head list_node;
	int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
	void (*detach)(struct acpi_device *dev);
};

where ids is the list of IDs of device nodes the given handler is supposed to
take care of, list_node is the hook to the global list of ACPI scan handlers
maintained by the ACPI core and the .attach() and .detach() callbacks are
executed, respectively, after registration of new device nodes and before
unregistration of device nodes the handler attached to previously.

The namespace scanning function, acpi_bus_scan(), first registers all of the
device nodes in the given namespace scope with the driver core. Then, it tries
to match a scan handler against each of them using the ids arrays of the
available scan handlers. If a matching scan handler is found, its .attach()
callback is executed for the given device node. If that callback returns 1,
that means that the handler has claimed the device node and is now responsible
for carrying out any additional configuration tasks related to it. It also will
be responsible for preparing the device node for unregistration in that case.
The device node's handler field is then populated with the address of the scan
handler that has claimed it.

If the .attach() callback returns 0, it means that the device node is not
interesting to the given scan handler and may be matched against the next scan
handler in the list. If it returns a (negative) error code, that means that
the namespace scan should be terminated due to a serious error. The error code
returned should then reflect the type of the error.

The namespace trimming function, acpi_bus_trim(), first executes .detach()
callbacks from the scan handlers of all device nodes in the given namespace
scope (if they have scan handlers). Next, it unregisters all of the device
nodes in that scope.

ACPI scan handlers can be added to the list maintained by the ACPI core with the
help of the acpi_scan_add_handler() function taking a pointer to the new scan
handler as an argument. The order in which scan handlers are added to the list
is the order in which they are matched against device nodes during namespace
scans.

All scan handlers must be added to the list before acpi_bus_scan() is run for the
first time and they cannot be removed from it.
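To make the interface described above concrete, a minimal scan handler would be
set up roughly as sketched below. The "XYZ0001" ID and all foo_* names are
invented; real handlers live inside the ACPI core and are registered before the
first acpi_bus_scan():

	static const struct acpi_device_id foo_device_ids[] = {
		{ "XYZ0001" },	/* hypothetical hardware ID (HID) */
		{ }
	};

	static int foo_attach(struct acpi_device *adev,
			      const struct acpi_device_id *id)
	{
		/* Claim the device node and perform any extra configuration here. */
		return 1;	/* 1: claimed, 0: not interesting, <0: fatal error */
	}

	static void foo_detach(struct acpi_device *adev)
	{
		/* Undo whatever foo_attach() set up before the node is removed. */
	}

	static struct acpi_scan_handler foo_scan_handler = {
		.ids	= foo_device_ids,
		.attach	= foo_attach,
		.detach	= foo_detach,
	};

	/* called from the ACPI core's init path, before the first namespace scan */
	static int foo_scan_handler_register(void)
	{
		return acpi_scan_add_handler(&foo_scan_handler);
	}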
@@ -111,6 +111,12 @@ policy->governor must contain the "default policy" for
 For setting some of these values, the frequency table helpers might be
 helpful. See the section 2 for more information on them.

+SMP systems normally have same clock source for a group of cpus. For these the
+.init() would be called only once for the first online cpu. Here the .init()
+routine must initialize policy->cpus with mask of all possible cpus (Online +
+Offline) that share the clock. Then the core would copy this mask onto
+policy->related_cpus and will reset policy->cpus to carry only online cpus.
+

 1.3 verify
 ------------
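As a rough illustration of the paragraph added above, a driver for an imaginary
SoC whose CPUs all share one clock could populate the mask in its .init()
callback along these lines; every foo_* name is invented and the latency value
is a placeholder:

	static int foo_cpufreq_init(struct cpufreq_policy *policy)
	{
		/* all possible CPUs share the clock on this hypothetical SoC */
		cpumask_setall(policy->cpus);	/* the core derives related_cpus from this */

		policy->cur = foo_get_rate_khz();			/* assumed platform helper */
		policy->cpuinfo.transition_latency = 300 * 1000;	/* ns, placeholder */

		return cpufreq_frequency_table_cpuinfo(policy, foo_freq_table);
	}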
@@ -190,11 +190,11 @@ scaling_max_freq show the current "policy limits" (in
				first set scaling_max_freq, then
				scaling_min_freq.

-affected_cpus :		List of CPUs that require software coordination
-				of frequency.
+affected_cpus :		List of Online CPUs that require software
+				coordination of frequency.

-related_cpus :			List of CPUs that need some sort of frequency
-				coordination, whether software or hardware.
+related_cpus :			List of Online + Offline CPUs that need software
+				coordination of frequency.

 scaling_driver :		Hardware driver for cpufreq.

@@ -0,0 +1,27 @@
Marvell Kirkwood Platforms Device Tree Bindings
-----------------------------------------------

Boards with a SoC of the Marvell Kirkwood
shall have the following property:

Required root node property:

compatible: must contain "marvell,kirkwood";

In order to support the kirkwood cpufreq driver, there must be a node
cpus/cpu@0 with three clocks, "cpu_clk", "ddrclk" and "powersave",
where the "powersave" clock is a gating clock used to switch the CPU
between the "cpu_clk" and the "ddrclk".

Example:

	cpus {
		#address-cells = <1>;
		#size-cells = <0>;

		cpu@0 {
			device_type = "cpu";
			compatible = "marvell,sheeva-88SV131";
			clocks = <&core_clk 1>, <&core_clk 3>, <&gate_clk 11>;
			clock-names = "cpu_clk", "ddrclk", "powersave";
		};
@@ -1039,16 +1039,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			Claim all unknown PCI IDE storage controllers.

	idle=		[X86]
-			Format: idle=poll, idle=mwait, idle=halt, idle=nomwait
+			Format: idle=poll, idle=halt, idle=nomwait
			Poll forces a polling idle loop that can slightly
			improve the performance of waking up a idle CPU, but
			will use a lot of power and make the system run hot.
			Not recommended.
-			idle=mwait: On systems which support MONITOR/MWAIT but
-			the kernel chose to not use it because it doesn't save
-			as much power as a normal idle loop, use the
-			MONITOR/MWAIT idle loop anyways. Performance should be
-			the same as idle=poll.
			idle=halt: Halt is forced to be used for CPU idle.
			In such case C2/C3 won't be used again.
			idle=nomwait: Disable mwait for CPU C-states
@@ -1131,6 +1126,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			0 disables intel_idle and fall back on acpi_idle.
			1 to 6 specify maximum depth of C-state.

+	intel_pstate=	[X86]
+			disable
+			  Do not enable intel_pstate as the default
+			  scaling driver for the supported processors
+
	intremap=	[X86-64, Intel-IOMMU]
			on	enable Interrupt Remapping (default)
			off	disable Interrupt Remapping
@@ -1886,10 +1886,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
			wfi(ARM) instruction doesn't work correctly and not to
			use it. This is also useful when using JTAG debugger.

-	no-hlt		[BUGS=X86-32] Tells the kernel that the hlt
-			instruction doesn't work correctly and not to
-			use it.
-
	no_file_caps	Tells the kernel not to honor file capabilities. The
			only way then for a file to be executed with privilege
			is to be setuid root or executed by root.
@@ -223,3 +223,8 @@ since they ask the freezer to skip freezing this task, since it is anyway
 only after the entire suspend/hibernation sequence is complete.
 So, to summarize, use [un]lock_system_sleep() instead of directly using
 mutex_[un]lock(&pm_mutex). That would prevent freezing failures.
+
+V. Miscellaneous
+/sys/power/pm_freeze_timeout controls how long it will cost at most to freeze
+all user space processes or all freezable kernel threads, in unit of millisecond.
+The default value is 20000, with range of unsigned integer.
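The new knob added above is written like any other sysfs file; a hedged
user-space sketch setting a 10 second timeout (the value is in milliseconds):

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/power/pm_freeze_timeout", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "%u\n", 10000);	/* 10000 ms == 10 s */
		fclose(f);
		return 0;
	}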
@@ -426,6 +426,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
     'power.runtime_error' is set or 'power.disable_depth' is greater than
     zero)

+  bool pm_runtime_active(struct device *dev);
+    - return true if the device's runtime PM status is 'active' or its
+      'power.disable_depth' field is not equal to zero, or false otherwise
+
   bool pm_runtime_suspended(struct device *dev);
     - return true if the device's runtime PM status is 'suspended' and its
       'power.disable_depth' field is equal to zero, or false otherwise
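A short, hypothetical use of the helpers documented above (the foo_* names are
invented and error handling is minimal):

	#include <linux/pm_runtime.h>

	static int foo_read_register(struct device *dev, u32 *val)
	{
		if (pm_runtime_suspended(dev))
			return -EAGAIN;	/* powered down: resume first, e.g. pm_runtime_get_sync() */

		*val = foo_hw_read(dev);	/* assumed device-specific accessor */
		return 0;
	}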
@@ -17,7 +17,7 @@ Cf. include/trace/events/power.h for the events definitions.
 1. Power state switch events
 ============================

-1.1 New trace API
+1.1 Trace API
 -----------------

 A 'cpu' event class gathers the CPU-related events: cpuidle and
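For illustration, this is roughly how the 'cpu' class events are emitted from an
idle loop (the same trace_cpu_idle() calls appear in the architecture changes
later in this series); foo_enter_idle() is an invented name:

	#include <trace/events/power.h>

	static void foo_enter_idle(void)
	{
		trace_cpu_idle(1, smp_processor_id());	/* entering C-state 1 */
		/* ... architecture-specific wait-for-interrupt ... */
		trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
	}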
@@ -41,31 +41,6 @@ The event which has 'state=4294967295' in the trace is very important to the use
 space tools which are using it to detect the end of the current state, and so to
 correctly draw the states diagrams and to calculate accurate statistics etc.

-1.2 DEPRECATED trace API
-------------------------
-
-A new Kconfig option CONFIG_EVENT_POWER_TRACING_DEPRECATED with the default value of
-'y' has been created. This allows the legacy trace power API to be used conjointly
-with the new trace API.
-The Kconfig option, the old trace API (in include/trace/events/power.h) and the
-old trace points will disappear in a future release (namely 2.6.41).
-
-power_start		"type=%lu state=%lu cpu_id=%lu"
-power_frequency		"type=%lu state=%lu cpu_id=%lu"
-power_end		"cpu_id=%lu"
-
-The 'type' parameter takes one of those macros:
-. POWER_NONE = 0,
-. POWER_CSTATE = 1, /* C-State */
-. POWER_PSTATE = 2, /* Frequency change or DVFS */
-
-The 'state' parameter is set depending on the type:
-. Target C-state for type=POWER_CSTATE,
-. Target frequency for type=POWER_PSTATE,
-
-power_end is used to indicate the exit of a state, corresponding to the latest
-power_start event.
-
 2. Clocks events
 ================
 The clock events are used for clock enable/disable and for
@@ -37,6 +37,16 @@
 			next-level-cache = <&L2>;
 			clocks = <&a9pll>;
 			clock-names = "cpu";
+			operating-points = <
+				/* kHz    ignored */
+				1300000 1000000
+				1200000 1000000
+				1100000 1000000
+				 800000 1000000
+				 400000 1000000
+				 200000 1000000
+			>;
+			clock-latency = <100000>;
 		};

 		cpu@901 {
@ -172,14 +172,9 @@ static void default_idle(void)
|
|||
local_irq_enable();
|
||||
}
|
||||
|
||||
void (*pm_idle)(void) = default_idle;
|
||||
EXPORT_SYMBOL(pm_idle);
|
||||
|
||||
/*
|
||||
* The idle thread, has rather strange semantics for calling pm_idle,
|
||||
* but this is what x86 does and we need to do the same, so that
|
||||
* things like cpuidle get called in the same way. The only difference
|
||||
* is that we always respect 'hlt_counter' to prevent low power idle.
|
||||
* The idle thread.
|
||||
* We always respect 'hlt_counter' to prevent low power idle.
|
||||
*/
|
||||
void cpu_idle(void)
|
||||
{
|
||||
|
@ -210,10 +205,10 @@ void cpu_idle(void)
|
|||
} else if (!need_resched()) {
|
||||
stop_critical_timings();
|
||||
if (cpuidle_idle_call())
|
||||
pm_idle();
|
||||
default_idle();
|
||||
start_critical_timings();
|
||||
/*
|
||||
* pm_idle functions must always
|
||||
* default_idle functions must always
|
||||
* return with IRQs enabled.
|
||||
*/
|
||||
WARN_ON(irqs_disabled());
|
||||
|
|
|
@ -31,7 +31,6 @@ static void __iomem *twd_base;
|
|||
|
||||
static struct clk *twd_clk;
|
||||
static unsigned long twd_timer_rate;
|
||||
static bool common_setup_called;
|
||||
static DEFINE_PER_CPU(bool, percpu_setup_called);
|
||||
|
||||
static struct clock_event_device __percpu **twd_evt;
|
||||
|
@ -239,25 +238,28 @@ static irqreturn_t twd_handler(int irq, void *dev_id)
|
|||
return IRQ_NONE;
|
||||
}
|
||||
|
||||
static struct clk *twd_get_clock(void)
|
||||
static void twd_get_clock(struct device_node *np)
|
||||
{
|
||||
struct clk *clk;
|
||||
int err;
|
||||
|
||||
clk = clk_get_sys("smp_twd", NULL);
|
||||
if (IS_ERR(clk)) {
|
||||
pr_err("smp_twd: clock not found: %d\n", (int)PTR_ERR(clk));
|
||||
return clk;
|
||||
if (np)
|
||||
twd_clk = of_clk_get(np, 0);
|
||||
else
|
||||
twd_clk = clk_get_sys("smp_twd", NULL);
|
||||
|
||||
if (IS_ERR(twd_clk)) {
|
||||
pr_err("smp_twd: clock not found %d\n", (int) PTR_ERR(twd_clk));
|
||||
return;
|
||||
}
|
||||
|
||||
err = clk_prepare_enable(clk);
|
||||
err = clk_prepare_enable(twd_clk);
|
||||
if (err) {
|
||||
pr_err("smp_twd: clock failed to prepare+enable: %d\n", err);
|
||||
clk_put(clk);
|
||||
return ERR_PTR(err);
|
||||
clk_put(twd_clk);
|
||||
return;
|
||||
}
|
||||
|
||||
return clk;
|
||||
twd_timer_rate = clk_get_rate(twd_clk);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -280,26 +282,7 @@ static int __cpuinit twd_timer_setup(struct clock_event_device *clk)
|
|||
}
|
||||
per_cpu(percpu_setup_called, cpu) = true;
|
||||
|
||||
/*
|
||||
* This stuff only need to be done once for the entire TWD cluster
|
||||
* during the runtime of the system.
|
||||
*/
|
||||
if (!common_setup_called) {
|
||||
twd_clk = twd_get_clock();
|
||||
|
||||
/*
|
||||
* We use IS_ERR_OR_NULL() here, because if the clock stubs
|
||||
* are active we will get a valid clk reference which is
|
||||
* however NULL and will return the rate 0. In that case we
|
||||
* need to calibrate the rate instead.
|
||||
*/
|
||||
if (!IS_ERR_OR_NULL(twd_clk))
|
||||
twd_timer_rate = clk_get_rate(twd_clk);
|
||||
else
|
||||
twd_calibrate_rate();
|
||||
|
||||
common_setup_called = true;
|
||||
}
|
||||
twd_calibrate_rate();
|
||||
|
||||
/*
|
||||
* The following is done once per CPU the first time .setup() is
|
||||
|
@ -330,7 +313,7 @@ static struct local_timer_ops twd_lt_ops __cpuinitdata = {
|
|||
.stop = twd_timer_stop,
|
||||
};
|
||||
|
||||
static int __init twd_local_timer_common_register(void)
|
||||
static int __init twd_local_timer_common_register(struct device_node *np)
|
||||
{
|
||||
int err;
|
||||
|
||||
|
@ -350,6 +333,8 @@ static int __init twd_local_timer_common_register(void)
|
|||
if (err)
|
||||
goto out_irq;
|
||||
|
||||
twd_get_clock(np);
|
||||
|
||||
return 0;
|
||||
|
||||
out_irq:
|
||||
|
@ -373,7 +358,7 @@ int __init twd_local_timer_register(struct twd_local_timer *tlt)
|
|||
if (!twd_base)
|
||||
return -ENOMEM;
|
||||
|
||||
return twd_local_timer_common_register();
|
||||
return twd_local_timer_common_register(NULL);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
|
@ -405,7 +390,7 @@ void __init twd_local_timer_of_register(void)
|
|||
goto out;
|
||||
}
|
||||
|
||||
err = twd_local_timer_common_register();
|
||||
err = twd_local_timer_common_register(np);
|
||||
|
||||
out:
|
||||
WARN(err, "twd_local_timer_of_register failed (%d)\n", err);
|
||||
|
|
|
@ -25,53 +25,9 @@
|
|||
|
||||
#define DAVINCI_CPUIDLE_MAX_STATES 2
|
||||
|
||||
struct davinci_ops {
|
||||
void (*enter) (u32 flags);
|
||||
void (*exit) (u32 flags);
|
||||
u32 flags;
|
||||
};
|
||||
|
||||
/* Actual code that puts the SoC in different idle states */
|
||||
static int davinci_enter_idle(struct cpuidle_device *dev,
|
||||
struct cpuidle_driver *drv,
|
||||
int index)
|
||||
{
|
||||
struct cpuidle_state_usage *state_usage = &dev->states_usage[index];
|
||||
struct davinci_ops *ops = cpuidle_get_statedata(state_usage);
|
||||
|
||||
if (ops && ops->enter)
|
||||
ops->enter(ops->flags);
|
||||
|
||||
index = cpuidle_wrap_enter(dev, drv, index,
|
||||
arm_cpuidle_simple_enter);
|
||||
|
||||
if (ops && ops->exit)
|
||||
ops->exit(ops->flags);
|
||||
|
||||
return index;
|
||||
}
|
||||
|
||||
/* fields in davinci_ops.flags */
|
||||
#define DAVINCI_CPUIDLE_FLAGS_DDR2_PWDN BIT(0)
|
||||
|
||||
static struct cpuidle_driver davinci_idle_driver = {
|
||||
.name = "cpuidle-davinci",
|
||||
.owner = THIS_MODULE,
|
||||
.en_core_tk_irqen = 1,
|
||||
.states[0] = ARM_CPUIDLE_WFI_STATE,
|
||||
.states[1] = {
|
||||
.enter = davinci_enter_idle,
|
||||
.exit_latency = 10,
|
||||
.target_residency = 100000,
|
||||
.flags = CPUIDLE_FLAG_TIME_VALID,
|
||||
.name = "DDR SR",
|
||||
.desc = "WFI and DDR Self Refresh",
|
||||
},
|
||||
.state_count = DAVINCI_CPUIDLE_MAX_STATES,
|
||||
};
|
||||
|
||||
static DEFINE_PER_CPU(struct cpuidle_device, davinci_cpuidle_device);
|
||||
static void __iomem *ddr2_reg_base;
|
||||
static bool ddr2_pdown;
|
||||
|
||||
static void davinci_save_ddr_power(int enter, bool pdown)
|
||||
{
|
||||
|
@ -92,21 +48,35 @@ static void davinci_save_ddr_power(int enter, bool pdown)
|
|||
__raw_writel(val, ddr2_reg_base + DDR2_SDRCR_OFFSET);
|
||||
}
|
||||
|
||||
static void davinci_c2state_enter(u32 flags)
|
||||
/* Actual code that puts the SoC in different idle states */
|
||||
static int davinci_enter_idle(struct cpuidle_device *dev,
|
||||
struct cpuidle_driver *drv,
|
||||
int index)
|
||||
{
|
||||
davinci_save_ddr_power(1, !!(flags & DAVINCI_CPUIDLE_FLAGS_DDR2_PWDN));
|
||||
davinci_save_ddr_power(1, ddr2_pdown);
|
||||
|
||||
index = cpuidle_wrap_enter(dev, drv, index,
|
||||
arm_cpuidle_simple_enter);
|
||||
|
||||
davinci_save_ddr_power(0, ddr2_pdown);
|
||||
|
||||
return index;
|
||||
}
|
||||
|
||||
static void davinci_c2state_exit(u32 flags)
|
||||
{
|
||||
davinci_save_ddr_power(0, !!(flags & DAVINCI_CPUIDLE_FLAGS_DDR2_PWDN));
|
||||
}
|
||||
|
||||
static struct davinci_ops davinci_states[DAVINCI_CPUIDLE_MAX_STATES] = {
|
||||
[1] = {
|
||||
.enter = davinci_c2state_enter,
|
||||
.exit = davinci_c2state_exit,
|
||||
static struct cpuidle_driver davinci_idle_driver = {
|
||||
.name = "cpuidle-davinci",
|
||||
.owner = THIS_MODULE,
|
||||
.en_core_tk_irqen = 1,
|
||||
.states[0] = ARM_CPUIDLE_WFI_STATE,
|
||||
.states[1] = {
|
||||
.enter = davinci_enter_idle,
|
||||
.exit_latency = 10,
|
||||
.target_residency = 100000,
|
||||
.flags = CPUIDLE_FLAG_TIME_VALID,
|
||||
.name = "DDR SR",
|
||||
.desc = "WFI and DDR Self Refresh",
|
||||
},
|
||||
.state_count = DAVINCI_CPUIDLE_MAX_STATES,
|
||||
};
|
||||
|
||||
static int __init davinci_cpuidle_probe(struct platform_device *pdev)
|
||||
|
@ -124,11 +94,7 @@ static int __init davinci_cpuidle_probe(struct platform_device *pdev)
|
|||
|
||||
ddr2_reg_base = pdata->ddr2_ctlr_base;
|
||||
|
||||
if (pdata->ddr2_pdown)
|
||||
davinci_states[1].flags |= DAVINCI_CPUIDLE_FLAGS_DDR2_PWDN;
|
||||
cpuidle_set_statedata(&device->states_usage[1], &davinci_states[1]);
|
||||
|
||||
device->state_count = DAVINCI_CPUIDLE_MAX_STATES;
|
||||
ddr2_pdown = pdata->ddr2_pdown;
|
||||
|
||||
ret = cpuidle_register_driver(&davinci_idle_driver);
|
||||
if (ret) {
|
||||
|
|
|
@ -18,12 +18,25 @@ enum cpufreq_level_index {
|
|||
L20,
|
||||
};
|
||||
|
||||
#define APLL_FREQ(f, a0, a1, a2, a3, a4, a5, a6, a7, b0, b1, b2, m, p, s) \
|
||||
{ \
|
||||
.freq = (f) * 1000, \
|
||||
.clk_div_cpu0 = ((a0) | (a1) << 4 | (a2) << 8 | (a3) << 12 | \
|
||||
(a4) << 16 | (a5) << 20 | (a6) << 24 | (a7) << 28), \
|
||||
.clk_div_cpu1 = (b0 << 0 | b1 << 4 | b2 << 8), \
|
||||
.mps = ((m) << 16 | (p) << 8 | (s)), \
|
||||
}
|
||||
|
||||
struct apll_freq {
|
||||
unsigned int freq;
|
||||
u32 clk_div_cpu0;
|
||||
u32 clk_div_cpu1;
|
||||
u32 mps;
|
||||
};
|
||||
|
||||
struct exynos_dvfs_info {
|
||||
unsigned long mpll_freq_khz;
|
||||
unsigned int pll_safe_idx;
|
||||
unsigned int pm_lock_idx;
|
||||
unsigned int max_support_idx;
|
||||
unsigned int min_support_idx;
|
||||
struct clk *cpu_clk;
|
||||
unsigned int *volt_table;
|
||||
struct cpufreq_frequency_table *freq_table;
|
||||
|
|
|
@ -1,5 +1,7 @@
|
|||
config ARCH_HIGHBANK
|
||||
bool "Calxeda ECX-1000/2000 (Highbank/Midway)" if ARCH_MULTI_V7
|
||||
select ARCH_HAS_CPUFREQ
|
||||
select ARCH_HAS_OPP
|
||||
select ARCH_WANT_OPTIONAL_GPIOLIB
|
||||
select ARM_AMBA
|
||||
select ARM_GIC
|
||||
|
@ -11,5 +13,7 @@ config ARCH_HIGHBANK
|
|||
select GENERIC_CLOCKEVENTS
|
||||
select HAVE_ARM_SCU
|
||||
select HAVE_SMP
|
||||
select MAILBOX
|
||||
select PL320_MBOX
|
||||
select SPARSE_IRQ
|
||||
select USE_OF
|
||||
|
|
|
@ -351,12 +351,10 @@ static void omap3_pm_idle(void)
|
|||
if (omap_irq_pending())
|
||||
goto out;
|
||||
|
||||
trace_power_start(POWER_CSTATE, 1, smp_processor_id());
|
||||
trace_cpu_idle(1, smp_processor_id());
|
||||
|
||||
omap_sram_idle();
|
||||
|
||||
trace_power_end(smp_processor_id());
|
||||
trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
|
||||
|
||||
out:
|
||||
|
|
|
@ -243,8 +243,7 @@ static int tegra_cpu_init(struct cpufreq_policy *policy)
|
|||
/* FIXME: what's the actual transition time? */
|
||||
policy->cpuinfo.transition_latency = 300 * 1000;
|
||||
|
||||
policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
|
||||
cpumask_copy(policy->related_cpus, cpu_possible_mask);
|
||||
cpumask_copy(policy->cpus, cpu_possible_mask);
|
||||
|
||||
if (policy->cpu == 0)
|
||||
register_pm_notifier(&tegra_cpu_pm_notifier);
|
||||
|
|
|
@ -97,14 +97,9 @@ static void default_idle(void)
|
|||
local_irq_enable();
|
||||
}
|
||||
|
||||
void (*pm_idle)(void) = default_idle;
|
||||
EXPORT_SYMBOL_GPL(pm_idle);
|
||||
|
||||
/*
|
||||
* The idle thread, has rather strange semantics for calling pm_idle,
|
||||
* but this is what x86 does and we need to do the same, so that
|
||||
* things like cpuidle get called in the same way. The only difference
|
||||
* is that we always respect 'hlt_counter' to prevent low power idle.
|
||||
* The idle thread.
|
||||
* We always respect 'hlt_counter' to prevent low power idle.
|
||||
*/
|
||||
void cpu_idle(void)
|
||||
{
|
||||
|
@ -122,10 +117,10 @@ void cpu_idle(void)
|
|||
local_irq_disable();
|
||||
if (!need_resched()) {
|
||||
stop_critical_timings();
|
||||
pm_idle();
|
||||
default_idle();
|
||||
start_critical_timings();
|
||||
/*
|
||||
* pm_idle functions should always return
|
||||
* default_idle functions should always return
|
||||
* with IRQs enabled.
|
||||
*/
|
||||
WARN_ON(irqs_disabled());
|
||||
|
|
|
@ -39,12 +39,6 @@ int nr_l1stack_tasks;
|
|||
void *l1_stack_base;
|
||||
unsigned long l1_stack_len;
|
||||
|
||||
/*
|
||||
* Powermanagement idle function, if any..
|
||||
*/
|
||||
void (*pm_idle)(void) = NULL;
|
||||
EXPORT_SYMBOL(pm_idle);
|
||||
|
||||
void (*pm_power_off)(void) = NULL;
|
||||
EXPORT_SYMBOL(pm_power_off);
|
||||
|
||||
|
@ -81,7 +75,6 @@ void cpu_idle(void)
|
|||
{
|
||||
/* endless idle loop with no priority at all */
|
||||
while (1) {
|
||||
void (*idle)(void) = pm_idle;
|
||||
|
||||
#ifdef CONFIG_HOTPLUG_CPU
|
||||
if (cpu_is_offline(smp_processor_id()))
|
||||
|
|
|
@ -54,11 +54,6 @@ void enable_hlt(void)
|
|||
|
||||
EXPORT_SYMBOL(enable_hlt);
|
||||
|
||||
/*
|
||||
* The following aren't currently used.
|
||||
*/
|
||||
void (*pm_idle)(void);
|
||||
|
||||
extern void default_idle(void);
|
||||
|
||||
void (*pm_power_off)(void);
|
||||
|
@ -77,16 +72,12 @@ void cpu_idle (void)
|
|||
while (1) {
|
||||
rcu_idle_enter();
|
||||
while (!need_resched()) {
|
||||
void (*idle)(void);
|
||||
/*
|
||||
* Mark this as an RCU critical section so that
|
||||
* synchronize_kernel() in the unload path waits
|
||||
* for our completion.
|
||||
*/
|
||||
idle = pm_idle;
|
||||
if (!idle)
|
||||
idle = default_idle;
|
||||
idle();
|
||||
default_idle();
|
||||
}
|
||||
rcu_idle_exit();
|
||||
schedule_preempt_disabled();
|
||||
|
|
|
@ -191,7 +191,7 @@ static int aml_nfw_add(struct acpi_device *device)
|
|||
return aml_nfw_add_global_handler();
|
||||
}
|
||||
|
||||
static int aml_nfw_remove(struct acpi_device *device, int type)
|
||||
static int aml_nfw_remove(struct acpi_device *device)
|
||||
{
|
||||
return aml_nfw_remove_global_handler();
|
||||
}
|
||||
|
|
|
@ -52,10 +52,6 @@
|
|||
|
||||
/* Asm macros */
|
||||
|
||||
#define ACPI_ASM_MACROS
|
||||
#define BREAKPOINT3
|
||||
#define ACPI_DISABLE_IRQS() local_irq_disable()
|
||||
#define ACPI_ENABLE_IRQS() local_irq_enable()
|
||||
#define ACPI_FLUSH_CPU_CACHE()
|
||||
|
||||
static inline int
|
||||
|
|
|
@ -57,8 +57,6 @@ void (*ia64_mark_idle)(int);
|
|||
|
||||
unsigned long boot_option_idle_override = IDLE_NO_OVERRIDE;
|
||||
EXPORT_SYMBOL(boot_option_idle_override);
|
||||
void (*pm_idle) (void);
|
||||
EXPORT_SYMBOL(pm_idle);
|
||||
void (*pm_power_off) (void);
|
||||
EXPORT_SYMBOL(pm_power_off);
|
||||
|
||||
|
@ -301,7 +299,6 @@ cpu_idle (void)
|
|||
if (mark_idle)
|
||||
(*mark_idle)(1);
|
||||
|
||||
idle = pm_idle;
|
||||
if (!idle)
|
||||
idle = default_idle;
|
||||
(*idle)();
|
||||
|
|
|
@ -1051,7 +1051,6 @@ cpu_init (void)
|
|||
max_num_phys_stacked = num_phys_stacked;
|
||||
}
|
||||
platform_cpu_init();
|
||||
pm_idle = default_idle;
|
||||
}
|
||||
|
||||
void __init
|
||||
|
|
|
@ -44,35 +44,9 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
|
|||
return tsk->thread.lr;
|
||||
}
|
||||
|
||||
/*
|
||||
* Powermanagement idle function, if any..
|
||||
*/
|
||||
static void (*pm_idle)(void) = NULL;
|
||||
|
||||
void (*pm_power_off)(void) = NULL;
|
||||
EXPORT_SYMBOL(pm_power_off);
|
||||
|
||||
/*
|
||||
* We use this is we don't have any better
|
||||
* idle routine..
|
||||
*/
|
||||
static void default_idle(void)
|
||||
{
|
||||
/* M32R_FIXME: Please use "cpu_sleep" mode. */
|
||||
cpu_relax();
|
||||
}
|
||||
|
||||
/*
|
||||
* On SMP it's slightly faster (but much more power-consuming!)
|
||||
* to poll the ->work.need_resched flag instead of waiting for the
|
||||
* cross-CPU IPI to arrive. Use this option with caution.
|
||||
*/
|
||||
static void poll_idle (void)
|
||||
{
|
||||
/* M32R_FIXME */
|
||||
cpu_relax();
|
||||
}
|
||||
|
||||
/*
|
||||
* The idle thread. There's no useful work to be
|
||||
* done, so just try to conserve power and have a
|
||||
|
@ -84,14 +58,8 @@ void cpu_idle (void)
|
|||
/* endless idle loop with no priority at all */
|
||||
while (1) {
|
||||
rcu_idle_enter();
|
||||
while (!need_resched()) {
|
||||
void (*idle)(void) = pm_idle;
|
||||
|
||||
if (!idle)
|
||||
idle = default_idle;
|
||||
|
||||
idle();
|
||||
}
|
||||
while (!need_resched())
|
||||
cpu_relax();
|
||||
rcu_idle_exit();
|
||||
schedule_preempt_disabled();
|
||||
}
|
||||
|
@ -120,21 +88,6 @@ void machine_power_off(void)
|
|||
/* M32R_FIXME */
|
||||
}
|
||||
|
||||
static int __init idle_setup (char *str)
|
||||
{
|
||||
if (!strncmp(str, "poll", 4)) {
|
||||
printk("using poll in idle threads.\n");
|
||||
pm_idle = poll_idle;
|
||||
} else if (!strncmp(str, "sleep", 4)) {
|
||||
printk("using sleep in idle threads.\n");
|
||||
pm_idle = default_idle;
|
||||
}
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
__setup("idle=", idle_setup);
|
||||
|
||||
void show_regs(struct pt_regs * regs)
|
||||
{
|
||||
printk("\n");
|
||||
|
|
|
@ -41,7 +41,6 @@ void show_regs(struct pt_regs *regs)
|
|||
regs->msr, regs->ear, regs->esr, regs->fsr);
|
||||
}
|
||||
|
||||
void (*pm_idle)(void);
|
||||
void (*pm_power_off)(void) = NULL;
|
||||
EXPORT_SYMBOL(pm_power_off);
|
||||
|
||||
|
@ -98,8 +97,6 @@ void cpu_idle(void)
|
|||
|
||||
/* endless idle loop with no priority at all */
|
||||
while (1) {
|
||||
void (*idle)(void) = pm_idle;
|
||||
|
||||
if (!idle)
|
||||
idle = default_idle;
|
||||
|
||||
|
|
|
@ -36,12 +36,6 @@
|
|||
#include <asm/gdb-stub.h>
|
||||
#include "internal.h"
|
||||
|
||||
/*
|
||||
* power management idle function, if any..
|
||||
*/
|
||||
void (*pm_idle)(void);
|
||||
EXPORT_SYMBOL(pm_idle);
|
||||
|
||||
/*
|
||||
* return saved PC of a blocked thread.
|
||||
*/
|
||||
|
@ -113,7 +107,6 @@ void cpu_idle(void)
|
|||
void (*idle)(void);
|
||||
|
||||
smp_rmb();
|
||||
idle = pm_idle;
|
||||
if (!idle) {
|
||||
#if defined(CONFIG_SMP) && !defined(CONFIG_HOTPLUG_CPU)
|
||||
idle = poll_idle;
|
||||
|
|
|
@ -39,11 +39,6 @@
|
|||
|
||||
void (*powersave) (void) = NULL;
|
||||
|
||||
static inline void pm_idle(void)
|
||||
{
|
||||
barrier();
|
||||
}
|
||||
|
||||
void cpu_idle(void)
|
||||
{
|
||||
set_thread_flag(TIF_POLLING_NRFLAG);
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
#include <asm/smp.h>
|
||||
#include <asm/bl_bit.h>
|
||||
|
||||
void (*pm_idle)(void);
|
||||
static void (*sh_idle)(void);
|
||||
|
||||
static int hlt_counter;
|
||||
|
||||
|
@ -103,9 +103,9 @@ void cpu_idle(void)
|
|||
/* Don't trace irqs off for idle */
|
||||
stop_critical_timings();
|
||||
if (cpuidle_idle_call())
|
||||
pm_idle();
|
||||
sh_idle();
|
||||
/*
|
||||
* Sanity check to ensure that pm_idle() returns
|
||||
* Sanity check to ensure that sh_idle() returns
|
||||
* with IRQs enabled
|
||||
*/
|
||||
WARN_ON(irqs_disabled());
|
||||
|
@ -123,13 +123,13 @@ void __init select_idle_routine(void)
|
|||
/*
|
||||
* If a platform has set its own idle routine, leave it alone.
|
||||
*/
|
||||
if (pm_idle)
|
||||
if (sh_idle)
|
||||
return;
|
||||
|
||||
if (hlt_works())
|
||||
pm_idle = default_idle;
|
||||
sh_idle = default_idle;
|
||||
else
|
||||
pm_idle = poll_idle;
|
||||
sh_idle = poll_idle;
|
||||
}
|
||||
|
||||
void stop_this_cpu(void *unused)
|
||||
|
|
|
@ -118,6 +118,7 @@ extern unsigned long get_wchan(struct task_struct *);
|
|||
extern struct task_struct *last_task_used_math;
|
||||
|
||||
#define cpu_relax() barrier()
|
||||
extern void (*sparc_idle)(void);
|
||||
|
||||
#endif
|
||||
|
||||
|
|
|
@ -20,6 +20,7 @@
|
|||
#include <asm/uaccess.h>
|
||||
#include <asm/auxio.h>
|
||||
#include <asm/apc.h>
|
||||
#include <asm/processor.h>
|
||||
|
||||
/* Debugging
|
||||
*
|
||||
|
@ -158,7 +159,7 @@ static int apc_probe(struct platform_device *op)
|
|||
|
||||
/* Assign power management IDLE handler */
|
||||
if (!apc_no_idle)
|
||||
pm_idle = apc_swift_idle;
|
||||
sparc_idle = apc_swift_idle;
|
||||
|
||||
printk(KERN_INFO "%s: power management initialized%s\n",
|
||||
APC_DEVNAME, apc_no_idle ? " (CPU idle disabled)" : "");
|
||||
|
|
|
@ -9,6 +9,7 @@
|
|||
#include <asm/leon_amba.h>
|
||||
#include <asm/cpu_type.h>
|
||||
#include <asm/leon.h>
|
||||
#include <asm/processor.h>
|
||||
|
||||
/* List of Systems that need fixup instructions around power-down instruction */
|
||||
unsigned int pmc_leon_fixup_ids[] = {
|
||||
|
@ -69,9 +70,9 @@ static int __init leon_pmc_install(void)
|
|||
if (sparc_cpu_model == sparc_leon) {
|
||||
/* Assign power management IDLE handler */
|
||||
if (pmc_leon_need_fixup())
|
||||
pm_idle = pmc_leon_idle_fixup;
|
||||
sparc_idle = pmc_leon_idle_fixup;
|
||||
else
|
||||
pm_idle = pmc_leon_idle;
|
||||
sparc_idle = pmc_leon_idle;
|
||||
|
||||
printk(KERN_INFO "leon: power management initialized\n");
|
||||
}
|
||||
|
|
|
@ -17,6 +17,7 @@
|
|||
#include <asm/oplib.h>
|
||||
#include <asm/uaccess.h>
|
||||
#include <asm/auxio.h>
|
||||
#include <asm/processor.h>
|
||||
|
||||
/* Debug
|
||||
*
|
||||
|
@ -63,7 +64,7 @@ static int pmc_probe(struct platform_device *op)
|
|||
|
||||
#ifndef PMC_NO_IDLE
|
||||
/* Assign power management IDLE handler */
|
||||
pm_idle = pmc_swift_idle;
|
||||
sparc_idle = pmc_swift_idle;
|
||||
#endif
|
||||
|
||||
printk(KERN_INFO "%s: power management initialized\n", PMC_DEVNAME);
|
||||
|
|
|
@ -43,8 +43,7 @@
|
|||
* Power management idle function
|
||||
* Set in pm platform drivers (apc.c and pmc.c)
|
||||
*/
|
||||
void (*pm_idle)(void);
|
||||
EXPORT_SYMBOL(pm_idle);
|
||||
void (*sparc_idle)(void);
|
||||
|
||||
/*
|
||||
* Power-off handler instantiation for pm.h compliance
|
||||
|
@ -75,8 +74,8 @@ void cpu_idle(void)
|
|||
/* endless idle loop with no priority at all */
|
||||
for (;;) {
|
||||
while (!need_resched()) {
|
||||
if (pm_idle)
|
||||
(*pm_idle)();
|
||||
if (sparc_idle)
|
||||
(*sparc_idle)();
|
||||
else
|
||||
cpu_relax();
|
||||
}
|
||||
|
|
|
@ -45,11 +45,6 @@ static const char * const processor_modes[] = {
|
|||
"UK18", "UK19", "UK1A", "EXTN", "UK1C", "UK1D", "UK1E", "SUSR"
|
||||
};
|
||||
|
||||
/*
|
||||
* The idle thread, has rather strange semantics for calling pm_idle,
|
||||
* but this is what x86 does and we need to do the same, so that
|
||||
* things like cpuidle get called in the same way.
|
||||
*/
|
||||
void cpu_idle(void)
|
||||
{
|
||||
/* endless idle loop with no priority at all */
|
||||
|
|
|
@ -469,6 +469,16 @@ config X86_MDFLD
|
|||
|
||||
endif
|
||||
|
||||
config X86_INTEL_LPSS
|
||||
bool "Intel Low Power Subsystem Support"
|
||||
depends on ACPI
|
||||
select COMMON_CLK
|
||||
---help---
|
||||
Select to build support for Intel Low Power Subsystem such as
|
||||
found on Intel Lynxpoint PCH. Selecting this option enables
|
||||
things like clock tree (common clock framework) which are needed
|
||||
by the LPSS peripheral drivers.
|
||||
|
||||
config X86_RDC321X
|
||||
bool "RDC R-321x SoC"
|
||||
depends on X86_32
|
||||
|
@ -1927,6 +1937,7 @@ config APM_DO_ENABLE
|
|||
this feature.
|
||||
|
||||
config APM_CPU_IDLE
|
||||
depends on CPU_IDLE
|
||||
bool "Make CPU Idle calls when idle"
|
||||
---help---
|
||||
Enable calls to APM CPU Idle/CPU Busy inside the kernel's idle loop.
|
||||
|
|
|
@ -49,10 +49,6 @@
|
|||
|
||||
/* Asm macros */
|
||||
|
||||
#define ACPI_ASM_MACROS
|
||||
#define BREAKPOINT3
|
||||
#define ACPI_DISABLE_IRQS() local_irq_disable()
|
||||
#define ACPI_ENABLE_IRQS() local_irq_enable()
|
||||
#define ACPI_FLUSH_CPU_CACHE() wbinvd()
|
||||
|
||||
int __acpi_acquire_global_lock(unsigned int *lock);
|
||||
|
|
|
@ -4,7 +4,8 @@
|
|||
#define MWAIT_SUBSTATE_MASK 0xf
|
||||
#define MWAIT_CSTATE_MASK 0xf
|
||||
#define MWAIT_SUBSTATE_SIZE 4
|
||||
#define MWAIT_MAX_NUM_CSTATES 8
|
||||
#define MWAIT_HINT2CSTATE(hint) (((hint) >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK)
|
||||
#define MWAIT_HINT2SUBSTATE(hint) ((hint) & MWAIT_CSTATE_MASK)
|
||||
|
||||
#define CPUID_MWAIT_LEAF 5
|
||||
#define CPUID5_ECX_EXTENSIONS_SUPPORTED 0x1
|
||||
|
|
|
@ -89,7 +89,6 @@ struct cpuinfo_x86 {
|
|||
char wp_works_ok; /* It doesn't on 386's */
|
||||
|
||||
/* Problems on some 486Dx4's and old 386's: */
|
||||
char hlt_works_ok;
|
||||
char hard_math;
|
||||
char rfu;
|
||||
char fdiv_bug;
|
||||
|
@ -165,15 +164,6 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct cpuinfo_x86, cpu_info);
|
|||
|
||||
extern const struct seq_operations cpuinfo_op;
|
||||
|
||||
static inline int hlt_works(int cpu)
|
||||
{
|
||||
#ifdef CONFIG_X86_32
|
||||
return cpu_data(cpu).hlt_works_ok;
|
||||
#else
|
||||
return 1;
|
||||
#endif
|
||||
}
|
||||
|
||||
#define cache_line_size() (boot_cpu_data.x86_cache_alignment)
|
||||
|
||||
extern void cpu_detect(struct cpuinfo_x86 *c);
|
||||
|
@ -725,7 +715,7 @@ extern unsigned long boot_option_idle_override;
|
|||
extern bool amd_e400_c1e_detected;
|
||||
|
||||
enum idle_boot_override {IDLE_NO_OVERRIDE=0, IDLE_HALT, IDLE_NOMWAIT,
|
||||
IDLE_POLL, IDLE_FORCE_MWAIT};
|
||||
IDLE_POLL};
|
||||
|
||||
extern void enable_sep_cpu(void);
|
||||
extern int sysenter_setup(void);
|
||||
|
@ -998,7 +988,11 @@ extern unsigned long arch_align_stack(unsigned long sp);
|
|||
extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
|
||||
|
||||
void default_idle(void);
|
||||
bool set_pm_idle_to_default(void);
|
||||
#ifdef CONFIG_XEN
|
||||
bool xen_set_default_idle(void);
|
||||
#else
|
||||
#define xen_set_default_idle 0
|
||||
#endif
|
||||
|
||||
void stop_this_cpu(void *dummy);
|
||||
|
||||
|
|
|
@ -103,6 +103,8 @@
|
|||
#define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10)
|
||||
#define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11)
|
||||
|
||||
#define MSR_IA32_POWER_CTL 0x000001fc
|
||||
|
||||
#define MSR_IA32_MC0_CTL 0x00000400
|
||||
#define MSR_IA32_MC0_STATUS 0x00000401
|
||||
#define MSR_IA32_MC0_ADDR 0x00000402
|
||||
|
@ -274,6 +276,7 @@
|
|||
#define MSR_IA32_PLATFORM_ID 0x00000017
|
||||
#define MSR_IA32_EBL_CR_POWERON 0x0000002a
|
||||
#define MSR_EBC_FREQUENCY_ID 0x0000002c
|
||||
#define MSR_SMI_COUNT 0x00000034
|
||||
#define MSR_IA32_FEATURE_CONTROL 0x0000003a
|
||||
#define MSR_IA32_TSC_ADJUST 0x0000003b
|
||||
|
||||
|
|
|
@ -232,6 +232,7 @@
|
|||
#include <linux/acpi.h>
|
||||
#include <linux/syscore_ops.h>
|
||||
#include <linux/i8253.h>
|
||||
#include <linux/cpuidle.h>
|
||||
|
||||
#include <asm/uaccess.h>
|
||||
#include <asm/desc.h>
|
||||
|
@ -360,13 +361,35 @@ struct apm_user {
|
|||
* idle percentage above which bios idle calls are done
|
||||
*/
|
||||
#ifdef CONFIG_APM_CPU_IDLE
|
||||
#warning deprecated CONFIG_APM_CPU_IDLE will be deleted in 2012
|
||||
#define DEFAULT_IDLE_THRESHOLD 95
|
||||
#else
|
||||
#define DEFAULT_IDLE_THRESHOLD 100
|
||||
#endif
|
||||
#define DEFAULT_IDLE_PERIOD (100 / 3)
|
||||
|
||||
static int apm_cpu_idle(struct cpuidle_device *dev,
|
||||
struct cpuidle_driver *drv, int index);
|
||||
|
||||
static struct cpuidle_driver apm_idle_driver = {
|
||||
.name = "apm_idle",
|
||||
.owner = THIS_MODULE,
|
||||
.en_core_tk_irqen = 1,
|
||||
.states = {
|
||||
{ /* entry 0 is for polling */ },
|
||||
{ /* entry 1 is for APM idle */
|
||||
.name = "APM",
|
||||
.desc = "APM idle",
|
||||
.flags = CPUIDLE_FLAG_TIME_VALID,
|
||||
.exit_latency = 250, /* WAG */
|
||||
.target_residency = 500, /* WAG */
|
||||
.enter = &apm_cpu_idle
|
||||
},
|
||||
},
|
||||
.state_count = 2,
|
||||
};
|
||||
|
||||
static struct cpuidle_device apm_cpuidle_device;
|
||||
|
||||
/*
|
||||
* Local variables
|
||||
*/
|
||||
|
@ -377,7 +400,6 @@ static struct {
|
|||
static int clock_slowed;
|
||||
static int idle_threshold __read_mostly = DEFAULT_IDLE_THRESHOLD;
|
||||
static int idle_period __read_mostly = DEFAULT_IDLE_PERIOD;
|
||||
static int set_pm_idle;
|
||||
static int suspends_pending;
|
||||
static int standbys_pending;
|
||||
static int ignore_sys_suspend;
|
||||
|
@ -884,8 +906,6 @@ static void apm_do_busy(void)
|
|||
#define IDLE_CALC_LIMIT (HZ * 100)
|
||||
#define IDLE_LEAKY_MAX 16
|
||||
|
||||
static void (*original_pm_idle)(void) __read_mostly;
|
||||
|
||||
/**
|
||||
* apm_cpu_idle - cpu idling for APM capable Linux
|
||||
*
|
||||
|
@ -894,7 +914,8 @@ static void (*original_pm_idle)(void) __read_mostly;
|
|||
* Furthermore it calls the system default idle routine.
|
||||
*/
|
||||
|
||||
static void apm_cpu_idle(void)
|
||||
static int apm_cpu_idle(struct cpuidle_device *dev,
|
||||
struct cpuidle_driver *drv, int index)
|
||||
{
|
||||
static int use_apm_idle; /* = 0 */
|
||||
static unsigned int last_jiffies; /* = 0 */
|
||||
|
@ -905,7 +926,6 @@ static void apm_cpu_idle(void)
|
|||
unsigned int jiffies_since_last_check = jiffies - last_jiffies;
|
||||
unsigned int bucket;
|
||||
|
||||
WARN_ONCE(1, "deprecated apm_cpu_idle will be deleted in 2012");
|
||||
recalc:
|
||||
task_cputime(current, NULL, &stime);
|
||||
if (jiffies_since_last_check > IDLE_CALC_LIMIT) {
|
||||
|
@ -951,10 +971,7 @@ recalc:
|
|||
break;
|
||||
}
|
||||
}
|
||||
if (original_pm_idle)
|
||||
original_pm_idle();
|
||||
else
|
||||
default_idle();
|
||||
default_idle();
|
||||
local_irq_disable();
|
||||
jiffies_since_last_check = jiffies - last_jiffies;
|
||||
if (jiffies_since_last_check > idle_period)
|
||||
|
@ -964,7 +981,7 @@ recalc:
|
|||
if (apm_idle_done)
|
||||
apm_do_busy();
|
||||
|
||||
local_irq_enable();
|
||||
return index;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -2382,9 +2399,9 @@ static int __init apm_init(void)
|
|||
if (HZ != 100)
|
||||
idle_period = (idle_period * HZ) / 100;
|
||||
if (idle_threshold < 100) {
|
||||
original_pm_idle = pm_idle;
|
||||
pm_idle = apm_cpu_idle;
|
||||
set_pm_idle = 1;
|
||||
if (!cpuidle_register_driver(&apm_idle_driver))
|
||||
if (cpuidle_register_device(&apm_cpuidle_device))
|
||||
cpuidle_unregister_driver(&apm_idle_driver);
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@ -2394,15 +2411,9 @@ static void __exit apm_exit(void)
|
|||
{
|
||||
int error;
|
||||
|
||||
if (set_pm_idle) {
|
||||
pm_idle = original_pm_idle;
|
||||
/*
|
||||
* We are about to unload the current idle thread pm callback
|
||||
* (pm_idle), Wait for all processors to update cached/local
|
||||
* copies of pm_idle before proceeding.
|
||||
*/
|
||||
kick_all_cpus_sync();
|
||||
}
|
||||
cpuidle_unregister_device(&apm_cpuidle_device);
|
||||
cpuidle_unregister_driver(&apm_idle_driver);
|
||||
|
||||
if (((apm_info.bios.flags & APM_BIOS_DISENGAGED) == 0)
|
||||
&& (apm_info.connection_version > 0x0100)) {
|
||||
error = apm_engage_power_management(APM_DEVICE_ALL, 0);
|
||||
|
|
|
@ -17,15 +17,6 @@
|
|||
#include <asm/paravirt.h>
|
||||
#include <asm/alternative.h>
|
||||
|
||||
static int __init no_halt(char *s)
|
||||
{
|
||||
WARN_ONCE(1, "\"no-hlt\" is deprecated, please use \"idle=poll\"\n");
|
||||
boot_cpu_data.hlt_works_ok = 0;
|
||||
return 1;
|
||||
}
|
||||
|
||||
__setup("no-hlt", no_halt);
|
||||
|
||||
static int __init no_387(char *s)
|
||||
{
|
||||
boot_cpu_data.hard_math = 0;
|
||||
|
@ -89,23 +80,6 @@ static void __init check_fpu(void)
|
|||
pr_warn("Hmm, FPU with FDIV bug\n");
|
||||
}
|
||||
|
||||
static void __init check_hlt(void)
|
||||
{
|
||||
if (boot_cpu_data.x86 >= 5 || paravirt_enabled())
|
||||
return;
|
||||
|
||||
pr_info("Checking 'hlt' instruction... ");
|
||||
if (!boot_cpu_data.hlt_works_ok) {
|
||||
pr_cont("disabled\n");
|
||||
return;
|
||||
}
|
||||
halt();
|
||||
halt();
|
||||
halt();
|
||||
halt();
|
||||
pr_cont("OK\n");
|
||||
}
|
||||
|
||||
/*
|
||||
* Check whether we are able to run this kernel safely on SMP.
|
||||
*
|
||||
|
@ -129,7 +103,6 @@ void __init check_bugs(void)
|
|||
print_cpu_info(&boot_cpu_data);
|
||||
#endif
|
||||
check_config();
|
||||
check_hlt();
|
||||
init_utsname()->machine[1] =
|
||||
'0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86);
|
||||
alternative_instructions();
|
||||
|
|
|
@ -28,7 +28,6 @@ static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
|
|||
{
|
||||
seq_printf(m,
|
||||
"fdiv_bug\t: %s\n"
|
||||
"hlt_bug\t\t: %s\n"
|
||||
"f00f_bug\t: %s\n"
|
||||
"coma_bug\t: %s\n"
|
||||
"fpu\t\t: %s\n"
|
||||
|
@ -36,7 +35,6 @@ static void show_cpuinfo_misc(struct seq_file *m, struct cpuinfo_x86 *c)
|
|||
"cpuid level\t: %d\n"
|
||||
"wp\t\t: %s\n",
|
||||
c->fdiv_bug ? "yes" : "no",
|
||||
c->hlt_works_ok ? "no" : "yes",
|
||||
c->f00f_bug ? "yes" : "no",
|
||||
c->coma_bug ? "yes" : "no",
|
||||
c->hard_math ? "yes" : "no",
|
||||
|
|
|
@@ -268,13 +268,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
unsigned long boot_option_idle_override = IDLE_NO_OVERRIDE;
EXPORT_SYMBOL(boot_option_idle_override);

-/*
- * Powermanagement idle function, if any..
- */
-void (*pm_idle)(void);
-#ifdef CONFIG_APM_MODULE
-EXPORT_SYMBOL(pm_idle);
-#endif
+static void (*x86_idle)(void);

#ifndef CONFIG_SMP
static inline void play_dead(void)

@@ -351,7 +345,7 @@ void cpu_idle(void)
                rcu_idle_enter();

                if (cpuidle_idle_call())
-                       pm_idle();
+                       x86_idle();

                rcu_idle_exit();
                start_critical_timings();

@@ -375,7 +369,6 @@ void cpu_idle(void)
 */
void default_idle(void)
{
-       trace_power_start_rcuidle(POWER_CSTATE, 1, smp_processor_id());
        trace_cpu_idle_rcuidle(1, smp_processor_id());
        current_thread_info()->status &= ~TS_POLLING;
        /*

@@ -389,21 +382,22 @@ void default_idle(void)
        else
                local_irq_enable();
        current_thread_info()->status |= TS_POLLING;
-       trace_power_end_rcuidle(smp_processor_id());
        trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
}
#ifdef CONFIG_APM_MODULE
EXPORT_SYMBOL(default_idle);
#endif

-bool set_pm_idle_to_default(void)
+#ifdef CONFIG_XEN
+bool xen_set_default_idle(void)
{
-       bool ret = !!pm_idle;
+       bool ret = !!x86_idle;

-       pm_idle = default_idle;
+       x86_idle = default_idle;

        return ret;
}
+#endif
void stop_this_cpu(void *dummy)
{
        local_irq_disable();

@@ -413,31 +407,8 @@ void stop_this_cpu(void *dummy)
        set_cpu_online(smp_processor_id(), false);
        disable_local_APIC();

        for (;;) {
                if (hlt_works(smp_processor_id()))
                        halt();
        }
}

/* Default MONITOR/MWAIT with no hints, used for default C1 state */
static void mwait_idle(void)
{
        if (!need_resched()) {
                trace_power_start_rcuidle(POWER_CSTATE, 1, smp_processor_id());
                trace_cpu_idle_rcuidle(1, smp_processor_id());
                if (this_cpu_has(X86_FEATURE_CLFLUSH_MONITOR))
                        clflush((void *)&current_thread_info()->flags);

                __monitor((void *)&current_thread_info()->flags, 0, 0);
                smp_mb();
                if (!need_resched())
                        __sti_mwait(0, 0);
                else
                        local_irq_enable();
                trace_power_end_rcuidle(smp_processor_id());
                trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
        } else
                local_irq_enable();
        for (;;)
                halt();
}

/*

@@ -447,62 +418,13 @@ static void mwait_idle(void)
 */
static void poll_idle(void)
{
        trace_power_start_rcuidle(POWER_CSTATE, 0, smp_processor_id());
        trace_cpu_idle_rcuidle(0, smp_processor_id());
        local_irq_enable();
        while (!need_resched())
                cpu_relax();
        trace_power_end_rcuidle(smp_processor_id());
        trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
}

/*
 * mwait selection logic:
 *
 * It depends on the CPU. For AMD CPUs that support MWAIT this is
 * wrong. Family 0x10 and 0x11 CPUs will enter C1 on HLT. Powersavings
 * then depend on a clock divisor and current Pstate of the core. If
 * all cores of a processor are in halt state (C1) the processor can
 * enter the C1E (C1 enhanced) state. If mwait is used this will never
 * happen.
 *
 * idle=mwait overrides this decision and forces the usage of mwait.
 */

#define MWAIT_INFO                      0x05
#define MWAIT_ECX_EXTENDED_INFO         0x01
#define MWAIT_EDX_C1                    0xf0

int mwait_usable(const struct cpuinfo_x86 *c)
{
        u32 eax, ebx, ecx, edx;

        /* Use mwait if idle=mwait boot option is given */
        if (boot_option_idle_override == IDLE_FORCE_MWAIT)
                return 1;

        /*
         * Any idle= boot option other than idle=mwait means that we must not
         * use mwait. Eg: idle=halt or idle=poll or idle=nomwait
         */
        if (boot_option_idle_override != IDLE_NO_OVERRIDE)
                return 0;

        if (c->cpuid_level < MWAIT_INFO)
                return 0;

        cpuid(MWAIT_INFO, &eax, &ebx, &ecx, &edx);
        /* Check, whether EDX has extended info about MWAIT */
        if (!(ecx & MWAIT_ECX_EXTENDED_INFO))
                return 1;

        /*
         * edx enumeratios MONITOR/MWAIT extensions. Check, whether
         * C1 supports MWAIT
         */
        return (edx & MWAIT_EDX_C1);
}

bool amd_e400_c1e_detected;
EXPORT_SYMBOL(amd_e400_c1e_detected);

@@ -567,31 +489,24 @@ static void amd_e400_idle(void)
void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
{
#ifdef CONFIG_SMP
-       if (pm_idle == poll_idle && smp_num_siblings > 1) {
+       if (x86_idle == poll_idle && smp_num_siblings > 1)
                pr_warn_once("WARNING: polling idle and HT enabled, performance may degrade\n");
        }
#endif
-       if (pm_idle)
+       if (x86_idle)
                return;

        if (cpu_has(c, X86_FEATURE_MWAIT) && mwait_usable(c)) {
                /*
                 * One CPU supports mwait => All CPUs supports mwait
                 */
                pr_info("using mwait in idle threads\n");
                pm_idle = mwait_idle;
-       } else if (cpu_has_amd_erratum(amd_erratum_400)) {
+       if (cpu_has_amd_erratum(amd_erratum_400)) {
                /* E400: APIC timer interrupt does not wake up CPU from C1e */
                pr_info("using AMD E400 aware idle routine\n");
-               pm_idle = amd_e400_idle;
+               x86_idle = amd_e400_idle;
        } else
-               pm_idle = default_idle;
+               x86_idle = default_idle;
}

void __init init_amd_e400_c1e_mask(void)
{
        /* If we're using amd_e400_idle, we need to allocate amd_e400_c1e_mask. */
-       if (pm_idle == amd_e400_idle)
+       if (x86_idle == amd_e400_idle)
                zalloc_cpumask_var(&amd_e400_c1e_mask, GFP_KERNEL);
}

@@ -602,11 +517,8 @@ static int __init idle_setup(char *str)

        if (!strcmp(str, "poll")) {
                pr_info("using polling idle threads\n");
-               pm_idle = poll_idle;
+               x86_idle = poll_idle;
                boot_option_idle_override = IDLE_POLL;
        } else if (!strcmp(str, "mwait")) {
                boot_option_idle_override = IDLE_FORCE_MWAIT;
                WARN_ONCE(1, "\"idle=mwait\" will be removed in 2012\n");
        } else if (!strcmp(str, "halt")) {
                /*
                 * When the boot option of idle=halt is added, halt is

@@ -615,7 +527,7 @@ static int __init idle_setup(char *str)
                 * To continue to load the CPU idle driver, don't touch
                 * the boot_option_idle_override.
                 */
-               pm_idle = default_idle;
+               x86_idle = default_idle;
                boot_option_idle_override = IDLE_HALT;
        } else if (!strcmp(str, "nomwait")) {
                /*
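The mwait_usable() helper removed above gated MWAIT-based idle on CPUID leaf 5: ECX bit 0 says the extended MONITOR/MWAIT information is valid, and EDX bits 7:4 advertise the number of C1 sub-states. A small userspace re-creation of that check is sketched below for illustration; it is not kernel code, and the constants simply mirror the ones in the deleted hunk.

#include <stdio.h>
#include <cpuid.h>

#define MWAIT_INFO              0x05
#define MWAIT_ECX_EXTENDED_INFO 0x01
#define MWAIT_EDX_C1            0xf0

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 5 describes MONITOR/MWAIT */
        if (!__get_cpuid(MWAIT_INFO, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 5 not available");
                return 1;
        }
        if (!(ecx & MWAIT_ECX_EXTENDED_INFO)) {
                puts("no extended MWAIT info; mwait_usable() would return 1");
                return 0;
        }
        /* EDX[7:4] counts C1 MWAIT sub-states, as mwait_usable() tested */
        printf("C1 MWAIT sub-states: %u\n", (edx & MWAIT_EDX_C1) >> 4);
        return 0;
}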
@@ -1369,7 +1369,7 @@ static inline void mwait_play_dead(void)
        void *mwait_ptr;
        struct cpuinfo_x86 *c = __this_cpu_ptr(&cpu_info);

-       if (!(this_cpu_has(X86_FEATURE_MWAIT) && mwait_usable(c)))
+       if (!this_cpu_has(X86_FEATURE_MWAIT))
                return;
        if (!this_cpu_has(X86_FEATURE_CLFLSH))
                return;
@@ -195,7 +195,7 @@ err_sysfs:
        return r;
}

-static int xo15_sci_remove(struct acpi_device *device, int type)
+static int xo15_sci_remove(struct acpi_device *device)
{
        acpi_disable_gpe(NULL, xo15_sci_gpe);
        acpi_remove_gpe_handler(NULL, xo15_sci_gpe, xo15_sci_gpe_handler);
@@ -556,12 +556,9 @@ void __init xen_arch_setup(void)
               COMMAND_LINE_SIZE : MAX_GUEST_CMDLINE);

        /* Set up idle, making sure it calls safe_halt() pvop */
-#ifdef CONFIG_X86_32
-       boot_cpu_data.hlt_works_ok = 1;
-#endif
        disable_cpuidle();
        disable_cpufreq();
-       WARN_ON(set_pm_idle_to_default());
+       WARN_ON(xen_set_default_idle());
        fiddle_vdso();
#ifdef CONFIG_NUMA
        numa_off = 1;
@@ -134,6 +134,8 @@ source "drivers/hwspinlock/Kconfig"

source "drivers/clocksource/Kconfig"

source "drivers/mailbox/Kconfig"

source "drivers/iommu/Kconfig"

source "drivers/remoteproc/Kconfig"
@@ -130,6 +130,7 @@ obj-y += platform/
#common clk code
obj-y                           += clk/

obj-$(CONFIG_MAILBOX)           += mailbox/
obj-$(CONFIG_HWSPINLOCK)        += hwspinlock/
obj-$(CONFIG_NFC)               += nfc/
obj-$(CONFIG_IOMMU_SUPPORT)     += iommu/
@@ -337,7 +337,7 @@ config X86_PM_TIMER
          systems require this timer.

config ACPI_CONTAINER
-       tristate "Container and Module Devices (EXPERIMENTAL)"
+       bool "Container and Module Devices (EXPERIMENTAL)"
        depends on EXPERIMENTAL
        default (ACPI_HOTPLUG_MEMORY || ACPI_HOTPLUG_CPU || ACPI_HOTPLUG_IO)
        help
@@ -37,7 +37,8 @@ acpi-y += resource.o
acpi-y                          += processor_core.o
acpi-y                          += ec.o
acpi-$(CONFIG_ACPI_DOCK)        += dock.o
-acpi-y                          += pci_root.o pci_link.o pci_irq.o pci_bind.o
+acpi-y                          += pci_root.o pci_link.o pci_irq.o
acpi-y                          += csrt.o
acpi-y                          += acpi_platform.o
acpi-y                          += power.o
acpi-y                          += event.o
@@ -60,7 +60,7 @@ static int acpi_ac_open_fs(struct inode *inode, struct file *file);
#endif

static int acpi_ac_add(struct acpi_device *device);
-static int acpi_ac_remove(struct acpi_device *device, int type);
+static int acpi_ac_remove(struct acpi_device *device);
static void acpi_ac_notify(struct acpi_device *device, u32 event);

static const struct acpi_device_id ac_device_ids[] = {

@@ -337,7 +337,7 @@ static int acpi_ac_resume(struct device *dev)
}
#endif

-static int acpi_ac_remove(struct acpi_device *device, int type)
+static int acpi_ac_remove(struct acpi_device *device)
{
        struct acpi_ac *ac = NULL;

@@ -54,7 +54,7 @@ MODULE_LICENSE("GPL");
#define MEMORY_POWER_OFF_STATE  2

static int acpi_memory_device_add(struct acpi_device *device);
-static int acpi_memory_device_remove(struct acpi_device *device, int type);
+static int acpi_memory_device_remove(struct acpi_device *device);

static const struct acpi_device_id memory_device_ids[] = {
        {ACPI_MEMORY_DEVICE_HID, 0},

@@ -153,51 +153,46 @@ acpi_memory_get_device_resources(struct acpi_memory_device *mem_device)
        return 0;
}

-static int
-acpi_memory_get_device(acpi_handle handle,
-                       struct acpi_memory_device **mem_device)
+static int acpi_memory_get_device(acpi_handle handle,
+                                  struct acpi_memory_device **mem_device)
{
-       acpi_status status;
-       acpi_handle phandle;
        struct acpi_device *device = NULL;
-       struct acpi_device *pdevice = NULL;
-       int result;
+       int result = 0;

+       acpi_scan_lock_acquire();

-       if (!acpi_bus_get_device(handle, &device) && device)
+       acpi_bus_get_device(handle, &device);
+       if (device)
                goto end;

-       status = acpi_get_parent(handle, &phandle);
-       if (ACPI_FAILURE(status)) {
-               ACPI_EXCEPTION((AE_INFO, status, "Cannot find acpi parent"));
-               return -EINVAL;
-       }
-
-       /* Get the parent device */
-       result = acpi_bus_get_device(phandle, &pdevice);
-       if (result) {
-               acpi_handle_warn(phandle, "Cannot get acpi bus device\n");
-               return -EINVAL;
-       }
-
-       /*
-        * Now add the notified device. This creates the acpi_device
-        * and invokes .add function
-        */
-       result = acpi_bus_add(&device, pdevice, handle, ACPI_BUS_TYPE_DEVICE);
+       result = acpi_bus_scan(handle);
        if (result) {
-               acpi_handle_warn(handle, "Cannot add acpi bus\n");
-               return -EINVAL;
+               acpi_handle_warn(handle, "ACPI namespace scan failed\n");
+               result = -EINVAL;
+               goto out;
        }
+       result = acpi_bus_get_device(handle, &device);
+       if (result) {
+               acpi_handle_warn(handle, "Missing device object\n");
+               result = -EINVAL;
+               goto out;
+       }

- end:
+end:
        *mem_device = acpi_driver_data(device);
        if (!(*mem_device)) {
                dev_err(&device->dev, "driver data not found\n");
-               return -ENODEV;
+               result = -ENODEV;
+               goto out;
        }

-       return 0;
+out:
+       acpi_scan_lock_release();
+       return result;
}

static int acpi_memory_check_device(struct acpi_memory_device *mem_device)

@@ -317,6 +312,7 @@ static void acpi_memory_device_notify(acpi_handle handle, u32 event, void *data)
        struct acpi_device *device;
        struct acpi_eject_event *ej_event = NULL;
        u32 ost_code = ACPI_OST_SC_NON_SPECIFIC_FAILURE; /* default */
+       acpi_status status;

        switch (event) {
        case ACPI_NOTIFY_BUS_CHECK:

@@ -339,29 +335,40 @@ static void acpi_memory_device_notify(acpi_handle handle, u32 event, void *data)
                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                        "\nReceived EJECT REQUEST notification for device\n"));

+               status = AE_ERROR;
+               acpi_scan_lock_acquire();
+
                if (acpi_bus_get_device(handle, &device)) {
                        acpi_handle_err(handle, "Device doesn't exist\n");
-                       break;
+                       goto unlock;
                }
                mem_device = acpi_driver_data(device);
                if (!mem_device) {
                        acpi_handle_err(handle, "Driver Data is NULL\n");
-                       break;
+                       goto unlock;
                }

                ej_event = kmalloc(sizeof(*ej_event), GFP_KERNEL);
                if (!ej_event) {
                        pr_err(PREFIX "No memory, dropping EJECT\n");
-                       break;
+                       goto unlock;
                }

                ej_event->handle = handle;
                get_device(&device->dev);
                ej_event->device = device;
                ej_event->event = ACPI_NOTIFY_EJECT_REQUEST;
-               acpi_os_hotplug_execute(acpi_bus_hot_remove_device,
-                       (void *)ej_event);
+               /* The eject is carried out asynchronously. */
+               status = acpi_os_hotplug_execute(acpi_bus_hot_remove_device,
+                                                ej_event);
+               if (ACPI_FAILURE(status)) {
+                       put_device(&device->dev);
+                       kfree(ej_event);
+               }

-               /* eject is performed asynchronously */
-               return;
+unlock:
+               acpi_scan_lock_release();
+               if (ACPI_SUCCESS(status))
+                       return;
        default:
                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                        "Unsupported event [0x%x]\n", event));

@@ -372,7 +379,6 @@ static void acpi_memory_device_notify(acpi_handle handle, u32 event, void *data)

        /* Inform firmware that the hotplug operation has completed */
        (void) acpi_evaluate_hotplug_ost(handle, event, ost_code, NULL);
-       return;
}

static void acpi_memory_device_free(struct acpi_memory_device *mem_device)

@@ -427,7 +433,7 @@ static int acpi_memory_device_add(struct acpi_device *device)
        return result;
}

-static int acpi_memory_device_remove(struct acpi_device *device, int type)
+static int acpi_memory_device_remove(struct acpi_device *device)
{
        struct acpi_memory_device *mem_device = NULL;
        int result;
@@ -482,8 +482,7 @@ static int acpi_pad_add(struct acpi_device *device)
        return 0;
}

-static int acpi_pad_remove(struct acpi_device *device,
-       int type)
+static int acpi_pad_remove(struct acpi_device *device)
{
        mutex_lock(&isolated_cpus_lock);
        acpi_pad_idle_cpus(0);
@@ -13,6 +13,7 @@

#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>

@@ -21,18 +22,59 @@

ACPI_MODULE_NAME("platform");

+/* Flags for acpi_create_platform_device */
+#define ACPI_PLATFORM_CLK       BIT(0)
+
/*
 * The following ACPI IDs are known to be suitable for representing as
 * platform devices.
 */
static const struct acpi_device_id acpi_platform_device_ids[] = {

        { "PNP0D40" },

+       /* Haswell LPSS devices */
+       { "INT33C0", ACPI_PLATFORM_CLK },
+       { "INT33C1", ACPI_PLATFORM_CLK },
+       { "INT33C2", ACPI_PLATFORM_CLK },
+       { "INT33C3", ACPI_PLATFORM_CLK },
+       { "INT33C4", ACPI_PLATFORM_CLK },
+       { "INT33C5", ACPI_PLATFORM_CLK },
+       { "INT33C6", ACPI_PLATFORM_CLK },
+       { "INT33C7", ACPI_PLATFORM_CLK },
+
        { }
};

+static int acpi_create_platform_clks(struct acpi_device *adev)
+{
+       static struct platform_device *pdev;
+
+       /* Create Lynxpoint LPSS clocks */
+       if (!pdev && !strncmp(acpi_device_hid(adev), "INT33C", 6)) {
+               pdev = platform_device_register_simple("clk-lpt", -1, NULL, 0);
+               if (IS_ERR(pdev))
+                       return PTR_ERR(pdev);
+       }
+
+       return 0;
+}
+
/**
 * acpi_create_platform_device - Create platform device for ACPI device node
 * @adev: ACPI device node to create a platform device for.
+ * @id: ACPI device ID used to match @adev.
 *
 * Check if the given @adev can be represented as a platform device and, if
 * that's the case, create and register a platform device, populate its common
 * resources and returns a pointer to it.  Otherwise, return %NULL.
 *
 * The platform device's name will be taken from the @adev's _HID and _UID.
 * Name of the platform device will be the same as @adev's.
 */
-struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
+static int acpi_create_platform_device(struct acpi_device *adev,
+                                      const struct acpi_device_id *id)
{
+       unsigned long flags = id->driver_data;
        struct platform_device *pdev = NULL;
        struct acpi_device *acpi_parent;
        struct platform_device_info pdevinfo;

@@ -41,20 +83,28 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
        struct resource *resources;
        int count;

+       if (flags & ACPI_PLATFORM_CLK) {
+               int ret = acpi_create_platform_clks(adev);
+               if (ret) {
+                       dev_err(&adev->dev, "failed to create clocks\n");
+                       return ret;
+               }
+       }
+
        /* If the ACPI node already has a physical device attached, skip it. */
        if (adev->physical_node_count)
-               return NULL;
+               return 0;

        INIT_LIST_HEAD(&resource_list);
        count = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
        if (count <= 0)
-               return NULL;
+               return 0;

        resources = kmalloc(count * sizeof(struct resource), GFP_KERNEL);
        if (!resources) {
                dev_err(&adev->dev, "No memory for resources\n");
                acpi_dev_free_resource_list(&resource_list);
-               return NULL;
+               return -ENOMEM;
        }
        count = 0;
        list_for_each_entry(rentry, &resource_list, node)

@@ -100,5 +150,15 @@ struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
        }

        kfree(resources);
-       return pdev;
+       return 1;
}
+
+static struct acpi_scan_handler platform_handler = {
+       .ids = acpi_platform_device_ids,
+       .attach = acpi_create_platform_device,
+};
+
+void __init acpi_platform_init(void)
+{
+       acpi_scan_add_handler(&platform_handler);
+}
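The hunk above moves platform-device creation from the old driver path into the reworked namespace-scanning code: instead of exporting a constructor, the file now registers an ACPI scan handler whose attach callback claims matching device nodes. A minimal sketch of that pattern, modeled directly on platform_handler above, is shown below; the handler name and the "XYZ0001" ID are made-up placeholders, and the attach callback returns a positive value to indicate it handled the node, as acpi_create_platform_device() now does.

/* sketch only: a scan handler that claims a hypothetical "XYZ0001" ID */
static const struct acpi_device_id xyz_device_ids[] = {
        { "XYZ0001" },
        { }
};

static int xyz_attach(struct acpi_device *adev,
                      const struct acpi_device_id *id)
{
        /* set up resources for adev here */
        return 1;       /* >0: device node handled by this scan handler */
}

static struct acpi_scan_handler xyz_handler = {
        .ids = xyz_device_ids,
        .attach = xyz_attach,
};

void __init xyz_init(void)
{
        acpi_scan_add_handler(&xyz_handler);
}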
@@ -31,6 +31,7 @@ acpi-y += \
        evgpeinit.o \
        evgpeutil.o \
        evglock.o \
+       evhandler.o \
        evmisc.o \
        evregion.o \
        evrgnini.o \

@@ -90,6 +91,7 @@ acpi-y += \
        nsobject.o \
        nsparse.o \
        nspredef.o \
+       nsprepkg.o \
        nsrepair.o \
        nsrepair2.o \
        nssearch.o \

@@ -104,7 +106,9 @@ acpi-$(ACPI_FUTURE_USAGE) += nsdumpdv.o
acpi-y += \
        psargs.o \
        psloop.o \
+       psobject.o \
        psopcode.o \
+       psopinfo.o \
        psparse.o \
        psscope.o \
        pstree.o \

@@ -126,7 +130,7 @@ acpi-y += \
        rsutils.o \
        rsxface.o

-acpi-$(ACPI_FUTURE_USAGE) += rsdump.o
+acpi-$(ACPI_FUTURE_USAGE) += rsdump.o rsdumpinfo.o

acpi-y += \
        tbfadt.o \

@@ -155,8 +159,10 @@ acpi-y += \
        utmutex.o \
        utobject.o \
        utosi.o \
+       utownerid.o \
        utresrc.o \
        utstate.o \
+       utstring.o \
        utxface.o \
        utxfinit.o \
        utxferror.o \
@@ -5,7 +5,7 @@
 *****************************************************************************/

/*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without

@@ -51,6 +51,7 @@
 *
 * Note: The order of these include files is important.
 */
#include <acpi/acconfig.h>      /* Global configuration constants */
#include "acmacros.h"           /* C macros */
#include "aclocal.h"            /* Internal data types */
#include "acobject.h"           /* ACPI internal object */
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -114,6 +114,21 @@ ACPI_HW_DEPENDENT_RETURN_VOID(void
                               acpi_db_generate_gpe(char *gpe_arg,
                                                    char *block_arg))

+/*
+ * dbconvert - miscellaneous conversion routines
+ */
+acpi_status acpi_db_hex_char_to_value(int hex_char, u8 *return_value);
+
+acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object);
+
+acpi_status
+acpi_db_convert_to_object(acpi_object_type type,
+                          char *string, union acpi_object *object);
+
+u8 *acpi_db_encode_pld_buffer(struct acpi_pld_info *pld_info);
+
+void acpi_db_dump_pld_buffer(union acpi_object *obj_desc);
+
/*
 * dbmethod - control method commands
 */

@@ -191,6 +206,8 @@ void
acpi_db_create_execution_threads(char *num_threads_arg,
                                 char *num_loops_arg, char *method_name_arg);

+void acpi_db_delete_objects(u32 count, union acpi_object *objects);
+
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
u32 acpi_db_get_cache_info(struct acpi_memory_list *cache);
#endif
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -158,10 +158,23 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
                            void *context);

/*
- * evregion - Address Space handling
+ * evhandler - Address space handling
 */
u8
acpi_ev_has_default_handler(struct acpi_namespace_node *node,
                            acpi_adr_space_type space_id);

acpi_status acpi_ev_install_region_handlers(void);

acpi_status
acpi_ev_install_space_handler(struct acpi_namespace_node *node,
                              acpi_adr_space_type space_id,
                              acpi_adr_space_handler handler,
                              acpi_adr_space_setup setup, void *context);

/*
 * evregion - Operation region support
 */
acpi_status acpi_ev_initialize_op_regions(void);

acpi_status

@@ -179,12 +192,6 @@ void
acpi_ev_detach_region(union acpi_operand_object *region_obj,
                      u8 acpi_ns_is_locked);

-acpi_status
-acpi_ev_install_space_handler(struct acpi_namespace_node *node,
-                              acpi_adr_space_type space_id,
-                              acpi_adr_space_handler handler,
-                              acpi_adr_space_setup setup, void *context);
-
acpi_status
acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
                            acpi_adr_space_type space_id);
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -192,14 +192,6 @@ ACPI_EXTERN u8 acpi_gbl_integer_bit_width;
ACPI_EXTERN u8 acpi_gbl_integer_byte_width;
ACPI_EXTERN u8 acpi_gbl_integer_nybble_width;

-/* Mutex for _OSI support */
-
-ACPI_EXTERN acpi_mutex acpi_gbl_osi_mutex;
-
-/* Reader/Writer lock is used for namespace walk and dynamic table unload */
-
-ACPI_EXTERN struct acpi_rw_lock acpi_gbl_namespace_rw_lock;
-
/*****************************************************************************
 *
 * Mutual exclusion within ACPICA subsystem

@@ -233,6 +225,14 @@ ACPI_EXTERN u8 acpi_gbl_global_lock_pending;
ACPI_EXTERN acpi_spinlock acpi_gbl_gpe_lock;        /* For GPE data structs and registers */
ACPI_EXTERN acpi_spinlock acpi_gbl_hardware_lock;   /* For ACPI H/W except GPE registers */

+/* Mutex for _OSI support */
+
+ACPI_EXTERN acpi_mutex acpi_gbl_osi_mutex;
+
+/* Reader/Writer lock is used for namespace walk and dynamic table unload */
+
+ACPI_EXTERN struct acpi_rw_lock acpi_gbl_namespace_rw_lock;
+
/*****************************************************************************
 *
 * Miscellaneous globals

@@ -252,7 +252,7 @@ ACPI_EXTERN acpi_cache_t *acpi_gbl_operand_cache;
ACPI_EXTERN struct acpi_global_notify_handler acpi_gbl_global_notify[2];
ACPI_EXTERN acpi_exception_handler acpi_gbl_exception_handler;
ACPI_EXTERN acpi_init_handler acpi_gbl_init_handler;
-ACPI_EXTERN acpi_tbl_handler acpi_gbl_table_handler;
+ACPI_EXTERN acpi_table_handler acpi_gbl_table_handler;
ACPI_EXTERN void *acpi_gbl_table_handler_context;
ACPI_EXTERN struct acpi_walk_state *acpi_gbl_breakpoint_walk;
ACPI_EXTERN acpi_interface_handler acpi_gbl_interface_handler;

@@ -304,6 +304,7 @@ extern const char *acpi_gbl_region_types[ACPI_NUM_PREDEFINED_REGIONS];
ACPI_EXTERN struct acpi_memory_list *acpi_gbl_global_list;
ACPI_EXTERN struct acpi_memory_list *acpi_gbl_ns_node_list;
ACPI_EXTERN u8 acpi_gbl_display_final_mem_stats;
+ACPI_EXTERN u8 acpi_gbl_disable_mem_tracking;
#endif

/*****************************************************************************

@@ -365,19 +366,18 @@ ACPI_EXTERN u8 acpi_gbl_sleep_type_b;
 *
 ****************************************************************************/

-extern struct acpi_fixed_event_info
-    acpi_gbl_fixed_event_info[ACPI_NUM_FIXED_EVENTS];
-ACPI_EXTERN struct acpi_fixed_event_handler
-    acpi_gbl_fixed_event_handlers[ACPI_NUM_FIXED_EVENTS];
-ACPI_EXTERN struct acpi_gpe_xrupt_info *acpi_gbl_gpe_xrupt_list_head;
-ACPI_EXTERN struct acpi_gpe_block_info
-    *acpi_gbl_gpe_fadt_blocks[ACPI_MAX_GPE_BLOCKS];
-
#if (!ACPI_REDUCED_HARDWARE)

ACPI_EXTERN u8 acpi_gbl_all_gpes_initialized;
+ACPI_EXTERN struct acpi_gpe_xrupt_info *acpi_gbl_gpe_xrupt_list_head;
+ACPI_EXTERN struct acpi_gpe_block_info
+    *acpi_gbl_gpe_fadt_blocks[ACPI_MAX_GPE_BLOCKS];
ACPI_EXTERN acpi_gbl_event_handler acpi_gbl_global_event_handler;
ACPI_EXTERN void *acpi_gbl_global_event_handler_context;
+ACPI_EXTERN struct acpi_fixed_event_handler
+    acpi_gbl_fixed_event_handlers[ACPI_NUM_FIXED_EVENTS];
+extern struct acpi_fixed_event_info
+    acpi_gbl_fixed_event_info[ACPI_NUM_FIXED_EVENTS];

#endif                          /* !ACPI_REDUCED_HARDWARE */

@@ -405,7 +405,7 @@ ACPI_EXTERN u32 acpi_gbl_trace_dbg_layer;

/*****************************************************************************
 *
- * Debugger globals
+ * Debugger and Disassembler globals
 *
 ****************************************************************************/

@@ -413,8 +413,12 @@ ACPI_EXTERN u8 acpi_gbl_db_output_flags;

#ifdef ACPI_DISASSEMBLER

+u8 ACPI_INIT_GLOBAL(acpi_gbl_ignore_noop_operator, FALSE);
+
ACPI_EXTERN u8 acpi_gbl_db_opt_disasm;
ACPI_EXTERN u8 acpi_gbl_db_opt_verbose;
+ACPI_EXTERN struct acpi_external_list *acpi_gbl_external_list;
+ACPI_EXTERN struct acpi_external_file *acpi_gbl_external_file_list;
#endif

#ifdef ACPI_DEBUGGER

@@ -426,6 +430,7 @@ extern u8 acpi_gbl_db_terminate_threads;
ACPI_EXTERN u8 acpi_gbl_db_opt_tables;
ACPI_EXTERN u8 acpi_gbl_db_opt_stats;
ACPI_EXTERN u8 acpi_gbl_db_opt_ini_methods;
+ACPI_EXTERN u8 acpi_gbl_db_opt_no_region_support;

ACPI_EXTERN char *acpi_gbl_db_args[ACPI_DEBUGGER_MAX_ARGS];
ACPI_EXTERN acpi_object_type acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS];
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -458,7 +458,7 @@ void acpi_ex_reacquire_interpreter(void);

void acpi_ex_relinquish_interpreter(void);

-void acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc);
+u8 acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc);

void acpi_ex_acquire_global_lock(u32 rule);
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -189,11 +189,10 @@ struct acpi_namespace_node {
#define ANOBJ_EVALUATED                 0x20    /* Set on first evaluation of node */
#define ANOBJ_ALLOCATED_BUFFER          0x40    /* Method AML buffer is dynamic (install_method) */

-#define ANOBJ_IS_EXTERNAL               0x08    /* i_aSL only: This object created via External() */
-#define ANOBJ_METHOD_NO_RETVAL          0x10    /* i_aSL only: Method has no return value */
-#define ANOBJ_METHOD_SOME_NO_RETVAL     0x20    /* i_aSL only: Method has at least one return value */
-#define ANOBJ_IS_BIT_OFFSET             0x40    /* i_aSL only: Reference is a bit offset */
-#define ANOBJ_IS_REFERENCED             0x80    /* i_aSL only: Object was referenced */
+#define ANOBJ_IS_EXTERNAL               0x08    /* iASL only: This object created via External() */
+#define ANOBJ_METHOD_NO_RETVAL          0x10    /* iASL only: Method has no return value */
+#define ANOBJ_METHOD_SOME_NO_RETVAL     0x20    /* iASL only: Method has at least one return value */
+#define ANOBJ_IS_REFERENCED             0x80    /* iASL only: Object was referenced */

/* Internal ACPI table management - master table list */

@@ -411,11 +410,10 @@ struct acpi_gpe_notify_info {
        struct acpi_gpe_notify_info *next;
};

-struct acpi_gpe_notify_object {
-       struct acpi_namespace_node *node;
-       struct acpi_gpe_notify_object *next;
-};
-
/*
 * GPE dispatch info. At any time, the GPE can have at most one type
 * of dispatch - Method, Handler, or Implicit Notify.
 */
union acpi_gpe_dispatch_info {
        struct acpi_namespace_node *method_node;        /* Method node for this GPE level */
        struct acpi_gpe_handler_info *handler;          /* Installed GPE handler */

@@ -679,6 +677,8 @@ struct acpi_opcode_info {
        u8 type;                /* Opcode type */
};

+/* Value associated with the parse object */
+
union acpi_parse_value {
        u64 integer;            /* Integer constant (Up to 64 bits) */
        u32 size;               /* bytelist or field size */

@@ -1023,6 +1023,31 @@ struct acpi_port_info {

#define ACPI_ASCII_ZERO                 0x30

+/*****************************************************************************
+ *
+ * Disassembler
+ *
+ ****************************************************************************/
+
+struct acpi_external_list {
+       char *path;
+       char *internal_path;
+       struct acpi_external_list *next;
+       u32 value;
+       u16 length;
+       u8 type;
+       u8 flags;
+};
+
+/* Values for Flags field above */
+
+#define ACPI_IPATH_ALLOCATED    0x01
+
+struct acpi_external_file {
+       char *path;
+       struct acpi_external_file *next;
+};
+
/*****************************************************************************
 *
 * Debugger
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -49,14 +49,18 @@
 * get into potential aligment issues -- see the STORE macros below.
 * Use with care.
 */
-#define ACPI_GET8(ptr)                  *ACPI_CAST_PTR (u8, ptr)
-#define ACPI_GET16(ptr)                 *ACPI_CAST_PTR (u16, ptr)
-#define ACPI_GET32(ptr)                 *ACPI_CAST_PTR (u32, ptr)
-#define ACPI_GET64(ptr)                 *ACPI_CAST_PTR (u64, ptr)
-#define ACPI_SET8(ptr)                  *ACPI_CAST_PTR (u8, ptr)
-#define ACPI_SET16(ptr)                 *ACPI_CAST_PTR (u16, ptr)
-#define ACPI_SET32(ptr)                 *ACPI_CAST_PTR (u32, ptr)
-#define ACPI_SET64(ptr)                 *ACPI_CAST_PTR (u64, ptr)
+#define ACPI_CAST8(ptr)                 ACPI_CAST_PTR (u8, (ptr))
+#define ACPI_CAST16(ptr)                ACPI_CAST_PTR (u16, (ptr))
+#define ACPI_CAST32(ptr)                ACPI_CAST_PTR (u32, (ptr))
+#define ACPI_CAST64(ptr)                ACPI_CAST_PTR (u64, (ptr))
+#define ACPI_GET8(ptr)                  (*ACPI_CAST8 (ptr))
+#define ACPI_GET16(ptr)                 (*ACPI_CAST16 (ptr))
+#define ACPI_GET32(ptr)                 (*ACPI_CAST32 (ptr))
+#define ACPI_GET64(ptr)                 (*ACPI_CAST64 (ptr))
+#define ACPI_SET8(ptr, val)             (*ACPI_CAST8 (ptr) = (u8) (val))
+#define ACPI_SET16(ptr, val)            (*ACPI_CAST16 (ptr) = (u16) (val))
+#define ACPI_SET32(ptr, val)            (*ACPI_CAST32 (ptr) = (u32) (val))
+#define ACPI_SET64(ptr, val)            (*ACPI_CAST64 (ptr) = (u64) (val))

/*
 * printf() format helpers
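The reworked accessors above split the pointer cast out into ACPI_CASTx helpers and make the SET variants take an explicit value. The self-contained C sketch below imitates that pattern outside of ACPICA purely to show how the macros behave; the type names and macros are redefined locally for illustration only.

#include <stdio.h>
#include <string.h>

typedef unsigned char u8;
typedef unsigned short u16;

#define ACPI_CAST_PTR(t, p)     ((t *)(void *)(p))
#define ACPI_CAST16(ptr)        ACPI_CAST_PTR (u16, (ptr))
#define ACPI_GET16(ptr)         (*ACPI_CAST16 (ptr))
#define ACPI_SET16(ptr, val)    (*ACPI_CAST16 (ptr) = (u16) (val))

int main(void)
{
        u8 buffer[4];

        memset(buffer, 0, sizeof(buffer));
        ACPI_SET16(&buffer[0], 0x1234);                 /* store a 16-bit value */
        printf("0x%04x\n", ACPI_GET16(&buffer[0]));     /* read it back */
        return 0;
}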
@@ -293,6 +297,26 @@
#define ACPI_16BIT_MASK                 0x0000FFFF
#define ACPI_24BIT_MASK                 0x00FFFFFF

+/* Macros to extract flag bits from position zero */
+
+#define ACPI_GET_1BIT_FLAG(value)                   ((value) & ACPI_1BIT_MASK)
+#define ACPI_GET_2BIT_FLAG(value)                   ((value) & ACPI_2BIT_MASK)
+#define ACPI_GET_3BIT_FLAG(value)                   ((value) & ACPI_3BIT_MASK)
+#define ACPI_GET_4BIT_FLAG(value)                   ((value) & ACPI_4BIT_MASK)
+
+/* Macros to extract flag bits from position one and above */
+
+#define ACPI_EXTRACT_1BIT_FLAG(field, position)     (ACPI_GET_1BIT_FLAG ((field) >> position))
+#define ACPI_EXTRACT_2BIT_FLAG(field, position)     (ACPI_GET_2BIT_FLAG ((field) >> position))
+#define ACPI_EXTRACT_3BIT_FLAG(field, position)     (ACPI_GET_3BIT_FLAG ((field) >> position))
+#define ACPI_EXTRACT_4BIT_FLAG(field, position)     (ACPI_GET_4BIT_FLAG ((field) >> position))
+
+/* ACPI Pathname helpers */
+
+#define ACPI_IS_ROOT_PREFIX(c)          ((c) == (u8) 0x5C)  /* Backslash */
+#define ACPI_IS_PARENT_PREFIX(c)        ((c) == (u8) 0x5E)  /* Carat */
+#define ACPI_IS_PATH_SEPARATOR(c)       ((c) == (u8) 0x2E)  /* Period (dot) */
+
/*
 * An object of type struct acpi_namespace_node can appear in some contexts
 * where a pointer to an object of type union acpi_operand_object can also
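The new flag helpers above simply shift and mask subfields out of a resource descriptor's flags byte. A tiny standalone illustration of the same arithmetic is given below; it is not ACPICA code, the masks are written out locally, and the field names in the printf are made up.

#include <stdio.h>

#define ACPI_1BIT_MASK                          0x00000001
#define ACPI_2BIT_MASK                          0x00000003
#define ACPI_GET_1BIT_FLAG(value)               ((value) & ACPI_1BIT_MASK)
#define ACPI_GET_2BIT_FLAG(value)               ((value) & ACPI_2BIT_MASK)
#define ACPI_EXTRACT_1BIT_FLAG(field, position) (ACPI_GET_1BIT_FLAG ((field) >> position))
#define ACPI_EXTRACT_2BIT_FLAG(field, position) (ACPI_GET_2BIT_FLAG ((field) >> position))

int main(void)
{
        unsigned int flags = 0x1A;      /* example descriptor flags byte */

        /* bit 1 and bits 3-4, extracted the way the rs* code does it */
        printf("wr=%u, mem=%u\n",
               ACPI_EXTRACT_1BIT_FLAG(flags, 1),
               ACPI_EXTRACT_2BIT_FLAG(flags, 3));
        return 0;
}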
@@ -364,137 +388,6 @@

#endif                          /* ACPI_NO_ERROR_MESSAGES */

/*
 * Debug macros that are conditionally compiled
 */
#ifdef ACPI_DEBUG_OUTPUT
/*
 * Function entry tracing
 */
#define ACPI_FUNCTION_TRACE(a)          ACPI_FUNCTION_NAME(a) \
                          acpi_ut_trace(ACPI_DEBUG_PARAMETERS)
#define ACPI_FUNCTION_TRACE_PTR(a, b)   ACPI_FUNCTION_NAME(a) \
                          acpi_ut_trace_ptr(ACPI_DEBUG_PARAMETERS, (void *)b)
#define ACPI_FUNCTION_TRACE_U32(a, b)   ACPI_FUNCTION_NAME(a) \
                          acpi_ut_trace_u32(ACPI_DEBUG_PARAMETERS, (u32)b)
#define ACPI_FUNCTION_TRACE_STR(a, b)   ACPI_FUNCTION_NAME(a) \
                          acpi_ut_trace_str(ACPI_DEBUG_PARAMETERS, (char *)b)

#define ACPI_FUNCTION_ENTRY()           acpi_ut_track_stack_ptr()

/*
 * Function exit tracing.
 * WARNING: These macros include a return statement. This is usually considered
 * bad form, but having a separate exit macro is very ugly and difficult to maintain.
 * One of the FUNCTION_TRACE macros above must be used in conjunction with these macros
 * so that "_AcpiFunctionName" is defined.
 *
 * Note: the DO_WHILE0 macro is used to prevent some compilers from complaining
 * about these constructs.
 */
#ifdef ACPI_USE_DO_WHILE_0
#define ACPI_DO_WHILE0(a)               do a while(0)
#else
#define ACPI_DO_WHILE0(a)               a
#endif

#define return_VOID                     ACPI_DO_WHILE0 ({ \
                        acpi_ut_exit (ACPI_DEBUG_PARAMETERS); \
                        return;})
/*
 * There are two versions of most of the return macros. The default version is
 * safer, since it avoids side-effects by guaranteeing that the argument will
 * not be evaluated twice.
 *
 * A less-safe version of the macros is provided for optional use if the
 * compiler uses excessive CPU stack (for example, this may happen in the
 * debug case if code optimzation is disabled.)
 */
#ifndef ACPI_SIMPLE_RETURN_MACROS

#define return_ACPI_STATUS(s)           ACPI_DO_WHILE0 ({ \
                        register acpi_status _s = (s); \
                        acpi_ut_status_exit (ACPI_DEBUG_PARAMETERS, _s); \
                        return (_s); })
#define return_PTR(s)                   ACPI_DO_WHILE0 ({ \
                        register void *_s = (void *) (s); \
                        acpi_ut_ptr_exit (ACPI_DEBUG_PARAMETERS, (u8 *) _s); \
                        return (_s); })
#define return_VALUE(s)                 ACPI_DO_WHILE0 ({ \
                        register u64 _s = (s); \
                        acpi_ut_value_exit (ACPI_DEBUG_PARAMETERS, _s); \
                        return (_s); })
#define return_UINT8(s)                 ACPI_DO_WHILE0 ({ \
                        register u8 _s = (u8) (s); \
                        acpi_ut_value_exit (ACPI_DEBUG_PARAMETERS, (u64) _s); \
                        return (_s); })
#define return_UINT32(s)                ACPI_DO_WHILE0 ({ \
                        register u32 _s = (u32) (s); \
                        acpi_ut_value_exit (ACPI_DEBUG_PARAMETERS, (u64) _s); \
                        return (_s); })
#else                           /* Use original less-safe macros */

#define return_ACPI_STATUS(s)           ACPI_DO_WHILE0 ({ \
                        acpi_ut_status_exit (ACPI_DEBUG_PARAMETERS, (s)); \
                        return((s)); })
#define return_PTR(s)                   ACPI_DO_WHILE0 ({ \
                        acpi_ut_ptr_exit (ACPI_DEBUG_PARAMETERS, (u8 *) (s)); \
                        return((s)); })
#define return_VALUE(s)                 ACPI_DO_WHILE0 ({ \
                        acpi_ut_value_exit (ACPI_DEBUG_PARAMETERS, (u64) (s)); \
                        return((s)); })
#define return_UINT8(s)                 return_VALUE(s)
#define return_UINT32(s)                return_VALUE(s)

#endif                          /* ACPI_SIMPLE_RETURN_MACROS */

/* Conditional execution */

#define ACPI_DEBUG_EXEC(a)              a
#define ACPI_DEBUG_ONLY_MEMBERS(a)      a;
#define _VERBOSE_STRUCTURES

/* Various object display routines for debug */

#define ACPI_DUMP_STACK_ENTRY(a)        acpi_ex_dump_operand((a), 0)
#define ACPI_DUMP_OPERANDS(a, b ,c)     acpi_ex_dump_operands(a, b, c)
#define ACPI_DUMP_ENTRY(a, b)           acpi_ns_dump_entry (a, b)
#define ACPI_DUMP_PATHNAME(a, b, c, d)  acpi_ns_dump_pathname(a, b, c, d)
#define ACPI_DUMP_BUFFER(a, b)          acpi_ut_debug_dump_buffer((u8 *) a, b, DB_BYTE_DISPLAY, _COMPONENT)

#else
/*
 * This is the non-debug case -- make everything go away,
 * leaving no executable debug code!
 */
#define ACPI_DEBUG_EXEC(a)
#define ACPI_DEBUG_ONLY_MEMBERS(a)
#define ACPI_FUNCTION_TRACE(a)
#define ACPI_FUNCTION_TRACE_PTR(a, b)
#define ACPI_FUNCTION_TRACE_U32(a, b)
#define ACPI_FUNCTION_TRACE_STR(a, b)
#define ACPI_FUNCTION_EXIT
#define ACPI_FUNCTION_STATUS_EXIT(s)
#define ACPI_FUNCTION_VALUE_EXIT(s)
#define ACPI_FUNCTION_ENTRY()
#define ACPI_DUMP_STACK_ENTRY(a)
#define ACPI_DUMP_OPERANDS(a, b, c)
#define ACPI_DUMP_ENTRY(a, b)
#define ACPI_DUMP_TABLES(a, b)
#define ACPI_DUMP_PATHNAME(a, b, c, d)
#define ACPI_DUMP_BUFFER(a, b)
#define ACPI_DEBUG_PRINT(pl)
#define ACPI_DEBUG_PRINT_RAW(pl)

#define return_VOID                     return
#define return_ACPI_STATUS(s)           return(s)
#define return_VALUE(s)                 return(s)
#define return_UINT8(s)                 return(s)
#define return_UINT32(s)                return(s)
#define return_PTR(s)                   return(s)

#endif                          /* ACPI_DEBUG_OUTPUT */

#if (!ACPI_REDUCED_HARDWARE)
#define ACPI_HW_OPTIONAL_FUNCTION(addr) addr
#else
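The block above (dropped from this header in the ACPICA update) documents why the default return macros copy their argument into a local before tracing: it keeps a side-effecting argument from being evaluated twice. A minimal standalone sketch of that pattern is shown below; the trace call is replaced by a printf stand-in and all names are illustrative.

#include <stdio.h>

#define DO_WHILE0(a)            do a while (0)

/* safe variant: the argument is evaluated exactly once */
#define RETURN_STATUS_SAFE(s)   DO_WHILE0({ \
                int _s = (s); \
                printf("exit status %d\n", _s); \
                return _s; })

static int next_status(void)
{
        static int calls;
        return ++calls;         /* side effect: each call yields a new value */
}

static int demo(void)
{
        RETURN_STATUS_SAFE(next_status());      /* traces and returns the same value */
}

int main(void)
{
        printf("returned %d\n", demo());
        return 0;
}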
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -218,6 +218,18 @@ acpi_ns_check_parameter_count(char *pathname,
                              u32 user_param_count,
                              const union acpi_predefined_info *info);

+acpi_status
+acpi_ns_check_object_type(struct acpi_predefined_data *data,
+                          union acpi_operand_object **return_object_ptr,
+                          u32 expected_btypes, u32 package_index);
+
+/*
+ * nsprepkg - Validation of predefined name packages
+ */
+acpi_status
+acpi_ns_check_package(struct acpi_predefined_data *data,
+                      union acpi_operand_object **return_object_ptr);
+
/*
 * nsnames - Name and Scope manipulation
 */

@@ -333,8 +345,6 @@ acpi_ns_install_node(struct acpi_walk_state *walk_state,
/*
 * nsutils - Utility functions
 */
-u8 acpi_ns_valid_root_prefix(char prefix);
-
acpi_object_type acpi_ns_get_type(struct acpi_namespace_node *node);

u32 acpi_ns_local(acpi_object_type type);
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -307,7 +307,7 @@ struct acpi_object_addr_handler {
        struct acpi_namespace_node *node;       /* Parent device */
        void *context;
        acpi_adr_space_setup setup;
-       union acpi_operand_object *region_list; /* regions using this handler */
+       union acpi_operand_object *region_list; /* Regions using this handler */
        union acpi_operand_object *next;
};
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -105,7 +105,28 @@ union acpi_parse_object *acpi_ps_find_name(union acpi_parse_object *scope,
union acpi_parse_object *acpi_ps_get_parent(union acpi_parse_object *op);

/*
- * psopcode - AML Opcode information
+ * psobject - support for parse object processing
 */
+acpi_status
+acpi_ps_build_named_op(struct acpi_walk_state *walk_state,
+                       u8 *aml_op_start,
+                       union acpi_parse_object *unnamed_op,
+                       union acpi_parse_object **op);
+
+acpi_status
+acpi_ps_create_op(struct acpi_walk_state *walk_state,
+                  u8 *aml_op_start, union acpi_parse_object **new_op);
+
+acpi_status
+acpi_ps_complete_op(struct acpi_walk_state *walk_state,
+                    union acpi_parse_object **op, acpi_status status);
+
+acpi_status
+acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
+                          union acpi_parse_object *op, acpi_status status);
+
+/*
+ * psopinfo - AML Opcode information
+ */
const struct acpi_opcode_info *acpi_ps_get_opcode_info(u16 opcode);

@@ -211,8 +232,6 @@ void acpi_ps_free_op(union acpi_parse_object *op);

u8 acpi_ps_is_leading_char(u32 c);

-u8 acpi_ps_is_prefix_char(u32 c);
-
#ifdef ACPI_FUTURE_USAGE
u32 acpi_ps_get_name(union acpi_parse_object *op);
#endif                          /* ACPI_FUTURE_USAGE */
@@ -1,12 +1,11 @@
/******************************************************************************
 *
 * Name: acpredef - Information table for ACPI predefined methods and objects
- *              $Revision: 1.1 $
 *
 *****************************************************************************/

/*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without

@@ -51,13 +50,13 @@
 *
 * 1) PTYPE1 packages do not contain sub-packages.
 *
- * ACPI_PTYPE1_FIXED: Fixed length, 1 or 2 object types:
+ * ACPI_PTYPE1_FIXED: Fixed-length length, 1 or 2 object types:
 *      object type
 *      count
 *      object type
 *      count
 *
- * ACPI_PTYPE1_VAR: Variable length:
+ * ACPI_PTYPE1_VAR: Variable-length length:
 *      object type (Int/Buf/Ref)
 *
 * ACPI_PTYPE1_OPTION: Package has some required and some optional elements

@@ -85,10 +84,10 @@
 *      count
 *      (Used for _CST)
 *
- * ACPI_PTYPE2_FIXED: Each subpackage is of fixed length
+ * ACPI_PTYPE2_FIXED: Each subpackage is of Fixed-length
 *      (Used for _PRT)
 *
- * ACPI_PTYPE2_MIN: Each subpackage has a variable but minimum length
+ * ACPI_PTYPE2_MIN: Each subpackage has a Variable-length but minimum length
 *      (Used for _HPX)
 *
 * ACPI_PTYPE2_REV_FIXED: Revision at start, each subpackage is Fixed-length

@@ -124,7 +123,8 @@ enum acpi_return_package_types {
 * These are the names that can actually be evaluated via acpi_evaluate_object.
 * Not present in this table are the following:
 *
- * 1) Predefined/Reserved names that are never evaluated via acpi_evaluate_object:
+ * 1) Predefined/Reserved names that are never evaluated via
+ *    acpi_evaluate_object:
 *      _Lxx and _Exx GPE methods
 *      _Qxx EC methods
 *      _T_x compiler temporary variables

@@ -149,6 +149,8 @@ enum acpi_return_package_types {
 * information about the expected structure of the package. This information
 * is saved here (rather than in a separate table) in order to minimize the
 * overall size of the stored data.
+ *
+ * Note: The additional braces are intended to promote portability.
 */
static const union acpi_predefined_info predefined_names[] = {
        {{"_AC0", 0, ACPI_RTYPE_INTEGER}},

@@ -212,9 +214,8 @@ static const union acpi_predefined_info predefined_names[] = {
        {{"_BCT", 1, ACPI_RTYPE_INTEGER}},
        {{"_BDN", 0, ACPI_RTYPE_INTEGER}},
        {{"_BFS", 1, 0}},
-       {{"_BIF", 0, ACPI_RTYPE_PACKAGE} }, /* Fixed-length (9 Int),(4 Str/Buf) */
-       {{{ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 9,
-          ACPI_RTYPE_STRING | ACPI_RTYPE_BUFFER}, 4, 0} },
+       {{"_BIF", 0, ACPI_RTYPE_PACKAGE}}, /* Fixed-length (9 Int),(4 Str) */
+       {{{ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 9, ACPI_RTYPE_STRING}, 4, 0}},

        {{"_BIX", 0, ACPI_RTYPE_PACKAGE}}, /* Fixed-length (16 Int),(4 Str) */
        {{{ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 16, ACPI_RTYPE_STRING}, 4,

@@ -236,7 +237,8 @@ static const union acpi_predefined_info predefined_names[] = {
        {{"_CBA", 0, ACPI_RTYPE_INTEGER}}, /* See PCI firmware spec 3.0 */
        {{"_CDM", 0, ACPI_RTYPE_INTEGER}},
        {{"_CID", 0, ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING | ACPI_RTYPE_PACKAGE}}, /* Variable-length (Ints/Strs) */
-       {{{ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING, 0,0}, 0,0}},
+       {{{ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING, 0, 0}, 0,
+         0}},

        {{"_CLS", 0, ACPI_RTYPE_PACKAGE}}, /* Fixed-length (3 Int) */
        {{{ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 3, 0}, 0, 0}},

@@ -251,7 +253,8 @@ static const union acpi_predefined_info predefined_names[] = {
        {{{ACPI_PTYPE2_COUNT, ACPI_RTYPE_INTEGER, 0,0}, 0,0}},

        {{"_CST", 0, ACPI_RTYPE_PACKAGE}}, /* Variable-length (1 Int(n), n Pkg (1 Buf/3 Int) */
-       {{{ACPI_PTYPE2_PKG_COUNT,ACPI_RTYPE_BUFFER, 1, ACPI_RTYPE_INTEGER}, 3,0}},
+       {{{ACPI_PTYPE2_PKG_COUNT, ACPI_RTYPE_BUFFER, 1, ACPI_RTYPE_INTEGER}, 3,
+         0}},

        {{"_CWS", 1, ACPI_RTYPE_INTEGER}},
        {{"_DCK", 1, ACPI_RTYPE_INTEGER}},

@@ -342,8 +345,8 @@ static const union acpi_predefined_info predefined_names[] = {
        {{"_MBM", 0, ACPI_RTYPE_PACKAGE}}, /* Fixed-length (8 Int) */
        {{{ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 8, 0}, 0, 0}},

-       {{"_MLS", 0, ACPI_RTYPE_PACKAGE}}, /* Variable-length (Pkgs) each (2 Str) */
-       {{{ACPI_PTYPE2, ACPI_RTYPE_STRING, 2,0}, 0,0}},
+       {{"_MLS", 0, ACPI_RTYPE_PACKAGE}}, /* Variable-length (Pkgs) each (1 Str/1 Buf) */
+       {{{ACPI_PTYPE2, ACPI_RTYPE_STRING, 1, ACPI_RTYPE_BUFFER}, 1, 0}},

        {{"_MSG", 1, 0}},
        {{"_MSM", 4, ACPI_RTYPE_INTEGER}},
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -347,18 +347,21 @@ extern struct acpi_rsdump_info *acpi_gbl_dump_resource_dispatch[];
extern struct acpi_rsdump_info *acpi_gbl_dump_serial_bus_dispatch[];

/*
- * rsdump
+ * rsdumpinfo
 */
extern struct acpi_rsdump_info acpi_rs_dump_irq[];
extern struct acpi_rsdump_info acpi_rs_dump_prt[];
extern struct acpi_rsdump_info acpi_rs_dump_dma[];
extern struct acpi_rsdump_info acpi_rs_dump_start_dpf[];
extern struct acpi_rsdump_info acpi_rs_dump_end_dpf[];
extern struct acpi_rsdump_info acpi_rs_dump_io[];
extern struct acpi_rsdump_info acpi_rs_dump_io_flags[];
extern struct acpi_rsdump_info acpi_rs_dump_fixed_io[];
extern struct acpi_rsdump_info acpi_rs_dump_vendor[];
extern struct acpi_rsdump_info acpi_rs_dump_end_tag[];
extern struct acpi_rsdump_info acpi_rs_dump_memory24[];
extern struct acpi_rsdump_info acpi_rs_dump_memory32[];
extern struct acpi_rsdump_info acpi_rs_dump_memory_flags[];
extern struct acpi_rsdump_info acpi_rs_dump_fixed_memory32[];
extern struct acpi_rsdump_info acpi_rs_dump_address16[];
extern struct acpi_rsdump_info acpi_rs_dump_address32[];

@@ -372,6 +375,7 @@ extern struct acpi_rsdump_info acpi_rs_dump_common_serial_bus[];
extern struct acpi_rsdump_info acpi_rs_dump_i2c_serial_bus[];
extern struct acpi_rsdump_info acpi_rs_dump_spi_serial_bus[];
extern struct acpi_rsdump_info acpi_rs_dump_uart_serial_bus[];
+extern struct acpi_rsdump_info acpi_rs_dump_general_flags[];
#endif

#endif                          /* __ACRESRC_H__ */
@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
@@ -483,39 +483,17 @@ acpi_ut_short_divide(u64 in_dividend,
/*
 * utmisc
 */
-void ut_convert_backslashes(char *pathname);
-
const char *acpi_ut_validate_exception(acpi_status status);

u8 acpi_ut_is_pci_root_bridge(char *id);

u8 acpi_ut_is_aml_table(struct acpi_table_header *table);

-acpi_status acpi_ut_allocate_owner_id(acpi_owner_id * owner_id);
-
-void acpi_ut_release_owner_id(acpi_owner_id * owner_id);
-
acpi_status
acpi_ut_walk_package_tree(union acpi_operand_object *source_object,
                          void *target_object,
                          acpi_pkg_callback walk_callback, void *context);

-void acpi_ut_strupr(char *src_string);
-
-void acpi_ut_strlwr(char *src_string);
-
-int acpi_ut_stricmp(char *string1, char *string2);
-
-void acpi_ut_print_string(char *string, u8 max_length);
-
-u8 acpi_ut_valid_acpi_name(u32 name);
-
-void acpi_ut_repair_name(char *name);
-
-u8 acpi_ut_valid_acpi_char(char character, u32 position);
-
-acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer);
-
/* Values for Base above (16=Hex, 10=Decimal) */

#define ACPI_ANY_BASE        0

@@ -531,16 +509,26 @@ acpi_ut_display_init_pathname(u8 type,
                              char *path);
#endif

+/*
+ * utownerid - Support for Table/Method Owner IDs
+ */
+acpi_status acpi_ut_allocate_owner_id(acpi_owner_id * owner_id);
+
+void acpi_ut_release_owner_id(acpi_owner_id * owner_id);
+
/*
 * utresrc
 */
acpi_status
-acpi_ut_walk_aml_resources(u8 *aml,
+acpi_ut_walk_aml_resources(struct acpi_walk_state *walk_state,
+                           u8 *aml,
                           acpi_size aml_length,
                           acpi_walk_aml_callback user_function,
                           void **context);

-acpi_status acpi_ut_validate_resource(void *aml, u8 *return_index);
+acpi_status
+acpi_ut_validate_resource(struct acpi_walk_state *walk_state,
+                          void *aml, u8 *return_index);

u32 acpi_ut_get_descriptor_length(void *aml);

@@ -553,6 +541,27 @@ u8 acpi_ut_get_resource_type(void *aml);
acpi_status
acpi_ut_get_resource_end_tag(union acpi_operand_object *obj_desc, u8 **end_tag);

+/*
+ * utstring - String and character utilities
+ */
+void acpi_ut_strupr(char *src_string);
+
+void acpi_ut_strlwr(char *src_string);
+
+int acpi_ut_stricmp(char *string1, char *string2);
+
+acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer);
+
+void acpi_ut_print_string(char *string, u8 max_length);
+
+void ut_convert_backslashes(char *pathname);
+
+u8 acpi_ut_valid_acpi_name(u32 name);
+
+u8 acpi_ut_valid_acpi_char(char character, u32 position);
+
+void acpi_ut_repair_name(char *name);
+
/*
 * utmutex - mutex support
 */
@@ -7,7 +7,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -5,7 +5,7 @@
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.

@@ -199,6 +199,12 @@ struct aml_resource_fixed_dma {
struct aml_resource_large_header {
AML_RESOURCE_LARGE_HEADER_COMMON};

+/* General Flags for address space resource descriptors */
+
+#define ACPI_RESOURCE_FLAG_DEC      2
+#define ACPI_RESOURCE_FLAG_MIF      4
+#define ACPI_RESOURCE_FLAG_MAF      8
+
struct aml_resource_memory24 {
        AML_RESOURCE_LARGE_HEADER_COMMON u8 flags;
        u16 minimum;
@ -6,7 +6,7 @@
|
|||
*****************************************************************************/
|
||||
|
||||
/*
|
||||
* Copyright (C) 2000 - 2012, Intel Corp.
|
||||
* Copyright (C) 2000 - 2013, Intel Corp.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
|
|
|
@ -6,7 +6,7 @@
|
|||
*****************************************************************************/
|
||||
|
||||
/*
|
||||
* Copyright (C) 2000 - 2012, Intel Corp.
|
||||
* Copyright (C) 2000 - 2013, Intel Corp.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
*****************************************************************************/
|
||||
|
||||
/*
|
||||
* Copyright (C) 2000 - 2012, Intel Corp.
|
||||
* Copyright (C) 2000 - 2013, Intel Corp.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
*****************************************************************************/
|
||||
|
||||
/*
|
||||
* Copyright (C) 2000 - 2012, Intel Corp.
|
||||
* Copyright (C) 2000 - 2013, Intel Corp.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
|
|
|
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -47,7 +47,7 @@
 #include "acinterp.h"
 #include "acnamesp.h"
 #ifdef ACPI_DISASSEMBLER
-#include <acpi/acdisasm.h>
+#include "acdisasm.h"
 #endif
 
 #define _COMPONENT          ACPI_DISPATCHER
@@ -151,6 +151,7 @@ acpi_ds_create_method_mutex(union acpi_operand_object *method_desc)
 
         status = acpi_os_create_mutex(&mutex_desc->mutex.os_mutex);
         if (ACPI_FAILURE(status)) {
+                acpi_ut_delete_object_desc(mutex_desc);
                 return_ACPI_STATUS(status);
         }
 
@@ -378,7 +379,8 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
          */
         info = ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_evaluate_info));
         if (!info) {
-                return_ACPI_STATUS(AE_NO_MEMORY);
+                status = AE_NO_MEMORY;
+                goto cleanup;
         }
 
         info->parameters = &this_walk_state->operands[0];

@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -388,7 +388,7 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
         union acpi_parse_object *parent;
         union acpi_operand_object *obj_desc = NULL;
         acpi_status status = AE_OK;
-        unsigned i;
+        u32 i;
         u16 index;
         u16 reference_count;
 
@@ -525,7 +525,7 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
                 }
 
                 ACPI_INFO((AE_INFO,
-                           "Actual Package length (%u) is larger than NumElements field (%u), truncated\n",
+                           "Actual Package length (%u) is larger than NumElements field (%u), truncated",
                            i, element_count));
         } else if (i < element_count) {
                 /*
@@ -703,7 +703,7 @@ acpi_ds_init_object_from_op(struct acpi_walk_state *walk_state,
                 /* Truncate value if we are executing from a 32-bit ACPI table */
 
 #ifndef ACPI_NO_METHOD_EXECUTION
-                acpi_ex_truncate_for32bit_table(obj_desc);
+                (void)acpi_ex_truncate_for32bit_table(obj_desc);
 #endif
                 break;
 
@@ -725,8 +725,18 @@ acpi_ds_init_object_from_op(struct acpi_walk_state *walk_state,
         case AML_TYPE_LITERAL:
 
                 obj_desc->integer.value = op->common.value.integer;
+
 #ifndef ACPI_NO_METHOD_EXECUTION
-                acpi_ex_truncate_for32bit_table(obj_desc);
+                if (acpi_ex_truncate_for32bit_table(obj_desc)) {
+
+                        /* Warn if we found a 64-bit constant in a 32-bit table */
+
+                        ACPI_WARNING((AE_INFO,
+                                      "Truncated 64-bit constant found in 32-bit table: %8.8X%8.8X => %8.8X",
+                                      ACPI_FORMAT_UINT64(op->common.
+                                                         value.integer),
+                                      (u32)obj_desc->integer.value));
+                }
 #endif
                 break;
 
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -486,18 +486,18 @@ acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
         ACPI_FUNCTION_TRACE_PTR(ds_eval_table_region_operands, op);
 
         /*
-         * This is where we evaluate the signature_string and oem_iDString
-         * and oem_table_iDString of the data_table_region declaration
+         * This is where we evaluate the Signature string, oem_id string,
+         * and oem_table_id string of the Data Table Region declaration
          */
         node = op->common.node;
 
-        /* next_op points to signature_string op */
+        /* next_op points to Signature string op */
 
         next_op = op->common.value.arg;
 
         /*
-         * Evaluate/create the signature_string and oem_iDString
-         * and oem_table_iDString operands
+         * Evaluate/create the Signature string, oem_id string,
+         * and oem_table_id string operands
          */
         status = acpi_ds_create_operands(walk_state, next_op);
         if (ACPI_FAILURE(status)) {
@@ -505,8 +505,8 @@ acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
         }
 
         /*
-         * Resolve the signature_string and oem_iDString
-         * and oem_table_iDString operands
+         * Resolve the Signature string, oem_id string,
+         * and oem_table_id string operands
          */
         status = acpi_ex_resolve_operands(op->common.aml_opcode,
                                           ACPI_WALK_OPERANDS, walk_state);
@@ -5,7 +5,7 @@
  ******************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -178,7 +178,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
 
         if (!op) {
                 ACPI_ERROR((AE_INFO, "Null Op"));
-                return_UINT8(TRUE);
+                return_VALUE(TRUE);
         }
 
         /*
@@ -210,7 +210,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
                                   "At Method level, result of [%s] not used\n",
                                   acpi_ps_get_opcode_name(op->common.
                                                           aml_opcode)));
-                return_UINT8(FALSE);
+                return_VALUE(FALSE);
         }
 
         /* Get info on the parent. The root_op is AML_SCOPE */
@@ -219,7 +219,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
             acpi_ps_get_opcode_info(op->common.parent->common.aml_opcode);
         if (parent_info->class == AML_CLASS_UNKNOWN) {
                 ACPI_ERROR((AE_INFO, "Unknown parent opcode Op=%p", op));
-                return_UINT8(FALSE);
+                return_VALUE(FALSE);
         }
 
         /*
@@ -307,7 +307,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
                           acpi_ps_get_opcode_name(op->common.parent->common.
                                                   aml_opcode), op));
 
-        return_UINT8(TRUE);
+        return_VALUE(TRUE);
 
 result_not_used:
         ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
@@ -316,7 +316,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
                           acpi_ps_get_opcode_name(op->common.parent->common.
                                                   aml_opcode), op));
 
-        return_UINT8(FALSE);
+        return_VALUE(FALSE);
 }
 
 /*******************************************************************************
@@ -6,7 +6,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -149,7 +149,7 @@ acpi_ds_get_predicate_value(struct acpi_walk_state *walk_state,
 
         /* Truncate the predicate to 32-bits if necessary */
 
-        acpi_ex_truncate_for32bit_table(local_obj_desc);
+        (void)acpi_ex_truncate_for32bit_table(local_obj_desc);
 
         /*
          * Save the result of the predicate evaluation on
@@ -706,7 +706,7 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
                          * ACPI 2.0 support for 64-bit integers: Truncate numeric
                          * result value if we are executing from a 32-bit ACPI table
                          */
-                        acpi_ex_truncate_for32bit_table(walk_state->result_obj);
+                        (void)acpi_ex_truncate_for32bit_table(walk_state->result_obj);
 
                         /*
                          * Check if we just completed the evaluation of a
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -50,7 +50,7 @@
 #include "acnamesp.h"
 
 #ifdef ACPI_ASL_COMPILER
-#include <acpi/acdisasm.h>
+#include "acdisasm.h"
 #endif
 
 #define _COMPONENT          ACPI_DISPATCHER
@@ -178,7 +178,8 @@ acpi_ds_load1_begin_op(struct acpi_walk_state * walk_state,
                          * Target of Scope() not found. Generate an External for it, and
                          * insert the name into the namespace.
                          */
-                        acpi_dm_add_to_external_list(path, ACPI_TYPE_DEVICE, 0);
+                        acpi_dm_add_to_external_list(op, path, ACPI_TYPE_DEVICE,
+                                                     0);
                         status =
                             acpi_ns_lookup(walk_state->scope_info, path,
                                            object_type, ACPI_IMODE_LOAD_PASS1,

@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -222,7 +222,7 @@ acpi_ds_load2_begin_op(struct acpi_walk_state *walk_state,
                          */
                         ACPI_WARNING((AE_INFO,
                                       "Type override - [%4.4s] had invalid type (%s) "
-                                      "for Scope operator, changed to type ANY\n",
+                                      "for Scope operator, changed to type ANY",
                                       acpi_ut_get_node_name(node),
                                       acpi_ut_get_type_name(node->type)));
 
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without

@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without

@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without

@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -5,7 +5,7 @@
  *****************************************************************************/
 
 /*
- * Copyright (C) 2000 - 2012, Intel Corp.
+ * Copyright (C) 2000 - 2013, Intel Corp.
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -561,8 +561,8 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
                 status = AE_NO_MEMORY;
         } else {
                 /*
-                 * Invoke the GPE Method (_Lxx, _Exx) i.e., evaluate the _Lxx/_Exx
-                 * control method that corresponds to this GPE
+                 * Invoke the GPE Method (_Lxx, _Exx) i.e., evaluate the
+                 * _Lxx/_Exx control method that corresponds to this GPE
                  */
                 info->prefix_node =
                     local_gpe_event_info->dispatch.method_node;
@@ -707,7 +707,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
                 if (ACPI_FAILURE(status)) {
                         ACPI_EXCEPTION((AE_INFO, status,
                                         "Unable to clear GPE%02X", gpe_number));
-                        return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
+                        return_VALUE(ACPI_INTERRUPT_NOT_HANDLED);
                 }
         }
 
@@ -724,7 +724,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
                 if (ACPI_FAILURE(status)) {
                         ACPI_EXCEPTION((AE_INFO, status,
                                         "Unable to disable GPE%02X", gpe_number));
-                        return_UINT32(ACPI_INTERRUPT_NOT_HANDLED);
+                        return_VALUE(ACPI_INTERRUPT_NOT_HANDLED);
                 }
 
                 /*
@@ -765,7 +765,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
                                                     gpe_event_info);
                 if (ACPI_FAILURE(status)) {
                         ACPI_EXCEPTION((AE_INFO, status,
-                                        "Unable to queue handler for GPE%2X - event disabled",
+                                        "Unable to queue handler for GPE%02X - event disabled",
                                         gpe_number));
                 }
                 break;
@@ -784,7 +784,7 @@ acpi_ev_gpe_dispatch(struct acpi_namespace_node *gpe_device,
                 break;
         }
 
-        return_UINT32(ACPI_INTERRUPT_HANDLED);
+        return_VALUE(ACPI_INTERRUPT_HANDLED);
 }
 
 #endif                          /* !ACPI_REDUCED_HARDWARE */