Power management and ACPI updates for v4.4-rc1


Merge tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
 "Quite a few new features are included this time.

  First off, the Collaborative Processor Performance Control interface
  (version 2) defined by ACPI will now be supported on ARM64 along with
  a cpufreq frontend for CPU performance scaling.

  Second, ACPI gets a new infrastructure for the early probing of IRQ
  chips and clock sources (along the lines of the existing similar
  mechanism for DT).
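
  As a point of reference for the diffs below (a sketch, not a file from
  this series, although the function names are real), this is why they
  replace clocksource_of_init() with clocksource_probe(): the renamed
  helper now walks ACPI probe entries in addition to the DT match table,
  so an architecture's timer setup needs only one call:

      #include <linux/init.h>
      #include <linux/clk-provider.h>
      #include <linux/clocksource.h>

      void __init time_init(void)
      {
              of_clk_init(NULL);      /* register clocks first */

              /*
               * Probes both DT-declared and, now, ACPI-declared timers,
               * replacing the old clocksource_of_init() plus
               * acpi_generic_timer_init() pair (see the arm64 hunk below).
               */
              clocksource_probe();
      }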

  Next, the ACPI core and the generic device properties API will now
  support a recently introduced hierarchical properties extension of the
  _DSD (Device Specific Data) ACPI device configuration object.  If the
  ACPI platform firmware uses that extension to organize device
  properties in a hierarchical way, the kernel will automatically handle
  it and make those properties available to device drivers via the
  generic device properties API.
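
  As a rough illustration (a sketch under assumptions, not code from this
  series: the "mode" property and the probe helper are made up), a driver
  can walk such firmware-provided child nodes through the same generic
  API whether they come from DT or from hierarchical _DSD data:

      #include <linux/device.h>
      #include <linux/property.h>

      static int hypothetical_probe_children(struct device *dev)
      {
              struct fwnode_handle *child;
              const char *mode;

              /* Hierarchical _DSD data shows up as child firmware nodes. */
              device_for_each_child_node(dev, child) {
                      if (!fwnode_property_read_string(child, "mode", &mode))
                              dev_info(dev, "child node mode: %s\n", mode);
              }

              return 0;
      }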

  It will also be possible to build ACPICA's AML interpreter debugger
  into the kernel now and use it to diagnose AML-related problems more
  efficiently.  In the future, this should make it possible to
  single-step AML execution and do similar things.  Interesting stuff,
  although somewhat experimental at this point.
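
  The option added for this by the series is ACPI_DEBUGGER (see the
  drivers/acpi/Kconfig hunk below); it selects ACPI_DEBUG and, for now,
  only results in the ACPICA debugger files being compiled in:

      CONFIG_ACPI_DEBUGGER=y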

  Finally, the PM core gets a new mechanism that can be used by device
  drivers to distinguish between suspend-to-RAM (based on platform
  firmware support) and suspend-to-idle (or other variants of system
  suspend the platform firmware is not involved in) and possibly
  optimize their device suspend/resume handling accordingly.
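
  A minimal sketch of the idea (illustrative only: the driver and its
  re-init helper are made up, while pm_resume_via_firmware() and its
  pm_suspend_via_firmware() counterpart are the new helpers, and
  pm_complete_with_resume_check(), used in the ACPI LPSS hunk below,
  wraps the same check for a ->complete callback):

      #include <linux/device.h>
      #include <linux/suspend.h>

      static int hypothetical_reinit_hw(struct device *dev)
      {
              /* stands in for the driver's full re-initialization path */
              return 0;
      }

      static int hypothetical_driver_resume(struct device *dev)
      {
              /*
               * If platform firmware was not involved (e.g. suspend-to-idle),
               * device context was preserved and re-init can be skipped.
               */
              if (!pm_resume_via_firmware())
                      return 0;

              return hypothetical_reinit_hw(dev);
      }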

  In addition to that, some existing features are re-organized quite
  substantially.

  First, the ACPI-based handling of PCI host bridges on x86 and ia64 is
  unified and the common code goes into the ACPI core (so as to reduce
  code duplication and eliminate non-essential differences between the
  two architectures in that area).

  Second, the Operating Performance Points (OPP) framework is
  reorganized to make the code easier to find and follow.

  Next, the cpufreq core's sysfs interface is reorganized to get rid of
  the "primary CPU" concept for configurations in which the same
  performance scaling settings are shared between multiple CPUs.

  Finally, some interfaces that aren't necessary any more are dropped
  from the generic power domains framework.

  On top of the above we have some minor extensions, cleanups and bug
  fixes in multiple places, as usual.

  Specifics:

   - ACPICA update to upstream revision 20150930 (Bob Moore, Lv Zheng).

     The most significant change is to allow the AML debugger to be
     built into the kernel.  On top of that there is an update related
     to the NFIT table (the ACPI persistent memory interface) and a few
     fixes and cleanups.

   - ACPI CPPC2 (Collaborative Processor Performance Control v2) support
     along with a cpufreq frontend (Ashwin Chaugule).

     This can only be enabled on ARM64 at this point.

   - New ACPI infrastructure for the early probing of IRQ chips and
     clock sources (Marc Zyngier).

   - Support for a new hierarchical properties extension of the ACPI
     _DSD (Device Specific Data) device configuration object allowing
     the kernel to handle hierarchical properties (provided by the
     platform firmware this way) automatically and make them available
     to device drivers via the generic device properties interface
     (Rafael Wysocki).

   - Generic device properties API extension to obtain the index of a
     given string value in an array of strings, along the lines of
     of_property_match_string(), but working for all of the supported
     firmware node types, and support for the "dma-names" device
     property based on it (Mika Westerberg).  A usage sketch follows
     this list.

   - ACPI core fix to parse the MADT (Multiple APIC Description Table)
     entries in the order expected by platform firmware (and mandated by
     the specification) to avoid confusion on systems with more than 255
     logical CPUs (Lukasz Anaczkowski).

   - Consolidation of the ACPI-based handling of PCI host bridges on x86
     and ia64 (Jiang Liu).

   - ACPI core fixes to ensure that the correct IRQ number is used to
     represent the SCI (System Control Interrupt) in the cases when it
     has been re-mapped (Chen Yu).

   - New ACPI backlight quirk for Lenovo IdeaPad S405 (Hans de Goede).

   - ACPI EC driver fixes (Lv Zheng).

   - Assorted ACPI fixes and cleanups (Dan Carpenter, Insu Yun, Jiri
     Kosina, Rami Rosen, Rasmus Villemoes).

   - New mechanism in the PM core allowing drivers to check if the
     platform firmware is going to be involved in the upcoming system
     suspend or if it has been involved in the suspend the system is
     resuming from at the moment (Rafael Wysocki).

     This should allow drivers to optimize their suspend/resume handling
     in some cases and the changes include a couple of users of it (the
     i8042 input driver, PCI PM).

   - PCI PM fix to prevent runtime-suspended devices with PME enabled
     from being resumed during system suspend even if they aren't
     configured to wake up the system from sleep (Rafael Wysocki).

   - New mechanism to report the number of a wakeup IRQ that woke up the
     system from sleep last time (Alexandra Yates).

   - Removal of unused interfaces from the generic power domains
     framework and fixes related to latency measurements in that code
     (Ulf Hansson, Daniel Lezcano).

   - cpufreq core sysfs interface rework to make it handle CPUs that
     share performance scaling settings (represented by a common cpufreq
     policy object) more symmetrically (Viresh Kumar).

     This should help to simplify the CPU offline/online handling among
     other things.

   - cpufreq core fixes and cleanups (Viresh Kumar).

   - intel_pstate fixes related to the Turbo Activation Ratio (TAR)
     mechanism on client platforms, which causes the turbo P-states range
     to vary depending on platform firmware settings (Srinivas
     Pandruvada).

   - intel_pstate sysfs interface fix (Prarit Bhargava).

   - Assorted cpufreq driver (imx, tegra20, powernv, integrator) fixes
     and cleanups (Bai Ping, Bartlomiej Zolnierkiewicz, Shilpasri G
     Bhat, Luis de Bethencourt).

   - cpuidle mvebu driver cleanups (Russell King).

   - OPP (Operating Performance Points) framework code reorganization to
     make it more maintainable (Viresh Kumar).

   - Intel Broxton support for the RAPL (Running Average Power Limits)
     power capping driver (Amy Wiles).

   - Assorted power management code fixes and cleanups (Dan Carpenter,
     Geert Uytterhoeven, Geliang Tang, Luis de Bethencourt, Rasmus
     Villemoes)"
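
As referenced in the "dma-names" item above, a usage sketch of the new
string-index lookup (illustrative only: the "dma-names" property is the one
the series wires up, while the device and helper here are made up):

    #include <linux/device.h>
    #include <linux/property.h>

    static int hypothetical_rx_dma_index(struct device *dev)
    {
            /*
             * Returns the array index of "rx" within the "dma-names"
             * string list, or a negative error code, for any supported
             * firmware node type (DT or ACPI _DSD).
             */
            return device_property_match_string(dev, "dma-names", "rx");
    }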

* tag 'pm+acpi-4.4-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (108 commits)
  cpufreq: postfix policy directory with the first CPU in related_cpus
  cpufreq: create cpu/cpufreq/policyX directories
  cpufreq: remove cpufreq_sysfs_{create|remove}_file()
  cpufreq: create cpu/cpufreq at boot time
  cpufreq: Use cpumask_copy instead of cpumask_or to copy a mask
  cpufreq: ondemand: Drop unnecessary locks from update_sampling_rate()
  PM / Domains: Merge measurements for PM QoS device latencies
  PM / Domains: Don't measure ->start|stop() latency in system PM callbacks
  PM / clk: Fix broken build due to non-matching code and header #ifdefs
  ACPI / Documentation: add copy_dsdt to ACPI format options
  ACPI / sysfs: correctly check failing memory allocation
  ACPI / video: Add a quirk to force native backlight on Lenovo IdeaPad S405
  ACPI / CPPC: Fix potential memory leak
  ACPI / CPPC: signedness bug in register_pcc_channel()
  ACPI / PAD: power_saving_thread() is not freezable
  ACPI / PM: Fix incorrect wakeup IRQ setting during suspend-to-idle
  ACPI: Using correct irq when waiting for events
  ACPI: Use correct IRQ when uninstalling ACPI interrupt handler
  cpuidle: mvebu: disable the bind/unbind attributes and use builtin_platform_driver
  cpuidle: mvebu: clean up multiple platform drivers
  ...
Linus Torvalds 2015-11-04 18:10:13 -08:00
commit 0d51ce9ca1
175 changed files with 13973 additions and 2210 deletions


@ -256,3 +256,15 @@ Description:
Writing a "1" enables this printing while writing a "0"
disables it. The default value is "0". Reading from this file
will display the current value.
What: /sys/power/pm_wakeup_irq
Date: April 2015
Contact: Alexandra Yates <alexandra.yates@linux.intel.org>
Description:
The /sys/power/pm_wakeup_irq file reports to user space the IRQ
number of the first wakeup interrupt (that is, the first
interrupt from an IRQ line armed for system wakeup) seen by the
kernel during the most recent system suspend/resume cycle.
This output is useful for system wakeup diagnostics of spurious
wakeup interrupts.
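
A trivial userspace sketch (not part of this commit) of consuming the
attribute described above:

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/sys/power/pm_wakeup_irq", "r");
            int irq;

            if (!f) {
                    perror("/sys/power/pm_wakeup_irq");
                    return 1;
            }
            if (fscanf(f, "%d", &irq) == 1)
                    printf("last wakeup IRQ: %d\n", irq);
            fclose(f);
            return 0;
    }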


@ -167,7 +167,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
acpi= [HW,ACPI,X86,ARM64]
Advanced Configuration and Power Interface
Format: { force | off | strict | noirq | rsdt }
Format: { force | off | strict | noirq | rsdt |
copy_dsdt }
force -- enable ACPI if default was off
off -- disable ACPI if default was on
noirq -- do not use ACPI for IRQ routing
@ -1561,6 +1562,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
hwp_only
Only load intel_pstate on systems which support
hardware P state control (HWP) if available.
no_acpi
Don't use ACPI processor performance control objects
_PSS and _PPC specified limits.
intremap= [X86-64, Intel-IOMMU]
on enable Interrupt Remapping (default)


@ -120,6 +120,6 @@ void __init time_init(void)
#ifdef CONFIG_COMMON_CLK
of_clk_init(NULL);
#endif
clocksource_of_init();
clocksource_probe();
}
}


@ -350,7 +350,7 @@ static void __init imx6q_opp_init(void)
return;
}
if (of_init_opp_table(cpu_dev)) {
if (dev_pm_opp_of_add_table(cpu_dev)) {
pr_warn("failed to init OPP table\n");
goto put_node;
}


@ -647,7 +647,7 @@ static OMAP_SYS_32K_TIMER_INIT(4, 1, "timer_32k_ck", "ti,timer-alwon",
void __init omap4_local_timer_init(void)
{
omap4_sync32k_timer_init();
clocksource_of_init();
clocksource_probe();
}
#else
void __init omap4_local_timer_init(void)
@ -663,7 +663,7 @@ void __init omap5_realtime_timer_init(void)
omap4_sync32k_timer_init();
realtime_counter_init();
clocksource_of_init();
clocksource_probe();
}
#endif /* CONFIG_SOC_OMAP5 || CONFIG_SOC_DRA7XX */


@ -67,7 +67,7 @@ static void __init rockchip_timer_init(void)
}
of_clk_init(NULL);
clocksource_of_init();
clocksource_probe();
}
static void __init rockchip_dt_init(void)


@ -97,7 +97,7 @@ static u32 __init r8a7779_read_mode_pins(void)
static void __init r8a7779_init_time(void)
{
r8a7779_clocks_init(r8a7779_read_mode_pins());
clocksource_of_init();
clocksource_probe();
}
static const char *const r8a7779_compat_dt[] __initconst = {


@ -128,7 +128,7 @@ void __init rcar_gen2_timer_init(void)
#endif /* CONFIG_ARM_ARCH_TIMER */
rcar_gen2_clocks_init(mode);
clocksource_of_init();
clocksource_probe();
}
struct memory_reserve_config {


@ -124,5 +124,5 @@ void __init spear13xx_timer_init(void)
clk_put(pclk);
spear_setup_of_timer();
clocksource_of_init();
clocksource_probe();
}


@ -46,7 +46,7 @@ static void __init sun6i_timer_init(void)
of_clk_init(NULL);
if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
sun6i_reset_init();
clocksource_of_init();
clocksource_probe();
}
DT_MACHINE_START(SUN6I_DT, "Allwinner sun6i (A31) Family")


@ -408,7 +408,7 @@ static const char * u300_board_compat[] = {
DT_MACHINE_START(U300_DT, "U300 S335/B335 (Device Tree)")
.map_io = u300_map_io,
.init_irq = u300_init_irq_dt,
.init_time = clocksource_of_init,
.init_time = clocksource_probe,
.init_machine = u300_init_machine_dt,
.restart = u300_restart,
.dt_compat = u300_board_compat,


@ -44,5 +44,5 @@ void __init ux500_timer_init(void)
dt_fail:
clksrc_dbx500_prcmu_init(prcmu_timer_base);
clocksource_of_init();
clocksource_probe();
}


@ -154,7 +154,7 @@ static void __init zynq_timer_init(void)
zynq_clock_init();
of_clk_init(NULL);
clocksource_of_init();
clocksource_probe();
}
static struct map_desc zynq_cortex_a9_scu_map __initdata = {


@ -12,7 +12,6 @@
#ifndef _ASM_ACPI_H
#define _ASM_ACPI_H
#include <linux/irqchip/arm-gic-acpi.h>
#include <linux/mm.h>
#include <linux/psci.h>


@ -1,23 +1,10 @@
#ifndef __ASM_IRQ_H
#define __ASM_IRQ_H
#include <linux/irqchip/arm-gic-acpi.h>
#include <asm-generic/irq.h>
struct pt_regs;
extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
static inline void acpi_irq_init(void)
{
/*
* Hardcode ACPI IRQ chip initialization to GICv2 for now.
* Proper irqchip infrastructure will be implemented along with
* incoming GICv2m|GICv3|ITS bits.
*/
acpi_gic_init();
}
#define acpi_irq_init acpi_irq_init
#endif


@ -211,31 +211,6 @@ void __init acpi_boot_table_init(void)
}
}
void __init acpi_gic_init(void)
{
struct acpi_table_header *table;
acpi_status status;
acpi_size tbl_size;
int err;
if (acpi_disabled)
return;
status = acpi_get_table_with_size(ACPI_SIG_MADT, 0, &table, &tbl_size);
if (ACPI_FAILURE(status)) {
const char *msg = acpi_format_exception(status);
pr_err("Failed to get MADT table, %s\n", msg);
return;
}
err = gic_v2_acpi_init(table);
if (err)
pr_err("Failed to initialize GIC IRQ controller");
early_acpi_os_unmap_memory((char *)table, tbl_size);
}
#ifdef CONFIG_ACPI_APEI
pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
{


@ -67,16 +67,10 @@ void __init time_init(void)
u32 arch_timer_rate;
of_clk_init(NULL);
clocksource_of_init();
clocksource_probe();
tick_setup_hrtimer_broadcast();
/*
* Since ACPI or FDT will only one be available in the system,
* we can use acpi_generic_timer_init() here safely
*/
acpi_generic_timer_init();
arch_timer_rate = arch_timer_get_rate();
if (!arch_timer_rate)
panic("Unable to initialise architected timer.\n");


@ -64,11 +64,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
#define pci_legacy_read platform_pci_legacy_read
#define pci_legacy_write platform_pci_legacy_write
struct iospace_resource {
struct list_head list;
struct resource res;
};
struct pci_controller {
struct acpi_device *companion;
void *iommu;


@ -115,33 +115,13 @@ struct pci_ops pci_root_ops = {
.write = pci_write,
};
/* Called by ACPI when it finds a new root bus. */
static struct pci_controller *alloc_pci_controller(int seg)
{
struct pci_controller *controller;
controller = kzalloc(sizeof(*controller), GFP_KERNEL);
if (!controller)
return NULL;
controller->segment = seg;
return controller;
}
struct pci_root_info {
struct acpi_device *bridge;
struct pci_controller *controller;
struct list_head resources;
struct resource *res;
resource_size_t *res_offset;
unsigned int res_num;
struct acpi_pci_root_info common;
struct pci_controller controller;
struct list_head io_resources;
char *name;
};
static unsigned int
new_space (u64 phys_base, int sparse)
static unsigned int new_space(u64 phys_base, int sparse)
{
u64 mmio_base;
int i;
@ -168,39 +148,36 @@ new_space (u64 phys_base, int sparse)
return i;
}
static u64 add_io_space(struct pci_root_info *info,
struct acpi_resource_address64 *addr)
static int add_io_space(struct device *dev, struct pci_root_info *info,
struct resource_entry *entry)
{
struct iospace_resource *iospace;
struct resource *resource;
struct resource_entry *iospace;
struct resource *resource, *res = entry->res;
char *name;
unsigned long base, min, max, base_port;
unsigned int sparse = 0, space_nr, len;
len = strlen(info->name) + 32;
iospace = kzalloc(sizeof(*iospace) + len, GFP_KERNEL);
len = strlen(info->common.name) + 32;
iospace = resource_list_create_entry(NULL, len);
if (!iospace) {
dev_err(&info->bridge->dev,
"PCI: No memory for %s I/O port space\n",
info->name);
goto out;
dev_err(dev, "PCI: No memory for %s I/O port space\n",
info->common.name);
return -ENOMEM;
}
name = (char *)(iospace + 1);
min = addr->address.minimum;
max = min + addr->address.address_length - 1;
if (addr->info.io.translation_type == ACPI_SPARSE_TRANSLATION)
if (res->flags & IORESOURCE_IO_SPARSE)
sparse = 1;
space_nr = new_space(addr->address.translation_offset, sparse);
space_nr = new_space(entry->offset, sparse);
if (space_nr == ~0)
goto free_resource;
name = (char *)(iospace + 1);
min = res->start - entry->offset;
max = res->end - entry->offset;
base = __pa(io_space[space_nr].mmio_base);
base_port = IO_SPACE_BASE(space_nr);
snprintf(name, len, "%s I/O Ports %08lx-%08lx", info->name,
base_port + min, base_port + max);
snprintf(name, len, "%s I/O Ports %08lx-%08lx", info->common.name,
base_port + min, base_port + max);
/*
* The SDM guarantees the legacy 0-64K space is sparse, but if the
@ -210,270 +187,125 @@ static u64 add_io_space(struct pci_root_info *info,
if (space_nr == 0)
sparse = 1;
resource = &iospace->res;
resource = iospace->res;
resource->name = name;
resource->flags = IORESOURCE_MEM;
resource->start = base + (sparse ? IO_SPACE_SPARSE_ENCODING(min) : min);
resource->end = base + (sparse ? IO_SPACE_SPARSE_ENCODING(max) : max);
if (insert_resource(&iomem_resource, resource)) {
dev_err(&info->bridge->dev,
"can't allocate host bridge io space resource %pR\n",
resource);
dev_err(dev,
"can't allocate host bridge io space resource %pR\n",
resource);
goto free_resource;
}
list_add_tail(&iospace->list, &info->io_resources);
return base_port;
entry->offset = base_port;
res->start = min + base_port;
res->end = max + base_port;
resource_list_add_tail(iospace, &info->io_resources);
return 0;
free_resource:
kfree(iospace);
out:
return ~0;
resource_list_free_entry(iospace);
return -ENOSPC;
}
static acpi_status resource_to_window(struct acpi_resource *resource,
struct acpi_resource_address64 *addr)
/*
* An IO port or MMIO resource assigned to a PCI host bridge may be
* consumed by the host bridge itself or available to its child
* bus/devices. The ACPI specification defines a bit (Producer/Consumer)
* to tell whether the resource is consumed by the host bridge itself,
* but firmware hasn't used that bit consistently, so we can't rely on it.
*
* On x86 and IA64 platforms, all IO port and MMIO resources are assumed
* to be available to child bus/devices except one special case:
* IO port [0xCF8-0xCFF] is consumed by the host bridge itself
* to access PCI configuration space.
*
* So explicitly filter out PCI CFG IO ports[0xCF8-0xCFF].
*/
static bool resource_is_pcicfg_ioport(struct resource *res)
{
acpi_status status;
/*
* We're only interested in _CRS descriptors that are
* - address space descriptors for memory or I/O space
* - non-zero size
*/
status = acpi_resource_to_address64(resource, addr);
if (ACPI_SUCCESS(status) &&
(addr->resource_type == ACPI_MEMORY_RANGE ||
addr->resource_type == ACPI_IO_RANGE) &&
addr->address.address_length)
return AE_OK;
return AE_ERROR;
return (res->flags & IORESOURCE_IO) &&
res->start == 0xCF8 && res->end == 0xCFF;
}
static acpi_status count_window(struct acpi_resource *resource, void *data)
static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
{
unsigned int *windows = (unsigned int *) data;
struct acpi_resource_address64 addr;
acpi_status status;
status = resource_to_window(resource, &addr);
if (ACPI_SUCCESS(status))
(*windows)++;
return AE_OK;
}
static acpi_status add_window(struct acpi_resource *res, void *data)
{
struct pci_root_info *info = data;
struct resource *resource;
struct acpi_resource_address64 addr;
acpi_status status;
unsigned long flags, offset = 0;
struct resource *root;
/* Return AE_OK for non-window resources to keep scanning for more */
status = resource_to_window(res, &addr);
if (!ACPI_SUCCESS(status))
return AE_OK;
if (addr.resource_type == ACPI_MEMORY_RANGE) {
flags = IORESOURCE_MEM;
root = &iomem_resource;
offset = addr.address.translation_offset;
} else if (addr.resource_type == ACPI_IO_RANGE) {
flags = IORESOURCE_IO;
root = &ioport_resource;
offset = add_io_space(info, &addr);
if (offset == ~0)
return AE_OK;
} else
return AE_OK;
resource = &info->res[info->res_num];
resource->name = info->name;
resource->flags = flags;
resource->start = addr.address.minimum + offset;
resource->end = resource->start + addr.address.address_length - 1;
info->res_offset[info->res_num] = offset;
if (insert_resource(root, resource)) {
dev_err(&info->bridge->dev,
"can't allocate host bridge window %pR\n",
resource);
} else {
if (offset)
dev_info(&info->bridge->dev, "host bridge window %pR "
"(PCI address [%#llx-%#llx])\n",
resource,
resource->start - offset,
resource->end - offset);
else
dev_info(&info->bridge->dev,
"host bridge window %pR\n", resource);
}
/* HP's firmware has a hack to work around a Windows bug.
* Ignore these tiny memory ranges */
if (!((resource->flags & IORESOURCE_MEM) &&
(resource->end - resource->start < 16)))
pci_add_resource_offset(&info->resources, resource,
info->res_offset[info->res_num]);
info->res_num++;
return AE_OK;
}
static void free_pci_root_info_res(struct pci_root_info *info)
{
struct iospace_resource *iospace, *tmp;
list_for_each_entry_safe(iospace, tmp, &info->io_resources, list)
kfree(iospace);
kfree(info->name);
kfree(info->res);
info->res = NULL;
kfree(info->res_offset);
info->res_offset = NULL;
info->res_num = 0;
kfree(info->controller);
info->controller = NULL;
}
static void __release_pci_root_info(struct pci_root_info *info)
{
int i;
struct device *dev = &ci->bridge->dev;
struct pci_root_info *info;
struct resource *res;
struct iospace_resource *iospace;
struct resource_entry *entry, *tmp;
int status;
list_for_each_entry(iospace, &info->io_resources, list)
release_resource(&iospace->res);
for (i = 0; i < info->res_num; i++) {
res = &info->res[i];
if (!res->parent)
continue;
if (!(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
continue;
release_resource(res);
status = acpi_pci_probe_root_resources(ci);
if (status > 0) {
info = container_of(ci, struct pci_root_info, common);
resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
res = entry->res;
if (res->flags & IORESOURCE_MEM) {
/*
* HP's firmware has a hack to work around a
* Windows bug. Ignore these tiny memory ranges.
*/
if (resource_size(res) <= 16) {
resource_list_del(entry);
insert_resource(&iomem_resource,
entry->res);
resource_list_add_tail(entry,
&info->io_resources);
}
} else if (res->flags & IORESOURCE_IO) {
if (resource_is_pcicfg_ioport(entry->res))
resource_list_destroy_entry(entry);
else if (add_io_space(dev, info, entry))
resource_list_destroy_entry(entry);
}
}
}
free_pci_root_info_res(info);
return status;
}
static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci)
{
struct pci_root_info *info;
struct resource_entry *entry, *tmp;
info = container_of(ci, struct pci_root_info, common);
resource_list_for_each_entry_safe(entry, tmp, &info->io_resources) {
release_resource(entry->res);
resource_list_destroy_entry(entry);
}
kfree(info);
}
static void release_pci_root_info(struct pci_host_bridge *bridge)
{
struct pci_root_info *info = bridge->release_data;
__release_pci_root_info(info);
}
static int
probe_pci_root_info(struct pci_root_info *info, struct acpi_device *device,
int busnum, int domain)
{
char *name;
name = kmalloc(16, GFP_KERNEL);
if (!name)
return -ENOMEM;
sprintf(name, "PCI Bus %04x:%02x", domain, busnum);
info->bridge = device;
info->name = name;
acpi_walk_resources(device->handle, METHOD_NAME__CRS, count_window,
&info->res_num);
if (info->res_num) {
info->res =
kzalloc_node(sizeof(*info->res) * info->res_num,
GFP_KERNEL, info->controller->node);
if (!info->res) {
kfree(name);
return -ENOMEM;
}
info->res_offset =
kzalloc_node(sizeof(*info->res_offset) * info->res_num,
GFP_KERNEL, info->controller->node);
if (!info->res_offset) {
kfree(name);
kfree(info->res);
info->res = NULL;
return -ENOMEM;
}
info->res_num = 0;
acpi_walk_resources(device->handle, METHOD_NAME__CRS,
add_window, info);
} else
kfree(name);
return 0;
}
static struct acpi_pci_root_ops pci_acpi_root_ops = {
.pci_ops = &pci_root_ops,
.release_info = pci_acpi_root_release_info,
.prepare_resources = pci_acpi_root_prepare_resources,
};
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
struct acpi_device *device = root->device;
int domain = root->segment;
int bus = root->secondary.start;
struct pci_controller *controller;
struct pci_root_info *info = NULL;
int busnum = root->secondary.start;
struct pci_bus *pbus;
int ret;
controller = alloc_pci_controller(domain);
if (!controller)
return NULL;
controller->companion = device;
controller->node = acpi_get_node(device->handle);
struct pci_root_info *info;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
dev_err(&device->dev,
"pci_bus %04x:%02x: ignored (out of memory)\n",
domain, busnum);
kfree(controller);
"pci_bus %04x:%02x: ignored (out of memory)\n",
root->segment, (int)root->secondary.start);
return NULL;
}
info->controller = controller;
info->controller.segment = root->segment;
info->controller.companion = device;
info->controller.node = acpi_get_node(device->handle);
INIT_LIST_HEAD(&info->io_resources);
INIT_LIST_HEAD(&info->resources);
ret = probe_pci_root_info(info, device, busnum, domain);
if (ret) {
kfree(info->controller);
kfree(info);
return NULL;
}
/* insert busn resource at first */
pci_add_resource(&info->resources, &root->secondary);
/*
* See arch/x86/pci/acpi.c.
* The desired pci bus might already be scanned in a quirk. We
* should handle the case here, but it appears that IA64 hasn't
* such quirk. So we just ignore the case now.
*/
pbus = pci_create_root_bus(NULL, bus, &pci_root_ops, controller,
&info->resources);
if (!pbus) {
pci_free_resource_list(&info->resources);
__release_pci_root_info(info);
return NULL;
}
pci_set_host_bridge_release(to_pci_host_bridge(pbus->bridge),
release_pci_root_info, info);
pci_scan_child_bus(pbus);
return pbus;
return acpi_pci_root_create(root, &pci_acpi_root_ops,
&info->common, &info->controller);
}
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)


@ -194,7 +194,7 @@ void __init time_init(void)
{
of_clk_init(NULL);
setup_cpuinfo_clk();
clocksource_of_init();
clocksource_probe();
}
#ifdef CONFIG_DEBUG_FS


@ -39,7 +39,7 @@ void __init plat_time_init(void)
struct clk *clk;
of_clk_init(NULL);
clocksource_of_init();
clocksource_probe();
np = of_get_cpu_node(0, NULL);
if (!np) {


@ -75,5 +75,5 @@ void __init plat_time_init(void)
pr_info("CPU Clock: %ldMHz\n", clk_get_rate(clk) / 1000000);
mips_hpt_frequency = clk_get_rate(clk) / 2;
clk_put(clk);
clocksource_of_init();
clocksource_probe();
}


@ -324,7 +324,7 @@ void __init time_init(void)
if (count < 2)
panic("%d timer is found, it needs 2 timers in system\n", count);
clocksource_of_init();
clocksource_probe();
}
CLOCKSOURCE_OF_DECLARE(nios2_timer, ALTR_TIMER_COMPATIBLE, nios2_time_init);


@ -206,6 +206,13 @@
#define MSR_GFX_PERF_LIMIT_REASONS 0x000006B0
#define MSR_RING_PERF_LIMIT_REASONS 0x000006B1
/* Config TDP MSRs */
#define MSR_CONFIG_TDP_NOMINAL 0x00000648
#define MSR_CONFIG_TDP_LEVEL1 0x00000649
#define MSR_CONFIG_TDP_LEVEL2 0x0000064A
#define MSR_CONFIG_TDP_CONTROL 0x0000064B
#define MSR_TURBO_ACTIVATION_RATIO 0x0000064C
/* Hardware P state interface */
#define MSR_PPERF 0x0000064e
#define MSR_PERF_LIMIT_REASONS 0x0000064f


@ -976,6 +976,8 @@ static int __init acpi_parse_madt_lapic_entries(void)
{
int count;
int x2count = 0;
int ret;
struct acpi_subtable_proc madt_proc[2];
if (!cpu_has_apic)
return -ENODEV;
@ -999,10 +1001,22 @@ static int __init acpi_parse_madt_lapic_entries(void)
acpi_parse_sapic, MAX_LOCAL_APIC);
if (!count) {
x2count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_X2APIC,
acpi_parse_x2apic, MAX_LOCAL_APIC);
count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
acpi_parse_lapic, MAX_LOCAL_APIC);
memset(madt_proc, 0, sizeof(madt_proc));
madt_proc[0].id = ACPI_MADT_TYPE_LOCAL_APIC;
madt_proc[0].handler = acpi_parse_lapic;
madt_proc[1].id = ACPI_MADT_TYPE_LOCAL_X2APIC;
madt_proc[1].handler = acpi_parse_x2apic;
ret = acpi_table_parse_entries_array(ACPI_SIG_MADT,
sizeof(struct acpi_table_madt),
madt_proc, ARRAY_SIZE(madt_proc), MAX_LOCAL_APIC);
if (ret < 0) {
printk(KERN_ERR PREFIX
"Error parsing LAPIC/X2APIC entries\n");
return ret;
}
x2count = madt_proc[0].count;
count = madt_proc[1].count;
}
if (!count && !x2count) {
printk(KERN_ERR PREFIX "No LAPIC entries present\n");


@ -4,16 +4,15 @@
#include <linux/irq.h>
#include <linux/dmi.h>
#include <linux/slab.h>
#include <linux/pci-acpi.h>
#include <asm/numa.h>
#include <asm/pci_x86.h>
struct pci_root_info {
struct acpi_device *bridge;
char name[16];
struct acpi_pci_root_info common;
struct pci_sysdata sd;
#ifdef CONFIG_PCI_MMCONFIG
bool mcfg_added;
u16 segment;
u8 start_bus;
u8 end_bus;
#endif
@ -178,15 +177,18 @@ static int check_segment(u16 seg, struct device *dev, char *estr)
return 0;
}
static int setup_mcfg_map(struct pci_root_info *info, u16 seg, u8 start,
u8 end, phys_addr_t addr)
static int setup_mcfg_map(struct acpi_pci_root_info *ci)
{
int result;
struct device *dev = &info->bridge->dev;
int result, seg;
struct pci_root_info *info;
struct acpi_pci_root *root = ci->root;
struct device *dev = &ci->bridge->dev;
info->start_bus = start;
info->end_bus = end;
info = container_of(ci, struct pci_root_info, common);
info->start_bus = (u8)root->secondary.start;
info->end_bus = (u8)root->secondary.end;
info->mcfg_added = false;
seg = info->sd.domain;
/* return success if MMCFG is not in use */
if (raw_pci_ext_ops && raw_pci_ext_ops != &pci_mmcfg)
@ -195,7 +197,8 @@ static int setup_mcfg_map(struct pci_root_info *info, u16 seg, u8 start,
if (!(pci_probe & PCI_PROBE_MMCONF))
return check_segment(seg, dev, "MMCONFIG is disabled,");
result = pci_mmconfig_insert(dev, seg, start, end, addr);
result = pci_mmconfig_insert(dev, seg, info->start_bus, info->end_bus,
root->mcfg_addr);
if (result == 0) {
/* enable MMCFG if it hasn't been enabled yet */
if (raw_pci_ext_ops == NULL)
@ -208,134 +211,55 @@ static int setup_mcfg_map(struct pci_root_info *info, u16 seg, u8 start,
return 0;
}
static void teardown_mcfg_map(struct pci_root_info *info)
static void teardown_mcfg_map(struct acpi_pci_root_info *ci)
{
struct pci_root_info *info;
info = container_of(ci, struct pci_root_info, common);
if (info->mcfg_added) {
pci_mmconfig_delete(info->segment, info->start_bus,
info->end_bus);
pci_mmconfig_delete(info->sd.domain,
info->start_bus, info->end_bus);
info->mcfg_added = false;
}
}
#else
static int setup_mcfg_map(struct pci_root_info *info,
u16 seg, u8 start, u8 end,
phys_addr_t addr)
static int setup_mcfg_map(struct acpi_pci_root_info *ci)
{
return 0;
}
static void teardown_mcfg_map(struct pci_root_info *info)
static void teardown_mcfg_map(struct acpi_pci_root_info *ci)
{
}
#endif
static void validate_resources(struct device *dev, struct list_head *crs_res,
unsigned long type)
static int pci_acpi_root_get_node(struct acpi_pci_root *root)
{
LIST_HEAD(list);
struct resource *res1, *res2, *root = NULL;
struct resource_entry *tmp, *entry, *entry2;
int busnum = root->secondary.start;
struct acpi_device *device = root->device;
int node = acpi_get_node(device->handle);
BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0);
root = (type & IORESOURCE_MEM) ? &iomem_resource : &ioport_resource;
list_splice_init(crs_res, &list);
resource_list_for_each_entry_safe(entry, tmp, &list) {
bool free = false;
resource_size_t end;
res1 = entry->res;
if (!(res1->flags & type))
goto next;
/* Exclude non-addressable range or non-addressable portion */
end = min(res1->end, root->end);
if (end <= res1->start) {
dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n",
res1);
free = true;
goto next;
} else if (res1->end != end) {
dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n",
res1, (unsigned long long)end + 1,
(unsigned long long)res1->end);
res1->end = end;
}
resource_list_for_each_entry(entry2, crs_res) {
res2 = entry2->res;
if (!(res2->flags & type))
continue;
/*
* I don't like throwing away windows because then
* our resources no longer match the ACPI _CRS, but
* the kernel resource tree doesn't allow overlaps.
*/
if (resource_overlaps(res1, res2)) {
res2->start = min(res1->start, res2->start);
res2->end = max(res1->end, res2->end);
dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n",
res2, res1);
free = true;
goto next;
}
}
next:
resource_list_del(entry);
if (free)
resource_list_free_entry(entry);
else
resource_list_add_tail(entry, crs_res);
if (node == NUMA_NO_NODE) {
node = x86_pci_root_bus_node(busnum);
if (node != 0 && node != NUMA_NO_NODE)
dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n",
node);
}
if (node != NUMA_NO_NODE && !node_online(node))
node = NUMA_NO_NODE;
return node;
}
static void add_resources(struct pci_root_info *info,
struct list_head *resources,
struct list_head *crs_res)
static int pci_acpi_root_init_info(struct acpi_pci_root_info *ci)
{
struct resource_entry *entry, *tmp;
struct resource *res, *conflict, *root = NULL;
validate_resources(&info->bridge->dev, crs_res, IORESOURCE_MEM);
validate_resources(&info->bridge->dev, crs_res, IORESOURCE_IO);
resource_list_for_each_entry_safe(entry, tmp, crs_res) {
res = entry->res;
if (res->flags & IORESOURCE_MEM)
root = &iomem_resource;
else if (res->flags & IORESOURCE_IO)
root = &ioport_resource;
else
BUG_ON(res);
conflict = insert_resource_conflict(root, res);
if (conflict) {
dev_info(&info->bridge->dev,
"ignoring host bridge window %pR (conflicts with %s %pR)\n",
res, conflict->name, conflict);
resource_list_destroy_entry(entry);
}
}
list_splice_tail(crs_res, resources);
return setup_mcfg_map(ci);
}
static void release_pci_root_info(struct pci_host_bridge *bridge)
static void pci_acpi_root_release_info(struct acpi_pci_root_info *ci)
{
struct resource *res;
struct resource_entry *entry;
struct pci_root_info *info = bridge->release_data;
resource_list_for_each_entry(entry, &bridge->windows) {
res = entry->res;
if (res->parent &&
(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
release_resource(res);
}
teardown_mcfg_map(info);
kfree(info);
teardown_mcfg_map(ci);
kfree(container_of(ci, struct pci_root_info, common));
}
/*
@ -358,50 +282,47 @@ static bool resource_is_pcicfg_ioport(struct resource *res)
res->start == 0xCF8 && res->end == 0xCFF;
}
static void probe_pci_root_info(struct pci_root_info *info,
struct acpi_device *device,
int busnum, int domain,
struct list_head *list)
static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
{
int ret;
struct acpi_device *device = ci->bridge;
int busnum = ci->root->secondary.start;
struct resource_entry *entry, *tmp;
int status;
sprintf(info->name, "PCI Bus %04x:%02x", domain, busnum);
info->bridge = device;
ret = acpi_dev_get_resources(device, list,
acpi_dev_filter_resource_type_cb,
(void *)(IORESOURCE_IO | IORESOURCE_MEM));
if (ret < 0)
dev_warn(&device->dev,
"failed to parse _CRS method, error code %d\n", ret);
else if (ret == 0)
dev_dbg(&device->dev,
"no IO and memory resources present in _CRS\n");
else
resource_list_for_each_entry_safe(entry, tmp, list) {
if ((entry->res->flags & IORESOURCE_DISABLED) ||
resource_is_pcicfg_ioport(entry->res))
status = acpi_pci_probe_root_resources(ci);
if (pci_use_crs) {
resource_list_for_each_entry_safe(entry, tmp, &ci->resources)
if (resource_is_pcicfg_ioport(entry->res))
resource_list_destroy_entry(entry);
else
entry->res->name = info->name;
}
return status;
}
resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
dev_printk(KERN_DEBUG, &device->dev,
"host bridge window %pR (ignored)\n", entry->res);
resource_list_destroy_entry(entry);
}
x86_pci_root_bus_resources(busnum, &ci->resources);
return 0;
}
static struct acpi_pci_root_ops acpi_pci_root_ops = {
.pci_ops = &pci_root_ops,
.init_info = pci_acpi_root_init_info,
.release_info = pci_acpi_root_release_info,
.prepare_resources = pci_acpi_root_prepare_resources,
};
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
struct acpi_device *device = root->device;
struct pci_root_info *info;
int domain = root->segment;
int busnum = root->secondary.start;
struct resource_entry *res_entry;
LIST_HEAD(crs_res);
LIST_HEAD(resources);
int node = pci_acpi_root_get_node(root);
struct pci_bus *bus;
struct pci_sysdata *sd;
int node;
if (pci_ignore_seg)
domain = 0;
root->segment = domain = 0;
if (domain && !pci_domains_supported) {
printk(KERN_WARNING "pci_bus %04x:%02x: "
@ -410,71 +331,33 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
return NULL;
}
node = acpi_get_node(device->handle);
if (node == NUMA_NO_NODE) {
node = x86_pci_root_bus_node(busnum);
if (node != 0 && node != NUMA_NO_NODE)
dev_info(&device->dev, FW_BUG "no _PXM; falling back to node %d from hardware (may be inconsistent with ACPI node numbers)\n",
node);
}
if (node != NUMA_NO_NODE && !node_online(node))
node = NUMA_NO_NODE;
info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
if (!info) {
printk(KERN_WARNING "pci_bus %04x:%02x: "
"ignored (out of memory)\n", domain, busnum);
return NULL;
}
sd = &info->sd;
sd->domain = domain;
sd->node = node;
sd->companion = device;
bus = pci_find_bus(domain, busnum);
if (bus) {
/*
* If the desired bus has been scanned already, replace
* its bus->sysdata.
*/
memcpy(bus->sysdata, sd, sizeof(*sd));
kfree(info);
struct pci_sysdata sd = {
.domain = domain,
.node = node,
.companion = root->device
};
memcpy(bus->sysdata, &sd, sizeof(sd));
} else {
/* insert busn res at first */
pci_add_resource(&resources, &root->secondary);
struct pci_root_info *info;
/*
* _CRS with no apertures is normal, so only fall back to
* defaults or native bridge info if we're ignoring _CRS.
*/
probe_pci_root_info(info, device, busnum, domain, &crs_res);
if (pci_use_crs) {
add_resources(info, &resources, &crs_res);
} else {
resource_list_for_each_entry(res_entry, &crs_res)
dev_printk(KERN_DEBUG, &device->dev,
"host bridge window %pR (ignored)\n",
res_entry->res);
resource_list_free(&crs_res);
x86_pci_root_bus_resources(busnum, &resources);
}
if (!setup_mcfg_map(info, domain, (u8)root->secondary.start,
(u8)root->secondary.end, root->mcfg_addr))
bus = pci_create_root_bus(NULL, busnum, &pci_root_ops,
sd, &resources);
if (bus) {
pci_scan_child_bus(bus);
pci_set_host_bridge_release(
to_pci_host_bridge(bus->bridge),
release_pci_root_info, info);
} else {
resource_list_free(&resources);
teardown_mcfg_map(info);
kfree(info);
info = kzalloc_node(sizeof(*info), GFP_KERNEL, node);
if (!info)
dev_err(&root->device->dev,
"pci_bus %04x:%02x: ignored (out of memory)\n",
domain, busnum);
else {
info->sd.domain = domain;
info->sd.node = node;
info->sd.companion = root->device;
bus = acpi_pci_root_create(root, &acpi_pci_root_ops,
&info->common, &info->sd);
}
}
@ -487,9 +370,6 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
pcie_bus_configure_settings(child);
}
if (bus && node != NUMA_NO_NODE)
dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node);
return bus;
}


@ -148,7 +148,7 @@ void __init time_init(void)
local_timer_setup(0);
setup_irq(this_cpu_ptr(&ccount_timer)->evt.irq, &timer_irqaction);
sched_clock_register(ccount_sched_clock_read, 32, ccount_freq);
clocksource_of_init();
clocksource_probe();
}
/*


@ -57,6 +57,15 @@ config ACPI_SYSTEM_POWER_STATES_SUPPORT
config ACPI_CCA_REQUIRED
bool
config ACPI_DEBUGGER
bool "In-kernel debugger (EXPERIMENTAL)"
select ACPI_DEBUG
help
Enable in-kernel debugging facilities: statistics, internal
object dump, single step control method execution.
This is still under development, currently enabling this only
results in the compilation of the ACPICA debugger files.
config ACPI_SLEEP
bool
depends on SUSPEND || HIBERNATION
@ -197,11 +206,25 @@ config ACPI_PROCESSOR_IDLE
bool
select CPU_IDLE
config ACPI_CPPC_LIB
bool
depends on ACPI_PROCESSOR
depends on !ACPI_CPU_FREQ_PSS
select MAILBOX
select PCC
help
If this option is enabled, this file implements common functionality
to parse CPPC tables as described in the ACPI 5.1+ spec. The
routines implemented are meant to be used by other
drivers to control CPU performance using CPPC semantics.
If your platform does not support CPPC in firmware,
leave this option disabled.
config ACPI_PROCESSOR
tristate "Processor"
depends on X86 || IA64
select ACPI_PROCESSOR_IDLE
select ACPI_CPU_FREQ_PSS
depends on X86 || IA64 || ARM64
select ACPI_PROCESSOR_IDLE if X86 || IA64
select ACPI_CPU_FREQ_PSS if X86 || IA64
default y
help
This driver adds support for the ACPI Processor package. It is required


@ -78,6 +78,7 @@ obj-$(CONFIG_ACPI_HED) += hed.o
obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o
obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
obj-$(CONFIG_ACPI_BGRT) += bgrt.o
obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o
# processor has its own "processor." module_param namespace
processor-y := processor_driver.o


@ -664,7 +664,7 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
.complete = pm_complete_with_resume_check,
.suspend = acpi_subsys_suspend,
.suspend_late = acpi_lpss_suspend_late,
.resume_early = acpi_lpss_resume_early,


@ -148,8 +148,6 @@ static int power_saving_thread(void *data)
while (!kthread_should_stop()) {
unsigned long expire_time;
try_to_freeze();
/* round robin to cpus */
expire_time = last_jiffies + round_robin_time * HZ;
if (time_before(expire_time, jiffies)) {


@ -316,7 +316,7 @@ static const struct acpi_device_id acpi_pnp_device_ids[] = {
{""},
};
static bool matching_id(char *idstr, char *list_id)
static bool matching_id(const char *idstr, const char *list_id)
{
int i;
@ -333,7 +333,7 @@ static bool matching_id(char *idstr, char *list_id)
return true;
}
static bool acpi_pnp_match(char *idstr, const struct acpi_device_id **matchid)
static bool acpi_pnp_match(const char *idstr, const struct acpi_device_id **matchid)
{
const struct acpi_device_id *devid;


@ -164,6 +164,24 @@ static int acpi_processor_errata(void)
-------------------------------------------------------------------------- */
#ifdef CONFIG_ACPI_HOTPLUG_CPU
int __weak acpi_map_cpu(acpi_handle handle,
phys_cpuid_t physid, int *pcpu)
{
return -ENODEV;
}
int __weak acpi_unmap_cpu(int cpu)
{
return -ENODEV;
}
int __weak arch_register_cpu(int cpu)
{
return -ENODEV;
}
void __weak arch_unregister_cpu(int cpu) {}
static int acpi_processor_hotadd_init(struct acpi_processor *pr)
{
unsigned long long sta;


@ -123,7 +123,6 @@ acpi-y += \
rsaddr.o \
rscalc.o \
rscreate.o \
rsdump.o \
rsdumpinfo.o \
rsinfo.o \
rsio.o \
@ -178,7 +177,24 @@ acpi-y += \
utxferror.o \
utxfmutex.o
acpi-$(CONFIG_ACPI_DEBUGGER) += \
dbcmds.o \
dbconvert.o \
dbdisply.o \
dbexec.o \
dbhistry.o \
dbinput.o \
dbmethod.o \
dbnames.o \
dbobject.o \
dbstats.o \
dbutils.o \
dbxface.o \
rsdump.o \
acpi-$(ACPI_FUTURE_USAGE) += \
dbfileio.o \
dbtest.o \
utcache.o \
utfileio.o \
utprint.o \


@ -88,7 +88,7 @@
acpi_os_printf (" %-18s%s\n", name, description);
#define FILE_SUFFIX_DISASSEMBLY "dsl"
#define ACPI_TABLE_FILE_SUFFIX ".dat"
#define FILE_SUFFIX_BINARY_TABLE ".dat" /* Needs the dot */
/*
* getopt


@ -44,6 +44,12 @@
#ifndef __ACDEBUG_H__
#define __ACDEBUG_H__
/* The debugger is used in conjunction with the disassembler most of time */
#ifdef ACPI_DISASSEMBLER
#include "acdisasm.h"
#endif
#define ACPI_DEBUG_BUFFER_SIZE 0x4000 /* 16K buffer for return objects */
struct acpi_db_command_info {


@ -325,9 +325,9 @@ ACPI_GLOBAL(struct acpi_external_file *, acpi_gbl_external_file_list);
#ifdef ACPI_DEBUGGER
ACPI_INIT_GLOBAL(u8, acpi_gbl_db_terminate_threads, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_abort_method, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_method_executing, FALSE);
ACPI_INIT_GLOBAL(acpi_thread_id, acpi_gbl_db_thread_id, ACPI_INVALID_THREAD_ID);
ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_ini_methods);
ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_region_support);
@ -337,6 +337,8 @@ ACPI_GLOBAL(char *, acpi_gbl_db_filename);
ACPI_GLOBAL(u32, acpi_gbl_db_debug_level);
ACPI_GLOBAL(u32, acpi_gbl_db_console_debug_level);
ACPI_GLOBAL(struct acpi_namespace_node *, acpi_gbl_db_scope_node);
ACPI_GLOBAL(u8, acpi_gbl_db_terminate_loop);
ACPI_GLOBAL(u8, acpi_gbl_db_threads_terminated);
ACPI_GLOBAL(char *, acpi_gbl_db_args[ACPI_DEBUGGER_MAX_ARGS]);
ACPI_GLOBAL(acpi_object_type, acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS]);
@ -358,6 +360,9 @@ ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc);
ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
ACPI_GLOBAL(u32, acpi_gbl_num_objects);
ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_ready);
ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_complete);
#endif /* ACPI_DEBUGGER */
/*****************************************************************************


@ -397,12 +397,10 @@ void
acpi_ex_dump_operands(union acpi_operand_object **operands,
const char *opcode_name, u32 num_opcodes);
#ifdef ACPI_FUTURE_USAGE
void
acpi_ex_dump_object_descriptor(union acpi_operand_object *object, u32 flags);
void acpi_ex_dump_namespace_node(struct acpi_namespace_node *node, u32 flags);
#endif /* ACPI_FUTURE_USAGE */
/*
* exnames - AML namestring support


@ -83,10 +83,8 @@ union acpi_parse_object;
#define ACPI_MTX_EVENTS 3 /* Data for ACPI events */
#define ACPI_MTX_CACHES 4 /* Internal caches, general purposes */
#define ACPI_MTX_MEMORY 5 /* Debug memory tracking lists */
#define ACPI_MTX_DEBUG_CMD_COMPLETE 6 /* AML debugger */
#define ACPI_MTX_DEBUG_CMD_READY 7 /* AML debugger */
#define ACPI_MAX_MUTEX 7
#define ACPI_MAX_MUTEX 5
#define ACPI_NUM_MUTEX ACPI_MAX_MUTEX+1
/* Lock structure for reader/writer interfaces */
@ -111,6 +109,14 @@ struct acpi_rw_lock {
#define ACPI_MUTEX_NOT_ACQUIRED (acpi_thread_id) 0
/* This Thread ID means an invalid thread ID */
#ifdef ACPI_OS_INVALID_THREAD_ID
#define ACPI_INVALID_THREAD_ID ACPI_OS_INVALID_THREAD_ID
#else
#define ACPI_INVALID_THREAD_ID ((acpi_thread_id) 0xFFFFFFFF)
#endif
/* Table for the global mutexes */
struct acpi_mutex_info {
@ -287,13 +293,17 @@ acpi_status(*acpi_internal_method) (struct acpi_walk_state * walk_state);
#define ACPI_BTYPE_BUFFER_FIELD 0x00002000
#define ACPI_BTYPE_DDB_HANDLE 0x00004000
#define ACPI_BTYPE_DEBUG_OBJECT 0x00008000
#define ACPI_BTYPE_REFERENCE 0x00010000
#define ACPI_BTYPE_REFERENCE_OBJECT 0x00010000 /* From Index(), ref_of(), etc (type6_opcodes) */
#define ACPI_BTYPE_RESOURCE 0x00020000
#define ACPI_BTYPE_NAMED_REFERENCE 0x00040000 /* Generic unresolved Name or Namepath */
#define ACPI_BTYPE_COMPUTE_DATA (ACPI_BTYPE_INTEGER | ACPI_BTYPE_STRING | ACPI_BTYPE_BUFFER)
#define ACPI_BTYPE_DATA (ACPI_BTYPE_COMPUTE_DATA | ACPI_BTYPE_PACKAGE)
#define ACPI_BTYPE_DATA_REFERENCE (ACPI_BTYPE_DATA | ACPI_BTYPE_REFERENCE | ACPI_BTYPE_DDB_HANDLE)
/* Used by Copy, de_ref_of, Store, Printf, Fprintf */
#define ACPI_BTYPE_DATA_REFERENCE (ACPI_BTYPE_DATA | ACPI_BTYPE_REFERENCE_OBJECT | ACPI_BTYPE_DDB_HANDLE)
#define ACPI_BTYPE_DEVICE_OBJECTS (ACPI_BTYPE_DEVICE | ACPI_BTYPE_THERMAL | ACPI_BTYPE_PROCESSOR)
#define ACPI_BTYPE_OBJECTS_AND_REFS 0x0001FFFF /* ARG or LOCAL */
#define ACPI_BTYPE_ALL_OBJECTS 0x0000FFFF
@ -848,7 +858,7 @@ struct acpi_parse_state {
#define ACPI_PARSEOP_PARAMLIST 0x02
#define ACPI_PARSEOP_EMPTY_TERMLIST 0x04
#define ACPI_PARSEOP_PREDEF_CHECKED 0x08
#define ACPI_PARSEOP_SPECIAL 0x10
#define ACPI_PARSEOP_CLOSING_PAREN 0x10
#define ACPI_PARSEOP_COMPOUND 0x20
#define ACPI_PARSEOP_ASSIGNMENT 0x40


@ -193,9 +193,7 @@ acpi_ns_convert_to_resource(union acpi_operand_object *original_object,
/*
* nsdump - Namespace dump/print utilities
*/
#ifdef ACPI_FUTURE_USAGE
void acpi_ns_dump_tables(acpi_handle search_base, u32 max_depth);
#endif /* ACPI_FUTURE_USAGE */
void acpi_ns_dump_entry(acpi_handle handle, u32 debug_level);
@ -208,7 +206,6 @@ acpi_status
acpi_ns_dump_one_object(acpi_handle obj_handle,
u32 level, void *context, void **return_value);
#ifdef ACPI_FUTURE_USAGE
void
acpi_ns_dump_objects(acpi_object_type type,
u8 display_type,
@ -220,7 +217,6 @@ acpi_ns_dump_object_paths(acpi_object_type type,
u8 display_type,
u32 max_depth,
acpi_owner_id owner_id, acpi_handle start_handle);
#endif /* ACPI_FUTURE_USAGE */
/*
* nseval - Namespace evaluation functions


@ -211,7 +211,7 @@
#define ARGI_ARG4 ARG_NONE
#define ARGI_ARG5 ARG_NONE
#define ARGI_ARG6 ARG_NONE
#define ARGI_BANK_FIELD_OP ARGI_INVALID_OPCODE
#define ARGI_BANK_FIELD_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_BIT_AND_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_BIT_NAND_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_BIT_NOR_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
@ -307,7 +307,7 @@
#define ARGI_SLEEP_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_STALL_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_STATICSTRING_OP ARGI_INVALID_OPCODE
#define ARGI_STORE_OP ARGI_LIST2 (ARGI_DATAREFOBJ, ARGI_TARGETREF)
#define ARGI_STORE_OP ARGI_LIST2 (ARGI_DATAREFOBJ, ARGI_STORE_TARGET)
#define ARGI_STRING_OP ARGI_INVALID_OPCODE
#define ARGI_SUBTRACT_OP ARGI_LIST3 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF)
#define ARGI_THERMAL_ZONE_OP ARGI_INVALID_OPCODE


@ -194,10 +194,8 @@ union acpi_parse_object *acpi_ps_find(union acpi_parse_object *scope,
union acpi_parse_object *acpi_ps_get_arg(union acpi_parse_object *op, u32 argn);
#ifdef ACPI_FUTURE_USAGE
union acpi_parse_object *acpi_ps_get_depth_next(union acpi_parse_object *origin,
union acpi_parse_object *op);
#endif /* ACPI_FUTURE_USAGE */
/*
* pswalk - parse tree walk routines
@ -235,9 +233,7 @@ void acpi_ps_free_op(union acpi_parse_object *op);
u8 acpi_ps_is_leading_char(u32 c);
#ifdef ACPI_FUTURE_USAGE
u32 acpi_ps_get_name(union acpi_parse_object *op);
#endif /* ACPI_FUTURE_USAGE */
void acpi_ps_set_name(union acpi_parse_object *op, u32 name);


@ -635,9 +635,7 @@ void
acpi_ut_free_and_track(void *address,
u32 component, const char *module, u32 line);
#ifdef ACPI_FUTURE_USAGE
void acpi_ut_dump_allocation_info(void);
#endif /* ACPI_FUTURE_USAGE */
void acpi_ut_dump_allocations(u32 component, const char *module);


@ -277,14 +277,15 @@
#define ARGI_TARGETREF 0x0F /* Target, subject to implicit conversion */
#define ARGI_FIXED_TARGET 0x10 /* Target, no implicit conversion */
#define ARGI_SIMPLE_TARGET 0x11 /* Name, Local, Arg -- no implicit conversion */
#define ARGI_STORE_TARGET 0x12 /* Target for store is TARGETREF + package objects */
/* Multiple/complex types */
#define ARGI_DATAOBJECT 0x12 /* Buffer, String, package or reference to a node - Used only by size_of operator */
#define ARGI_COMPLEXOBJ 0x13 /* Buffer, String, or package (Used by INDEX op only) */
#define ARGI_REF_OR_STRING 0x14 /* Reference or String (Used by DEREFOF op only) */
#define ARGI_REGION_OR_BUFFER 0x15 /* Used by LOAD op only */
#define ARGI_DATAREFOBJ 0x16
#define ARGI_DATAOBJECT 0x13 /* Buffer, String, package or reference to a node - Used only by size_of operator */
#define ARGI_COMPLEXOBJ 0x14 /* Buffer, String, or package (Used by INDEX op only) */
#define ARGI_REF_OR_STRING 0x15 /* Reference or String (Used by DEREFOF op only) */
#define ARGI_REGION_OR_BUFFER 0x16 /* Used by LOAD op only */
#define ARGI_DATAREFOBJ 0x17
/* Note: types above can expand to 0x1F maximum */

drivers/acpi/acpica/dbcmds.c (new file, 1187 lines): file diff suppressed because it is too large.


@ -0,0 +1,484 @@
/*******************************************************************************
*
* Module Name: dbconvert - debugger miscellaneous conversion routines
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbconvert")
#define DB_DEFAULT_PKG_ELEMENTS 33
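/* Upper limit on the number of package elements parsed from a single debugger command-line argument */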
/*******************************************************************************
*
* FUNCTION: acpi_db_hex_char_to_value
*
* PARAMETERS: hex_char - Ascii Hex digit, 0-9|a-f|A-F
* return_value - Where the converted value is returned
*
* RETURN: Status
*
* DESCRIPTION: Convert a single hex character to a 4-bit number (0-15).
*
******************************************************************************/
acpi_status acpi_db_hex_char_to_value(int hex_char, u8 *return_value)
{
u8 value;
/* Digit must be ascii [0-9a-fA-F] */
if (!isxdigit(hex_char)) {
return (AE_BAD_HEX_CONSTANT);
}
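/* ASCII arithmetic: '0'-'9' are 0x30-0x39; for 'A'-'F', subtracting 0x37 maps 'A' (0x41) to 0x0A */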
if (hex_char <= 0x39) {
value = (u8)(hex_char - 0x30);
} else {
value = (u8)(toupper(hex_char) - 0x37);
}
*return_value = value;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_hex_byte_to_binary
*
* PARAMETERS: hex_byte - Two ASCII hex digits (0x00 - 0xFF) in the order:
* high nibble, then low nibble
* return_value - Where the converted value is returned
*
* RETURN: Status
*
* DESCRIPTION: Convert two hex characters to an 8 bit number (0 - 255).
*
******************************************************************************/
static acpi_status acpi_db_hex_byte_to_binary(char *hex_byte, u8 *return_value)
{
u8 local0;
u8 local1;
acpi_status status;
/* High byte */
status = acpi_db_hex_char_to_value(hex_byte[0], &local0);
if (ACPI_FAILURE(status)) {
return (status);
}
/* Low byte */
status = acpi_db_hex_char_to_value(hex_byte[1], &local1);
if (ACPI_FAILURE(status)) {
return (status);
}
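/* Combine: high nibble in bits 7:4, low nibble in bits 3:0 */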
*return_value = (u8)((local0 << 4) | local1);
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_convert_to_buffer
*
* PARAMETERS: string - Input string to be converted
* object - Where the buffer object is returned
*
* RETURN: Status
*
* DESCRIPTION: Convert a string to a buffer object. The string is treated as a
* list of buffer elements, each separated by a space or comma.
*
******************************************************************************/
static acpi_status
acpi_db_convert_to_buffer(char *string, union acpi_object *object)
{
u32 i;
u32 j;
u32 length;
u8 *buffer;
acpi_status status;
/* Generate the final buffer length */
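/* Each buffer byte is two hex characters, optionally followed by commas and/or spaces */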
for (i = 0, length = 0; string[i];) {
i += 2;
length++;
while (string[i] && ((string[i] == ',') || (string[i] == ' '))) {
i++;
}
}
buffer = ACPI_ALLOCATE(length);
if (!buffer) {
return (AE_NO_MEMORY);
}
/* Convert the command line bytes to the buffer */
for (i = 0, j = 0; string[i];) {
status = acpi_db_hex_byte_to_binary(&string[i], &buffer[j]);
if (ACPI_FAILURE(status)) {
ACPI_FREE(buffer);
return (status);
}
j++;
i += 2;
while (string[i] && ((string[i] == ',') || (string[i] == ' '))) {
i++;
}
}
object->type = ACPI_TYPE_BUFFER;
object->buffer.pointer = buffer;
object->buffer.length = length;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_convert_to_package
*
* PARAMETERS: string - Input string to be converted
* object - Where the package object is returned
*
* RETURN: Status
*
* DESCRIPTION: Convert a string to a package object. Handles nested packages
* via recursion with acpi_db_convert_to_object.
*
******************************************************************************/
acpi_status acpi_db_convert_to_package(char *string, union acpi_object * object)
{
char *this;
char *next;
u32 i;
acpi_object_type type;
union acpi_object *elements;
acpi_status status;
elements =
ACPI_ALLOCATE_ZEROED(DB_DEFAULT_PKG_ELEMENTS *
sizeof(union acpi_object));
if (!elements) {
return (AE_NO_MEMORY);
}
this = string;
for (i = 0; i < (DB_DEFAULT_PKG_ELEMENTS - 1); i++) {
this = acpi_db_get_next_token(this, &next, &type);
if (!this) {
break;
}
/* Recursive call to convert each package element */
status = acpi_db_convert_to_object(type, this, &elements[i]);
if (ACPI_FAILURE(status)) {
acpi_db_delete_objects(i + 1, elements);
ACPI_FREE(elements);
return (status);
}
this = next;
}
object->type = ACPI_TYPE_PACKAGE;
object->package.count = i;
object->package.elements = elements;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_convert_to_object
*
* PARAMETERS: type - Object type as determined by parser
* string - Input string to be converted
* object - Where the new object is returned
*
* RETURN: Status
*
* DESCRIPTION: Convert a typed and tokenized string to a union acpi_object. Typing:
* 1) String objects were surrounded by quotes.
* 2) Buffer objects were surrounded by parentheses.
* 3) Package objects were surrounded by brackets "[]".
* 4) All standalone tokens are treated as integers.
*
******************************************************************************/
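/*
* Typical argument tokens and the resulting object types. The tokenizer
* (acpi_db_get_next_token) determines the type from the surrounding quotes,
* parentheses, or brackets and strips them before this function is called:
* "abc" -> ACPI_TYPE_STRING
* (0A 0B) -> ACPI_TYPE_BUFFER
* [1 2 3] -> ACPI_TYPE_PACKAGE
* 1234 -> ACPI_TYPE_INTEGER (interpreted as hex)
*/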
acpi_status
acpi_db_convert_to_object(acpi_object_type type,
char *string, union acpi_object * object)
{
acpi_status status = AE_OK;
switch (type) {
case ACPI_TYPE_STRING:
object->type = ACPI_TYPE_STRING;
object->string.pointer = string;
object->string.length = (u32)strlen(string);
break;
case ACPI_TYPE_BUFFER:
status = acpi_db_convert_to_buffer(string, object);
break;
case ACPI_TYPE_PACKAGE:
status = acpi_db_convert_to_package(string, object);
break;
default:
object->type = ACPI_TYPE_INTEGER;
status = acpi_ut_strtoul64(string, 16, &object->integer.value);
break;
}
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_encode_pld_buffer
*
* PARAMETERS: pld_info - _PLD buffer struct (Using local struct)
*
* RETURN: Encoded _PLD buffer suitable for use as a _PLD return value
*
* DESCRIPTION: Bit-packs a _PLD buffer struct. Used to test the _PLD macros
*
******************************************************************************/
u8 *acpi_db_encode_pld_buffer(struct acpi_pld_info *pld_info)
{
u32 *buffer;
u32 dword;
buffer = ACPI_ALLOCATE_ZEROED(ACPI_PLD_BUFFER_SIZE);
if (!buffer) {
return (NULL);
}
/* First 32 bits */
dword = 0;
ACPI_PLD_SET_REVISION(&dword, pld_info->revision);
ACPI_PLD_SET_IGNORE_COLOR(&dword, pld_info->ignore_color);
ACPI_PLD_SET_RED(&dword, pld_info->red);
ACPI_PLD_SET_GREEN(&dword, pld_info->green);
ACPI_PLD_SET_BLUE(&dword, pld_info->blue);
ACPI_MOVE_32_TO_32(&buffer[0], &dword);
/* Second 32 bits */
dword = 0;
ACPI_PLD_SET_WIDTH(&dword, pld_info->width);
ACPI_PLD_SET_HEIGHT(&dword, pld_info->height);
ACPI_MOVE_32_TO_32(&buffer[1], &dword);
/* Third 32 bits */
dword = 0;
ACPI_PLD_SET_USER_VISIBLE(&dword, pld_info->user_visible);
ACPI_PLD_SET_DOCK(&dword, pld_info->dock);
ACPI_PLD_SET_LID(&dword, pld_info->lid);
ACPI_PLD_SET_PANEL(&dword, pld_info->panel);
ACPI_PLD_SET_VERTICAL(&dword, pld_info->vertical_position);
ACPI_PLD_SET_HORIZONTAL(&dword, pld_info->horizontal_position);
ACPI_PLD_SET_SHAPE(&dword, pld_info->shape);
ACPI_PLD_SET_ORIENTATION(&dword, pld_info->group_orientation);
ACPI_PLD_SET_TOKEN(&dword, pld_info->group_token);
ACPI_PLD_SET_POSITION(&dword, pld_info->group_position);
ACPI_PLD_SET_BAY(&dword, pld_info->bay);
ACPI_MOVE_32_TO_32(&buffer[2], &dword);
/* Fourth 32 bits */
dword = 0;
ACPI_PLD_SET_EJECTABLE(&dword, pld_info->ejectable);
ACPI_PLD_SET_OSPM_EJECT(&dword, pld_info->ospm_eject_required);
ACPI_PLD_SET_CABINET(&dword, pld_info->cabinet_number);
ACPI_PLD_SET_CARD_CAGE(&dword, pld_info->card_cage_number);
ACPI_PLD_SET_REFERENCE(&dword, pld_info->reference);
ACPI_PLD_SET_ROTATION(&dword, pld_info->rotation);
ACPI_PLD_SET_ORDER(&dword, pld_info->order);
ACPI_MOVE_32_TO_32(&buffer[3], &dword);
if (pld_info->revision >= 2) {
/* Fifth 32 bits */
dword = 0;
ACPI_PLD_SET_VERT_OFFSET(&dword, pld_info->vertical_offset);
ACPI_PLD_SET_HORIZ_OFFSET(&dword, pld_info->horizontal_offset);
ACPI_MOVE_32_TO_32(&buffer[4], &dword);
}
return (ACPI_CAST_PTR(u8, buffer));
}
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_pld_buffer
*
* PARAMETERS: obj_desc - Object returned from _PLD method
*
* RETURN: None.
*
* DESCRIPTION: Dumps formatted contents of a _PLD return buffer.
*
******************************************************************************/
#define ACPI_PLD_OUTPUT "%20s : %-6X\n"
void acpi_db_dump_pld_buffer(union acpi_object *obj_desc)
{
union acpi_object *buffer_desc;
struct acpi_pld_info *pld_info;
u8 *new_buffer;
acpi_status status;
/* Object must be of type Package with at least one Buffer element */
if (obj_desc->type != ACPI_TYPE_PACKAGE) {
return;
}
buffer_desc = &obj_desc->package.elements[0];
if (buffer_desc->type != ACPI_TYPE_BUFFER) {
return;
}
/* Convert _PLD buffer to local _PLD struct */
status = acpi_decode_pld_buffer(buffer_desc->buffer.pointer,
buffer_desc->buffer.length, &pld_info);
if (ACPI_FAILURE(status)) {
return;
}
/* Encode local _PLD struct back to a _PLD buffer */
new_buffer = acpi_db_encode_pld_buffer(pld_info);
if (!new_buffer) {
return;
}
/* The two bit-packed buffers should match */
if (memcmp(new_buffer, buffer_desc->buffer.pointer,
buffer_desc->buffer.length)) {
acpi_os_printf
("Converted _PLD buffer does not compare. New:\n");
acpi_ut_dump_buffer(new_buffer,
buffer_desc->buffer.length, DB_BYTE_DISPLAY,
0);
}
/* First 32-bit dword */
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Revision", pld_info->revision);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_IgnoreColor",
pld_info->ignore_color);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Red", pld_info->red);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Green", pld_info->green);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Blue", pld_info->blue);
/* Second 32-bit dword */
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Width", pld_info->width);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Height", pld_info->height);
/* Third 32-bit dword */
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_UserVisible",
pld_info->user_visible);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Dock", pld_info->dock);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Lid", pld_info->lid);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Panel", pld_info->panel);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_VerticalPosition",
pld_info->vertical_position);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_HorizontalPosition",
pld_info->horizontal_position);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Shape", pld_info->shape);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupOrientation",
pld_info->group_orientation);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupToken",
pld_info->group_token);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_GroupPosition",
pld_info->group_position);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Bay", pld_info->bay);
/* Fourth 32-bit dword */
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Ejectable", pld_info->ejectable);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_EjectRequired",
pld_info->ospm_eject_required);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_CabinetNumber",
pld_info->cabinet_number);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_CardCageNumber",
pld_info->card_cage_number);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Reference", pld_info->reference);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Rotation", pld_info->rotation);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_Order", pld_info->order);
/* Fifth 32-bit dword */
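/* Revision 2 _PLD buffers are 20 bytes; the extra dword carries the vertical and horizontal offsets */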
if (buffer_desc->buffer.length > 16) {
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_VerticalOffset",
pld_info->vertical_offset);
acpi_os_printf(ACPI_PLD_OUTPUT, "PLD_HorizontalOffset",
pld_info->horizontal_offset);
}
ACPI_FREE(pld_info);
ACPI_FREE(new_buffer);
}

File diff suppressed because it is too large.


@@ -0,0 +1,764 @@
/*******************************************************************************
*
* Module Name: dbexec - debugger control method execution
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#include "acnamesp.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbexec")
static struct acpi_db_method_info acpi_gbl_db_method_info;
/* Local prototypes */
static acpi_status
acpi_db_execute_method(struct acpi_db_method_info *info,
struct acpi_buffer *return_obj);
static acpi_status acpi_db_execute_setup(struct acpi_db_method_info *info);
static u32 acpi_db_get_outstanding_allocations(void);
static void ACPI_SYSTEM_XFACE acpi_db_method_thread(void *context);
static acpi_status
acpi_db_execution_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value);
/*******************************************************************************
*
* FUNCTION: acpi_db_delete_objects
*
* PARAMETERS: count - Count of objects in the list
* objects - Array of ACPI_OBJECTs to be deleted
*
* RETURN: None
*
* DESCRIPTION: Delete a list of ACPI_OBJECTS. Handles packages and nested
* packages via recursion.
*
******************************************************************************/
void acpi_db_delete_objects(u32 count, union acpi_object *objects)
{
u32 i;
for (i = 0; i < count; i++) {
switch (objects[i].type) {
case ACPI_TYPE_BUFFER:
ACPI_FREE(objects[i].buffer.pointer);
break;
case ACPI_TYPE_PACKAGE:
/* Recursive call to delete package elements */
acpi_db_delete_objects(objects[i].package.count,
objects[i].package.elements);
/* Free the elements array */
ACPI_FREE(objects[i].package.elements);
break;
default:
break;
}
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_execute_method
*
* PARAMETERS: info - Valid info segment
* return_obj - Where to put return object
*
* RETURN: Status
*
* DESCRIPTION: Execute a control method.
*
******************************************************************************/
static acpi_status
acpi_db_execute_method(struct acpi_db_method_info *info,
struct acpi_buffer *return_obj)
{
acpi_status status;
struct acpi_object_list param_objects;
union acpi_object params[ACPI_DEBUGGER_MAX_ARGS + 1];
u32 i;
ACPI_FUNCTION_TRACE(db_execute_method);
if (acpi_gbl_db_output_to_file && !acpi_dbg_level) {
acpi_os_printf("Warning: debug output is not enabled!\n");
}
param_objects.count = 0;
param_objects.pointer = NULL;
/* Pass through any command-line arguments */
if (info->args && info->args[0]) {
/* Get arguments passed on the command line */
for (i = 0; (info->args[i] && *(info->args[i])); i++) {
/* Convert input string (token) to an actual union acpi_object */
status = acpi_db_convert_to_object(info->types[i],
info->args[i],
&params[i]);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"While parsing method arguments"));
goto cleanup;
}
}
param_objects.count = i;
param_objects.pointer = params;
}
/* Prepare for a return object of arbitrary size */
return_obj->pointer = acpi_gbl_db_buffer;
return_obj->length = ACPI_DEBUG_BUFFER_SIZE;
/* Do the actual method execution */
acpi_gbl_method_executing = TRUE;
status = acpi_evaluate_object(NULL, info->pathname,
&param_objects, return_obj);
acpi_gbl_cm_single_step = FALSE;
acpi_gbl_method_executing = FALSE;
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"while executing %s from debugger",
info->pathname));
if (status == AE_BUFFER_OVERFLOW) {
ACPI_ERROR((AE_INFO,
"Possible overflow of internal debugger "
"buffer (size 0x%X needed 0x%X)",
ACPI_DEBUG_BUFFER_SIZE,
(u32)return_obj->length));
}
}
cleanup:
acpi_db_delete_objects(param_objects.count, params);
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_execute_setup
*
* PARAMETERS: info - Valid method info
*
* RETURN: Status
*
* DESCRIPTION: Setup info segment prior to method execution
*
******************************************************************************/
static acpi_status acpi_db_execute_setup(struct acpi_db_method_info *info)
{
acpi_status status;
ACPI_FUNCTION_NAME(db_execute_setup);
/* Catenate the current scope to the supplied name */
info->pathname[0] = 0;
if ((info->name[0] != '\\') && (info->name[0] != '/')) {
if (acpi_ut_safe_strcat(info->pathname, sizeof(info->pathname),
acpi_gbl_db_scope_buf)) {
status = AE_BUFFER_OVERFLOW;
goto error_exit;
}
}
if (acpi_ut_safe_strcat(info->pathname, sizeof(info->pathname),
info->name)) {
status = AE_BUFFER_OVERFLOW;
goto error_exit;
}
acpi_db_prep_namestring(info->pathname);
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf("Evaluating %s\n", info->pathname);
if (info->flags & EX_SINGLE_STEP) {
acpi_gbl_cm_single_step = TRUE;
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}
else {
/* No single step, allow redirection to a file */
acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
}
return (AE_OK);
error_exit:
ACPI_EXCEPTION((AE_INFO, status, "During setup for method execution"));
return (status);
}
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
u32 acpi_db_get_cache_info(struct acpi_memory_list *cache)
{
return (cache->total_allocated - cache->total_freed -
cache->current_depth);
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_db_get_outstanding_allocations
*
* PARAMETERS: None
*
* RETURN: Current global allocation count minus cache entries
*
* DESCRIPTION: Determine the current number of "outstanding" allocations --
* those allocations that have not been freed and also are not
* in one of the various object caches.
*
******************************************************************************/
static u32 acpi_db_get_outstanding_allocations(void)
{
u32 outstanding = 0;
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
outstanding += acpi_db_get_cache_info(acpi_gbl_state_cache);
outstanding += acpi_db_get_cache_info(acpi_gbl_ps_node_cache);
outstanding += acpi_db_get_cache_info(acpi_gbl_ps_node_ext_cache);
outstanding += acpi_db_get_cache_info(acpi_gbl_operand_cache);
#endif
return (outstanding);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_execution_walk
*
* PARAMETERS: WALK_CALLBACK
*
* RETURN: Status
*
* DESCRIPTION: Namespace walk callback: evaluate a single control method.
* Methods that require arguments are skipped.
*
******************************************************************************/
static acpi_status
acpi_db_execution_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value)
{
union acpi_operand_object *obj_desc;
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
struct acpi_buffer return_obj;
acpi_status status;
obj_desc = acpi_ns_get_attached_object(node);
if (obj_desc->method.param_count) {
return (AE_OK);
}
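/* ACPI_ALLOCATE_BUFFER: let acpi_evaluate_object allocate a return buffer of whatever size is needed */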
return_obj.pointer = NULL;
return_obj.length = ACPI_ALLOCATE_BUFFER;
acpi_ns_print_node_pathname(node, "Evaluating");
/* Do the actual method execution */
acpi_os_printf("\n");
acpi_gbl_method_executing = TRUE;
status = acpi_evaluate_object(node, NULL, NULL, &return_obj);
acpi_os_printf("Evaluation of [%4.4s] returned %s\n",
acpi_ut_get_node_name(node),
acpi_format_exception(status));
acpi_gbl_method_executing = FALSE;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_execute
*
* PARAMETERS: name - Name of method to execute
* args - Parameters to the method
* types - Corresponding array of parameter object types
* flags - single step/no single step
*
* RETURN: None
*
* DESCRIPTION: Execute a control method. Name is relative to the current
* scope.
*
******************************************************************************/
void
acpi_db_execute(char *name, char **args, acpi_object_type * types, u32 flags)
{
acpi_status status;
struct acpi_buffer return_obj;
char *name_string;
#ifdef ACPI_DEBUG_OUTPUT
u32 previous_allocations;
u32 allocations;
#endif
/*
* Allow only one debugger execution at a time, otherwise single-step
* execution will deadlock on the interpreter mutexes.
*/
if (acpi_gbl_method_executing) {
acpi_os_printf("Only one debugger execution is allowed.\n");
return;
}
#ifdef ACPI_DEBUG_OUTPUT
/* Memory allocation tracking */
previous_allocations = acpi_db_get_outstanding_allocations();
#endif
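/* A name of '*' is a wildcard: evaluate every control method in the namespace (the walk callback skips methods that require arguments) */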
if (*name == '*') {
(void)acpi_walk_namespace(ACPI_TYPE_METHOD, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX,
acpi_db_execution_walk, NULL, NULL,
NULL);
return;
} else {
name_string = ACPI_ALLOCATE(strlen(name) + 1);
if (!name_string) {
return;
}
memset(&acpi_gbl_db_method_info, 0,
sizeof(struct acpi_db_method_info));
strcpy(name_string, name);
acpi_ut_strupr(name_string);
acpi_gbl_db_method_info.name = name_string;
acpi_gbl_db_method_info.args = args;
acpi_gbl_db_method_info.types = types;
acpi_gbl_db_method_info.flags = flags;
return_obj.pointer = NULL;
return_obj.length = ACPI_ALLOCATE_BUFFER;
status = acpi_db_execute_setup(&acpi_gbl_db_method_info);
if (ACPI_FAILURE(status)) {
ACPI_FREE(name_string);
return;
}
/* Get the NS node, determines existence also */
status = acpi_get_handle(NULL, acpi_gbl_db_method_info.pathname,
&acpi_gbl_db_method_info.method);
if (ACPI_SUCCESS(status)) {
status =
acpi_db_execute_method(&acpi_gbl_db_method_info,
&return_obj);
}
ACPI_FREE(name_string);
}
/*
* Allow any handlers in separate threads to complete.
* (Such as Notify handlers invoked from AML executed above).
*/
acpi_os_sleep((u64)10);
#ifdef ACPI_DEBUG_OUTPUT
/* Memory allocation tracking */
allocations =
acpi_db_get_outstanding_allocations() - previous_allocations;
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
if (allocations > 0) {
acpi_os_printf
("0x%X Outstanding allocations after evaluation of %s\n",
allocations, acpi_gbl_db_method_info.pathname);
}
#endif
if (ACPI_FAILURE(status)) {
acpi_os_printf("Evaluation of %s failed with status %s\n",
acpi_gbl_db_method_info.pathname,
acpi_format_exception(status));
} else {
/* Display a return object, if any */
if (return_obj.length) {
acpi_os_printf("Evaluation of %s returned object %p, "
"external buffer length %X\n",
acpi_gbl_db_method_info.pathname,
return_obj.pointer,
(u32)return_obj.length);
acpi_db_dump_external_object(return_obj.pointer, 1);
/* Dump a _PLD buffer if present */
if (ACPI_COMPARE_NAME
((ACPI_CAST_PTR
(struct acpi_namespace_node,
acpi_gbl_db_method_info.method)->name.ascii),
METHOD_NAME__PLD)) {
acpi_db_dump_pld_buffer(return_obj.pointer);
}
} else {
acpi_os_printf
("No object was returned from evaluation of %s\n",
acpi_gbl_db_method_info.pathname);
}
}
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_method_thread
*
* PARAMETERS: context - Execution info segment
*
* RETURN: None
*
* DESCRIPTION: Debugger method execution thread. Executes the control method
* num_loops times, then signals completion to the main thread.
*
******************************************************************************/
static void ACPI_SYSTEM_XFACE acpi_db_method_thread(void *context)
{
acpi_status status;
struct acpi_db_method_info *info = context;
struct acpi_db_method_info local_info;
u32 i;
u8 allow;
struct acpi_buffer return_obj;
/*
* acpi_gbl_db_method_info.Arguments will be passed as method arguments.
* Prevent acpi_gbl_db_method_info from being modified by multiple threads
* concurrently.
*
* Note: The arguments we are passing are used by the ASL test suite
* (aslts). Do not change them without updating the tests.
*/
(void)acpi_os_wait_semaphore(info->info_gate, 1, ACPI_WAIT_FOREVER);
if (info->init_args) {
acpi_db_uint32_to_hex_string(info->num_created,
info->index_of_thread_str);
acpi_db_uint32_to_hex_string((u32)acpi_os_get_thread_id(),
info->id_of_thread_str);
}
if (info->threads && (info->num_created < info->num_threads)) {
info->threads[info->num_created++] = acpi_os_get_thread_id();
}
local_info = *info;
local_info.args = local_info.arguments;
local_info.arguments[0] = local_info.num_threads_str;
local_info.arguments[1] = local_info.id_of_thread_str;
local_info.arguments[2] = local_info.index_of_thread_str;
local_info.arguments[3] = NULL;
local_info.types = local_info.arg_types;
(void)acpi_os_signal_semaphore(info->info_gate, 1);
for (i = 0; i < info->num_loops; i++) {
status = acpi_db_execute_method(&local_info, &return_obj);
if (ACPI_FAILURE(status)) {
acpi_os_printf
("%s During evaluation of %s at iteration %X\n",
acpi_format_exception(status), info->pathname, i);
if (status == AE_ABORT_METHOD) {
break;
}
}
#if 0
if ((i % 100) == 0) {
acpi_os_printf("%u loops, Thread 0x%x\n",
i, acpi_os_get_thread_id());
}
if (return_obj.length) {
acpi_os_printf
("Evaluation of %s returned object %p Buflen %X\n",
info->pathname, return_obj.pointer,
(u32)return_obj.length);
acpi_db_dump_external_object(return_obj.pointer, 1);
}
#endif
}
/* Signal our completion */
allow = 0;
(void)acpi_os_wait_semaphore(info->thread_complete_gate,
1, ACPI_WAIT_FOREVER);
info->num_completed++;
if (info->num_completed == info->num_threads) {
/* Do signal for main thread once only */
allow = 1;
}
(void)acpi_os_signal_semaphore(info->thread_complete_gate, 1);
if (allow) {
status = acpi_os_signal_semaphore(info->main_thread_gate, 1);
if (ACPI_FAILURE(status)) {
acpi_os_printf
("Could not signal debugger thread sync semaphore, %s\n",
acpi_format_exception(status));
}
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_create_execution_threads
*
* PARAMETERS: num_threads_arg - Number of threads to create
* num_loops_arg - Loop count for the thread(s)
* method_name_arg - Control method to execute
*
* RETURN: None
*
* DESCRIPTION: Create threads to execute method(s)
*
******************************************************************************/
void
acpi_db_create_execution_threads(char *num_threads_arg,
char *num_loops_arg, char *method_name_arg)
{
acpi_status status;
u32 num_threads;
u32 num_loops;
u32 i;
u32 size;
acpi_mutex main_thread_gate;
acpi_mutex thread_complete_gate;
acpi_mutex info_gate;
/* Get the arguments */
num_threads = strtoul(num_threads_arg, NULL, 0);
num_loops = strtoul(num_loops_arg, NULL, 0);
if (!num_threads || !num_loops) {
acpi_os_printf("Bad argument: Threads %X, Loops %X\n",
num_threads, num_loops);
return;
}
/*
* Create the semaphore for synchronization of
* the created threads with the main thread.
*/
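/* Created with zero initial units: the main thread blocks on this gate until the last worker thread signals completion */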
status = acpi_os_create_semaphore(1, 0, &main_thread_gate);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not create semaphore for "
"synchronization with the main thread, %s\n",
acpi_format_exception(status));
return;
}
/*
* Create the semaphore for synchronization
* between the created threads.
*/
status = acpi_os_create_semaphore(1, 1, &thread_complete_gate);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not create semaphore for "
"synchronization between the created threads, %s\n",
acpi_format_exception(status));
(void)acpi_os_delete_semaphore(main_thread_gate);
return;
}
status = acpi_os_create_semaphore(1, 1, &info_gate);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not create semaphore for "
"synchronization of AcpiGbl_DbMethodInfo, %s\n",
acpi_format_exception(status));
(void)acpi_os_delete_semaphore(thread_complete_gate);
(void)acpi_os_delete_semaphore(main_thread_gate);
return;
}
memset(&acpi_gbl_db_method_info, 0, sizeof(struct acpi_db_method_info));
/* Array to store IDs of threads */
acpi_gbl_db_method_info.num_threads = num_threads;
size = sizeof(acpi_thread_id) * acpi_gbl_db_method_info.num_threads;
acpi_gbl_db_method_info.threads = acpi_os_allocate(size);
if (acpi_gbl_db_method_info.threads == NULL) {
acpi_os_printf("No memory for thread IDs array\n");
(void)acpi_os_delete_semaphore(main_thread_gate);
(void)acpi_os_delete_semaphore(thread_complete_gate);
(void)acpi_os_delete_semaphore(info_gate);
return;
}
memset(acpi_gbl_db_method_info.threads, 0, size);
/* Setup the context to be passed to each thread */
acpi_gbl_db_method_info.name = method_name_arg;
acpi_gbl_db_method_info.flags = 0;
acpi_gbl_db_method_info.num_loops = num_loops;
acpi_gbl_db_method_info.main_thread_gate = main_thread_gate;
acpi_gbl_db_method_info.thread_complete_gate = thread_complete_gate;
acpi_gbl_db_method_info.info_gate = info_gate;
/* Init arguments to be passed to method */
acpi_gbl_db_method_info.init_args = 1;
acpi_gbl_db_method_info.args = acpi_gbl_db_method_info.arguments;
acpi_gbl_db_method_info.arguments[0] =
acpi_gbl_db_method_info.num_threads_str;
acpi_gbl_db_method_info.arguments[1] =
acpi_gbl_db_method_info.id_of_thread_str;
acpi_gbl_db_method_info.arguments[2] =
acpi_gbl_db_method_info.index_of_thread_str;
acpi_gbl_db_method_info.arguments[3] = NULL;
acpi_gbl_db_method_info.types = acpi_gbl_db_method_info.arg_types;
acpi_gbl_db_method_info.arg_types[0] = ACPI_TYPE_INTEGER;
acpi_gbl_db_method_info.arg_types[1] = ACPI_TYPE_INTEGER;
acpi_gbl_db_method_info.arg_types[2] = ACPI_TYPE_INTEGER;
acpi_db_uint32_to_hex_string(num_threads,
acpi_gbl_db_method_info.num_threads_str);
status = acpi_db_execute_setup(&acpi_gbl_db_method_info);
if (ACPI_FAILURE(status)) {
goto cleanup_and_exit;
}
/* Get the NS node, determines existence also */
status = acpi_get_handle(NULL, acpi_gbl_db_method_info.pathname,
&acpi_gbl_db_method_info.method);
if (ACPI_FAILURE(status)) {
acpi_os_printf("%s Could not get handle for %s\n",
acpi_format_exception(status),
acpi_gbl_db_method_info.pathname);
goto cleanup_and_exit;
}
/* Create the threads */
acpi_os_printf("Creating %X threads to execute %X times each\n",
num_threads, num_loops);
for (i = 0; i < (num_threads); i++) {
status =
acpi_os_execute(OSL_DEBUGGER_EXEC_THREAD,
acpi_db_method_thread,
&acpi_gbl_db_method_info);
if (ACPI_FAILURE(status)) {
break;
}
}
/* Wait for all threads to complete */
(void)acpi_os_wait_semaphore(main_thread_gate, 1, ACPI_WAIT_FOREVER);
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf("All threads (%X) have completed\n", num_threads);
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
cleanup_and_exit:
/* Cleanup and exit */
(void)acpi_os_delete_semaphore(main_thread_gate);
(void)acpi_os_delete_semaphore(thread_complete_gate);
(void)acpi_os_delete_semaphore(info_gate);
acpi_os_free(acpi_gbl_db_method_info.threads);
acpi_gbl_db_method_info.threads = NULL;
}


@@ -0,0 +1,256 @@
/*******************************************************************************
*
* Module Name: dbfileio - Debugger file I/O commands. These can't usually
* be used when running the debugger in Ring 0 (Kernel mode)
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#include "actables.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbfileio")
#ifdef ACPI_DEBUGGER
/*******************************************************************************
*
* FUNCTION: acpi_db_close_debug_file
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: If open, close the current debug output file
*
******************************************************************************/
void acpi_db_close_debug_file(void)
{
#ifdef ACPI_APPLICATION
if (acpi_gbl_debug_file) {
fclose(acpi_gbl_debug_file);
acpi_gbl_debug_file = NULL;
acpi_gbl_db_output_to_file = FALSE;
acpi_os_printf("Debug output file %s closed\n",
acpi_gbl_db_debug_filename);
}
#endif
}
/*******************************************************************************
*
* FUNCTION: acpi_db_open_debug_file
*
* PARAMETERS: name - Filename to open
*
* RETURN: None
*
* DESCRIPTION: Open a file where debug output will be directed.
*
******************************************************************************/
void acpi_db_open_debug_file(char *name)
{
#ifdef ACPI_APPLICATION
acpi_db_close_debug_file();
acpi_gbl_debug_file = fopen(name, "w+");
if (!acpi_gbl_debug_file) {
acpi_os_printf("Could not open debug file %s\n", name);
return;
}
acpi_os_printf("Debug output file %s opened\n", name);
strncpy(acpi_gbl_db_debug_filename, name,
sizeof(acpi_gbl_db_debug_filename));
acpi_gbl_db_output_to_file = TRUE;
#endif
}
#endif
#ifdef ACPI_APPLICATION
#include "acapps.h"
/*******************************************************************************
*
* FUNCTION: ae_local_load_table
*
* PARAMETERS: table - pointer to a buffer containing the entire
* table to be loaded
*
* RETURN: Status
*
* DESCRIPTION: This function is called to load a table from the caller's
* buffer. The buffer must contain an entire ACPI Table including
* a valid header. The header fields will be verified, and if it
* is determined that the table is invalid, the call will fail.
*
******************************************************************************/
static acpi_status ae_local_load_table(struct acpi_table_header *table)
{
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ae_local_load_table);
#if 0
/* struct acpi_table_desc table_info; */
if (!table) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
table_info.pointer = table;
status = acpi_tb_recognize_table(&table_info, ACPI_TABLE_ALL);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Install the new table into the local data structures */
status = acpi_tb_init_table_descriptor(&table_info);
if (ACPI_FAILURE(status)) {
if (status == AE_ALREADY_EXISTS) {
/* Table already exists, no error */
status = AE_OK;
}
/* Free table allocated by acpi_tb_get_table */
acpi_tb_delete_single_table(&table_info);
return_ACPI_STATUS(status);
}
#if (!defined (ACPI_NO_METHOD_EXECUTION) && !defined (ACPI_CONSTANT_EVAL_ONLY))
status =
acpi_ns_load_table(table_info.installed_desc, acpi_gbl_root_node);
if (ACPI_FAILURE(status)) {
/* Uninstall table and free the buffer */
acpi_tb_delete_tables_by_type(ACPI_TABLE_ID_DSDT);
return_ACPI_STATUS(status);
}
#endif
#endif
return_ACPI_STATUS(status);
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_db_get_table_from_file
*
* PARAMETERS: filename - File where table is located
* return_table - Where a pointer to the table is returned
* must_be_aml_file - Whether the file must contain an AML
* (DSDT/SSDT) table
*
* RETURN: Status
*
* DESCRIPTION: Load an ACPI table from a file
*
******************************************************************************/
acpi_status
acpi_db_get_table_from_file(char *filename,
struct acpi_table_header **return_table,
u8 must_be_aml_file)
{
#ifdef ACPI_APPLICATION
acpi_status status;
struct acpi_table_header *table;
u8 is_aml_table = TRUE;
status = acpi_ut_read_table_from_file(filename, &table);
if (ACPI_FAILURE(status)) {
return (status);
}
if (must_be_aml_file) {
is_aml_table = acpi_ut_is_aml_table(table);
if (!is_aml_table) {
ACPI_EXCEPTION((AE_INFO, AE_OK,
"Input for -e is not an AML table: "
"\"%4.4s\" (must be DSDT/SSDT)",
table->signature));
return (AE_TYPE);
}
}
if (is_aml_table) {
/* Attempt to recognize and install the table */
status = ae_local_load_table(table);
if (ACPI_FAILURE(status)) {
if (status == AE_ALREADY_EXISTS) {
acpi_os_printf
("Table %4.4s is already installed\n",
table->signature);
} else {
acpi_os_printf("Could not install table, %s\n",
acpi_format_exception(status));
}
return (status);
}
acpi_tb_print_table_header(0, table);
fprintf(stderr,
"Acpi table [%4.4s] successfully installed and loaded\n",
table->signature);
}
acpi_gbl_acpi_hardware_present = FALSE;
if (return_table) {
*return_table = table;
}
#endif /* ACPI_APPLICATION */
return (AE_OK);
}


@@ -0,0 +1,239 @@
/******************************************************************************
*
* Module Name: dbhistry - debugger HISTORY command
*
*****************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbhistry")
#define HI_NO_HISTORY 0
#define HI_RECORD_HISTORY 1
#define HISTORY_SIZE 40
typedef struct history_info {
char *command;
u32 cmd_num;
} HISTORY_INFO;
static HISTORY_INFO acpi_gbl_history_buffer[HISTORY_SIZE];
static u16 acpi_gbl_lo_history = 0;
static u16 acpi_gbl_num_history = 0;
static u16 acpi_gbl_next_history_index = 0;
u32 acpi_gbl_next_cmd_num = 1;
/*******************************************************************************
*
* FUNCTION: acpi_db_add_to_history
*
* PARAMETERS: command_line - Command to add
*
* RETURN: None
*
* DESCRIPTION: Add a command line to the history buffer.
*
******************************************************************************/
void acpi_db_add_to_history(char *command_line)
{
u16 cmd_len;
u16 buffer_len;
/* Put command into the next available slot */
cmd_len = (u16)strlen(command_line);
if (!cmd_len) {
return;
}
if (acpi_gbl_history_buffer[acpi_gbl_next_history_index].command !=
NULL) {
buffer_len =
(u16)
strlen(acpi_gbl_history_buffer[acpi_gbl_next_history_index].
command);
if (cmd_len > buffer_len) {
acpi_os_free(acpi_gbl_history_buffer
[acpi_gbl_next_history_index].command);
acpi_gbl_history_buffer[acpi_gbl_next_history_index].
command = acpi_os_allocate(cmd_len + 1);
}
} else {
acpi_gbl_history_buffer[acpi_gbl_next_history_index].command =
acpi_os_allocate(cmd_len + 1);
}
strcpy(acpi_gbl_history_buffer[acpi_gbl_next_history_index].command,
command_line);
acpi_gbl_history_buffer[acpi_gbl_next_history_index].cmd_num =
acpi_gbl_next_cmd_num;
/* Adjust indexes */
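/* When the buffer is full and the next slot would overwrite the oldest entry, advance the start of the circular list */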
if ((acpi_gbl_num_history == HISTORY_SIZE) &&
(acpi_gbl_next_history_index == acpi_gbl_lo_history)) {
acpi_gbl_lo_history++;
if (acpi_gbl_lo_history >= HISTORY_SIZE) {
acpi_gbl_lo_history = 0;
}
}
acpi_gbl_next_history_index++;
if (acpi_gbl_next_history_index >= HISTORY_SIZE) {
acpi_gbl_next_history_index = 0;
}
acpi_gbl_next_cmd_num++;
if (acpi_gbl_num_history < HISTORY_SIZE) {
acpi_gbl_num_history++;
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_display_history
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Display the contents of the history buffer
*
******************************************************************************/
void acpi_db_display_history(void)
{
u32 i;
u16 history_index;
history_index = acpi_gbl_lo_history;
/* Dump entire history buffer */
for (i = 0; i < acpi_gbl_num_history; i++) {
if (acpi_gbl_history_buffer[history_index].command) {
acpi_os_printf("%3ld %s\n",
acpi_gbl_history_buffer[history_index].
cmd_num,
acpi_gbl_history_buffer[history_index].
command);
}
history_index++;
if (history_index >= HISTORY_SIZE) {
history_index = 0;
}
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_get_from_history
*
* PARAMETERS: command_num_arg - String containing the number of the
* command to be retrieved
*
* RETURN: Pointer to the retrieved command. Null on error.
*
* DESCRIPTION: Get a command from the history buffer
*
******************************************************************************/
char *acpi_db_get_from_history(char *command_num_arg)
{
u32 cmd_num;
if (command_num_arg == NULL) {
cmd_num = acpi_gbl_next_cmd_num - 1;
}
else {
cmd_num = strtoul(command_num_arg, NULL, 0);
}
return (acpi_db_get_history_by_index(cmd_num));
}
/*******************************************************************************
*
* FUNCTION: acpi_db_get_history_by_index
*
* PARAMETERS: cmd_num - Index of the desired history entry.
* Values are 0...(acpi_gbl_next_cmd_num - 1)
*
* RETURN: Pointer to the retrieved command. Null on error.
*
* DESCRIPTION: Get a command from the history buffer
*
******************************************************************************/
char *acpi_db_get_history_by_index(u32 cmd_num)
{
u32 i;
u16 history_index;
/* Search history buffer */
history_index = acpi_gbl_lo_history;
for (i = 0; i < acpi_gbl_num_history; i++) {
if (acpi_gbl_history_buffer[history_index].cmd_num == cmd_num) {
/* Found the command, return it */
return (acpi_gbl_history_buffer[history_index].command);
}
/* History buffer is circular */
history_index++;
if (history_index >= HISTORY_SIZE) {
history_index = 0;
}
}
acpi_os_printf("Invalid history number: %u\n", history_index);
return (NULL);
}

File diff suppressed because it is too large.


@@ -0,0 +1,369 @@
/*******************************************************************************
*
* Module Name: dbmethod - Debug commands for control methods
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdispat.h"
#include "acnamesp.h"
#include "acdebug.h"
#include "acparser.h"
#include "acpredef.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbmethod")
/*******************************************************************************
*
* FUNCTION: acpi_db_set_method_breakpoint
*
* PARAMETERS: location - AML offset of breakpoint
* walk_state - Current walk info
* op - Current Op (from parse walk)
*
* RETURN: None
*
* DESCRIPTION: Set a breakpoint in a control method at the specified
* AML offset
*
******************************************************************************/
void
acpi_db_set_method_breakpoint(char *location,
struct acpi_walk_state *walk_state,
union acpi_parse_object *op)
{
u32 address;
u32 aml_offset;
if (!op) {
acpi_os_printf("There is no method currently executing\n");
return;
}
/* Get and verify the breakpoint address */
address = strtoul(location, NULL, 16);
aml_offset = (u32)ACPI_PTR_DIFF(op->common.aml,
walk_state->parser_state.aml_start);
if (address <= aml_offset) {
acpi_os_printf("Breakpoint %X is beyond current address %X\n",
address, aml_offset);
}
/* Save breakpoint in current walk */
walk_state->user_breakpoint = address;
acpi_os_printf("Breakpoint set at AML offset %X\n", address);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_set_method_call_breakpoint
*
* PARAMETERS: op - Current Op (from parse walk)
*
* RETURN: None
*
* DESCRIPTION: Set a breakpoint at the next control method invocation
* (run until the next method call)
*
******************************************************************************/
void acpi_db_set_method_call_breakpoint(union acpi_parse_object *op)
{
if (!op) {
acpi_os_printf("There is no method currently executing\n");
return;
}
acpi_gbl_step_to_next_call = TRUE;
}
/*******************************************************************************
*
* FUNCTION: acpi_db_set_method_data
*
* PARAMETERS: type_arg - L for local, A for argument, N for namespace node
* index_arg - Which local/argument (or node pathname for N)
* value_arg - Value to set (hex integer)
*
* RETURN: None
*
* DESCRIPTION: Set a local or argument for the running control method.
* NOTE: only object supported is Number.
*
******************************************************************************/
void acpi_db_set_method_data(char *type_arg, char *index_arg, char *value_arg)
{
char type;
u32 index;
u32 value;
struct acpi_walk_state *walk_state;
union acpi_operand_object *obj_desc;
acpi_status status;
struct acpi_namespace_node *node;
/* Validate type_arg */
acpi_ut_strupr(type_arg);
type = type_arg[0];
if ((type != 'L') && (type != 'A') && (type != 'N')) {
acpi_os_printf("Invalid SET operand: %s\n", type_arg);
return;
}
value = strtoul(value_arg, NULL, 16);
if (type == 'N') {
node = acpi_db_convert_to_node(index_arg);
if (!node) {
return;
}
if (node->type != ACPI_TYPE_INTEGER) {
acpi_os_printf("Can only set Integer nodes\n");
return;
}
obj_desc = node->object;
obj_desc->integer.value = value;
return;
}
/* Get the index and value */
index = strtoul(index_arg, NULL, 16);
walk_state = acpi_ds_get_current_walk_state(acpi_gbl_current_walk_list);
if (!walk_state) {
acpi_os_printf("There is no method currently executing\n");
return;
}
/* Create and initialize the new object */
obj_desc = acpi_ut_create_integer_object((u64)value);
if (!obj_desc) {
acpi_os_printf("Could not create an internal object\n");
return;
}
/* Store the new object into the target */
switch (type) {
case 'A':
/* Set a method argument */
if (index > ACPI_METHOD_MAX_ARG) {
acpi_os_printf("Arg%u - Invalid argument name\n",
index);
goto cleanup;
}
status = acpi_ds_store_object_to_local(ACPI_REFCLASS_ARG,
index, obj_desc,
walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
obj_desc = walk_state->arguments[index].object;
acpi_os_printf("Arg%u: ", index);
acpi_db_display_internal_object(obj_desc, walk_state);
break;
case 'L':
/* Set a method local */
if (index > ACPI_METHOD_MAX_LOCAL) {
acpi_os_printf
("Local%u - Invalid local variable name\n", index);
goto cleanup;
}
status = acpi_ds_store_object_to_local(ACPI_REFCLASS_LOCAL,
index, obj_desc,
walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
obj_desc = walk_state->local_variables[index].object;
acpi_os_printf("Local%u: ", index);
acpi_db_display_internal_object(obj_desc, walk_state);
break;
default:
break;
}
cleanup:
acpi_ut_remove_reference(obj_desc);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_disassemble_aml
*
* PARAMETERS: statements - Number of statements to disassemble
* op - Current Op (from parse walk)
*
* RETURN: None
*
* DESCRIPTION: Display disassembled AML (ASL) starting from Op for the number
* of statements specified.
*
******************************************************************************/
void acpi_db_disassemble_aml(char *statements, union acpi_parse_object *op)
{
u32 num_statements = 8;
if (!op) {
acpi_os_printf("There is no method currently executing\n");
return;
}
if (statements) {
num_statements = strtoul(statements, NULL, 0);
}
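/* The AML disassembler is an optional ACPICA component; without ACPI_DISASSEMBLER this command produces no output */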
#ifdef ACPI_DISASSEMBLER
acpi_dm_disassemble(NULL, op, num_statements);
#endif
}
/*******************************************************************************
*
* FUNCTION: acpi_db_disassemble_method
*
* PARAMETERS: name - Name of control method
*
* RETURN: None
*
* DESCRIPTION: Disassemble an entire control method, given its namespace
* pathname.
*
******************************************************************************/
acpi_status acpi_db_disassemble_method(char *name)
{
acpi_status status;
union acpi_parse_object *op;
struct acpi_walk_state *walk_state;
union acpi_operand_object *obj_desc;
struct acpi_namespace_node *method;
method = acpi_db_convert_to_node(name);
if (!method) {
return (AE_BAD_PARAMETER);
}
if (method->type != ACPI_TYPE_METHOD) {
ACPI_ERROR((AE_INFO, "%s (%s): Object must be a control method",
name, acpi_ut_get_type_name(method->type)));
return (AE_BAD_PARAMETER);
}
obj_desc = method->object;
op = acpi_ps_create_scope_op(obj_desc->method.aml_start);
if (!op) {
return (AE_NO_MEMORY);
}
/* Create and initialize a new walk state */
walk_state = acpi_ds_create_walk_state(0, op, NULL, NULL);
if (!walk_state) {
return (AE_NO_MEMORY);
}
status = acpi_ds_init_aml_walk(walk_state, op, NULL,
obj_desc->method.aml_start,
obj_desc->method.aml_length, NULL,
ACPI_IMODE_LOAD_PASS1);
if (ACPI_FAILURE(status)) {
return (status);
}
status = acpi_ut_allocate_owner_id(&obj_desc->method.owner_id);
if (ACPI_FAILURE(status)) {
return (status);
}
walk_state->owner_id = obj_desc->method.owner_id;
/* Push start scope on scope stack and make it current */
status = acpi_ds_scope_stack_push(method, method->type, walk_state);
if (ACPI_FAILURE(status)) {
return (status);
}
/* Parse the entire method AML including deferred operators */
walk_state->parse_flags &= ~ACPI_PARSE_DELETE_TREE;
walk_state->parse_flags |= ACPI_PARSE_DISASSEMBLE;
status = acpi_ps_parse_aml(walk_state);
#ifdef ACPI_DISASSEMBLER
(void)acpi_dm_parse_deferred_ops(op);
/* Now we can disassemble the method */
acpi_gbl_dm_opt_verbose = FALSE;
acpi_dm_disassemble(NULL, op, 0);
acpi_gbl_dm_opt_verbose = TRUE;
#endif
acpi_ps_delete_parse_tree(op);
/* Method cleanup */
acpi_ns_delete_namespace_subtree(method);
acpi_ns_delete_namespace_by_owner(obj_desc->method.owner_id);
acpi_ut_release_owner_id(&obj_desc->method.owner_id);
return (AE_OK);
}


@@ -0,0 +1,947 @@
/*******************************************************************************
*
* Module Name: dbnames - Debugger commands for the acpi namespace
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
#include "acdebug.h"
#include "acpredef.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbnames")
/* Local prototypes */
static acpi_status
acpi_db_walk_and_match_name(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
static acpi_status
acpi_db_walk_for_predefined_names(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
static acpi_status
acpi_db_walk_for_specific_objects(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
static acpi_status
acpi_db_walk_for_object_counts(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
static acpi_status
acpi_db_integrity_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value);
static acpi_status
acpi_db_walk_for_references(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
static acpi_status
acpi_db_bus_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value);
/*
* Arguments for the Objects command
* These object types map directly to the ACPI_TYPES
*/
static struct acpi_db_argument_info acpi_db_object_types[] = {
{"ANY"},
{"INTEGERS"},
{"STRINGS"},
{"BUFFERS"},
{"PACKAGES"},
{"FIELDS"},
{"DEVICES"},
{"EVENTS"},
{"METHODS"},
{"MUTEXES"},
{"REGIONS"},
{"POWERRESOURCES"},
{"PROCESSORS"},
{"THERMALZONES"},
{"BUFFERFIELDS"},
{"DDBHANDLES"},
{"DEBUG"},
{"REGIONFIELDS"},
{"BANKFIELDS"},
{"INDEXFIELDS"},
{"REFERENCES"},
{"ALIASES"},
{"METHODALIASES"},
{"NOTIFY"},
{"ADDRESSHANDLER"},
{"RESOURCE"},
{"RESOURCEFIELD"},
{"SCOPES"},
{NULL} /* Must be null terminated */
};
/*******************************************************************************
*
* FUNCTION: acpi_db_set_scope
*
* PARAMETERS: name - New scope path
*
* RETURN: Status
*
* DESCRIPTION: Set the "current scope" as maintained by this utility.
* The scope is used as a prefix to ACPI paths.
*
******************************************************************************/
void acpi_db_set_scope(char *name)
{
acpi_status status;
struct acpi_namespace_node *node;
if (!name || name[0] == 0) {
acpi_os_printf("Current scope: %s\n", acpi_gbl_db_scope_buf);
return;
}
acpi_db_prep_namestring(name);
if (ACPI_IS_ROOT_PREFIX(name[0])) {
/* Validate new scope from the root */
status = acpi_ns_get_node(acpi_gbl_root_node, name,
ACPI_NS_NO_UPSEARCH, &node);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
acpi_gbl_db_scope_buf[0] = 0;
} else {
/* Validate new scope relative to old scope */
status = acpi_ns_get_node(acpi_gbl_db_scope_node, name,
ACPI_NS_NO_UPSEARCH, &node);
if (ACPI_FAILURE(status)) {
goto error_exit;
}
}
/* Build the final pathname */
if (acpi_ut_safe_strcat
(acpi_gbl_db_scope_buf, sizeof(acpi_gbl_db_scope_buf), name)) {
status = AE_BUFFER_OVERFLOW;
goto error_exit;
}
if (acpi_ut_safe_strcat
(acpi_gbl_db_scope_buf, sizeof(acpi_gbl_db_scope_buf), "\\")) {
status = AE_BUFFER_OVERFLOW;
goto error_exit;
}
acpi_gbl_db_scope_node = node;
acpi_os_printf("New scope: %s\n", acpi_gbl_db_scope_buf);
return;
error_exit:
acpi_os_printf("Could not attach scope: %s, %s\n",
name, acpi_format_exception(status));
}
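
The scope update above only commits the new path if both concatenations fit: a nonzero return from acpi_ut_safe_strcat() signals that the string would overflow the scope buffer. A minimal standalone sketch of that bounded-append pattern (the helper name safe_strcat and the buffer size are illustrative, not ACPICA API):

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Append source to dest only if it fits; return true on overflow */
static bool safe_strcat(char *dest, size_t dest_size, const char *source)
{
        if (strlen(dest) + strlen(source) >= dest_size) {
                return true;    /* Would overflow, leave dest untouched */
        }
        strcat(dest, source);
        return false;
}

int main(void)
{
        char scope_buf[16] = "\\";      /* Root prefix, like the scope buffer */

        if (safe_strcat(scope_buf, sizeof(scope_buf), "_SB_.PCI0") ||
            safe_strcat(scope_buf, sizeof(scope_buf), "\\")) {
                printf("Buffer overflow while building scope\n");
                return 1;
        }
        printf("New scope: %s\n", scope_buf);   /* Prints \_SB_.PCI0\ */
        return 0;
}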
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_namespace
*
* PARAMETERS: start_arg - Node to begin namespace dump
* depth_arg - Maximum tree depth to be dumped
*
* RETURN: None
*
* DESCRIPTION: Dump entire namespace or a subtree. Each node is displayed
* with type and other information.
*
******************************************************************************/
void acpi_db_dump_namespace(char *start_arg, char *depth_arg)
{
acpi_handle subtree_entry = acpi_gbl_root_node;
u32 max_depth = ACPI_UINT32_MAX;
/* No argument given, just start at the root and dump entire namespace */
if (start_arg) {
subtree_entry = acpi_db_convert_to_node(start_arg);
if (!subtree_entry) {
return;
}
/* Now we can check for the depth argument */
if (depth_arg) {
max_depth = strtoul(depth_arg, NULL, 0);
}
}
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf("ACPI Namespace (from %4.4s (%p) subtree):\n",
((struct acpi_namespace_node *)subtree_entry)->name.
ascii, subtree_entry);
/* Display the subtree */
acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
acpi_ns_dump_objects(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY, max_depth,
ACPI_OWNER_ID_MAX, subtree_entry);
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_namespace_paths
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Dump entire namespace with full object pathnames and object
* type information. Alternative to "namespace" command.
*
******************************************************************************/
void acpi_db_dump_namespace_paths(void)
{
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf("ACPI Namespace (from root):\n");
/* Display the entire namespace */
acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
acpi_ns_dump_object_paths(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY,
ACPI_UINT32_MAX, ACPI_OWNER_ID_MAX,
acpi_gbl_root_node);
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_namespace_by_owner
*
* PARAMETERS: owner_arg - Owner ID whose nodes will be displayed
* depth_arg - Maximum tree depth to be dumped
*
* RETURN: None
*
* DESCRIPTION: Dump elements of the namespace that are owned by the owner_id.
*
******************************************************************************/
void acpi_db_dump_namespace_by_owner(char *owner_arg, char *depth_arg)
{
acpi_handle subtree_entry = acpi_gbl_root_node;
u32 max_depth = ACPI_UINT32_MAX;
acpi_owner_id owner_id;
owner_id = (acpi_owner_id) strtoul(owner_arg, NULL, 0);
/* Now we can check for the depth argument */
if (depth_arg) {
max_depth = strtoul(depth_arg, NULL, 0);
}
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf("ACPI Namespace by owner %X:\n", owner_id);
/* Display the subtree */
acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
acpi_ns_dump_objects(ACPI_TYPE_ANY, ACPI_DISPLAY_SUMMARY, max_depth,
owner_id, subtree_entry);
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_walk_and_match_name
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Find a particular name/names within the namespace. Wildcards
* are supported -- '?' matches any character.
*
******************************************************************************/
static acpi_status
acpi_db_walk_and_match_name(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
acpi_status status;
char *requested_name = (char *)context;
u32 i;
struct acpi_buffer buffer;
struct acpi_walk_info info;
/* Check for a name match */
for (i = 0; i < 4; i++) {
/* Wildcard support */
if ((requested_name[i] != '?') &&
(requested_name[i] != ((struct acpi_namespace_node *)
obj_handle)->name.ascii[i])) {
/* No match, just exit */
return (AE_OK);
}
}
/* Get the full pathname to this object */
buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could Not get pathname for object %p\n",
obj_handle);
} else {
info.owner_id = ACPI_OWNER_ID_MAX;
info.debug_level = ACPI_UINT32_MAX;
info.display_type = ACPI_DISPLAY_SUMMARY | ACPI_DISPLAY_SHORT;
acpi_os_printf("%32s", (char *)buffer.pointer);
(void)acpi_ns_dump_one_object(obj_handle, nesting_level, &info,
NULL);
ACPI_FREE(buffer.pointer);
}
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_find_name_in_namespace
*
* PARAMETERS: name_arg - The 4-character ACPI name to find.
* wildcards are supported.
*
* RETURN: None
*
* DESCRIPTION: Search the namespace for a given name (with wildcards)
*
******************************************************************************/
acpi_status acpi_db_find_name_in_namespace(char *name_arg)
{
char acpi_name[5] = "____";
char *acpi_name_ptr = acpi_name;
if (strlen(name_arg) > ACPI_NAME_SIZE) {
acpi_os_printf("Name must be no longer than 4 characters\n");
return (AE_OK);
}
/* Pad out name with underscores as necessary to create a 4-char name */
acpi_ut_strupr(name_arg);
while (*name_arg) {
*acpi_name_ptr = *name_arg;
acpi_name_ptr++;
name_arg++;
}
/* Walk the namespace from the root */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, acpi_db_walk_and_match_name,
NULL, acpi_name, NULL);
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
return (AE_OK);
}
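
Together, the two routines above pad the requested name to four characters with trailing underscores and then compare it position by position, with '?' acting as a match-anything wildcard. A small self-contained sketch of the same padding and matching logic (the helper names are illustrative, not ACPICA API):

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Pad a short ACPI name to 4 chars with trailing underscores, uppercased */
static void pad_acpi_name(const char *input, char padded[5])
{
        unsigned int i;

        memcpy(padded, "____", 5);
        for (i = 0; i < 4 && input[i]; i++) {
                padded[i] = (char)toupper((unsigned char)input[i]);
        }
}

/* Compare two 4-char names; '?' in the pattern matches any character */
static int name_matches(const char pattern[4], const char name[4])
{
        unsigned int i;

        for (i = 0; i < 4; i++) {
                if (pattern[i] != '?' && pattern[i] != name[i]) {
                        return 0;
                }
        }
        return 1;
}

int main(void)
{
        char padded[5];

        pad_acpi_name("_hi", padded);           /* Becomes "_HI_" */
        printf("%s vs _HID -> %d\n", padded, name_matches(padded, "_HID"));
        pad_acpi_name("_?RT", padded);          /* Wildcard in position 2 */
        printf("%s vs _PRT -> %d\n", padded, name_matches(padded, "_PRT"));
        return 0;
}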
/*******************************************************************************
*
* FUNCTION: acpi_db_walk_for_predefined_names
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Detect and display predefined ACPI names (names that start with
* an underscore)
*
******************************************************************************/
static acpi_status
acpi_db_walk_for_predefined_names(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
u32 *count = (u32 *)context;
const union acpi_predefined_info *predefined;
const union acpi_predefined_info *package = NULL;
char *pathname;
char string_buffer[48];
predefined = acpi_ut_match_predefined_method(node->name.ascii);
if (!predefined) {
return (AE_OK);
}
pathname = acpi_ns_get_external_pathname(node);
if (!pathname) {
return (AE_OK);
}
/* If method returns a package, the info is in the next table entry */
if (predefined->info.expected_btypes & ACPI_RTYPE_PACKAGE) {
package = predefined + 1;
}
acpi_ut_get_expected_return_types(string_buffer,
predefined->info.expected_btypes);
acpi_os_printf("%-32s Arguments %X, Return Types: %s", pathname,
METHOD_GET_ARG_COUNT(predefined->info.argument_list),
string_buffer);
if (package) {
acpi_os_printf(" (PkgType %2.2X, ObjType %2.2X, Count %2.2X)",
package->ret_info.type,
package->ret_info.object_type1,
package->ret_info.count1);
}
acpi_os_printf("\n");
/* Check that the declared argument count matches the ACPI spec */
acpi_ns_check_acpi_compliance(pathname, node, predefined);
ACPI_FREE(pathname);
(*count)++;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_check_predefined_names
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Validate all predefined names in the namespace
*
******************************************************************************/
void acpi_db_check_predefined_names(void)
{
u32 count = 0;
/* Search all nodes in namespace */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX,
acpi_db_walk_for_predefined_names, NULL,
(void *)&count, NULL);
acpi_os_printf("Found %u predefined names in the namespace\n", count);
}
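
All of the namespace scans in this file follow the same shape: acpi_walk_namespace() visits every node and hands an opaque context pointer to a callback, which here carries the running count. A toy standalone sketch of that callback-plus-context pattern (the node layout and helpers below are illustrative, not the ACPICA namespace structures):

#include <stdio.h>

struct node {
        const char *name;
        struct node *child;
        struct node *peer;
};

typedef void (*walk_callback)(struct node *node, void *context);

/* Depth-first walk, invoking the callback on every node */
static void walk_tree(struct node *node, walk_callback callback, void *context)
{
        while (node) {
                callback(node, context);
                walk_tree(node->child, callback, context);
                node = node->peer;
        }
}

/* Callback: count nodes whose name starts with an underscore */
static void count_predefined(struct node *node, void *context)
{
        unsigned int *count = (unsigned int *)context;

        if (node->name[0] == '_') {
                (*count)++;
        }
}

int main(void)
{
        struct node prt = { "_PRT", NULL, NULL };
        struct node hid = { "_HID", NULL, &prt };
        struct node pci0 = { "PCI0", &hid, NULL };
        struct node sb = { "_SB_", &pci0, NULL };
        unsigned int count = 0;

        walk_tree(&sb, count_predefined, &count);
        printf("Found %u predefined names\n", count);   /* Prints 3 */
        return 0;
}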
/*******************************************************************************
*
* FUNCTION: acpi_db_walk_for_object_counts
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Display short info about objects in the namespace
*
******************************************************************************/
static acpi_status
acpi_db_walk_for_object_counts(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
struct acpi_object_info *info = (struct acpi_object_info *)context;
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
if (node->type > ACPI_TYPE_NS_NODE_MAX) {
acpi_os_printf("[%4.4s]: Unknown object type %X\n",
node->name.ascii, node->type);
} else {
info->types[node->type]++;
}
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_walk_for_specific_objects
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Display short info about objects in the namespace
*
******************************************************************************/
static acpi_status
acpi_db_walk_for_specific_objects(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
struct acpi_walk_info *info = (struct acpi_walk_info *)context;
struct acpi_buffer buffer;
acpi_status status;
info->count++;
/* Get and display the full pathname to this object */
buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could Not get pathname for object %p\n",
obj_handle);
return (AE_OK);
}
acpi_os_printf("%32s", (char *)buffer.pointer);
ACPI_FREE(buffer.pointer);
/* Dump short info about the object */
(void)acpi_ns_dump_one_object(obj_handle, nesting_level, info, NULL);
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_display_objects
*
* PARAMETERS: obj_type_arg - Type of object to display
* display_count_arg - Max depth to display
*
* RETURN: None
*
* DESCRIPTION: Display objects in the namespace of the requested type
*
******************************************************************************/
acpi_status acpi_db_display_objects(char *obj_type_arg, char *display_count_arg)
{
struct acpi_walk_info info;
acpi_object_type type;
struct acpi_object_info *object_info;
u32 i;
u32 total_objects = 0;
/* No argument means display summary/count of all object types */
if (!obj_type_arg) {
object_info =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_object_info));
/* Walk the namespace from the root */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX,
acpi_db_walk_for_object_counts, NULL,
(void *)object_info, NULL);
acpi_os_printf("\nSummary of namespace objects:\n\n");
for (i = 0; i < ACPI_TOTAL_TYPES; i++) {
acpi_os_printf("%8u %s\n", object_info->types[i],
acpi_ut_get_type_name(i));
total_objects += object_info->types[i];
}
acpi_os_printf("\n%8u Total namespace objects\n\n",
total_objects);
ACPI_FREE(object_info);
return (AE_OK);
}
/* Get the object type */
type = acpi_db_match_argument(obj_type_arg, acpi_db_object_types);
if (type == ACPI_TYPE_NOT_FOUND) {
acpi_os_printf("Invalid or unsupported argument\n");
return (AE_OK);
}
acpi_db_set_output_destination(ACPI_DB_DUPLICATE_OUTPUT);
acpi_os_printf
("Objects of type [%s] defined in the current ACPI Namespace:\n",
acpi_ut_get_type_name(type));
acpi_db_set_output_destination(ACPI_DB_REDIRECTABLE_OUTPUT);
info.count = 0;
info.owner_id = ACPI_OWNER_ID_MAX;
info.debug_level = ACPI_UINT32_MAX;
info.display_type = ACPI_DISPLAY_SUMMARY | ACPI_DISPLAY_SHORT;
/* Walk the namespace from the root */
(void)acpi_walk_namespace(type, ACPI_ROOT_OBJECT, ACPI_UINT32_MAX,
acpi_db_walk_for_specific_objects, NULL,
(void *)&info, NULL);
acpi_os_printf
("\nFound %u objects of type [%s] in the current ACPI Namespace\n",
info.count, acpi_ut_get_type_name(type));
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_integrity_walk
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Examine one NS node for valid values.
*
******************************************************************************/
static acpi_status
acpi_db_integrity_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value)
{
struct acpi_integrity_info *info =
(struct acpi_integrity_info *)context;
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
union acpi_operand_object *object;
u8 alias = TRUE;
info->nodes++;
/* Verify the NS node, and dereference aliases */
while (alias) {
if (ACPI_GET_DESCRIPTOR_TYPE(node) != ACPI_DESC_TYPE_NAMED) {
acpi_os_printf
("Invalid Descriptor Type for Node %p [%s] - "
"is %2.2X should be %2.2X\n", node,
acpi_ut_get_descriptor_name(node),
ACPI_GET_DESCRIPTOR_TYPE(node),
ACPI_DESC_TYPE_NAMED);
return (AE_OK);
}
if ((node->type == ACPI_TYPE_LOCAL_ALIAS) ||
(node->type == ACPI_TYPE_LOCAL_METHOD_ALIAS)) {
node = (struct acpi_namespace_node *)node->object;
} else {
alias = FALSE;
}
}
if (node->type > ACPI_TYPE_LOCAL_MAX) {
acpi_os_printf("Invalid Object Type for Node %p, Type = %X\n",
node, node->type);
return (AE_OK);
}
if (!acpi_ut_valid_acpi_name(node->name.ascii)) {
acpi_os_printf("Invalid AcpiName for Node %p\n", node);
return (AE_OK);
}
object = acpi_ns_get_attached_object(node);
if (object) {
info->objects++;
if (ACPI_GET_DESCRIPTOR_TYPE(object) != ACPI_DESC_TYPE_OPERAND) {
acpi_os_printf
("Invalid Descriptor Type for Object %p [%s]\n",
object, acpi_ut_get_descriptor_name(object));
}
}
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_check_integrity
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Check entire namespace for data structure integrity
*
******************************************************************************/
void acpi_db_check_integrity(void)
{
struct acpi_integrity_info info = { 0, 0 };
/* Search all nodes in namespace */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, acpi_db_integrity_walk, NULL,
(void *)&info, NULL);
acpi_os_printf("Verified %u namespace nodes with %u Objects\n",
info.nodes, info.objects);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_walk_for_references
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Check if this namespace object refers to the target object
* that is passed in as the context value.
*
* Note: Currently doesn't check subobjects within the Node's object
*
******************************************************************************/
static acpi_status
acpi_db_walk_for_references(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
union acpi_operand_object *obj_desc =
(union acpi_operand_object *)context;
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
/* Check for match against the namespace node itself */
if (node == (void *)obj_desc) {
acpi_os_printf("Object is a Node [%4.4s]\n",
acpi_ut_get_node_name(node));
}
/* Check for match against the object attached to the node */
if (acpi_ns_get_attached_object(node) == obj_desc) {
acpi_os_printf("Reference at Node->Object %p [%4.4s]\n",
node, acpi_ut_get_node_name(node));
}
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_find_references
*
* PARAMETERS: object_arg - String with hex value of the object
*
* RETURN: None
*
* DESCRIPTION: Search namespace for all references to the input object
*
******************************************************************************/
void acpi_db_find_references(char *object_arg)
{
union acpi_operand_object *obj_desc;
acpi_size address;
/* Convert string to object pointer */
address = strtoul(object_arg, NULL, 16);
obj_desc = ACPI_TO_POINTER(address);
/* Search all nodes in namespace */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, acpi_db_walk_for_references,
NULL, (void *)obj_desc, NULL);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_bus_walk
*
* PARAMETERS: Callback from walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Display info about device objects that have a corresponding
* _PRT method.
*
******************************************************************************/
static acpi_status
acpi_db_bus_walk(acpi_handle obj_handle,
u32 nesting_level, void *context, void **return_value)
{
struct acpi_namespace_node *node =
(struct acpi_namespace_node *)obj_handle;
acpi_status status;
struct acpi_buffer buffer;
struct acpi_namespace_node *temp_node;
struct acpi_device_info *info;
u32 i;
if ((node->type != ACPI_TYPE_DEVICE) &&
(node->type != ACPI_TYPE_PROCESSOR)) {
return (AE_OK);
}
/* Exit if there is no _PRT under this device */
status = acpi_get_handle(node, METHOD_NAME__PRT,
ACPI_CAST_PTR(acpi_handle, &temp_node));
if (ACPI_FAILURE(status)) {
return (AE_OK);
}
/* Get the full path to this device object */
buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could Not get pathname for object %p\n",
obj_handle);
return (AE_OK);
}
status = acpi_get_object_info(obj_handle, &info);
if (ACPI_FAILURE(status)) {
return (AE_OK);
}
/* Display the full path */
acpi_os_printf("%-32s Type %X", (char *)buffer.pointer, node->type);
ACPI_FREE(buffer.pointer);
if (info->flags & ACPI_PCI_ROOT_BRIDGE) {
acpi_os_printf(" - Is PCI Root Bridge");
}
acpi_os_printf("\n");
/* _PRT info */
acpi_os_printf("_PRT: %p\n", temp_node);
/* Dump _ADR, _HID, _UID, _CID */
if (info->valid & ACPI_VALID_ADR) {
acpi_os_printf("_ADR: %8.8X%8.8X\n",
ACPI_FORMAT_UINT64(info->address));
} else {
acpi_os_printf("_ADR: <Not Present>\n");
}
if (info->valid & ACPI_VALID_HID) {
acpi_os_printf("_HID: %s\n", info->hardware_id.string);
} else {
acpi_os_printf("_HID: <Not Present>\n");
}
if (info->valid & ACPI_VALID_UID) {
acpi_os_printf("_UID: %s\n", info->unique_id.string);
} else {
acpi_os_printf("_UID: <Not Present>\n");
}
if (info->valid & ACPI_VALID_CID) {
for (i = 0; i < info->compatible_id_list.count; i++) {
acpi_os_printf("_CID: %s\n",
info->compatible_id_list.ids[i].string);
}
} else {
acpi_os_printf("_CID: <Not Present>\n");
}
ACPI_FREE(info);
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_get_bus_info
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Display info about system busses.
*
******************************************************************************/
void acpi_db_get_bus_info(void)
{
/* Search all nodes in namespace */
(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, acpi_db_bus_walk, NULL, NULL,
NULL);
}

drivers/acpi/acpica/dbobject.c (new file)
@@ -0,0 +1,533 @@
/*******************************************************************************
*
* Module Name: dbobject - ACPI object decode and display
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbobject")
/* Local prototypes */
static void acpi_db_decode_node(struct acpi_namespace_node *node);
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_method_info
*
* PARAMETERS: status - Method execution status
* walk_state - Current state of the parse tree walk
*
* RETURN: None
*
* DESCRIPTION: Called when a method has been aborted because of an error.
* Dumps the method execution stack, and the method locals/args,
* and disassembles the AML opcode that failed.
*
******************************************************************************/
void
acpi_db_dump_method_info(acpi_status status, struct acpi_walk_state *walk_state)
{
struct acpi_thread_state *thread;
/* Ignore control codes, they are not errors */
if ((status & AE_CODE_MASK) == AE_CODE_CONTROL) {
return;
}
/* We may be executing a deferred opcode */
if (walk_state->deferred_node) {
acpi_os_printf("Executing subtree for Buffer/Package/Region\n");
return;
}
/*
* If there is no Thread, we are not actually executing a method.
* This can happen when the iASL compiler calls the interpreter
* to perform constant folding.
*/
thread = walk_state->thread;
if (!thread) {
return;
}
/* Display the method locals and arguments */
acpi_os_printf("\n");
acpi_db_decode_locals(walk_state);
acpi_os_printf("\n");
acpi_db_decode_arguments(walk_state);
acpi_os_printf("\n");
}
/*******************************************************************************
*
* FUNCTION: acpi_db_decode_internal_object
*
* PARAMETERS: obj_desc - Object to be displayed
*
* RETURN: None
*
* DESCRIPTION: Short display of an internal object. Numbers/Strings/Buffers.
*
******************************************************************************/
void acpi_db_decode_internal_object(union acpi_operand_object *obj_desc)
{
u32 i;
if (!obj_desc) {
acpi_os_printf(" Uninitialized");
return;
}
if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) {
acpi_os_printf(" %p [%s]", obj_desc,
acpi_ut_get_descriptor_name(obj_desc));
return;
}
acpi_os_printf(" %s", acpi_ut_get_object_type_name(obj_desc));
switch (obj_desc->common.type) {
case ACPI_TYPE_INTEGER:
acpi_os_printf(" %8.8X%8.8X",
ACPI_FORMAT_UINT64(obj_desc->integer.value));
break;
case ACPI_TYPE_STRING:
acpi_os_printf("(%u) \"%.24s",
obj_desc->string.length,
obj_desc->string.pointer);
if (obj_desc->string.length > 24) {
acpi_os_printf("...");
} else {
acpi_os_printf("\"");
}
break;
case ACPI_TYPE_BUFFER:
acpi_os_printf("(%u)", obj_desc->buffer.length);
for (i = 0; (i < 8) && (i < obj_desc->buffer.length); i++) {
acpi_os_printf(" %2.2X", obj_desc->buffer.pointer[i]);
}
break;
default:
acpi_os_printf(" %p", obj_desc);
break;
}
}
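
For strings and buffers the short display above is deliberately truncated: at most 24 characters of a string (with a trailing "...") and at most the first 8 bytes of a buffer. A tiny standalone sketch of that truncated output (illustrative helpers, not ACPICA API):

#include <stdio.h>

/* Print a length-prefixed string, truncated to 24 characters */
static void print_short_string(const char *s, unsigned int length)
{
        printf("(%u) \"%.24s", length, s);
        printf(length > 24 ? "...\n" : "\"\n");
}

/* Print a buffer length and at most its first 8 bytes */
static void print_short_buffer(const unsigned char *buf, unsigned int length)
{
        unsigned int i;

        printf("(%u)", length);
        for (i = 0; i < 8 && i < length; i++) {
                printf(" %2.2X", buf[i]);
        }
        printf("\n");
}

int main(void)
{
        const unsigned char data[] = { 0xDE, 0xAD, 0xBE, 0xEF, 1, 2, 3, 4, 5, 6 };

        print_short_string("A fairly long ASL string value", 30);
        print_short_buffer(data, sizeof(data));
        return 0;
}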
/*******************************************************************************
*
* FUNCTION: acpi_db_decode_node
*
* PARAMETERS: node - Object to be displayed
*
* RETURN: None
*
* DESCRIPTION: Short display of a namespace node
*
******************************************************************************/
static void acpi_db_decode_node(struct acpi_namespace_node *node)
{
acpi_os_printf("<Node> Name %4.4s",
acpi_ut_get_node_name(node));
if (node->flags & ANOBJ_METHOD_ARG) {
acpi_os_printf(" [Method Arg]");
}
if (node->flags & ANOBJ_METHOD_LOCAL) {
acpi_os_printf(" [Method Local]");
}
switch (node->type) {
/* These types have no attached object */
case ACPI_TYPE_DEVICE:
acpi_os_printf(" Device");
break;
case ACPI_TYPE_THERMAL:
acpi_os_printf(" Thermal Zone");
break;
default:
acpi_db_decode_internal_object(acpi_ns_get_attached_object
(node));
break;
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_display_internal_object
*
* PARAMETERS: obj_desc - Object to be displayed
* walk_state - Current walk state
*
* RETURN: None
*
* DESCRIPTION: Short display of an internal object
*
******************************************************************************/
void
acpi_db_display_internal_object(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
u8 type;
acpi_os_printf("%p ", obj_desc);
if (!obj_desc) {
acpi_os_printf("<Null Object>\n");
return;
}
/* Decode the object type */
switch (ACPI_GET_DESCRIPTOR_TYPE(obj_desc)) {
case ACPI_DESC_TYPE_PARSER:
acpi_os_printf("<Parser> ");
break;
case ACPI_DESC_TYPE_NAMED:
acpi_db_decode_node((struct acpi_namespace_node *)obj_desc);
break;
case ACPI_DESC_TYPE_OPERAND:
type = obj_desc->common.type;
if (type > ACPI_TYPE_LOCAL_MAX) {
acpi_os_printf(" Type %X [Invalid Type]", (u32)type);
return;
}
/* Decode the ACPI object type */
switch (obj_desc->common.type) {
case ACPI_TYPE_LOCAL_REFERENCE:
acpi_os_printf("[%s] ",
acpi_ut_get_reference_name(obj_desc));
/* Decode the reference */
switch (obj_desc->reference.class) {
case ACPI_REFCLASS_LOCAL:
acpi_os_printf("%X ",
obj_desc->reference.value);
if (walk_state) {
obj_desc = walk_state->local_variables
[obj_desc->reference.value].object;
acpi_os_printf("%p", obj_desc);
acpi_db_decode_internal_object
(obj_desc);
}
break;
case ACPI_REFCLASS_ARG:
acpi_os_printf("%X ",
obj_desc->reference.value);
if (walk_state) {
obj_desc = walk_state->arguments
[obj_desc->reference.value].object;
acpi_os_printf("%p", obj_desc);
acpi_db_decode_internal_object
(obj_desc);
}
break;
case ACPI_REFCLASS_INDEX:
switch (obj_desc->reference.target_type) {
case ACPI_TYPE_BUFFER_FIELD:
acpi_os_printf("%p",
obj_desc->reference.
object);
acpi_db_decode_internal_object
(obj_desc->reference.object);
break;
case ACPI_TYPE_PACKAGE:
acpi_os_printf("%p",
obj_desc->reference.
where);
if (!obj_desc->reference.where) {
acpi_os_printf
(" Uninitialized WHERE pointer");
} else {
acpi_db_decode_internal_object(*
(obj_desc->
reference.
where));
}
break;
default:
acpi_os_printf
("Unknown index target type");
break;
}
break;
case ACPI_REFCLASS_REFOF:
if (!obj_desc->reference.object) {
acpi_os_printf
("Uninitialized reference subobject pointer");
break;
}
/* Reference can be to a Node or an Operand object */
switch (ACPI_GET_DESCRIPTOR_TYPE
(obj_desc->reference.object)) {
case ACPI_DESC_TYPE_NAMED:
acpi_db_decode_node(obj_desc->reference.
object);
break;
case ACPI_DESC_TYPE_OPERAND:
acpi_db_decode_internal_object
(obj_desc->reference.object);
break;
default:
break;
}
break;
case ACPI_REFCLASS_NAME:
acpi_db_decode_node(obj_desc->reference.node);
break;
case ACPI_REFCLASS_DEBUG:
case ACPI_REFCLASS_TABLE:
acpi_os_printf("\n");
break;
default: /* Unknown reference class */
acpi_os_printf("%2.2X\n",
obj_desc->reference.class);
break;
}
break;
default:
acpi_os_printf("<Obj> ");
acpi_db_decode_internal_object(obj_desc);
break;
}
break;
default:
acpi_os_printf("<Not a valid ACPI Object Descriptor> [%s]",
acpi_ut_get_descriptor_name(obj_desc));
break;
}
acpi_os_printf("\n");
}
/*******************************************************************************
*
* FUNCTION: acpi_db_decode_locals
*
* PARAMETERS: walk_state - State for current method
*
* RETURN: None
*
* DESCRIPTION: Display all locals for the currently running control method
*
******************************************************************************/
void acpi_db_decode_locals(struct acpi_walk_state *walk_state)
{
u32 i;
union acpi_operand_object *obj_desc;
struct acpi_namespace_node *node;
u8 display_locals = FALSE;
obj_desc = walk_state->method_desc;
node = walk_state->method_node;
if (!node) {
acpi_os_printf
("No method node (Executing subtree for buffer or opregion)\n");
return;
}
if (node->type != ACPI_TYPE_METHOD) {
acpi_os_printf("Executing subtree for Buffer/Package/Region\n");
return;
}
/* Are any locals actually set? */
for (i = 0; i < ACPI_METHOD_NUM_LOCALS; i++) {
obj_desc = walk_state->local_variables[i].object;
if (obj_desc) {
display_locals = TRUE;
break;
}
}
/* If any are set, only display the ones that are set */
if (display_locals) {
acpi_os_printf
("\nInitialized Local Variables for method [%4.4s]:\n",
acpi_ut_get_node_name(node));
for (i = 0; i < ACPI_METHOD_NUM_LOCALS; i++) {
obj_desc = walk_state->local_variables[i].object;
if (obj_desc) {
acpi_os_printf(" Local%X: ", i);
acpi_db_display_internal_object(obj_desc,
walk_state);
}
}
} else {
acpi_os_printf
("No Local Variables are initialized for method [%4.4s]\n",
acpi_ut_get_node_name(node));
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_decode_arguments
*
* PARAMETERS: walk_state - State for current method
*
* RETURN: None
*
* DESCRIPTION: Display all arguments for the currently running control method
*
******************************************************************************/
void acpi_db_decode_arguments(struct acpi_walk_state *walk_state)
{
u32 i;
union acpi_operand_object *obj_desc;
struct acpi_namespace_node *node;
u8 display_args = FALSE;
node = walk_state->method_node;
obj_desc = walk_state->method_desc;
if (!node) {
acpi_os_printf
("No method node (Executing subtree for buffer or opregion)\n");
return;
}
if (node->type != ACPI_TYPE_METHOD) {
acpi_os_printf("Executing subtree for Buffer/Package/Region\n");
return;
}
/* Are any arguments actually set? */
for (i = 0; i < ACPI_METHOD_NUM_ARGS; i++) {
obj_desc = walk_state->arguments[i].object;
if (obj_desc) {
display_args = TRUE;
break;
}
}
/* If any are set, only display the ones that are set */
if (display_args) {
acpi_os_printf("Initialized Arguments for Method [%4.4s]: "
"(%X arguments defined for method invocation)\n",
acpi_ut_get_node_name(node),
obj_desc->method.param_count);
for (i = 0; i < ACPI_METHOD_NUM_ARGS; i++) {
obj_desc = walk_state->arguments[i].object;
if (obj_desc) {
acpi_os_printf(" Arg%u: ", i);
acpi_db_display_internal_object(obj_desc,
walk_state);
}
}
} else {
acpi_os_printf
("No Arguments are initialized for method [%4.4s]\n",
acpi_ut_get_node_name(node));
}
}

drivers/acpi/acpica/dbstats.c (new file)
@@ -0,0 +1,546 @@
/*******************************************************************************
*
* Module Name: dbstats - Generation and display of ACPI table statistics
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acdebug.h"
#include "acnamesp.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbstats")
/* Local prototypes */
static void acpi_db_count_namespace_objects(void);
static void acpi_db_enumerate_object(union acpi_operand_object *obj_desc);
static acpi_status
acpi_db_classify_one_object(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value);
#if defined ACPI_DBG_TRACK_ALLOCATIONS || defined ACPI_USE_LOCAL_CACHE
static void acpi_db_list_info(struct acpi_memory_list *list);
#endif
/*
* Statistics subcommands
*/
static struct acpi_db_argument_info acpi_db_stat_types[] = {
{"ALLOCATIONS"},
{"OBJECTS"},
{"MEMORY"},
{"MISC"},
{"TABLES"},
{"SIZES"},
{"STACK"},
{NULL} /* Must be null terminated */
};
#define CMD_STAT_ALLOCATIONS 0
#define CMD_STAT_OBJECTS 1
#define CMD_STAT_MEMORY 2
#define CMD_STAT_MISC 3
#define CMD_STAT_TABLES 4
#define CMD_STAT_SIZES 5
#define CMD_STAT_STACK 6
#if defined ACPI_DBG_TRACK_ALLOCATIONS || defined ACPI_USE_LOCAL_CACHE
/*******************************************************************************
*
* FUNCTION: acpi_db_list_info
*
* PARAMETERS: list - Memory list/cache to be displayed
*
* RETURN: None
*
* DESCRIPTION: Display information about the input memory list or cache.
*
******************************************************************************/
static void acpi_db_list_info(struct acpi_memory_list *list)
{
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
u32 outstanding;
#endif
acpi_os_printf("\n%s\n", list->list_name);
/* max_depth > 0 indicates a cache object */
if (list->max_depth > 0) {
acpi_os_printf
(" Cache: [Depth MaxD Avail Size] "
"%8.2X %8.2X %8.2X %8.2X\n", list->current_depth,
list->max_depth, list->max_depth - list->current_depth,
(list->current_depth * list->object_size));
}
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
if (list->max_depth > 0) {
acpi_os_printf
(" Cache: [Requests Hits Misses ObjSize] "
"%8.2X %8.2X %8.2X %8.2X\n", list->requests, list->hits,
list->requests - list->hits, list->object_size);
}
outstanding = acpi_db_get_cache_info(list);
if (list->object_size) {
acpi_os_printf
(" Mem: [Alloc Free Max CurSize Outstanding] "
"%8.2X %8.2X %8.2X %8.2X %8.2X\n", list->total_allocated,
list->total_freed, list->max_occupied,
outstanding * list->object_size, outstanding);
} else {
acpi_os_printf
(" Mem: [Alloc Free Max CurSize Outstanding Total] "
"%8.2X %8.2X %8.2X %8.2X %8.2X %8.2X\n",
list->total_allocated, list->total_freed,
list->max_occupied, list->current_total_size, outstanding,
list->total_size);
}
#endif
}
#endif
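
The "Outstanding" figures above come from the running totals kept per list; the basic idea is simply allocations that have not yet been freed. A minimal sketch of that bookkeeping, using a simplified stand-in whose field names mirror the columns printed above (not the real struct acpi_memory_list):

#include <stdio.h>

/* Simplified running totals kept per allocation list */
struct mem_list_stats {
        unsigned int total_allocated;   /* Number of allocate calls */
        unsigned int total_freed;       /* Number of free calls */
        unsigned int object_size;       /* Fixed object size, 0 if variable */
};

/* Outstanding objects = allocations that were never freed */
static unsigned int outstanding_allocations(const struct mem_list_stats *list)
{
        return list->total_allocated - list->total_freed;
}

int main(void)
{
        struct mem_list_stats operand_cache = { 120, 95, 72 };
        unsigned int outstanding = outstanding_allocations(&operand_cache);

        printf("Outstanding: %u objects (%u bytes)\n",
               outstanding, outstanding * operand_cache.object_size);
        return 0;
}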
/*******************************************************************************
*
* FUNCTION: acpi_db_enumerate_object
*
* PARAMETERS: obj_desc - Object to be counted
*
* RETURN: None
*
* DESCRIPTION: Add this object to the global counts, by object type.
* Limited recursion handles subobjects and packages, and this
* is probably acceptable within the AML debugger only.
*
******************************************************************************/
static void acpi_db_enumerate_object(union acpi_operand_object *obj_desc)
{
u32 i;
if (!obj_desc) {
return;
}
/* Enumerate this object first */
acpi_gbl_num_objects++;
if (obj_desc->common.type > ACPI_TYPE_NS_NODE_MAX) {
acpi_gbl_obj_type_count_misc++;
} else {
acpi_gbl_obj_type_count[obj_desc->common.type]++;
}
/* Count the sub-objects */
switch (obj_desc->common.type) {
case ACPI_TYPE_PACKAGE:
for (i = 0; i < obj_desc->package.count; i++) {
acpi_db_enumerate_object(obj_desc->package.elements[i]);
}
break;
case ACPI_TYPE_DEVICE:
acpi_db_enumerate_object(obj_desc->device.notify_list[0]);
acpi_db_enumerate_object(obj_desc->device.notify_list[1]);
acpi_db_enumerate_object(obj_desc->device.handler);
break;
case ACPI_TYPE_BUFFER_FIELD:
if (acpi_ns_get_secondary_object(obj_desc)) {
acpi_gbl_obj_type_count[ACPI_TYPE_BUFFER_FIELD]++;
}
break;
case ACPI_TYPE_REGION:
acpi_gbl_obj_type_count[ACPI_TYPE_LOCAL_REGION_FIELD]++;
acpi_db_enumerate_object(obj_desc->region.handler);
break;
case ACPI_TYPE_POWER:
acpi_db_enumerate_object(obj_desc->power_resource.
notify_list[0]);
acpi_db_enumerate_object(obj_desc->power_resource.
notify_list[1]);
break;
case ACPI_TYPE_PROCESSOR:
acpi_db_enumerate_object(obj_desc->processor.notify_list[0]);
acpi_db_enumerate_object(obj_desc->processor.notify_list[1]);
acpi_db_enumerate_object(obj_desc->processor.handler);
break;
case ACPI_TYPE_THERMAL:
acpi_db_enumerate_object(obj_desc->thermal_zone.notify_list[0]);
acpi_db_enumerate_object(obj_desc->thermal_zone.notify_list[1]);
acpi_db_enumerate_object(obj_desc->thermal_zone.handler);
break;
default:
break;
}
}
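
The enumeration above is a plain recursive walk: count the object itself by type, then recurse into its sub-objects such as package elements. A self-contained sketch of the same counting pattern over a toy object tree (types and helpers are illustrative only):

#include <stdio.h>

enum toy_type { TOY_INTEGER, TOY_STRING, TOY_PACKAGE, TOY_TYPE_COUNT };

struct toy_object {
        enum toy_type type;
        unsigned int count;                     /* Package element count */
        struct toy_object **elements;           /* Package elements */
};

static unsigned int type_counts[TOY_TYPE_COUNT];

/* Count this object, then recurse into package elements */
static void enumerate_object(const struct toy_object *obj)
{
        unsigned int i;

        if (!obj) {
                return;                         /* Null elements are skipped */
        }
        type_counts[obj->type]++;
        if (obj->type == TOY_PACKAGE) {
                for (i = 0; i < obj->count; i++) {
                        enumerate_object(obj->elements[i]);
                }
        }
}

int main(void)
{
        struct toy_object num = { TOY_INTEGER, 0, NULL };
        struct toy_object str = { TOY_STRING, 0, NULL };
        struct toy_object *elems[] = { &num, &str, NULL };
        struct toy_object pkg = { TOY_PACKAGE, 3, elems };

        enumerate_object(&pkg);
        printf("Integers %u, Strings %u, Packages %u\n",
               type_counts[TOY_INTEGER], type_counts[TOY_STRING],
               type_counts[TOY_PACKAGE]);
        return 0;
}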
/*******************************************************************************
*
* FUNCTION: acpi_db_classify_one_object
*
* PARAMETERS: Callback for walk_namespace
*
* RETURN: Status
*
* DESCRIPTION: Enumerate both the object descriptor (including subobjects) and
* the parent namespace node.
*
******************************************************************************/
static acpi_status
acpi_db_classify_one_object(acpi_handle obj_handle,
u32 nesting_level,
void *context, void **return_value)
{
struct acpi_namespace_node *node;
union acpi_operand_object *obj_desc;
u32 type;
acpi_gbl_num_nodes++;
node = (struct acpi_namespace_node *)obj_handle;
obj_desc = acpi_ns_get_attached_object(node);
acpi_db_enumerate_object(obj_desc);
type = node->type;
if (type > ACPI_TYPE_NS_NODE_MAX) {
acpi_gbl_node_type_count_misc++;
} else {
acpi_gbl_node_type_count[type]++;
}
return (AE_OK);
#ifdef ACPI_FUTURE_IMPLEMENTATION
/* TBD: These need to be counted during the initial parsing phase */
if (acpi_ps_is_named_op(op->opcode)) {
num_nodes++;
}
if (is_method) {
num_method_elements++;
}
num_grammar_elements++;
op = acpi_ps_get_depth_next(root, op);
size_of_parse_tree = (num_grammar_elements - num_method_elements) *
(u32)sizeof(union acpi_parse_object);
size_of_method_trees =
num_method_elements * (u32)sizeof(union acpi_parse_object);
size_of_node_entries =
num_nodes * (u32)sizeof(struct acpi_namespace_node);
size_of_acpi_objects =
num_nodes * (u32)sizeof(union acpi_operand_object);
#endif
}
/*******************************************************************************
*
* FUNCTION: acpi_db_count_namespace_objects
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Count and classify the entire namespace, including all
* namespace nodes and attached objects.
*
******************************************************************************/
static void acpi_db_count_namespace_objects(void)
{
u32 i;
acpi_gbl_num_nodes = 0;
acpi_gbl_num_objects = 0;
acpi_gbl_obj_type_count_misc = 0;
for (i = 0; i < (ACPI_TYPE_NS_NODE_MAX - 1); i++) {
acpi_gbl_obj_type_count[i] = 0;
acpi_gbl_node_type_count[i] = 0;
}
(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, FALSE,
acpi_db_classify_one_object, NULL, NULL,
NULL);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_display_statistics
*
* PARAMETERS: type_arg - Subcommand
*
* RETURN: Status
*
* DESCRIPTION: Display various statistics
*
******************************************************************************/
acpi_status acpi_db_display_statistics(char *type_arg)
{
u32 i;
u32 temp;
acpi_ut_strupr(type_arg);
temp = acpi_db_match_argument(type_arg, acpi_db_stat_types);
if (temp == ACPI_TYPE_NOT_FOUND) {
acpi_os_printf("Invalid or unsupported argument\n");
return (AE_OK);
}
switch (temp) {
case CMD_STAT_ALLOCATIONS:
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
acpi_ut_dump_allocation_info();
#endif
break;
case CMD_STAT_TABLES:
acpi_os_printf("ACPI Table Information (not implemented):\n\n");
break;
case CMD_STAT_OBJECTS:
acpi_db_count_namespace_objects();
acpi_os_printf
("\nObjects defined in the current namespace:\n\n");
acpi_os_printf("%16.16s %10.10s %10.10s\n",
"ACPI_TYPE", "NODES", "OBJECTS");
for (i = 0; i < ACPI_TYPE_NS_NODE_MAX; i++) {
acpi_os_printf("%16.16s % 10ld% 10ld\n",
acpi_ut_get_type_name(i),
acpi_gbl_node_type_count[i],
acpi_gbl_obj_type_count[i]);
}
acpi_os_printf("%16.16s % 10ld% 10ld\n", "Misc/Unknown",
acpi_gbl_node_type_count_misc,
acpi_gbl_obj_type_count_misc);
acpi_os_printf("%16.16s % 10ld% 10ld\n", "TOTALS:",
acpi_gbl_num_nodes, acpi_gbl_num_objects);
break;
case CMD_STAT_MEMORY:
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
acpi_os_printf
("\n----Object Statistics (all in hex)---------\n");
acpi_db_list_info(acpi_gbl_global_list);
acpi_db_list_info(acpi_gbl_ns_node_list);
#endif
#ifdef ACPI_USE_LOCAL_CACHE
acpi_os_printf
("\n----Cache Statistics (all in hex)---------\n");
acpi_db_list_info(acpi_gbl_operand_cache);
acpi_db_list_info(acpi_gbl_ps_node_cache);
acpi_db_list_info(acpi_gbl_ps_node_ext_cache);
acpi_db_list_info(acpi_gbl_state_cache);
#endif
break;
case CMD_STAT_MISC:
acpi_os_printf("\nMiscellaneous Statistics:\n\n");
acpi_os_printf("Calls to AcpiPsFind:.. ........% 7ld\n",
acpi_gbl_ps_find_count);
acpi_os_printf("Calls to AcpiNsLookup:..........% 7ld\n",
acpi_gbl_ns_lookup_count);
acpi_os_printf("\n");
acpi_os_printf("Mutex usage:\n\n");
for (i = 0; i < ACPI_NUM_MUTEX; i++) {
acpi_os_printf("%-28s: % 7ld\n",
acpi_ut_get_mutex_name(i),
acpi_gbl_mutex_info[i].use_count);
}
break;
case CMD_STAT_SIZES:
acpi_os_printf("\nInternal object sizes:\n\n");
acpi_os_printf("Common %3d\n",
sizeof(struct acpi_object_common));
acpi_os_printf("Number %3d\n",
sizeof(struct acpi_object_integer));
acpi_os_printf("String %3d\n",
sizeof(struct acpi_object_string));
acpi_os_printf("Buffer %3d\n",
sizeof(struct acpi_object_buffer));
acpi_os_printf("Package %3d\n",
sizeof(struct acpi_object_package));
acpi_os_printf("BufferField %3d\n",
sizeof(struct acpi_object_buffer_field));
acpi_os_printf("Device %3d\n",
sizeof(struct acpi_object_device));
acpi_os_printf("Event %3d\n",
sizeof(struct acpi_object_event));
acpi_os_printf("Method %3d\n",
sizeof(struct acpi_object_method));
acpi_os_printf("Mutex %3d\n",
sizeof(struct acpi_object_mutex));
acpi_os_printf("Region %3d\n",
sizeof(struct acpi_object_region));
acpi_os_printf("PowerResource %3d\n",
sizeof(struct acpi_object_power_resource));
acpi_os_printf("Processor %3d\n",
sizeof(struct acpi_object_processor));
acpi_os_printf("ThermalZone %3d\n",
sizeof(struct acpi_object_thermal_zone));
acpi_os_printf("RegionField %3d\n",
sizeof(struct acpi_object_region_field));
acpi_os_printf("BankField %3d\n",
sizeof(struct acpi_object_bank_field));
acpi_os_printf("IndexField %3d\n",
sizeof(struct acpi_object_index_field));
acpi_os_printf("Reference %3d\n",
sizeof(struct acpi_object_reference));
acpi_os_printf("Notify %3d\n",
sizeof(struct acpi_object_notify_handler));
acpi_os_printf("AddressSpace %3d\n",
sizeof(struct acpi_object_addr_handler));
acpi_os_printf("Extra %3d\n",
sizeof(struct acpi_object_extra));
acpi_os_printf("Data %3d\n",
sizeof(struct acpi_object_data));
acpi_os_printf("\n");
acpi_os_printf("ParseObject %3d\n",
sizeof(struct acpi_parse_obj_common));
acpi_os_printf("ParseObjectNamed %3d\n",
sizeof(struct acpi_parse_obj_named));
acpi_os_printf("ParseObjectAsl %3d\n",
sizeof(struct acpi_parse_obj_asl));
acpi_os_printf("OperandObject %3d\n",
sizeof(union acpi_operand_object));
acpi_os_printf("NamespaceNode %3d\n",
sizeof(struct acpi_namespace_node));
acpi_os_printf("AcpiObject %3d\n",
sizeof(union acpi_object));
acpi_os_printf("\n");
acpi_os_printf("Generic State %3d\n",
sizeof(union acpi_generic_state));
acpi_os_printf("Common State %3d\n",
sizeof(struct acpi_common_state));
acpi_os_printf("Control State %3d\n",
sizeof(struct acpi_control_state));
acpi_os_printf("Update State %3d\n",
sizeof(struct acpi_update_state));
acpi_os_printf("Scope State %3d\n",
sizeof(struct acpi_scope_state));
acpi_os_printf("Parse Scope %3d\n",
sizeof(struct acpi_pscope_state));
acpi_os_printf("Package State %3d\n",
sizeof(struct acpi_pkg_state));
acpi_os_printf("Thread State %3d\n",
sizeof(struct acpi_thread_state));
acpi_os_printf("Result Values %3d\n",
sizeof(struct acpi_result_values));
acpi_os_printf("Notify Info %3d\n",
sizeof(struct acpi_notify_info));
break;
case CMD_STAT_STACK:
#if defined(ACPI_DEBUG_OUTPUT)
temp =
(u32)ACPI_PTR_DIFF(acpi_gbl_entry_stack_pointer,
acpi_gbl_lowest_stack_pointer);
acpi_os_printf("\nSubsystem Stack Usage:\n\n");
acpi_os_printf("Entry Stack Pointer %p\n",
acpi_gbl_entry_stack_pointer);
acpi_os_printf("Lowest Stack Pointer %p\n",
acpi_gbl_lowest_stack_pointer);
acpi_os_printf("Stack Use %X (%u)\n", temp,
temp);
acpi_os_printf("Deepest Procedure Nesting %u\n",
acpi_gbl_deepest_nesting);
#endif
break;
default:
break;
}
acpi_os_printf("\n");
return (AE_OK);
}

drivers/acpi/acpica/dbtest.c (new file, 1057 lines; diff omitted as too large)

drivers/acpi/acpica/dbutils.c (new file)
@@ -0,0 +1,457 @@
/*******************************************************************************
*
* Module Name: dbutils - AML debugger utilities
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbutils")
/* Local prototypes */
#ifdef ACPI_OBSOLETE_FUNCTIONS
acpi_status acpi_db_second_pass_parse(union acpi_parse_object *root);
void acpi_db_dump_buffer(u32 address);
#endif
static char *gbl_hex_to_ascii = "0123456789ABCDEF";
/*******************************************************************************
*
* FUNCTION: acpi_db_match_argument
*
* PARAMETERS: user_argument - User command line
* arguments - Array of commands to match against
*
* RETURN: Index into command array or ACPI_TYPE_NOT_FOUND if not found
*
* DESCRIPTION: Search command array for a command match
*
******************************************************************************/
acpi_object_type
acpi_db_match_argument(char *user_argument,
struct acpi_db_argument_info *arguments)
{
u32 i;
if (!user_argument || user_argument[0] == 0) {
return (ACPI_TYPE_NOT_FOUND);
}
for (i = 0; arguments[i].name; i++) {
if (strstr(arguments[i].name, user_argument) ==
arguments[i].name) {
return (i);
}
}
/* Argument not recognized */
return (ACPI_TYPE_NOT_FOUND);
}
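
Because the match uses strstr() and then checks that the hit is at offset zero, the user only needs to type a prefix of a command name, and the first table entry starting with that prefix wins. A standalone sketch of that prefix test (the command table here is illustrative):

#include <stdio.h>
#include <string.h>

static const char *commands[] = {
        "ALLOCATIONS", "OBJECTS", "MEMORY", "TABLES", NULL
};

/* Return the index of the first command that starts with user_arg, or -1 */
static int match_argument(const char *user_arg)
{
        int i;

        if (!user_arg || !user_arg[0]) {
                return -1;
        }
        for (i = 0; commands[i]; i++) {
                /* A strstr() hit at offset 0 means user_arg is a prefix */
                if (strstr(commands[i], user_arg) == commands[i]) {
                        return i;
                }
        }
        return -1;
}

int main(void)
{
        printf("OBJ -> %d\n", match_argument("OBJ"));   /* 1 (OBJECTS) */
        printf("MEM -> %d\n", match_argument("MEM"));   /* 2 (MEMORY) */
        printf("FOO -> %d\n", match_argument("FOO"));   /* -1 */
        return 0;
}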
/*******************************************************************************
*
* FUNCTION: acpi_db_set_output_destination
*
* PARAMETERS: output_flags - Current flags word
*
* RETURN: None
*
* DESCRIPTION: Set the current destination for debugger output. Also sets
* the debug output level accordingly.
*
******************************************************************************/
void acpi_db_set_output_destination(u32 output_flags)
{
acpi_gbl_db_output_flags = (u8)output_flags;
if ((output_flags & ACPI_DB_REDIRECTABLE_OUTPUT) &&
acpi_gbl_db_output_to_file) {
acpi_dbg_level = acpi_gbl_db_debug_level;
} else {
acpi_dbg_level = acpi_gbl_db_console_debug_level;
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_external_object
*
* PARAMETERS: obj_desc - External ACPI object to dump
* level - Nesting level.
*
* RETURN: None
*
* DESCRIPTION: Dump the contents of an ACPI external object
*
******************************************************************************/
void acpi_db_dump_external_object(union acpi_object *obj_desc, u32 level)
{
u32 i;
if (!obj_desc) {
acpi_os_printf("[Null Object]\n");
return;
}
for (i = 0; i < level; i++) {
acpi_os_printf(" ");
}
switch (obj_desc->type) {
case ACPI_TYPE_ANY:
acpi_os_printf("[Null Object] (Type=0)\n");
break;
case ACPI_TYPE_INTEGER:
acpi_os_printf("[Integer] = %8.8X%8.8X\n",
ACPI_FORMAT_UINT64(obj_desc->integer.value));
break;
case ACPI_TYPE_STRING:
acpi_os_printf("[String] Length %.2X = ",
obj_desc->string.length);
acpi_ut_print_string(obj_desc->string.pointer, ACPI_UINT8_MAX);
acpi_os_printf("\n");
break;
case ACPI_TYPE_BUFFER:
acpi_os_printf("[Buffer] Length %.2X = ",
obj_desc->buffer.length);
if (obj_desc->buffer.length) {
if (obj_desc->buffer.length > 16) {
acpi_os_printf("\n");
}
acpi_ut_debug_dump_buffer(ACPI_CAST_PTR
(u8,
obj_desc->buffer.pointer),
obj_desc->buffer.length,
DB_BYTE_DISPLAY, _COMPONENT);
} else {
acpi_os_printf("\n");
}
break;
case ACPI_TYPE_PACKAGE:
acpi_os_printf("[Package] Contains %u Elements:\n",
obj_desc->package.count);
for (i = 0; i < obj_desc->package.count; i++) {
acpi_db_dump_external_object(&obj_desc->package.
elements[i], level + 1);
}
break;
case ACPI_TYPE_LOCAL_REFERENCE:
acpi_os_printf("[Object Reference] = ");
acpi_db_display_internal_object(obj_desc->reference.handle,
NULL);
break;
case ACPI_TYPE_PROCESSOR:
acpi_os_printf("[Processor]\n");
break;
case ACPI_TYPE_POWER:
acpi_os_printf("[Power Resource]\n");
break;
default:
acpi_os_printf("[Unknown Type] %X\n", obj_desc->type);
break;
}
}
/*******************************************************************************
*
* FUNCTION: acpi_db_prep_namestring
*
* PARAMETERS: name - String to prepare
*
* RETURN: None
*
* DESCRIPTION: Translate all forward slashes and dots to backslashes.
*
******************************************************************************/
void acpi_db_prep_namestring(char *name)
{
if (!name) {
return;
}
acpi_ut_strupr(name);
/* Convert a leading forward slash to a backslash */
if (*name == '/') {
*name = '\\';
}
/* Ignore a leading backslash, this is the root prefix */
if (ACPI_IS_ROOT_PREFIX(*name)) {
name++;
}
/* Convert all slash path separators to dots */
while (*name) {
if ((*name == '/') || (*name == '\\')) {
*name = '.';
}
name++;
}
}
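/*
 * Illustrative sketch added by the editor (not part of the original source):
 * the in-place transformation performed by acpi_db_prep_namestring() on a
 * user-typed pathname. The pathname below is hypothetical.
 */
static void example_prep_namestring(void)
{
	char pathname[] = "/_sb/pci0/isa";

	/*
	 * Uppercased, the leading '/' becomes the root prefix '\', and the
	 * remaining separators become dots: "\_SB.PCI0.ISA"
	 */
	acpi_db_prep_namestring(pathname);
	acpi_os_printf("%s\n", pathname);
}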
/*******************************************************************************
*
* FUNCTION: acpi_db_local_ns_lookup
*
* PARAMETERS: name - Name to lookup
*
* RETURN: Pointer to a namespace node, null on failure
*
* DESCRIPTION: Lookup a name in the ACPI namespace
*
* Note: Currently begins search from the root. Could be enhanced to use
* the current prefix (scope) node as the search beginning point.
*
******************************************************************************/
struct acpi_namespace_node *acpi_db_local_ns_lookup(char *name)
{
char *internal_path;
acpi_status status;
struct acpi_namespace_node *node = NULL;
acpi_db_prep_namestring(name);
/* Build an internal namestring */
status = acpi_ns_internalize_name(name, &internal_path);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Invalid namestring: %s\n", name);
return (NULL);
}
/*
* Lookup the name.
* (Uses root node as the search starting point)
*/
status = acpi_ns_lookup(NULL, internal_path, ACPI_TYPE_ANY,
ACPI_IMODE_EXECUTE,
ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE,
NULL, &node);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not locate name: %s, %s\n",
name, acpi_format_exception(status));
}
ACPI_FREE(internal_path);
return (node);
}
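/*
 * Illustrative sketch added by the editor (not part of the original source):
 * resolving a user-typed pathname to a namespace node with
 * acpi_db_local_ns_lookup(). The pathname is hypothetical and must be
 * writable, because the helper prepares the string in place.
 */
static void example_ns_lookup(void)
{
	char pathname[] = "_SB.PCI0";
	struct acpi_namespace_node *node;

	node = acpi_db_local_ns_lookup(pathname);
	if (node) {
		acpi_os_printf("Found node %p for %s\n", node, pathname);
	}
}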
/*******************************************************************************
*
* FUNCTION: acpi_db_uint32_to_hex_string
*
* PARAMETERS: value - The value to be converted to string
* buffer - Buffer for result (not less than 11 bytes)
*
* RETURN: None
*
* DESCRIPTION: Convert an unsigned 32-bit value to its 8-digit hexadecimal string
*
* NOTE: It is the caller's responsibility to ensure that the length of buffer
* is sufficient.
*
******************************************************************************/
void acpi_db_uint32_to_hex_string(u32 value, char *buffer)
{
int i;
if (value == 0) {
strcpy(buffer, "0");
return;
}
buffer[8] = '\0';
for (i = 7; i >= 0; i--) {
buffer[i] = gbl_hex_to_ascii[value & 0x0F];
value = value >> 4;
}
}
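/*
 * Illustrative sketch added by the editor (not part of the original source):
 * converting a value with acpi_db_uint32_to_hex_string(). The buffer is sized
 * per the header note above; the value is arbitrary.
 */
static void example_uint32_to_hex(void)
{
	char hex_buf[11];

	acpi_db_uint32_to_hex_string(0x1234ABCD, hex_buf);

	/* Eight zero-padded hex digits, e.g. "1234ABCD" (no "0x" prefix) */
	acpi_os_printf("%s\n", hex_buf);
}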
#ifdef ACPI_OBSOLETE_FUNCTIONS
/*******************************************************************************
*
* FUNCTION: acpi_db_second_pass_parse
*
* PARAMETERS: root - Root of the parse tree
*
* RETURN: Status
*
* DESCRIPTION: Second pass parse of the ACPI tables. We need to wait until
* the second pass to parse the control methods.
*
******************************************************************************/
acpi_status acpi_db_second_pass_parse(union acpi_parse_object *root)
{
union acpi_parse_object *op = root;
union acpi_parse_object *method;
union acpi_parse_object *search_op;
union acpi_parse_object *start_op;
acpi_status status = AE_OK;
u32 base_aml_offset;
struct acpi_walk_state *walk_state;
ACPI_FUNCTION_ENTRY();
acpi_os_printf("Pass two parse ....\n");
while (op) {
if (op->common.aml_opcode == AML_METHOD_OP) {
method = op;
/* Create a new walk state for the parse */
walk_state =
acpi_ds_create_walk_state(0, NULL, NULL, NULL);
if (!walk_state) {
return (AE_NO_MEMORY);
}
/* Init the Walk State */
walk_state->parser_state.aml =
walk_state->parser_state.aml_start =
method->named.data;
walk_state->parser_state.aml_end =
walk_state->parser_state.pkg_end =
method->named.data + method->named.length;
walk_state->parser_state.start_scope = op;
walk_state->descending_callback =
acpi_ds_load1_begin_op;
walk_state->ascending_callback = acpi_ds_load1_end_op;
/* Perform the AML parse */
status = acpi_ps_parse_aml(walk_state);
base_aml_offset =
(method->common.value.arg)->common.aml_offset + 1;
start_op = (method->common.value.arg)->common.next;
search_op = start_op;
while (search_op) {
search_op->common.aml_offset += base_aml_offset;
search_op =
acpi_ps_get_depth_next(start_op, search_op);
}
}
if (op->common.aml_opcode == AML_REGION_OP) {
/* TBD: [Investigate] this isn't quite the right thing to do! */
/*
*
* Method = (ACPI_DEFERRED_OP *) Op;
* Status = acpi_ps_parse_aml (Op, Method->Body, Method->body_length);
*/
}
if (ACPI_FAILURE(status)) {
break;
}
op = acpi_ps_get_depth_next(root, op);
}
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_dump_buffer
*
* PARAMETERS: address - Pointer to the buffer
*
* RETURN: None
*
* DESCRIPTION: Print a portion of a buffer
*
******************************************************************************/
void acpi_db_dump_buffer(u32 address)
{
acpi_os_printf("\nLocation %X:\n", address);
acpi_dbg_level |= ACPI_LV_TABLES;
acpi_ut_debug_dump_buffer(ACPI_TO_POINTER(address), 64, DB_BYTE_DISPLAY,
ACPI_UINT32_MAX);
}
#endif


@ -0,0 +1,513 @@
/*******************************************************************************
*
* Module Name: dbxface - AML Debugger external interfaces
*
******************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "amlcode.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbxface")
/* Local prototypes */
static acpi_status
acpi_db_start_command(struct acpi_walk_state *walk_state,
union acpi_parse_object *op);
#ifdef ACPI_OBSOLETE_FUNCTIONS
void acpi_db_method_end(struct acpi_walk_state *walk_state);
#endif
/*******************************************************************************
*
* FUNCTION: acpi_db_start_command
*
* PARAMETERS: walk_state - Current walk
* op - Current executing Op, from AML interpreter
*
* RETURN: Status
*
* DESCRIPTION: Enter debugger command loop
*
******************************************************************************/
static acpi_status
acpi_db_start_command(struct acpi_walk_state *walk_state,
union acpi_parse_object *op)
{
acpi_status status;
/* TBD: [Investigate] are there namespace locking issues here? */
/* acpi_ut_release_mutex (ACPI_MTX_NAMESPACE); */
/* Go into the command loop and await next user command */
acpi_gbl_method_executing = TRUE;
status = AE_CTRL_TRUE;
while (status == AE_CTRL_TRUE) {
if (acpi_gbl_debugger_configuration == DEBUGGER_MULTI_THREADED) {
/* Handshake with the front-end that gets user command lines */
acpi_os_release_mutex(acpi_gbl_db_command_complete);
status =
acpi_os_acquire_mutex(acpi_gbl_db_command_ready,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
return (status);
}
} else {
/* Single threaded, we must get a command line ourselves */
/* Force output to console until a command is entered */
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
/* Different prompt if method is executing */
if (!acpi_gbl_method_executing) {
acpi_os_printf("%1c ",
ACPI_DEBUGGER_COMMAND_PROMPT);
} else {
acpi_os_printf("%1c ",
ACPI_DEBUGGER_EXECUTE_PROMPT);
}
/* Get the user input line */
status = acpi_os_get_line(acpi_gbl_db_line_buf,
ACPI_DB_LINE_BUFFER_SIZE,
NULL);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"While parsing command line"));
return (status);
}
}
status =
acpi_db_command_dispatch(acpi_gbl_db_line_buf, walk_state,
op);
}
/* acpi_ut_acquire_mutex (ACPI_MTX_NAMESPACE); */
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_single_step
*
* PARAMETERS: walk_state - Current walk
* op - Current executing op (from aml interpreter)
* opcode_class - Class of the current AML Opcode
*
* RETURN: Status
*
* DESCRIPTION: Called just before execution of an AML opcode.
*
******************************************************************************/
acpi_status
acpi_db_single_step(struct acpi_walk_state * walk_state,
union acpi_parse_object * op, u32 opcode_class)
{
union acpi_parse_object *next;
acpi_status status = AE_OK;
u32 original_debug_level;
union acpi_parse_object *display_op;
union acpi_parse_object *parent_op;
u32 aml_offset;
ACPI_FUNCTION_ENTRY();
#ifndef ACPI_APPLICATION
if (acpi_gbl_db_thread_id != acpi_os_get_thread_id()) {
return (AE_OK);
}
#endif
/* Check the abort flag */
if (acpi_gbl_abort_method) {
acpi_gbl_abort_method = FALSE;
return (AE_ABORT_METHOD);
}
aml_offset = (u32)ACPI_PTR_DIFF(op->common.aml,
walk_state->parser_state.aml_start);
/* Check for single-step breakpoint */
if (walk_state->method_breakpoint &&
(walk_state->method_breakpoint <= aml_offset)) {
/* Check if the breakpoint has been reached or passed */
/* Hit the breakpoint, resume single step, reset breakpoint */
acpi_os_printf("***Break*** at AML offset %X\n", aml_offset);
acpi_gbl_cm_single_step = TRUE;
acpi_gbl_step_to_next_call = FALSE;
walk_state->method_breakpoint = 0;
}
/* Check for user breakpoint (Must be on exact Aml offset) */
else if (walk_state->user_breakpoint &&
(walk_state->user_breakpoint == aml_offset)) {
acpi_os_printf("***UserBreakpoint*** at AML offset %X\n",
aml_offset);
acpi_gbl_cm_single_step = TRUE;
acpi_gbl_step_to_next_call = FALSE;
walk_state->method_breakpoint = 0;
}
/*
* Check if this is an opcode that we are interested in --
* namely, opcodes that have arguments
*/
if (op->common.aml_opcode == AML_INT_NAMEDFIELD_OP) {
return (AE_OK);
}
switch (opcode_class) {
case AML_CLASS_UNKNOWN:
case AML_CLASS_ARGUMENT: /* constants, literals, etc. do nothing */
return (AE_OK);
default:
/* All other opcodes -- continue */
break;
}
/*
* Under certain debug conditions, display this opcode and its operands
*/
if ((acpi_gbl_db_output_to_file) ||
(acpi_gbl_cm_single_step) || (acpi_dbg_level & ACPI_LV_PARSE)) {
if ((acpi_gbl_db_output_to_file) ||
(acpi_dbg_level & ACPI_LV_PARSE)) {
acpi_os_printf
("\n[AmlDebug] Next AML Opcode to execute:\n");
}
/*
* Display this op (and only this op - zero out the NEXT field
* temporarily, and disable parser trace output for the duration of
* the display because we don't want the extraneous debug output)
*/
original_debug_level = acpi_dbg_level;
acpi_dbg_level &= ~(ACPI_LV_PARSE | ACPI_LV_FUNCTIONS);
next = op->common.next;
op->common.next = NULL;
display_op = op;
parent_op = op->common.parent;
if (parent_op) {
if ((walk_state->control_state) &&
(walk_state->control_state->common.state ==
ACPI_CONTROL_PREDICATE_EXECUTING)) {
/*
* We are executing the predicate of an IF or WHILE statement
* Search upwards for the containing IF or WHILE so that the
* entire predicate can be displayed.
*/
while (parent_op) {
if ((parent_op->common.aml_opcode ==
AML_IF_OP)
|| (parent_op->common.aml_opcode ==
AML_WHILE_OP)) {
display_op = parent_op;
break;
}
parent_op = parent_op->common.parent;
}
} else {
while (parent_op) {
if ((parent_op->common.aml_opcode ==
AML_IF_OP)
|| (parent_op->common.aml_opcode ==
AML_ELSE_OP)
|| (parent_op->common.aml_opcode ==
AML_SCOPE_OP)
|| (parent_op->common.aml_opcode ==
AML_METHOD_OP)
|| (parent_op->common.aml_opcode ==
AML_WHILE_OP)) {
break;
}
display_op = parent_op;
parent_op = parent_op->common.parent;
}
}
}
/* Now we can display it */
#ifdef ACPI_DISASSEMBLER
acpi_dm_disassemble(walk_state, display_op, ACPI_UINT32_MAX);
#endif
if ((op->common.aml_opcode == AML_IF_OP) ||
(op->common.aml_opcode == AML_WHILE_OP)) {
if (walk_state->control_state->common.value) {
acpi_os_printf
("Predicate = [True], IF block was executed\n");
} else {
acpi_os_printf
("Predicate = [False], Skipping IF block\n");
}
} else if (op->common.aml_opcode == AML_ELSE_OP) {
acpi_os_printf
("Predicate = [False], ELSE block was executed\n");
}
/* Restore everything */
op->common.next = next;
acpi_os_printf("\n");
if ((acpi_gbl_db_output_to_file) ||
(acpi_dbg_level & ACPI_LV_PARSE)) {
acpi_os_printf("\n");
}
acpi_dbg_level = original_debug_level;
}
/* If we are not single stepping, just continue executing the method */
if (!acpi_gbl_cm_single_step) {
return (AE_OK);
}
/*
* If we are executing a step-to-call command,
* Check if this is a method call.
*/
if (acpi_gbl_step_to_next_call) {
if (op->common.aml_opcode != AML_INT_METHODCALL_OP) {
/* Not a method call, just keep executing */
return (AE_OK);
}
/* Found a method call, stop executing */
acpi_gbl_step_to_next_call = FALSE;
}
/*
* If the next opcode is a method call, we will "step over" it
* by default.
*/
if (op->common.aml_opcode == AML_INT_METHODCALL_OP) {
/* Force no more single stepping while executing called method */
acpi_gbl_cm_single_step = FALSE;
/*
* Set the breakpoint on/before the call, it will stop execution
* as soon as we return
*/
walk_state->method_breakpoint = 1; /* Must be non-zero! */
}
status = acpi_db_start_command(walk_state, op);
/* User commands complete, continue execution of the interrupted method */
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_initialize_debugger
*
* PARAMETERS: None
*
* RETURN: Status
*
* DESCRIPTION: Init and start debugger
*
******************************************************************************/
acpi_status acpi_initialize_debugger(void)
{
acpi_status status;
ACPI_FUNCTION_TRACE(acpi_initialize_debugger);
/* Init globals */
acpi_gbl_db_buffer = NULL;
acpi_gbl_db_filename = NULL;
acpi_gbl_db_output_to_file = FALSE;
acpi_gbl_db_debug_level = ACPI_LV_VERBOSITY2;
acpi_gbl_db_console_debug_level = ACPI_NORMAL_DEFAULT | ACPI_LV_TABLES;
acpi_gbl_db_output_flags = ACPI_DB_CONSOLE_OUTPUT;
acpi_gbl_db_opt_no_ini_methods = FALSE;
acpi_gbl_db_buffer = acpi_os_allocate(ACPI_DEBUG_BUFFER_SIZE);
if (!acpi_gbl_db_buffer) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
memset(acpi_gbl_db_buffer, 0, ACPI_DEBUG_BUFFER_SIZE);
/* Initial scope is the root */
acpi_gbl_db_scope_buf[0] = AML_ROOT_PREFIX;
acpi_gbl_db_scope_buf[1] = 0;
acpi_gbl_db_scope_node = acpi_gbl_root_node;
/* Initialize user commands loop */
acpi_gbl_db_terminate_loop = FALSE;
/*
* If configured for multi-thread support, the debug executor runs in
* a separate thread so that the front end can be in another address
* space, environment, or even another machine.
*/
if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) {
/* These were created with one unit, grab it */
status = acpi_os_acquire_mutex(acpi_gbl_db_command_complete,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not get debugger mutex\n");
return_ACPI_STATUS(status);
}
status = acpi_os_acquire_mutex(acpi_gbl_db_command_ready,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not get debugger mutex\n");
return_ACPI_STATUS(status);
}
/* Create the debug execution thread to execute commands */
acpi_gbl_db_threads_terminated = FALSE;
status = acpi_os_execute(OSL_DEBUGGER_MAIN_THREAD,
acpi_db_execute_thread, NULL);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"Could not start debugger thread"));
acpi_gbl_db_threads_terminated = TRUE;
return_ACPI_STATUS(status);
}
} else {
acpi_gbl_db_thread_id = acpi_os_get_thread_id();
}
return_ACPI_STATUS(AE_OK);
}
ACPI_EXPORT_SYMBOL(acpi_initialize_debugger)
/*******************************************************************************
*
* FUNCTION: acpi_terminate_debugger
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Stop debugger
*
******************************************************************************/
void acpi_terminate_debugger(void)
{
/* Terminate the AML Debugger */
acpi_gbl_db_terminate_loop = TRUE;
if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) {
acpi_os_release_mutex(acpi_gbl_db_command_ready);
/* Wait for the AML Debugger threads to terminate */
while (!acpi_gbl_db_threads_terminated) {
acpi_os_sleep(100);
}
}
if (acpi_gbl_db_buffer) {
acpi_os_free(acpi_gbl_db_buffer);
acpi_gbl_db_buffer = NULL;
}
/* Ensure that debug output is now disabled */
acpi_gbl_db_output_flags = ACPI_DB_DISABLE_OUTPUT;
}
ACPI_EXPORT_SYMBOL(acpi_terminate_debugger)
/*******************************************************************************
*
* FUNCTION: acpi_set_debugger_thread_id
*
* PARAMETERS: thread_id - Debugger thread ID
*
* RETURN: None
*
* DESCRIPTION: Set debugger thread ID
*
******************************************************************************/
void acpi_set_debugger_thread_id(acpi_thread_id thread_id)
{
acpi_gbl_db_thread_id = thread_id;
}
ACPI_EXPORT_SYMBOL(acpi_set_debugger_thread_id)


@ -405,7 +405,7 @@ cleanup:
}
ACPI_EXPORT_SYMBOL(acpi_install_exception_handler)
#endif /* ACPI_FUTURE_USAGE */
#endif
#if (!ACPI_REDUCED_HARDWARE)
/*******************************************************************************


@ -618,6 +618,7 @@ acpi_ex_convert_to_target_type(acpi_object_type destination_type,
break;
case ARGI_TARGETREF:
case ARGI_STORE_TARGET:
switch (destination_type) {
case ACPI_TYPE_INTEGER:


@ -209,7 +209,6 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
* (i.e., dereference the package index)
* Delete the ref object, increment the returned object
*/
acpi_ut_remove_reference(stack_desc);
acpi_ut_add_reference(obj_desc);
*stack_ptr = obj_desc;
} else {


@ -307,6 +307,8 @@ acpi_ex_resolve_operands(u16 opcode,
case ARGI_TARGETREF: /* Allows implicit conversion rules before store */
case ARGI_FIXED_TARGET: /* No implicit conversion before store to target */
case ARGI_SIMPLE_TARGET: /* Name, Local, or arg - no implicit conversion */
case ARGI_STORE_TARGET:
/*
* Need an operand of type ACPI_TYPE_LOCAL_REFERENCE
* A Namespace Node is OK as-is


@ -137,7 +137,7 @@ acpi_ex_store(union acpi_operand_object *source_desc,
/* Destination is not a Reference object */
ACPI_ERROR((AE_INFO,
"Target is not a Reference or Constant object - %s [%p]",
"Target is not a Reference or Constant object - [%s] %p",
acpi_ut_get_object_type_name(dest_desc),
dest_desc));
@ -189,7 +189,7 @@ acpi_ex_store(union acpi_operand_object *source_desc,
* displayed and otherwise has no effect -- see ACPI Specification
*/
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"**** Write to Debug Object: Object %p %s ****:\n\n",
"**** Write to Debug Object: Object %p [%s] ****:\n\n",
source_desc,
acpi_ut_get_object_type_name(source_desc)));
@ -341,7 +341,7 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc,
/* All other types are invalid */
ACPI_ERROR((AE_INFO,
"Source must be Integer/Buffer/String type, not %s",
"Source must be type [Integer/Buffer/String], found [%s]",
acpi_ut_get_object_type_name(source_desc)));
return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
}
@ -352,8 +352,9 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc,
break;
default:
ACPI_ERROR((AE_INFO, "Target is not a Package or BufferField"));
status = AE_AML_OPERAND_TYPE;
ACPI_ERROR((AE_INFO,
"Target is not of type [Package/BufferField]"));
status = AE_AML_TARGET_TYPE;
break;
}
@ -373,20 +374,20 @@ acpi_ex_store_object_to_index(union acpi_operand_object *source_desc,
*
* DESCRIPTION: Store the object to the named object.
*
* The Assignment of an object to a named object is handled here
* The value passed in will replace the current value (if any)
* with the input value.
* The assignment of an object to a named object is handled here.
* The value passed in will replace the current value (if any)
* with the input value.
*
* When storing into an object the data is converted to the
* target object type then stored in the object. This means
* that the target object type (for an initialized target) will
* not be changed by a store operation. A copy_object can change
* the target type, however.
* When storing into an object the data is converted to the
* target object type then stored in the object. This means
* that the target object type (for an initialized target) will
* not be changed by a store operation. A copy_object can change
* the target type, however.
*
* The implicit_conversion flag is set to NO/FALSE only when
* storing to an arg_x -- as per the rules of the ACPI spec.
* The implicit_conversion flag is set to NO/FALSE only when
* storing to an arg_x -- as per the rules of the ACPI spec.
*
* Assumes parameters are already validated.
* Assumes parameters are already validated.
*
******************************************************************************/
@ -408,11 +409,75 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
target_type = acpi_ns_get_type(node);
target_desc = acpi_ns_get_attached_object(node);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p (%s) to node %p (%s)\n",
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Storing %p [%s] to node %p [%s]\n",
source_desc,
acpi_ut_get_object_type_name(source_desc), node,
acpi_ut_get_type_name(target_type)));
/* Only limited target types possible for everything except copy_object */
if (walk_state->opcode != AML_COPY_OP) {
/*
* Only copy_object allows all object types to be overwritten. For
* target_ref(s), there are restrictions on the object types that
* are allowed.
*
* Allowable operations/typing for Store:
*
* 1) Simple Store
* Integer --> Integer (Named/Local/Arg)
* String --> String (Named/Local/Arg)
* Buffer --> Buffer (Named/Local/Arg)
* Package --> Package (Named/Local/Arg)
*
* 2) Store with implicit conversion
* Integer --> String or Buffer (Named)
* String --> Integer or Buffer (Named)
* Buffer --> Integer or String (Named)
*/
switch (target_type) {
case ACPI_TYPE_PACKAGE:
/*
* Here, can only store a package to an existing package.
* Storing a package to a Local/Arg is OK, and handled
* elsewhere.
*/
if (walk_state->opcode == AML_STORE_OP) {
if (source_desc->common.type !=
ACPI_TYPE_PACKAGE) {
ACPI_ERROR((AE_INFO,
"Cannot assign type [%s] to [Package] "
"(source must be type Pkg)",
acpi_ut_get_object_type_name
(source_desc)));
return_ACPI_STATUS(AE_AML_TARGET_TYPE);
}
break;
}
/* Fallthrough */
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_EVENT:
case ACPI_TYPE_MUTEX:
case ACPI_TYPE_REGION:
case ACPI_TYPE_POWER:
case ACPI_TYPE_PROCESSOR:
case ACPI_TYPE_THERMAL:
ACPI_ERROR((AE_INFO,
"Target must be [Buffer/Integer/String/Reference], found [%s] (%4.4s)",
acpi_ut_get_type_name(node->type),
node->name.ascii));
return_ACPI_STATUS(AE_AML_TARGET_TYPE);
default:
break;
}
}
/*
* Resolve the source object to an actual value
* (If it is a reference object)
@ -425,13 +490,13 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
/* Do the actual store operation */
switch (target_type) {
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_BUFFER:
/*
* The simple data types all support implicit source operand
* conversion before the store.
*/
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_BUFFER:
if ((walk_state->opcode == AML_COPY_OP) || !implicit_conversion) {
/*
@ -467,7 +532,7 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
new_desc->common.type);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Store %s into %s via Convert/Attach\n",
"Store type [%s] into [%s] via Convert/Attach\n",
acpi_ut_get_object_type_name
(source_desc),
acpi_ut_get_object_type_name
@ -491,15 +556,12 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
default:
/*
* No conversions for all other types. Directly store a copy of
* the source object. This is the ACPI spec-defined behavior for
* the copy_object operator.
* copy_object operator: No conversions for all other types.
* Instead, directly store a copy of the source object.
*
* NOTE: For the Store operator, this is a departure from the
* ACPI spec, which states "If conversion is impossible, abort
* the running control method". Instead, this code implements
* "If conversion is impossible, treat the Store operation as
* a CopyObject".
* This is the ACPI spec-defined behavior for the copy_object
* operator. (Note, for this default case, all normal
* Store/Target operations exited above with an error).
*/
status = acpi_ex_store_direct_to_node(source_desc, node,
walk_state);


@ -122,9 +122,10 @@ acpi_ex_resolve_object(union acpi_operand_object **source_desc_ptr,
/* Conversion successful but still not a valid type */
ACPI_ERROR((AE_INFO,
"Cannot assign type %s to %s (must be type Int/Str/Buf)",
"Cannot assign type [%s] to [%s] (must be type Int/Str/Buf)",
acpi_ut_get_object_type_name(source_desc),
acpi_ut_get_type_name(target_type)));
status = AE_AML_OPERAND_TYPE;
}
break;
@ -275,7 +276,7 @@ acpi_ex_store_object_to_object(union acpi_operand_object *source_desc,
/*
* All other types come here.
*/
ACPI_WARNING((AE_INFO, "Store into type %s not implemented",
ACPI_WARNING((AE_INFO, "Store into type [%s] not implemented",
acpi_ut_get_object_type_name(dest_desc)));
status = AE_NOT_IMPLEMENTED;


@ -60,7 +60,6 @@ acpi_ns_dump_one_device(acpi_handle obj_handle,
#if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER)
#ifdef ACPI_FUTURE_USAGE
static acpi_status
acpi_ns_dump_one_object_path(acpi_handle obj_handle,
u32 level, void *context, void **return_value);
@ -68,7 +67,6 @@ acpi_ns_dump_one_object_path(acpi_handle obj_handle,
static acpi_status
acpi_ns_get_max_depth(acpi_handle obj_handle,
u32 level, void *context, void **return_value);
#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*
@ -625,7 +623,6 @@ cleanup:
return (AE_OK);
}
#ifdef ACPI_FUTURE_USAGE
/*******************************************************************************
*
* FUNCTION: acpi_ns_dump_objects
@ -680,9 +677,7 @@ acpi_ns_dump_objects(acpi_object_type type,
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
}
#endif /* ACPI_FUTURE_USAGE */
#ifdef ACPI_FUTURE_USAGE
/*******************************************************************************
*
* FUNCTION: acpi_ns_dump_one_object_path, acpi_ns_get_max_depth
@ -810,7 +805,6 @@ acpi_ns_dump_object_paths(acpi_object_type type,
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
}
#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*


@ -226,7 +226,7 @@ acpi_ns_check_object_type(struct acpi_evaluate_info *info,
{
union acpi_operand_object *return_object = *return_object_ptr;
acpi_status status = AE_OK;
char type_buffer[48]; /* Room for 5 types */
char type_buffer[96]; /* Room for 10 types */
/* A Namespace node should not get here, but make sure */


@ -183,7 +183,6 @@ acpi_ps_append_arg(union acpi_parse_object *op, union acpi_parse_object *arg)
}
}
#ifdef ACPI_FUTURE_USAGE
/*******************************************************************************
*
* FUNCTION: acpi_ps_get_depth_next
@ -317,4 +316,3 @@ union acpi_parse_object *acpi_ps_get_child(union acpi_parse_object *op)
return (child);
}
#endif
#endif /* ACPI_FUTURE_USAGE */


@ -205,7 +205,6 @@ u8 acpi_ps_is_leading_char(u32 c)
/*
* Get op's name (4-byte name segment) or 0 if unnamed
*/
#ifdef ACPI_FUTURE_USAGE
u32 acpi_ps_get_name(union acpi_parse_object * op)
{
@ -219,7 +218,6 @@ u32 acpi_ps_get_name(union acpi_parse_object * op)
return (op->named.name);
}
#endif /* ACPI_FUTURE_USAGE */
/*
* Set op's name


@ -51,7 +51,6 @@ ACPI_MODULE_NAME("rsdump")
/*
* All functions in this module are used by the AML Debugger only
*/
#if defined(ACPI_DEBUGGER)
/* Local prototypes */
static void acpi_rs_out_string(char *title, char *value);
@ -565,5 +564,3 @@ static void acpi_rs_dump_word_list(u16 length, u16 *data)
acpi_os_printf("%25s%2.2X : %4.4X\n", "Word", i, data[i]);
}
}
#endif


@ -564,7 +564,6 @@ acpi_rs_get_crs_method_data(struct acpi_namespace_node *node,
*
******************************************************************************/
#ifdef ACPI_FUTURE_USAGE
acpi_status
acpi_rs_get_prs_method_data(struct acpi_namespace_node *node,
struct acpi_buffer *ret_buffer)
@ -596,7 +595,6 @@ acpi_rs_get_prs_method_data(struct acpi_namespace_node *node,
acpi_ut_remove_reference(obj_desc);
return_ACPI_STATUS(status);
}
#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*


@ -220,7 +220,7 @@ acpi_get_current_resources(acpi_handle device_handle,
}
ACPI_EXPORT_SYMBOL(acpi_get_current_resources)
#ifdef ACPI_FUTURE_USAGE
/*******************************************************************************
*
* FUNCTION: acpi_get_possible_resources
@ -262,7 +262,7 @@ acpi_get_possible_resources(acpi_handle device_handle,
}
ACPI_EXPORT_SYMBOL(acpi_get_possible_resources)
#endif /* ACPI_FUTURE_USAGE */
/*******************************************************************************
*
* FUNCTION: acpi_set_current_resources


@ -232,12 +232,27 @@ char *acpi_ut_get_type_name(acpi_object_type type)
char *acpi_ut_get_object_type_name(union acpi_operand_object *obj_desc)
{
ACPI_FUNCTION_TRACE(ut_get_object_type_name);
if (!obj_desc) {
return ("[NULL Object Descriptor]");
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Null Object Descriptor\n"));
return_PTR("[NULL Object Descriptor]");
}
return (acpi_ut_get_type_name(obj_desc->common.type));
/* These descriptor types share a common area */
if ((ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) &&
(ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_NAMED)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Invalid object descriptor type: 0x%2.2X [%s] (%p)\n",
ACPI_GET_DESCRIPTOR_TYPE(obj_desc),
acpi_ut_get_descriptor_name(obj_desc),
obj_desc));
return_PTR("Invalid object");
}
return_PTR(acpi_ut_get_type_name(obj_desc->common.type));
}
/*******************************************************************************
@ -407,8 +422,6 @@ static char *acpi_gbl_mutex_names[ACPI_NUM_MUTEX] = {
"ACPI_MTX_Events",
"ACPI_MTX_Caches",
"ACPI_MTX_Memory",
"ACPI_MTX_CommandComplete",
"ACPI_MTX_CommandReady"
};
char *acpi_ut_get_mutex_name(u32 mutex_id)


@ -45,6 +45,7 @@
#include "accommon.h"
#include "actables.h"
#include "acapps.h"
#include "errno.h"
#ifdef ACPI_ASL_COMPILER
#include "aslcompiler.h"
@ -301,6 +302,11 @@ acpi_ut_read_table_from_file(char *filename, struct acpi_table_header ** table)
file = fopen(filename, "rb");
if (!file) {
perror("Could not open input file");
if (errno == ENOENT) {
return (AE_NOT_EXIST);
}
return (status);
}


@ -241,8 +241,6 @@ acpi_status acpi_ut_init_globals(void)
acpi_gbl_disable_mem_tracking = FALSE;
#endif
ACPI_DEBUGGER_EXEC(acpi_gbl_db_terminate_threads = FALSE);
return_ACPI_STATUS(AE_OK);
}
@ -284,6 +282,19 @@ void acpi_ut_subsystem_shutdown(void)
{
ACPI_FUNCTION_TRACE(ut_subsystem_shutdown);
/* Just exit if subsystem is already shutdown */
if (acpi_gbl_shutdown) {
ACPI_ERROR((AE_INFO, "ACPI Subsystem is already terminated"));
return_VOID;
}
/* Subsystem appears active, go ahead and shut it down */
acpi_gbl_shutdown = TRUE;
acpi_gbl_startup_flags = 0;
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Shutting down ACPI Subsystem\n"));
#ifndef ACPI_ASL_COMPILER
/* Close the acpi_event Handling */


@ -108,6 +108,21 @@ acpi_status acpi_ut_mutex_initialize(void)
/* Create the reader/writer lock for namespace access */
status = acpi_ut_create_rw_lock(&acpi_gbl_namespace_rw_lock);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
#ifdef ACPI_DEBUGGER
/* Debugger Support */
status = acpi_os_create_mutex(&acpi_gbl_db_command_ready);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
status = acpi_os_create_mutex(&acpi_gbl_db_command_complete);
#endif
return_ACPI_STATUS(status);
}
@ -147,6 +162,12 @@ void acpi_ut_mutex_terminate(void)
/* Delete the reader/writer lock */
acpi_ut_delete_rw_lock(&acpi_gbl_namespace_rw_lock);
#ifdef ACPI_DEBUGGER
acpi_os_delete_mutex(acpi_gbl_db_command_ready);
acpi_os_delete_mutex(acpi_gbl_db_command_complete);
#endif
return_VOID;
}


@ -67,23 +67,6 @@ acpi_status __init acpi_terminate(void)
ACPI_FUNCTION_TRACE(acpi_terminate);
/* Just exit if subsystem is already shutdown */
if (acpi_gbl_shutdown) {
ACPI_ERROR((AE_INFO, "ACPI Subsystem is already terminated"));
return_ACPI_STATUS(AE_OK);
}
/* Subsystem appears active, go ahead and shut it down */
acpi_gbl_shutdown = TRUE;
acpi_gbl_startup_flags = 0;
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Shutting down ACPI Subsystem\n"));
/* Terminate the AML Debugger if present */
ACPI_DEBUGGER_EXEC(acpi_gbl_db_terminate_threads = TRUE);
/* Shutdown and free all resources */
acpi_ut_subsystem_shutdown();
@ -270,7 +253,7 @@ acpi_install_initialization_handler(acpi_init_handler handler, u32 function)
}
ACPI_EXPORT_SYMBOL(acpi_install_initialization_handler)
#endif /* ACPI_FUTURE_USAGE */
#endif
/*****************************************************************************
*

drivers/acpi/cppc_acpi.c (new file, 733 lines)

@ -0,0 +1,733 @@
/*
* CPPC (Collaborative Processor Performance Control) methods used by CPUfreq drivers.
*
* (C) Copyright 2014, 2015 Linaro Ltd.
* Author: Ashwin Chaugule <ashwin.chaugule@linaro.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; version 2
* of the License.
*
* CPPC describes a few methods for controlling CPU performance using
* information from a per CPU table called CPC. This table is described in
* the ACPI v5.0+ specification. The table consists of a list of
* registers which may be memory mapped or hardware registers and also may
* include some static integer values.
*
* CPU performance is expressed on an abstract continuous scale, as opposed to
* a discretized P-state scale which is tied to CPU frequency only. In brief, the basic
* operation involves:
*
* - OS makes a CPU performance request. (Can provide min and max bounds)
*
* - Platform (such as BMC) is free to optimize request within requested bounds
* depending on power/thermal budgets etc.
*
* - Platform conveys its decision back to OS
*
* The communication between OS and platform occurs through another medium
* called the Platform Communication Channel (PCC). This is a generic mailbox-like
* mechanism which includes doorbell semantics to indicate register updates.
* See drivers/mailbox/pcc.c for details on PCC.
*
* Finer details about the PCC and CPPC spec are available in the ACPI v5.1 and
* above specifications.
*/
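/*
 * Editor-added usage sketch (not part of the original file): the basic
 * request/feedback flow a CPPC-aware cpufreq backend might follow using the
 * interfaces defined below. Error handling is trimmed and the CPU number is
 * arbitrary.
 *
 *	struct cppc_perf_caps caps;
 *	struct cppc_perf_ctrls ctrls;
 *	int cpu = 0;
 *
 *	if (!cppc_get_perf_caps(cpu, &caps)) {
 *		// Request the highest advertised performance; the platform
 *		// may deliver less, depending on power/thermal budgets.
 *		ctrls.desired_perf = caps.highest_perf;
 *		cppc_set_perf(cpu, &ctrls);
 *	}
 */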
#define pr_fmt(fmt) "ACPI CPPC: " fmt
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <acpi/cppc_acpi.h>
/*
* Lock to provide mutually exclusive access to the PCC
* channel. e.g. When the remote updates the shared region
* with new data, the reader needs to be protected from
* other CPUs activity on the same channel.
*/
static DEFINE_SPINLOCK(pcc_lock);
/*
* The cpc_desc structure contains the ACPI register details
* as described in the per CPU _CPC tables. The details
* include the type of register (e.g. PCC, System IO, FFH etc.)
* and destination addresses which lets us READ/WRITE CPU performance
* information using the appropriate I/O methods.
*/
static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
/* This layer handles all the PCC specifics for CPPC. */
static struct mbox_chan *pcc_channel;
static void __iomem *pcc_comm_addr;
static u64 comm_base_addr;
static int pcc_subspace_idx = -1;
static u16 pcc_cmd_delay;
static bool pcc_channel_acquired;
/*
* Arbitrary Retries in case the remote processor is slow to respond
* to PCC commands.
*/
#define NUM_RETRIES 500
static int send_pcc_cmd(u16 cmd)
{
int retries, result = -EIO;
struct acpi_pcct_hw_reduced *pcct_ss = pcc_channel->con_priv;
struct acpi_pcct_shared_memory *generic_comm_base =
(struct acpi_pcct_shared_memory *) pcc_comm_addr;
u32 cmd_latency = pcct_ss->latency;
/* Min time OS should wait before sending next command. */
udelay(pcc_cmd_delay);
/* Write to the shared comm region. */
writew(cmd, &generic_comm_base->command);
/* Flip CMD COMPLETE bit */
writew(0, &generic_comm_base->status);
/* Ring doorbell */
result = mbox_send_message(pcc_channel, &cmd);
if (result < 0) {
pr_err("Err sending PCC mbox message. cmd:%d, ret:%d\n",
cmd, result);
return result;
}
/* Wait for a nominal time to let platform process command. */
udelay(cmd_latency);
/* Retry in case the remote processor was too slow to catch up. */
for (retries = NUM_RETRIES; retries > 0; retries--) {
if (readw_relaxed(&generic_comm_base->status) & PCC_CMD_COMPLETE) {
result = 0;
break;
}
}
mbox_client_txdone(pcc_channel, result);
return result;
}
static void cppc_chan_tx_done(struct mbox_client *cl, void *msg, int ret)
{
if (ret)
pr_debug("TX did not complete: CMD sent:%x, ret:%d\n",
*(u16 *)msg, ret);
else
pr_debug("TX completed. CMD sent:%x, ret:%d\n",
*(u16 *)msg, ret);
}
struct mbox_client cppc_mbox_cl = {
.tx_done = cppc_chan_tx_done,
.knows_txdone = true,
};
static int acpi_get_psd(struct cpc_desc *cpc_ptr, acpi_handle handle)
{
int result = -EFAULT;
acpi_status status = AE_OK;
struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
struct acpi_buffer format = {sizeof("NNNNN"), "NNNNN"};
struct acpi_buffer state = {0, NULL};
union acpi_object *psd = NULL;
struct acpi_psd_package *pdomain;
status = acpi_evaluate_object_typed(handle, "_PSD", NULL, &buffer,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
return -ENODEV;
psd = buffer.pointer;
if (!psd || psd->package.count != 1) {
pr_debug("Invalid _PSD data\n");
goto end;
}
pdomain = &(cpc_ptr->domain_info);
state.length = sizeof(struct acpi_psd_package);
state.pointer = pdomain;
status = acpi_extract_package(&(psd->package.elements[0]),
&format, &state);
if (ACPI_FAILURE(status)) {
pr_debug("Invalid _PSD data for CPU:%d\n", cpc_ptr->cpu_id);
goto end;
}
if (pdomain->num_entries != ACPI_PSD_REV0_ENTRIES) {
pr_debug("Unknown _PSD:num_entries for CPU:%d\n", cpc_ptr->cpu_id);
goto end;
}
if (pdomain->revision != ACPI_PSD_REV0_REVISION) {
pr_debug("Unknown _PSD:revision for CPU: %d\n", cpc_ptr->cpu_id);
goto end;
}
if (pdomain->coord_type != DOMAIN_COORD_TYPE_SW_ALL &&
pdomain->coord_type != DOMAIN_COORD_TYPE_SW_ANY &&
pdomain->coord_type != DOMAIN_COORD_TYPE_HW_ALL) {
pr_debug("Invalid _PSD:coord_type for CPU:%d\n", cpc_ptr->cpu_id);
goto end;
}
result = 0;
end:
kfree(buffer.pointer);
return result;
}
/**
* acpi_get_psd_map - Map the CPUs in a common freq domain.
* @all_cpu_data: Ptrs to CPU specific CPPC data including PSD info.
*
* Return: 0 for success or negative value for err.
*/
int acpi_get_psd_map(struct cpudata **all_cpu_data)
{
int count_target;
int retval = 0;
unsigned int i, j;
cpumask_var_t covered_cpus;
struct cpudata *pr, *match_pr;
struct acpi_psd_package *pdomain;
struct acpi_psd_package *match_pdomain;
struct cpc_desc *cpc_ptr, *match_cpc_ptr;
if (!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL))
return -ENOMEM;
/*
* Now that we have _PSD data from all CPUs, lets setup P-state
* domain info.
*/
for_each_possible_cpu(i) {
pr = all_cpu_data[i];
if (!pr)
continue;
if (cpumask_test_cpu(i, covered_cpus))
continue;
cpc_ptr = per_cpu(cpc_desc_ptr, i);
if (!cpc_ptr)
continue;
pdomain = &(cpc_ptr->domain_info);
cpumask_set_cpu(i, pr->shared_cpu_map);
cpumask_set_cpu(i, covered_cpus);
if (pdomain->num_processors <= 1)
continue;
/* Validate the Domain info */
count_target = pdomain->num_processors;
if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ALL)
pr->shared_type = CPUFREQ_SHARED_TYPE_ALL;
else if (pdomain->coord_type == DOMAIN_COORD_TYPE_HW_ALL)
pr->shared_type = CPUFREQ_SHARED_TYPE_HW;
else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
pr->shared_type = CPUFREQ_SHARED_TYPE_ANY;
for_each_possible_cpu(j) {
if (i == j)
continue;
match_cpc_ptr = per_cpu(cpc_desc_ptr, j);
if (!match_cpc_ptr)
continue;
match_pdomain = &(match_cpc_ptr->domain_info);
if (match_pdomain->domain != pdomain->domain)
continue;
/* Here i and j are in the same domain */
if (match_pdomain->num_processors != count_target) {
retval = -EFAULT;
goto err_ret;
}
if (pdomain->coord_type != match_pdomain->coord_type) {
retval = -EFAULT;
goto err_ret;
}
cpumask_set_cpu(j, covered_cpus);
cpumask_set_cpu(j, pr->shared_cpu_map);
}
for_each_possible_cpu(j) {
if (i == j)
continue;
match_pr = all_cpu_data[j];
if (!match_pr)
continue;
match_cpc_ptr = per_cpu(cpc_desc_ptr, j);
if (!match_cpc_ptr)
continue;
match_pdomain = &(match_cpc_ptr->domain_info);
if (match_pdomain->domain != pdomain->domain)
continue;
match_pr->shared_type = pr->shared_type;
cpumask_copy(match_pr->shared_cpu_map,
pr->shared_cpu_map);
}
}
err_ret:
for_each_possible_cpu(i) {
pr = all_cpu_data[i];
if (!pr)
continue;
/* Assume no coordination on any error parsing domain info */
if (retval) {
cpumask_clear(pr->shared_cpu_map);
cpumask_set_cpu(i, pr->shared_cpu_map);
pr->shared_type = CPUFREQ_SHARED_TYPE_ALL;
}
}
free_cpumask_var(covered_cpus);
return retval;
}
EXPORT_SYMBOL_GPL(acpi_get_psd_map);
static int register_pcc_channel(int pcc_subspace_idx)
{
struct acpi_pcct_subspace *cppc_ss;
unsigned int len;
if (pcc_subspace_idx >= 0) {
pcc_channel = pcc_mbox_request_channel(&cppc_mbox_cl,
pcc_subspace_idx);
if (IS_ERR(pcc_channel)) {
pr_err("Failed to find PCC communication channel\n");
return -ENODEV;
}
/*
* The PCC mailbox controller driver should
* have parsed the PCCT (global table of all
* PCC channels) and stored pointers to the
* subspace communication region in con_priv.
*/
cppc_ss = pcc_channel->con_priv;
if (!cppc_ss) {
pr_err("No PCC subspace found for CPPC\n");
return -ENODEV;
}
/*
* This is the shared communication region
* for the OS and Platform to communicate over.
*/
comm_base_addr = cppc_ss->base_address;
len = cppc_ss->length;
pcc_cmd_delay = cppc_ss->min_turnaround_time;
pcc_comm_addr = acpi_os_ioremap(comm_base_addr, len);
if (!pcc_comm_addr) {
pr_err("Failed to ioremap PCC comm region mem\n");
return -ENOMEM;
}
/* Set flag so that we don't come here for each CPU. */
pcc_channel_acquired = true;
}
return 0;
}
/*
* An example CPC table looks like the following.
*
* Name(_CPC, Package()
* {
* 17,
* NumEntries
* 1,
* // Revision
* ResourceTemplate(){Register(PCC, 32, 0, 0x120, 2)},
* // Highest Performance
* ResourceTemplate(){Register(PCC, 32, 0, 0x124, 2)},
* // Nominal Performance
* ResourceTemplate(){Register(PCC, 32, 0, 0x128, 2)},
* // Lowest Nonlinear Performance
* ResourceTemplate(){Register(PCC, 32, 0, 0x12C, 2)},
* // Lowest Performance
* ResourceTemplate(){Register(PCC, 32, 0, 0x130, 2)},
* // Guaranteed Performance Register
* ResourceTemplate(){Register(PCC, 32, 0, 0x110, 2)},
* // Desired Performance Register
* ResourceTemplate(){Register(SystemMemory, 0, 0, 0, 0)},
* ..
* ..
* ..
*
* }
* Each Register() encodes how to access that specific register.
* e.g. a sample PCC entry has the following encoding:
*
* Register (
* PCC,
* AddressSpaceKeyword
* 8,
* //RegisterBitWidth
* 8,
* //RegisterBitOffset
* 0x30,
* //RegisterAddress
* 9
* //AccessSize (subspace ID)
* 0
* )
* }
*/
/**
* acpi_cppc_processor_probe - Search for per CPU _CPC objects.
* @pr: Ptr to acpi_processor containing this CPU's logical ID.
*
* Return: 0 for success or negative value for err.
*/
int acpi_cppc_processor_probe(struct acpi_processor *pr)
{
struct acpi_buffer output = {ACPI_ALLOCATE_BUFFER, NULL};
union acpi_object *out_obj, *cpc_obj;
struct cpc_desc *cpc_ptr;
struct cpc_reg *gas_t;
acpi_handle handle = pr->handle;
unsigned int num_ent, i, cpc_rev;
acpi_status status;
int ret = -EFAULT;
/* Parse the ACPI _CPC table for this cpu. */
status = acpi_evaluate_object_typed(handle, "_CPC", NULL, &output,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status)) {
ret = -ENODEV;
goto out_buf_free;
}
out_obj = (union acpi_object *) output.pointer;
cpc_ptr = kzalloc(sizeof(struct cpc_desc), GFP_KERNEL);
if (!cpc_ptr) {
ret = -ENOMEM;
goto out_buf_free;
}
/* First entry is NumEntries. */
cpc_obj = &out_obj->package.elements[0];
if (cpc_obj->type == ACPI_TYPE_INTEGER) {
num_ent = cpc_obj->integer.value;
} else {
pr_debug("Unexpected entry type(%d) for NumEntries\n",
cpc_obj->type);
goto out_free;
}
/* Only support CPPCv2. Bail otherwise. */
if (num_ent != CPPC_NUM_ENT) {
pr_debug("Firmware exports %d entries. Expected: %d\n",
num_ent, CPPC_NUM_ENT);
goto out_free;
}
/* Second entry should be revision. */
cpc_obj = &out_obj->package.elements[1];
if (cpc_obj->type == ACPI_TYPE_INTEGER) {
cpc_rev = cpc_obj->integer.value;
} else {
pr_debug("Unexpected entry type(%d) for Revision\n",
cpc_obj->type);
goto out_free;
}
if (cpc_rev != CPPC_REV) {
pr_debug("Firmware exports revision:%d. Expected:%d\n",
cpc_rev, CPPC_REV);
goto out_free;
}
/* Iterate through remaining entries in _CPC */
for (i = 2; i < num_ent; i++) {
cpc_obj = &out_obj->package.elements[i];
if (cpc_obj->type == ACPI_TYPE_INTEGER) {
cpc_ptr->cpc_regs[i-2].type = ACPI_TYPE_INTEGER;
cpc_ptr->cpc_regs[i-2].cpc_entry.int_value = cpc_obj->integer.value;
} else if (cpc_obj->type == ACPI_TYPE_BUFFER) {
gas_t = (struct cpc_reg *)
cpc_obj->buffer.pointer;
/*
* The PCC Subspace index is encoded inside
* the CPC table entries. The same PCC index
* will be used for all the PCC entries,
* so extract it only once.
*/
if (gas_t->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
if (pcc_subspace_idx < 0)
pcc_subspace_idx = gas_t->access_width;
else if (pcc_subspace_idx != gas_t->access_width) {
pr_debug("Mismatched PCC ids.\n");
goto out_free;
}
} else if (gas_t->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) {
/* Support only PCC and SYS MEM type regs */
pr_debug("Unsupported register type: %d\n", gas_t->space_id);
goto out_free;
}
cpc_ptr->cpc_regs[i-2].type = ACPI_TYPE_BUFFER;
memcpy(&cpc_ptr->cpc_regs[i-2].cpc_entry.reg, gas_t, sizeof(*gas_t));
} else {
pr_debug("Err in entry:%d in CPC table of CPU:%d \n", i, pr->id);
goto out_free;
}
}
/* Store CPU Logical ID */
cpc_ptr->cpu_id = pr->id;
/* Plug it into this CPUs CPC descriptor. */
per_cpu(cpc_desc_ptr, pr->id) = cpc_ptr;
/* Parse PSD data for this CPU */
ret = acpi_get_psd(cpc_ptr, handle);
if (ret)
goto out_free;
/* Register PCC channel once for all CPUs. */
if (!pcc_channel_acquired) {
ret = register_pcc_channel(pcc_subspace_idx);
if (ret)
goto out_free;
}
/* Everything looks okay */
pr_debug("Parsed CPC struct for CPU: %d\n", pr->id);
kfree(output.pointer);
return 0;
out_free:
kfree(cpc_ptr);
out_buf_free:
kfree(output.pointer);
return ret;
}
EXPORT_SYMBOL_GPL(acpi_cppc_processor_probe);
/**
* acpi_cppc_processor_exit - Cleanup CPC structs.
* @pr: Ptr to acpi_processor containing this CPU's logical ID.
*
* Return: Void
*/
void acpi_cppc_processor_exit(struct acpi_processor *pr)
{
struct cpc_desc *cpc_ptr;
cpc_ptr = per_cpu(cpc_desc_ptr, pr->id);
kfree(cpc_ptr);
}
EXPORT_SYMBOL_GPL(acpi_cppc_processor_exit);
static u64 get_phys_addr(struct cpc_reg *reg)
{
/* PCC communication addr space begins at byte offset 0x8. */
if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM)
return (u64)comm_base_addr + 0x8 + reg->address;
else
return reg->address;
}
static void cpc_read(struct cpc_reg *reg, u64 *val)
{
u64 addr = get_phys_addr(reg);
acpi_os_read_memory((acpi_physical_address)addr,
val, reg->bit_width);
}
static void cpc_write(struct cpc_reg *reg, u64 val)
{
u64 addr = get_phys_addr(reg);
acpi_os_write_memory((acpi_physical_address)addr,
val, reg->bit_width);
}
/**
* cppc_get_perf_caps - Get a CPUs performance capabilities.
* @cpunum: CPU from which to get capabilities info.
* @perf_caps: ptr to cppc_perf_caps. See cppc_acpi.h
*
* Return: 0 for success with perf_caps populated else -ERRNO.
*/
int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
struct cpc_register_resource *highest_reg, *lowest_reg, *ref_perf,
*nom_perf;
u64 high, low, ref, nom;
int ret = 0;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
return -ENODEV;
}
highest_reg = &cpc_desc->cpc_regs[HIGHEST_PERF];
lowest_reg = &cpc_desc->cpc_regs[LOWEST_PERF];
ref_perf = &cpc_desc->cpc_regs[REFERENCE_PERF];
nom_perf = &cpc_desc->cpc_regs[NOMINAL_PERF];
spin_lock(&pcc_lock);
/* Are any of the regs PCC ?*/
if ((highest_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(lowest_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(ref_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(nom_perf->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) {
/* Ring doorbell once to update PCC subspace */
if (send_pcc_cmd(CMD_READ)) {
ret = -EIO;
goto out_err;
}
}
cpc_read(&highest_reg->cpc_entry.reg, &high);
perf_caps->highest_perf = high;
cpc_read(&lowest_reg->cpc_entry.reg, &low);
perf_caps->lowest_perf = low;
cpc_read(&ref_perf->cpc_entry.reg, &ref);
perf_caps->reference_perf = ref;
cpc_read(&nom_perf->cpc_entry.reg, &nom);
perf_caps->nominal_perf = nom;
if (!ref)
perf_caps->reference_perf = perf_caps->nominal_perf;
if (!high || !low || !nom)
ret = -EFAULT;
out_err:
spin_unlock(&pcc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(cppc_get_perf_caps);
/**
* cppc_get_perf_ctrs - Read a CPUs performance feedback counters.
* @cpunum: CPU from which to read counters.
* @perf_fb_ctrs: ptr to cppc_perf_fb_ctrs. See cppc_acpi.h
*
* Return: 0 for success with perf_fb_ctrs populated else -ERRNO.
*/
int cppc_get_perf_ctrs(int cpunum, struct cppc_perf_fb_ctrs *perf_fb_ctrs)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
struct cpc_register_resource *delivered_reg, *reference_reg;
u64 delivered, reference;
int ret = 0;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpunum);
return -ENODEV;
}
delivered_reg = &cpc_desc->cpc_regs[DELIVERED_CTR];
reference_reg = &cpc_desc->cpc_regs[REFERENCE_CTR];
spin_lock(&pcc_lock);
/* Are any of the regs PCC ?*/
if ((delivered_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) ||
(reference_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM)) {
/* Ring doorbell once to update PCC subspace */
if (send_pcc_cmd(CMD_READ)) {
ret = -EIO;
goto out_err;
}
}
cpc_read(&delivered_reg->cpc_entry.reg, &delivered);
cpc_read(&reference_reg->cpc_entry.reg, &reference);
if (!delivered || !reference) {
ret = -EFAULT;
goto out_err;
}
perf_fb_ctrs->delivered = delivered;
perf_fb_ctrs->reference = reference;
perf_fb_ctrs->delivered -= perf_fb_ctrs->prev_delivered;
perf_fb_ctrs->reference -= perf_fb_ctrs->prev_reference;
perf_fb_ctrs->prev_delivered = delivered;
perf_fb_ctrs->prev_reference = reference;
out_err:
spin_unlock(&pcc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(cppc_get_perf_ctrs);
/**
* cppc_set_perf - Set a CPUs performance controls.
* @cpu: CPU for which to set performance controls.
* @perf_ctrls: ptr to cppc_perf_ctrls. See cppc_acpi.h
*
* Return: 0 for success, -ERRNO otherwise.
*/
int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
struct cpc_register_resource *desired_reg;
int ret = 0;
if (!cpc_desc) {
pr_debug("No CPC descriptor for CPU:%d\n", cpu);
return -ENODEV;
}
desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
spin_lock(&pcc_lock);
/*
* Skip writing MIN/MAX until Linux knows how to come up with
* useful values.
*/
cpc_write(&desired_reg->cpc_entry.reg, perf_ctrls->desired_perf);
/* Is this a PCC reg ?*/
if (desired_reg->cpc_entry.reg.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
/* Ring doorbell so Remote can get our perf request. */
if (send_pcc_cmd(CMD_WRITE))
ret = -EIO;
}
spin_unlock(&pcc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(cppc_set_perf);


@ -962,23 +962,6 @@ int acpi_subsys_prepare(struct device *dev)
}
EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
/**
* acpi_subsys_complete - Finalize device's resume during system resume.
* @dev: Device to handle.
*/
void acpi_subsys_complete(struct device *dev)
{
pm_generic_complete(dev);
/*
* If the device had been runtime-suspended before the system went into
* the sleep state it is going out of and it has never been resumed till
* now, resume it in case the firmware powered it up.
*/
if (dev->power.direct_complete)
pm_request_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_complete);
/**
* acpi_subsys_suspend - Run the device driver's suspend callback.
* @dev: Device to handle.
@ -1047,7 +1030,7 @@ static struct dev_pm_domain acpi_general_pm_domain = {
.runtime_resume = acpi_subsys_runtime_resume,
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
.complete = pm_complete_with_resume_check,
.suspend = acpi_subsys_suspend,
.suspend_late = acpi_subsys_suspend_late,
.resume_early = acpi_subsys_resume_early,


@ -26,6 +26,106 @@
#include "internal.h"
static ssize_t acpi_object_path(acpi_handle handle, char *buf)
{
struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL};
int result;
result = acpi_get_name(handle, ACPI_FULL_PATHNAME, &path);
if (result)
return result;
result = sprintf(buf, "%s\n", (char*)path.pointer);
kfree(path.pointer);
return result;
}
struct acpi_data_node_attr {
struct attribute attr;
ssize_t (*show)(struct acpi_data_node *, char *);
ssize_t (*store)(struct acpi_data_node *, const char *, size_t count);
};
#define DATA_NODE_ATTR(_name) \
static struct acpi_data_node_attr data_node_##_name = \
__ATTR(_name, 0444, data_node_show_##_name, NULL)
static ssize_t data_node_show_path(struct acpi_data_node *dn, char *buf)
{
return acpi_object_path(dn->handle, buf);
}
DATA_NODE_ATTR(path);
static struct attribute *acpi_data_node_default_attrs[] = {
&data_node_path.attr,
NULL
};
#define to_data_node(k) container_of(k, struct acpi_data_node, kobj)
#define to_attr(a) container_of(a, struct acpi_data_node_attr, attr)
static ssize_t acpi_data_node_attr_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
struct acpi_data_node *dn = to_data_node(kobj);
struct acpi_data_node_attr *dn_attr = to_attr(attr);
return dn_attr->show ? dn_attr->show(dn, buf) : -ENXIO;
}
static const struct sysfs_ops acpi_data_node_sysfs_ops = {
.show = acpi_data_node_attr_show,
};
static void acpi_data_node_release(struct kobject *kobj)
{
struct acpi_data_node *dn = to_data_node(kobj);
complete(&dn->kobj_done);
}
static struct kobj_type acpi_data_node_ktype = {
.sysfs_ops = &acpi_data_node_sysfs_ops,
.default_attrs = acpi_data_node_default_attrs,
.release = acpi_data_node_release,
};
static void acpi_expose_nondev_subnodes(struct kobject *kobj,
struct acpi_device_data *data)
{
struct list_head *list = &data->subnodes;
struct acpi_data_node *dn;
if (list_empty(list))
return;
list_for_each_entry(dn, list, sibling) {
int ret;
init_completion(&dn->kobj_done);
ret = kobject_init_and_add(&dn->kobj, &acpi_data_node_ktype,
kobj, dn->name);
if (ret)
acpi_handle_err(dn->handle, "Failed to expose (%d)\n", ret);
else
acpi_expose_nondev_subnodes(&dn->kobj, &dn->data);
}
}
static void acpi_hide_nondev_subnodes(struct acpi_device_data *data)
{
struct list_head *list = &data->subnodes;
struct acpi_data_node *dn;
if (list_empty(list))
return;
list_for_each_entry_reverse(dn, list, sibling) {
acpi_hide_nondev_subnodes(&dn->data);
kobject_put(&dn->kobj);
}
}
/**
* create_pnp_modalias - Create hid/cid(s) string for modalias and uevent
* @acpi_dev: ACPI device object.
@ -323,20 +423,12 @@ static ssize_t acpi_device_adr_show(struct device *dev,
}
static DEVICE_ATTR(adr, 0444, acpi_device_adr_show, NULL);
static ssize_t
acpi_device_path_show(struct device *dev, struct device_attribute *attr, char *buf) {
static ssize_t acpi_device_path_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct acpi_device *acpi_dev = to_acpi_device(dev);
struct acpi_buffer path = {ACPI_ALLOCATE_BUFFER, NULL};
int result;
result = acpi_get_name(acpi_dev->handle, ACPI_FULL_PATHNAME, &path);
if (result)
goto end;
result = sprintf(buf, "%s\n", (char*)path.pointer);
kfree(path.pointer);
end:
return result;
return acpi_object_path(acpi_dev->handle, buf);
}
static DEVICE_ATTR(path, 0444, acpi_device_path_show, NULL);
@ -475,6 +567,8 @@ int acpi_device_setup_files(struct acpi_device *dev)
&dev_attr_real_power_state);
}
acpi_expose_nondev_subnodes(&dev->dev.kobj, &dev->data);
end:
return result;
}
@ -485,6 +579,8 @@ end:
*/
void acpi_device_remove_files(struct acpi_device *dev)
{
acpi_hide_nondev_subnodes(&dev->data);
if (dev->flags.power_manageable) {
device_remove_file(&dev->dev, &dev_attr_power_state);
if (dev->power.flags.power_resources)


@ -441,17 +441,31 @@ static void acpi_ec_complete_query(struct acpi_ec *ec)
static bool acpi_ec_guard_event(struct acpi_ec *ec)
{
bool guarded = true;
unsigned long flags;
spin_lock_irqsave(&ec->lock, flags);
/*
* If firmware SCI_EVT clearing timing is "event", we actually
* don't know when the SCI_EVT will be cleared by firmware after
* evaluating _Qxx, so we need to re-check SCI_EVT after waiting an
* acceptable period.
*
* The guarding period begins when EC_FLAGS_QUERY_PENDING is
* flagged, which means SCI_EVT check has just been performed.
* But if the current transaction is ACPI_EC_COMMAND_QUERY, the
* guarding should have already been performed (via
* EC_FLAGS_QUERY_GUARDING) and should not be applied so that the
* ACPI_EC_COMMAND_QUERY transaction can be transitioned into
* ACPI_EC_COMMAND_POLL state immediately.
*/
if (ec_event_clearing == ACPI_EC_EVT_TIMING_STATUS ||
ec_event_clearing == ACPI_EC_EVT_TIMING_QUERY ||
!test_bit(EC_FLAGS_QUERY_PENDING, &ec->flags) ||
(ec->curr && ec->curr->command == ACPI_EC_COMMAND_QUERY))
return false;
/*
* Postpone the query submission to allow the firmware to proceed;
* SCI_EVT should not be checked before the firmware reflags it.
*/
return true;
guarded = false;
spin_unlock_irqrestore(&ec->lock, flags);
return guarded;
}
static int ec_transaction_polled(struct acpi_ec *ec)
@ -597,6 +611,7 @@ static int ec_guard(struct acpi_ec *ec)
unsigned long guard = usecs_to_jiffies(ec_polling_guard);
unsigned long timeout = ec->timestamp + guard;
/* Ensure guarding period before polling EC status */
do {
if (ec_busy_polling) {
/* Perform busy polling */
@ -606,11 +621,13 @@ static int ec_guard(struct acpi_ec *ec)
} else {
/*
* Perform wait polling
*
* For SCI_EVT clearing timing of "event",
* performing guarding before re-checking the
* SCI_EVT. Otherwise, such guarding is not needed
* due to the old practices.
* 1. Wait the transaction to be completed by the
* GPE handler after the transaction enters
* ACPI_EC_COMMAND_POLL state.
* 2. A special guarding logic is also required
* for event clearing mode "event" before the
* transaction enters ACPI_EC_COMMAND_POLL
* state.
*/
if (!ec_transaction_polled(ec) &&
!acpi_ec_guard_event(ec))
@ -620,7 +637,6 @@ static int ec_guard(struct acpi_ec *ec)
guard))
return 0;
}
/* Guard the register accesses for the polling modes */
} while (time_before(jiffies, timeout));
return -ETIME;
}
@ -929,6 +945,23 @@ acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler)
return handler;
}
static struct acpi_ec_query_handler *
acpi_ec_get_query_handler_by_value(struct acpi_ec *ec, u8 value)
{
struct acpi_ec_query_handler *handler;
bool found = false;
mutex_lock(&ec->mutex);
list_for_each_entry(handler, &ec->list, node) {
if (value == handler->query_bit) {
found = true;
break;
}
}
mutex_unlock(&ec->mutex);
return found ? acpi_ec_get_query_handler(handler) : NULL;
}
static void acpi_ec_query_handler_release(struct kref *kref)
{
struct acpi_ec_query_handler *handler =
@ -964,14 +997,15 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
}
EXPORT_SYMBOL_GPL(acpi_ec_add_query_handler);
void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
static void acpi_ec_remove_query_handlers(struct acpi_ec *ec,
bool remove_all, u8 query_bit)
{
struct acpi_ec_query_handler *handler, *tmp;
LIST_HEAD(free_list);
mutex_lock(&ec->mutex);
list_for_each_entry_safe(handler, tmp, &ec->list, node) {
if (query_bit == handler->query_bit) {
if (remove_all || query_bit == handler->query_bit) {
list_del_init(&handler->node);
list_add(&handler->node, &free_list);
}
@ -980,6 +1014,11 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
list_for_each_entry_safe(handler, tmp, &free_list, node)
acpi_ec_put_query_handler(handler);
}
void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit)
{
acpi_ec_remove_query_handlers(ec, false, query_bit);
}
EXPORT_SYMBOL_GPL(acpi_ec_remove_query_handler);
static struct acpi_ec_query *acpi_ec_create_query(u8 *pval)
@ -1025,7 +1064,6 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
{
u8 value = 0;
int result;
struct acpi_ec_query_handler *handler;
struct acpi_ec_query *q;
q = acpi_ec_create_query(&value);
@ -1043,25 +1081,26 @@ static int acpi_ec_query(struct acpi_ec *ec, u8 *data)
if (result)
goto err_exit;
mutex_lock(&ec->mutex);
result = -ENODATA;
list_for_each_entry(handler, &ec->list, node) {
if (value == handler->query_bit) {
result = 0;
q->handler = acpi_ec_get_query_handler(handler);
ec_dbg_evt("Query(0x%02x) scheduled",
q->handler->query_bit);
/*
* It is reported that _Qxx are evaluated in a
* parallel way on Windows:
* https://bugzilla.kernel.org/show_bug.cgi?id=94411
*/
if (!schedule_work(&q->work))
result = -EBUSY;
break;
}
q->handler = acpi_ec_get_query_handler_by_value(ec, value);
if (!q->handler) {
result = -ENODATA;
goto err_exit;
}
/*
* It is reported that _Qxx are evaluated in a parallel way on
* Windows:
* https://bugzilla.kernel.org/show_bug.cgi?id=94411
*
* Put this log entry before schedule_work() in order to make
* it appear before any other log entries emitted during the
* work queue execution.
*/
ec_dbg_evt("Query(0x%02x) scheduled", value);
if (!schedule_work(&q->work)) {
ec_dbg_evt("Query(0x%02x) overlapped", value);
result = -EBUSY;
}
mutex_unlock(&ec->mutex);
err_exit:
if (result && q)
@ -1354,19 +1393,13 @@ static int acpi_ec_add(struct acpi_device *device)
static int acpi_ec_remove(struct acpi_device *device)
{
struct acpi_ec *ec;
struct acpi_ec_query_handler *handler, *tmp;
if (!device)
return -EINVAL;
ec = acpi_driver_data(device);
ec_remove_handlers(ec);
mutex_lock(&ec->mutex);
list_for_each_entry_safe(handler, tmp, &ec->list, node) {
list_del(&handler->node);
kfree(handler);
}
mutex_unlock(&ec->mutex);
acpi_ec_remove_query_handlers(ec, true, 0);
release_region(ec->data_addr, 1);
release_region(ec->command_addr, 1);
device->driver_data = NULL;


@ -351,13 +351,12 @@ static int acpi_platform_notify_remove(struct device *dev)
return 0;
}
int __init init_acpi_device_notify(void)
void __init init_acpi_device_notify(void)
{
if (platform_notify || platform_notify_remove) {
printk(KERN_ERR PREFIX "Can't use platform_notify\n");
return 0;
return;
}
platform_notify = acpi_platform_notify;
platform_notify_remove = acpi_platform_notify_remove;
return 0;
}


@ -21,7 +21,7 @@
#define PREFIX "ACPI: "
acpi_status acpi_os_initialize1(void);
int init_acpi_device_notify(void);
void init_acpi_device_notify(void);
int acpi_scan_init(void);
void acpi_pci_root_init(void);
void acpi_pci_link_init(void);
@ -179,13 +179,13 @@ static inline int acpi_sleep_init(void) { return -ENXIO; }
#endif
#ifdef CONFIG_ACPI_SLEEP
int acpi_sleep_proc_init(void);
void acpi_sleep_proc_init(void);
int suspend_nvs_alloc(void);
void suspend_nvs_free(void);
int suspend_nvs_save(void);
void suspend_nvs_restore(void);
#else
static inline int acpi_sleep_proc_init(void) { return 0; }
static inline void acpi_sleep_proc_init(void) {}
static inline int suspend_nvs_alloc(void) { return 0; }
static inline void suspend_nvs_free(void) {}
static inline int suspend_nvs_save(void) { return 0; }


@ -706,7 +706,7 @@ static ssize_t flags_show(struct device *dev,
flags & ACPI_NFIT_MEM_SAVE_FAILED ? "save_fail " : "",
flags & ACPI_NFIT_MEM_RESTORE_FAILED ? "restore_fail " : "",
flags & ACPI_NFIT_MEM_FLUSH_FAILED ? "flush_fail " : "",
flags & ACPI_NFIT_MEM_ARMED ? "not_armed " : "",
flags & ACPI_NFIT_MEM_NOT_ARMED ? "not_armed " : "",
flags & ACPI_NFIT_MEM_HEALTH_OBSERVED ? "smart_event " : "");
}
static DEVICE_ATTR_RO(flags);
@ -815,7 +815,7 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
flags |= NDD_ALIASING;
mem_flags = __to_nfit_memdev(nfit_mem)->flags;
if (mem_flags & ACPI_NFIT_MEM_ARMED)
if (mem_flags & ACPI_NFIT_MEM_NOT_ARMED)
flags |= NDD_UNARMED;
rc = acpi_nfit_add_dimm(acpi_desc, nfit_mem, device_handle);
@ -839,7 +839,7 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
mem_flags & ACPI_NFIT_MEM_SAVE_FAILED ? " save_fail" : "",
mem_flags & ACPI_NFIT_MEM_RESTORE_FAILED ? " restore_fail":"",
mem_flags & ACPI_NFIT_MEM_FLUSH_FAILED ? " flush_fail" : "",
mem_flags & ACPI_NFIT_MEM_ARMED ? " not_armed" : "");
mem_flags & ACPI_NFIT_MEM_NOT_ARMED ? " not_armed" : "");
}


@ -24,7 +24,7 @@
#define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66"
#define ACPI_NFIT_MEM_FAILED_MASK (ACPI_NFIT_MEM_SAVE_FAILED \
| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
| ACPI_NFIT_MEM_ARMED)
| ACPI_NFIT_MEM_NOT_ARMED)
enum nfit_uuids {
NFIT_SPA_VOLATILE,


@ -66,8 +66,6 @@ struct acpi_os_dpc {
/* stuff for debugger support */
int acpi_in_debugger;
EXPORT_SYMBOL(acpi_in_debugger);
extern char line_buf[80];
#endif /*ENABLE_DEBUGGER */
static int (*__acpi_os_prepare_sleep)(u8 sleep_state, u32 pm1a_ctrl,
@ -81,6 +79,7 @@ static struct workqueue_struct *kacpid_wq;
static struct workqueue_struct *kacpi_notify_wq;
static struct workqueue_struct *kacpi_hotplug_wq;
static bool acpi_os_initialized;
unsigned int acpi_sci_irq = INVALID_ACPI_IRQ;
/*
* This list of permanent mappings is for memory that may be accessed from
@ -856,17 +855,19 @@ acpi_os_install_interrupt_handler(u32 gsi, acpi_osd_handler handler,
acpi_irq_handler = NULL;
return AE_NOT_ACQUIRED;
}
acpi_sci_irq = irq;
return AE_OK;
}
acpi_status acpi_os_remove_interrupt_handler(u32 irq, acpi_osd_handler handler)
acpi_status acpi_os_remove_interrupt_handler(u32 gsi, acpi_osd_handler handler)
{
if (irq != acpi_gbl_FADT.sci_interrupt)
if (gsi != acpi_gbl_FADT.sci_interrupt || !acpi_sci_irq_valid())
return AE_BAD_PARAMETER;
free_irq(irq, acpi_irq);
free_irq(acpi_sci_irq, acpi_irq);
acpi_irq_handler = NULL;
acpi_sci_irq = INVALID_ACPI_IRQ;
return AE_OK;
}
@ -1180,8 +1181,8 @@ void acpi_os_wait_events_complete(void)
* Make sure the GPE handler or the fixed event handler is not used
* on another CPU after removal.
*/
if (acpi_irq_handler)
synchronize_hardirq(acpi_gbl_FADT.sci_interrupt);
if (acpi_sci_irq_valid())
synchronize_hardirq(acpi_sci_irq);
flush_workqueue(kacpid_wq);
flush_workqueue(kacpi_notify_wq);
}
@ -1345,15 +1346,13 @@ acpi_status acpi_os_signal_semaphore(acpi_handle handle, u32 units)
return AE_OK;
}
#ifdef ACPI_FUTURE_USAGE
u32 acpi_os_get_line(char *buffer)
acpi_status acpi_os_get_line(char *buffer, u32 buffer_length, u32 *bytes_read)
{
#ifdef ENABLE_DEBUGGER
if (acpi_in_debugger) {
u32 chars;
kdb_read(buffer, sizeof(line_buf));
kdb_read(buffer, buffer_length);
/* remove the CR kdb includes */
chars = strlen(buffer) - 1;
@ -1361,9 +1360,8 @@ u32 acpi_os_get_line(char *buffer)
}
#endif
return 0;
return AE_OK;
}
#endif /* ACPI_FUTURE_USAGE */
acpi_status acpi_os_signal(u32 function, void *info)
{


@ -652,6 +652,210 @@ static void acpi_pci_root_remove(struct acpi_device *device)
kfree(root);
}
/*
* The following code, which supports acpi_pci_root_create(), is copied
* from arch/x86/pci/acpi.c and modified so that it can be reused by x86,
* IA64 and ARM64.
*/
static void acpi_pci_root_validate_resources(struct device *dev,
struct list_head *resources,
unsigned long type)
{
LIST_HEAD(list);
struct resource *res1, *res2, *root = NULL;
struct resource_entry *tmp, *entry, *entry2;
BUG_ON((type & (IORESOURCE_MEM | IORESOURCE_IO)) == 0);
root = (type & IORESOURCE_MEM) ? &iomem_resource : &ioport_resource;
list_splice_init(resources, &list);
resource_list_for_each_entry_safe(entry, tmp, &list) {
bool free = false;
resource_size_t end;
res1 = entry->res;
if (!(res1->flags & type))
goto next;
/* Exclude non-addressable range or non-addressable portion */
end = min(res1->end, root->end);
if (end <= res1->start) {
dev_info(dev, "host bridge window %pR (ignored, not CPU addressable)\n",
res1);
free = true;
goto next;
} else if (res1->end != end) {
dev_info(dev, "host bridge window %pR ([%#llx-%#llx] ignored, not CPU addressable)\n",
res1, (unsigned long long)end + 1,
(unsigned long long)res1->end);
res1->end = end;
}
resource_list_for_each_entry(entry2, resources) {
res2 = entry2->res;
if (!(res2->flags & type))
continue;
/*
* I don't like throwing away windows because then
* our resources no longer match the ACPI _CRS, but
* the kernel resource tree doesn't allow overlaps.
*/
if (resource_overlaps(res1, res2)) {
res2->start = min(res1->start, res2->start);
res2->end = max(res1->end, res2->end);
dev_info(dev, "host bridge window expanded to %pR; %pR ignored\n",
res2, res1);
free = true;
goto next;
}
}
next:
resource_list_del(entry);
if (free)
resource_list_free_entry(entry);
else
resource_list_add_tail(entry, resources);
}
}
int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info)
{
int ret;
struct list_head *list = &info->resources;
struct acpi_device *device = info->bridge;
struct resource_entry *entry, *tmp;
unsigned long flags;
flags = IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_MEM_8AND16BIT;
ret = acpi_dev_get_resources(device, list,
acpi_dev_filter_resource_type_cb,
(void *)flags);
if (ret < 0)
dev_warn(&device->dev,
"failed to parse _CRS method, error code %d\n", ret);
else if (ret == 0)
dev_dbg(&device->dev,
"no IO and memory resources present in _CRS\n");
else {
resource_list_for_each_entry_safe(entry, tmp, list) {
if (entry->res->flags & IORESOURCE_DISABLED)
resource_list_destroy_entry(entry);
else
entry->res->name = info->name;
}
acpi_pci_root_validate_resources(&device->dev, list,
IORESOURCE_MEM);
acpi_pci_root_validate_resources(&device->dev, list,
IORESOURCE_IO);
}
return ret;
}
static void pci_acpi_root_add_resources(struct acpi_pci_root_info *info)
{
struct resource_entry *entry, *tmp;
struct resource *res, *conflict, *root = NULL;
resource_list_for_each_entry_safe(entry, tmp, &info->resources) {
res = entry->res;
if (res->flags & IORESOURCE_MEM)
root = &iomem_resource;
else if (res->flags & IORESOURCE_IO)
root = &ioport_resource;
else
continue;
conflict = insert_resource_conflict(root, res);
if (conflict) {
dev_info(&info->bridge->dev,
"ignoring host bridge window %pR (conflicts with %s %pR)\n",
res, conflict->name, conflict);
resource_list_destroy_entry(entry);
}
}
}
static void __acpi_pci_root_release_info(struct acpi_pci_root_info *info)
{
struct resource *res;
struct resource_entry *entry, *tmp;
if (!info)
return;
resource_list_for_each_entry_safe(entry, tmp, &info->resources) {
res = entry->res;
if (res->parent &&
(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
release_resource(res);
resource_list_destroy_entry(entry);
}
info->ops->release_info(info);
}
static void acpi_pci_root_release_info(struct pci_host_bridge *bridge)
{
struct resource *res;
struct resource_entry *entry;
resource_list_for_each_entry(entry, &bridge->windows) {
res = entry->res;
if (res->parent &&
(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
release_resource(res);
}
__acpi_pci_root_release_info(bridge->release_data);
}
struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
struct acpi_pci_root_ops *ops,
struct acpi_pci_root_info *info,
void *sysdata)
{
int ret, busnum = root->secondary.start;
struct acpi_device *device = root->device;
int node = acpi_get_node(device->handle);
struct pci_bus *bus;
info->root = root;
info->bridge = device;
info->ops = ops;
INIT_LIST_HEAD(&info->resources);
snprintf(info->name, sizeof(info->name), "PCI Bus %04x:%02x",
root->segment, busnum);
if (ops->init_info && ops->init_info(info))
goto out_release_info;
if (ops->prepare_resources)
ret = ops->prepare_resources(info);
else
ret = acpi_pci_probe_root_resources(info);
if (ret < 0)
goto out_release_info;
pci_acpi_root_add_resources(info);
pci_add_resource(&info->resources, &root->secondary);
bus = pci_create_root_bus(NULL, busnum, ops->pci_ops,
sysdata, &info->resources);
if (!bus)
goto out_release_info;
pci_scan_child_bus(bus);
pci_set_host_bridge_release(to_pci_host_bridge(bus->bridge),
acpi_pci_root_release_info, info);
if (node != NUMA_NO_NODE)
dev_printk(KERN_DEBUG, &bus->dev, "on NUMA node %d\n", node);
return bus;
out_release_info:
__acpi_pci_root_release_info(info);
return NULL;
}
void __init acpi_pci_root_init(void)
{
acpi_hest_init();
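
A minimal sketch of how an architecture might hook into acpi_pci_root_create() follows. It relies only on the acpi_pci_root_ops fields exercised above (pci_ops, init_info, release_info, prepare_resources); the my_* names and the empty config accessors are placeholders, not code taken from any in-tree port.

#include <linux/acpi.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/slab.h>

/* Placeholder config-space accessors; a real port fills in .read/.write. */
static struct pci_ops my_pci_cfg_ops;

static void my_release_info(struct acpi_pci_root_info *info)
{
        kfree(info);            /* pairs with the kzalloc() below */
}

static struct acpi_pci_root_ops my_root_ops = {
        .pci_ops        = &my_pci_cfg_ops,
        .release_info   = my_release_info,
        /*
         * .init_info and .prepare_resources are left NULL, so
         * acpi_pci_probe_root_resources() parses _CRS and validates the
         * host bridge windows as shown above.
         */
};

static struct pci_bus *my_pci_root_scan(struct acpi_pci_root *root)
{
        struct acpi_pci_root_info *info;

        info = kzalloc(sizeof(*info), GFP_KERNEL);
        if (!info)
                return NULL;

        /* sysdata is NULL here; a real port passes its per-bus data. */
        return acpi_pci_root_create(root, &my_root_ops, info, NULL);
}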


@ -144,11 +144,9 @@ static const struct file_operations acpi_system_wakeup_device_fops = {
.release = single_release,
};
int __init acpi_sleep_proc_init(void)
void __init acpi_sleep_proc_init(void)
{
/* 'wakeup device' [R/W] */
proc_create("wakeup", S_IFREG | S_IRUGO | S_IWUSR,
acpi_root_dir, &acpi_system_wakeup_device_fops);
return 0;
}


@ -242,6 +242,10 @@ static int __acpi_processor_start(struct acpi_device *device)
if (pr->flags.need_hotplug_init)
return 0;
result = acpi_cppc_processor_probe(pr);
if (result)
return -ENODEV;
if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
acpi_processor_power_init(pr);
@ -287,6 +291,8 @@ static int acpi_processor_stop(struct device *dev)
acpi_pss_perf_exit(pr, device);
acpi_cppc_processor_exit(pr);
return 0;
}


@ -19,11 +19,133 @@
#include "internal.h"
static int acpi_data_get_property_array(struct acpi_device_data *data,
const char *name,
acpi_object_type type,
const union acpi_object **obj);
/* ACPI _DSD device properties UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
static const u8 prp_uuid[16] = {
0x14, 0xd8, 0xff, 0xda, 0xba, 0x6e, 0x8c, 0x4d,
0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01
};
/* ACPI _DSD data subnodes UUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
static const u8 ads_uuid[16] = {
0xe6, 0xe3, 0xb8, 0xdb, 0x86, 0x58, 0xa6, 0x4b,
0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b
};
static bool acpi_enumerate_nondev_subnodes(acpi_handle scope,
const union acpi_object *desc,
struct acpi_device_data *data);
static bool acpi_extract_properties(const union acpi_object *desc,
struct acpi_device_data *data);
static bool acpi_nondev_subnode_ok(acpi_handle scope,
const union acpi_object *link,
struct list_head *list)
{
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
struct acpi_data_node *dn;
acpi_handle handle;
acpi_status status;
dn = kzalloc(sizeof(*dn), GFP_KERNEL);
if (!dn)
return false;
dn->name = link->package.elements[0].string.pointer;
dn->fwnode.type = FWNODE_ACPI_DATA;
INIT_LIST_HEAD(&dn->data.subnodes);
status = acpi_get_handle(scope, link->package.elements[1].string.pointer,
&handle);
if (ACPI_FAILURE(status))
goto fail;
status = acpi_evaluate_object_typed(handle, NULL, NULL, &buf,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
goto fail;
if (acpi_extract_properties(buf.pointer, &dn->data))
dn->handle = handle;
/*
* The scope for the subnode object lookup is that of the namespace
* node (device) containing the object that has returned the package.
* That is, it is the scope of that object's parent.
*/
status = acpi_get_parent(handle, &scope);
if (ACPI_SUCCESS(status)
&& acpi_enumerate_nondev_subnodes(scope, buf.pointer, &dn->data))
dn->handle = handle;
if (dn->handle) {
dn->data.pointer = buf.pointer;
list_add_tail(&dn->sibling, list);
return true;
}
acpi_handle_debug(handle, "Invalid properties/subnodes data, skipping\n");
fail:
ACPI_FREE(buf.pointer);
kfree(dn);
return false;
}
static int acpi_add_nondev_subnodes(acpi_handle scope,
const union acpi_object *links,
struct list_head *list)
{
bool ret = false;
int i;
for (i = 0; i < links->package.count; i++) {
const union acpi_object *link;
link = &links->package.elements[i];
/* Only two elements allowed, both must be strings. */
if (link->package.count == 2
&& link->package.elements[0].type == ACPI_TYPE_STRING
&& link->package.elements[1].type == ACPI_TYPE_STRING
&& acpi_nondev_subnode_ok(scope, link, list))
ret = true;
}
return ret;
}
static bool acpi_enumerate_nondev_subnodes(acpi_handle scope,
const union acpi_object *desc,
struct acpi_device_data *data)
{
int i;
/* Look for the ACPI data subnodes UUID. */
for (i = 0; i < desc->package.count; i += 2) {
const union acpi_object *uuid, *links;
uuid = &desc->package.elements[i];
links = &desc->package.elements[i + 1];
/*
* The first element must be a UUID and the second one must be
* a package.
*/
if (uuid->type != ACPI_TYPE_BUFFER || uuid->buffer.length != 16
|| links->type != ACPI_TYPE_PACKAGE)
break;
if (memcmp(uuid->buffer.pointer, ads_uuid, sizeof(ads_uuid)))
continue;
return acpi_add_nondev_subnodes(scope, links, &data->subnodes);
}
return false;
}
static bool acpi_property_value_ok(const union acpi_object *value)
{
@ -81,8 +203,8 @@ static void acpi_init_of_compatible(struct acpi_device *adev)
const union acpi_object *of_compatible;
int ret;
ret = acpi_dev_get_property_array(adev, "compatible", ACPI_TYPE_STRING,
&of_compatible);
ret = acpi_data_get_property_array(&adev->data, "compatible",
ACPI_TYPE_STRING, &of_compatible);
if (ret) {
ret = acpi_dev_get_property(adev, "compatible",
ACPI_TYPE_STRING, &of_compatible);
@ -100,34 +222,13 @@ static void acpi_init_of_compatible(struct acpi_device *adev)
adev->flags.of_compatible_ok = 1;
}
void acpi_init_properties(struct acpi_device *adev)
static bool acpi_extract_properties(const union acpi_object *desc,
struct acpi_device_data *data)
{
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
bool acpi_of = false;
struct acpi_hardware_id *hwid;
const union acpi_object *desc;
acpi_status status;
int i;
/*
* Check if ACPI_DT_NAMESPACE_HID is present and in that case we fill in
* Device Tree compatible properties for this device.
*/
list_for_each_entry(hwid, &adev->pnp.ids, list) {
if (!strcmp(hwid->id, ACPI_DT_NAMESPACE_HID)) {
acpi_of = true;
break;
}
}
status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
goto out;
desc = buf.pointer;
if (desc->package.count % 2)
goto fail;
return false;
/* Look for the device properties UUID. */
for (i = 0; i < desc->package.count; i += 2) {
@ -154,18 +255,50 @@ void acpi_init_properties(struct acpi_device *adev)
if (!acpi_properties_format_valid(properties))
break;
adev->data.pointer = buf.pointer;
adev->data.properties = properties;
if (acpi_of)
acpi_init_of_compatible(adev);
goto out;
data->properties = properties;
return true;
}
fail:
dev_dbg(&adev->dev, "Returned _DSD data is not valid, skipping\n");
ACPI_FREE(buf.pointer);
return false;
}
void acpi_init_properties(struct acpi_device *adev)
{
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
struct acpi_hardware_id *hwid;
acpi_status status;
bool acpi_of = false;
INIT_LIST_HEAD(&adev->data.subnodes);
/*
* Check if ACPI_DT_NAMESPACE_HID is present and in that case we fill in
* Device Tree compatible properties for this device.
*/
list_for_each_entry(hwid, &adev->pnp.ids, list) {
if (!strcmp(hwid->id, ACPI_DT_NAMESPACE_HID)) {
acpi_of = true;
break;
}
}
status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
goto out;
if (acpi_extract_properties(buf.pointer, &adev->data)) {
adev->data.pointer = buf.pointer;
if (acpi_of)
acpi_init_of_compatible(adev);
}
if (acpi_enumerate_nondev_subnodes(adev->handle, buf.pointer, &adev->data))
adev->data.pointer = buf.pointer;
if (!adev->data.pointer) {
acpi_handle_debug(adev->handle, "Invalid _DSD data, skipping\n");
ACPI_FREE(buf.pointer);
}
out:
if (acpi_of && !adev->flags.of_compatible_ok)
@ -173,8 +306,25 @@ void acpi_init_properties(struct acpi_device *adev)
ACPI_DT_NAMESPACE_HID " requires 'compatible' property\n");
}
static void acpi_destroy_nondev_subnodes(struct list_head *list)
{
struct acpi_data_node *dn, *next;
if (list_empty(list))
return;
list_for_each_entry_safe_reverse(dn, next, list, sibling) {
acpi_destroy_nondev_subnodes(&dn->data.subnodes);
wait_for_completion(&dn->kobj_done);
list_del(&dn->sibling);
ACPI_FREE((void *)dn->data.pointer);
kfree(dn);
}
}
void acpi_free_properties(struct acpi_device *adev)
{
acpi_destroy_nondev_subnodes(&adev->data.subnodes);
ACPI_FREE((void *)adev->data.pointer);
adev->data.of_compatible = NULL;
adev->data.pointer = NULL;
@ -182,8 +332,8 @@ void acpi_free_properties(struct acpi_device *adev)
}
/**
* acpi_dev_get_property - return an ACPI property with given name
* @adev: ACPI device to get property
* acpi_data_get_property - return an ACPI property with given name
* @data: ACPI device data object to get the property from
* @name: Name of the property
* @type: Expected property type
* @obj: Location to store the property value (if not %NULL)
@ -192,26 +342,27 @@ void acpi_free_properties(struct acpi_device *adev)
* object at the location pointed to by @obj if found.
*
* Callers must not attempt to free the returned objects. These objects will be
* freed by the ACPI core automatically during the removal of @adev.
* freed by the ACPI core automatically during the removal of @data.
*
* Return: %0 if property with @name has been found (success),
* %-EINVAL if the arguments are invalid,
* %-ENODATA if the property doesn't exist,
* %-EPROTO if the property value type doesn't match @type.
*/
int acpi_dev_get_property(struct acpi_device *adev, const char *name,
acpi_object_type type, const union acpi_object **obj)
static int acpi_data_get_property(struct acpi_device_data *data,
const char *name, acpi_object_type type,
const union acpi_object **obj)
{
const union acpi_object *properties;
int i;
if (!adev || !name)
if (!data || !name)
return -EINVAL;
if (!adev->data.pointer || !adev->data.properties)
if (!data->pointer || !data->properties)
return -ENODATA;
properties = adev->data.properties;
properties = data->properties;
for (i = 0; i < properties->package.count; i++) {
const union acpi_object *propname, *propvalue;
const union acpi_object *property;
@ -232,11 +383,50 @@ int acpi_dev_get_property(struct acpi_device *adev, const char *name,
}
return -ENODATA;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property);
/**
* acpi_dev_get_property_array - return an ACPI array property with given name
* @adev: ACPI device to get property
* acpi_dev_get_property - return an ACPI property with given name.
* @adev: ACPI device to get the property from.
* @name: Name of the property.
* @type: Expected property type.
* @obj: Location to store the property value (if not %NULL).
*/
int acpi_dev_get_property(struct acpi_device *adev, const char *name,
acpi_object_type type, const union acpi_object **obj)
{
return adev ? acpi_data_get_property(&adev->data, name, type, obj) : -EINVAL;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property);
static struct acpi_device_data *acpi_device_data_of_node(struct fwnode_handle *fwnode)
{
if (fwnode->type == FWNODE_ACPI) {
struct acpi_device *adev = to_acpi_device_node(fwnode);
return &adev->data;
} else if (fwnode->type == FWNODE_ACPI_DATA) {
struct acpi_data_node *dn = to_acpi_data_node(fwnode);
return &dn->data;
}
return NULL;
}
/**
* acpi_node_prop_get - return an ACPI property with given name.
* @fwnode: Firmware node to get the property from.
* @propname: Name of the property.
* @valptr: Location to store a pointer to the property value (if not %NULL).
*/
int acpi_node_prop_get(struct fwnode_handle *fwnode, const char *propname,
void **valptr)
{
return acpi_data_get_property(acpi_device_data_of_node(fwnode),
propname, ACPI_TYPE_ANY,
(const union acpi_object **)valptr);
}
/**
* acpi_data_get_property_array - return an ACPI array property with given name
* @data: ACPI device data object to get the property from
* @name: Name of the property
* @type: Expected type of array elements
* @obj: Location to store a pointer to the property value (if not NULL)
@ -245,7 +435,7 @@ EXPORT_SYMBOL_GPL(acpi_dev_get_property);
* ACPI object at the location pointed to by @obj if found.
*
* Callers must not attempt to free the returned objects. Those objects will be
* freed by the ACPI core automatically during the removal of @adev.
* freed by the ACPI core automatically during the removal of @data.
*
* Return: %0 if array property (package) with @name has been found (success),
* %-EINVAL if the arguments are invalid,
@ -253,14 +443,15 @@ EXPORT_SYMBOL_GPL(acpi_dev_get_property);
* %-EPROTO if the property is not a package or the type of its elements
* doesn't match @type.
*/
int acpi_dev_get_property_array(struct acpi_device *adev, const char *name,
acpi_object_type type,
const union acpi_object **obj)
static int acpi_data_get_property_array(struct acpi_device_data *data,
const char *name,
acpi_object_type type,
const union acpi_object **obj)
{
const union acpi_object *prop;
int ret, i;
ret = acpi_dev_get_property(adev, name, ACPI_TYPE_PACKAGE, &prop);
ret = acpi_data_get_property(data, name, ACPI_TYPE_PACKAGE, &prop);
if (ret)
return ret;
@ -275,12 +466,11 @@ int acpi_dev_get_property_array(struct acpi_device *adev, const char *name,
return 0;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property_array);
/**
* acpi_dev_get_property_reference - returns handle to the referenced object
* @adev: ACPI device to get property
* @name: Name of the property
* acpi_data_get_property_reference - returns handle to the referenced object
* @data: ACPI device data object containing the property
* @propname: Name of the property
* @index: Index of the reference to return
* @args: Location to store the returned reference with optional arguments
*
@ -294,16 +484,16 @@ EXPORT_SYMBOL_GPL(acpi_dev_get_property_array);
*
* Return: %0 on success, negative error code on failure.
*/
int acpi_dev_get_property_reference(struct acpi_device *adev,
const char *name, size_t index,
struct acpi_reference_args *args)
static int acpi_data_get_property_reference(struct acpi_device_data *data,
const char *propname, size_t index,
struct acpi_reference_args *args)
{
const union acpi_object *element, *end;
const union acpi_object *obj;
struct acpi_device *device;
int ret, idx = 0;
ret = acpi_dev_get_property(adev, name, ACPI_TYPE_ANY, &obj);
ret = acpi_data_get_property(data, propname, ACPI_TYPE_ANY, &obj);
if (ret)
return ret;
@ -378,17 +568,27 @@ int acpi_dev_get_property_reference(struct acpi_device *adev,
return -EPROTO;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property_reference);
int acpi_dev_prop_get(struct acpi_device *adev, const char *propname,
void **valptr)
/**
* acpi_node_get_property_reference - get a handle to the referenced object.
* @fwnode: Firmware node to get the property from.
* @name: Name of the property.
* @index: Index of the reference to return.
* @args: Location to store the returned reference with optional arguments.
*/
int acpi_node_get_property_reference(struct fwnode_handle *fwnode,
const char *name, size_t index,
struct acpi_reference_args *args)
{
return acpi_dev_get_property(adev, propname, ACPI_TYPE_ANY,
(const union acpi_object **)valptr);
}
struct acpi_device_data *data = acpi_device_data_of_node(fwnode);
int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val)
return data ? acpi_data_get_property_reference(data, name, index, args) : -EINVAL;
}
EXPORT_SYMBOL_GPL(acpi_node_get_property_reference);
static int acpi_data_prop_read_single(struct acpi_device_data *data,
const char *propname,
enum dev_prop_type proptype, void *val)
{
const union acpi_object *obj;
int ret;
@ -397,7 +597,7 @@ int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
return -EINVAL;
if (proptype >= DEV_PROP_U8 && proptype <= DEV_PROP_U64) {
ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_INTEGER, &obj);
ret = acpi_data_get_property(data, propname, ACPI_TYPE_INTEGER, &obj);
if (ret)
return ret;
@ -422,7 +622,7 @@ int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
break;
}
} else if (proptype == DEV_PROP_STRING) {
ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_STRING, &obj);
ret = acpi_data_get_property(data, propname, ACPI_TYPE_STRING, &obj);
if (ret)
return ret;
@ -433,6 +633,12 @@ int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
return ret;
}
int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val)
{
return adev ? acpi_data_prop_read_single(&adev->data, propname, proptype, val) : -EINVAL;
}
static int acpi_copy_property_array_u8(const union acpi_object *items, u8 *val,
size_t nval)
{
@ -509,20 +715,22 @@ static int acpi_copy_property_array_string(const union acpi_object *items,
return 0;
}
int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val, size_t nval)
static int acpi_data_prop_read(struct acpi_device_data *data,
const char *propname,
enum dev_prop_type proptype,
void *val, size_t nval)
{
const union acpi_object *obj;
const union acpi_object *items;
int ret;
if (val && nval == 1) {
ret = acpi_dev_prop_read_single(adev, propname, proptype, val);
ret = acpi_data_prop_read_single(data, propname, proptype, val);
if (!ret)
return ret;
}
ret = acpi_dev_get_property_array(adev, propname, ACPI_TYPE_ANY, &obj);
ret = acpi_data_get_property_array(data, propname, ACPI_TYPE_ANY, &obj);
if (ret)
return ret;
@ -558,3 +766,84 @@ int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
}
return ret;
}
int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val, size_t nval)
{
return adev ? acpi_data_prop_read(&adev->data, propname, proptype, val, nval) : -EINVAL;
}
/**
* acpi_node_prop_read - retrieve the value of an ACPI property with given name.
* @fwnode: Firmware node to get the property from.
* @propname: Name of the property.
* @proptype: Expected property type.
* @val: Location to store the property value (if not %NULL).
* @nval: Size of the array pointed to by @val.
*
* If @val is %NULL, return the number of array elements comprising the value
* of the property. Otherwise, read at most @nval values to the array at the
* location pointed to by @val.
*/
int acpi_node_prop_read(struct fwnode_handle *fwnode, const char *propname,
enum dev_prop_type proptype, void *val, size_t nval)
{
return acpi_data_prop_read(acpi_device_data_of_node(fwnode),
propname, proptype, val, nval);
}
/**
* acpi_get_next_subnode - Return the next child node handle for a device.
* @dev: Device to find the next child node for.
* @child: Handle to one of the device's child nodes or a null handle.
*/
struct fwnode_handle *acpi_get_next_subnode(struct device *dev,
struct fwnode_handle *child)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
struct list_head *head, *next;
if (!adev)
return NULL;
if (!child || child->type == FWNODE_ACPI) {
head = &adev->children;
if (list_empty(head))
goto nondev;
if (child) {
adev = to_acpi_device_node(child);
next = adev->node.next;
if (next == head) {
child = NULL;
goto nondev;
}
adev = list_entry(next, struct acpi_device, node);
} else {
adev = list_first_entry(head, struct acpi_device, node);
}
return acpi_fwnode_handle(adev);
}
nondev:
if (!child || child->type == FWNODE_ACPI_DATA) {
struct acpi_data_node *dn;
head = &adev->data.subnodes;
if (list_empty(head))
return NULL;
if (child) {
dn = to_acpi_data_node(child);
next = dn->sibling.next;
if (next == head)
return NULL;
dn = list_entry(next, struct acpi_data_node, sibling);
} else {
dn = list_first_entry(head, struct acpi_data_node, sibling);
}
return &dn->fwnode;
}
return NULL;
}
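
To put the data-node plumbing above in context, here is a sketch of the consumer side: a driver walking its firmware child nodes (served by acpi_get_next_subnode() on ACPI systems) through the generic device properties API. The device_for_each_child_node() and fwnode_property_read_*() helpers come from the existing properties framework rather than from this diff, and the "reg" and "label" property names are made up.

#include <linux/device.h>
#include <linux/property.h>

/* Hypothetical consumer: iterate the device's child firmware nodes (ACPI
 * data subnodes here, DT children elsewhere) and read two properties. */
static void my_enumerate_children(struct device *dev)
{
        struct fwnode_handle *child;

        device_for_each_child_node(dev, child) {
                const char *label = "unknown";
                u32 reg;

                if (fwnode_property_read_u32(child, "reg", &reg))
                        continue;       /* made-up property name */
                fwnode_property_read_string(child, "label", &label);
                dev_info(dev, "child node: reg=%u, label=%s\n", reg, label);
        }
}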


@ -119,7 +119,7 @@ bool acpi_dev_resource_memory(struct acpi_resource *ares, struct resource *res)
EXPORT_SYMBOL_GPL(acpi_dev_resource_memory);
static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
u8 io_decode)
u8 io_decode, u8 translation_type)
{
res->flags = IORESOURCE_IO;
@ -131,6 +131,8 @@ static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
if (io_decode == ACPI_DECODE_16)
res->flags |= IORESOURCE_IO_16BIT_ADDR;
if (translation_type == ACPI_SPARSE_TRANSLATION)
res->flags |= IORESOURCE_IO_SPARSE;
}
static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
@ -138,7 +140,7 @@ static void acpi_dev_get_ioresource(struct resource *res, u64 start, u64 len,
{
res->start = start;
res->end = start + len - 1;
acpi_dev_ioresource_flags(res, len, io_decode);
acpi_dev_ioresource_flags(res, len, io_decode, 0);
}
/**
@ -231,7 +233,8 @@ static bool acpi_decode_space(struct resource_win *win,
acpi_dev_memresource_flags(res, len, wp);
break;
case ACPI_IO_RANGE:
acpi_dev_ioresource_flags(res, len, iodec);
acpi_dev_ioresource_flags(res, len, iodec,
addr->info.io.translation_type);
break;
case ACPI_BUS_NUMBER_RANGE:
res->flags = IORESOURCE_BUS;


@ -695,26 +695,6 @@ int acpi_device_add(struct acpi_device *device,
return result;
}
struct acpi_device *acpi_get_next_child(struct device *dev,
struct acpi_device *child)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
struct list_head *head, *next;
if (!adev)
return NULL;
head = &adev->children;
if (list_empty(head))
return NULL;
if (!child)
return list_first_entry(head, struct acpi_device, node);
next = child->node.next;
return next == head ? NULL : list_entry(next, struct acpi_device, node);
}
/* --------------------------------------------------------------------------
Device Enumeration
-------------------------------------------------------------------------- */
@ -1184,7 +1164,7 @@ static void acpi_add_id(struct acpi_device_pnp *pnp, const char *dev_id)
if (!id)
return;
id->id = kstrdup(dev_id, GFP_KERNEL);
id->id = kstrdup_const(dev_id, GFP_KERNEL);
if (!id->id) {
kfree(id);
return;
@ -1322,7 +1302,7 @@ void acpi_free_pnp_ids(struct acpi_device_pnp *pnp)
struct acpi_hardware_id *id, *tmp;
list_for_each_entry_safe(id, tmp, &pnp->ids, list) {
kfree(id->id);
kfree_const(id->id);
kfree(id);
}
kfree(pnp->unique_id);
@ -1472,7 +1452,7 @@ bool acpi_device_is_present(struct acpi_device *adev)
}
static bool acpi_scan_handler_matching(struct acpi_scan_handler *handler,
char *idstr,
const char *idstr,
const struct acpi_device_id **matchid)
{
const struct acpi_device_id *devid;
@ -1491,7 +1471,7 @@ static bool acpi_scan_handler_matching(struct acpi_scan_handler *handler,
return false;
}
static struct acpi_scan_handler *acpi_scan_match_handler(char *idstr,
static struct acpi_scan_handler *acpi_scan_match_handler(const char *idstr,
const struct acpi_device_id **matchid)
{
struct acpi_scan_handler *handler;
@ -1933,3 +1913,42 @@ int __init acpi_scan_init(void)
mutex_unlock(&acpi_scan_lock);
return result;
}
static struct acpi_probe_entry *ape;
static int acpi_probe_count;
static DEFINE_SPINLOCK(acpi_probe_lock);
static int __init acpi_match_madt(struct acpi_subtable_header *header,
const unsigned long end)
{
if (!ape->subtable_valid || ape->subtable_valid(header, ape))
if (!ape->probe_subtbl(header, end))
acpi_probe_count++;
return 0;
}
int __init __acpi_probe_device_table(struct acpi_probe_entry *ap_head, int nr)
{
int count = 0;
if (acpi_disabled)
return 0;
spin_lock(&acpi_probe_lock);
for (ape = ap_head; nr; ape++, nr--) {
if (ACPI_COMPARE_NAME(ACPI_SIG_MADT, ape->id)) {
acpi_probe_count = 0;
acpi_table_parse_madt(ape->type, acpi_match_madt, 0);
count += acpi_probe_count;
} else {
int res;
res = acpi_table_parse(ape->id, ape->probe_table);
if (!res)
count++;
}
}
spin_unlock(&acpi_probe_lock);
return count;
}
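
To show how this early-probe table is meant to be consumed, here is a hand-written sketch. In practice entries are expected to come from declaration macros that place them in a linker section, and the struct acpi_probe_entry layout below is assumed only from the fields the loop above dereferences (id, type, subtable_valid, probe_subtbl); the my_* names are hypothetical.

#include <linux/acpi.h>
#include <linux/init.h>
#include <linux/kernel.h>

/* Hypothetical early probe of one MADT subtable type (GIC distributor). */
static int __init my_gicd_probe(struct acpi_subtable_header *header,
                                const unsigned long end)
{
        /* Set up the device described by this subtable. */
        return 0;
}

static struct acpi_probe_entry my_probe_table[] __initdata = {
        {
                .id             = ACPI_SIG_MADT,
                .type           = ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
                .subtable_valid = NULL,         /* no extra validation */
                .probe_subtbl   = my_gicd_probe,
        },
};

static int __init my_early_probe(void)
{
        /* Returns the number of subtables successfully probed. */
        return __acpi_probe_device_table(my_probe_table,
                                         ARRAY_SIZE(my_probe_table));
}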


@ -487,6 +487,8 @@ static int acpi_suspend_begin(suspend_state_t pm_state)
pr_err("ACPI does not support sleep state S%u\n", acpi_state);
return -ENOSYS;
}
if (acpi_state > ACPI_STATE_S1)
pm_set_suspend_via_firmware();
acpi_pm_start(acpi_state);
return 0;
@ -522,6 +524,7 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
if (error)
return error;
pr_info(PREFIX "Low-level resume complete\n");
pm_set_resume_via_firmware();
break;
}
trace_suspend_resume(TPS("acpi_suspend"), acpi_state, false);
@ -632,14 +635,16 @@ static int acpi_freeze_prepare(void)
acpi_enable_wakeup_devices(ACPI_STATE_S0);
acpi_enable_all_wakeup_gpes();
acpi_os_wait_events_complete();
enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
if (acpi_sci_irq_valid())
enable_irq_wake(acpi_sci_irq);
return 0;
}
static void acpi_freeze_restore(void)
{
acpi_disable_wakeup_devices(ACPI_STATE_S0);
disable_irq_wake(acpi_gbl_FADT.sci_interrupt);
if (acpi_sci_irq_valid())
disable_irq_wake(acpi_sci_irq);
acpi_enable_all_runtime_gpes();
}
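
The pm_set_suspend_via_firmware() and pm_set_resume_via_firmware() calls above record that ACPI platform firmware takes part in the transition; a driver can then scale its own suspend/resume work accordingly. Below is a minimal sketch assuming the pm_suspend_via_firmware() and pm_resume_via_firmware() query helpers that belong to the same new PM core mechanism; the my_driver_* callbacks are hypothetical.

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/suspend.h>

/* Hypothetical callbacks that do the expensive full save/re-init only when
 * platform firmware is (or was) involved in the transition. */
static int my_driver_suspend(struct device *dev)
{
        if (pm_suspend_via_firmware())
                dev_dbg(dev, "firmware involved: saving full device state\n");
        else
                dev_dbg(dev, "no firmware involvement: light save is enough\n");
        return 0;
}

static int my_driver_resume(struct device *dev)
{
        if (pm_resume_via_firmware())
                dev_dbg(dev, "device was reset by firmware: full re-init\n");
        return 0;
}

static const struct dev_pm_ops my_driver_pm_ops = {
        SET_SYSTEM_SLEEP_PM_OPS(my_driver_suspend, my_driver_resume)
};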


@ -878,6 +878,9 @@ int __init acpi_sysfs_init(void)
return result;
hotplug_kobj = kobject_create_and_add("hotplug", acpi_kobj);
if (!hotplug_kobj)
return -ENOMEM;
result = sysfs_create_file(hotplug_kobj, &force_remove_attr.attr);
if (result)
return result;


@ -210,20 +210,39 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
}
}
int __init
acpi_parse_entries(char *id, unsigned long table_size,
acpi_tbl_entry_handler handler,
/**
* acpi_parse_entries_array - find and handle the subtables matching each proc entry
*
* @id: table id (for debugging purposes)
* @table_size: size of the fixed part of the table preceding the subtables
* @table_header: where the table starts
* @proc: array of acpi_subtable_proc structs, each pairing a subtable id
* with the handler associated with it
* @proc_num: number of entries in @proc
* @max_entries: maximum number of subtable entries to process (0 for no limit)
*
* For each entry in @proc, find the subtables with a matching proc->id and
* run proc->handler on them. The assumption is that there is only a single
* handler for a particular entry id.
*
* On success, the sum of all matching entries for all proc handlers is
* returned. Otherwise, -ENODEV or -EINVAL is returned.
*/
static int __init
acpi_parse_entries_array(char *id, unsigned long table_size,
struct acpi_table_header *table_header,
int entry_id, unsigned int max_entries)
struct acpi_subtable_proc *proc, int proc_num,
unsigned int max_entries)
{
struct acpi_subtable_header *entry;
int count = 0;
unsigned long table_end;
int count = 0;
int i;
if (acpi_disabled)
return -ENODEV;
if (!id || !handler)
if (!id)
return -EINVAL;
if (!table_size)
@ -243,20 +262,28 @@ acpi_parse_entries(char *id, unsigned long table_size,
while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) <
table_end) {
if (entry->type == entry_id
&& (!max_entries || count < max_entries)) {
if (handler(entry, table_end))
if (max_entries && count >= max_entries)
break;
for (i = 0; i < proc_num; i++) {
if (entry->type != proc[i].id)
continue;
if (!proc[i].handler ||
proc[i].handler(entry, table_end))
return -EINVAL;
count++;
proc->count++;
break;
}
if (i != proc_num)
count++;
/*
* If entry->length is 0, break from this loop to avoid
* infinite loop.
*/
if (entry->length == 0) {
pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, entry_id);
pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, proc->id);
return -EINVAL;
}
@ -266,17 +293,32 @@ acpi_parse_entries(char *id, unsigned long table_size,
if (max_entries && count > max_entries) {
pr_warn("[%4.4s:0x%02x] ignored %i entries of %i found\n",
id, entry_id, count - max_entries, count);
id, proc->id, count - max_entries, count);
}
return count;
}
int __init
acpi_table_parse_entries(char *id,
acpi_parse_entries(char *id,
unsigned long table_size,
acpi_tbl_entry_handler handler,
struct acpi_table_header *table_header,
int entry_id, unsigned int max_entries)
{
struct acpi_subtable_proc proc = {
.id = entry_id,
.handler = handler,
};
return acpi_parse_entries_array(id, table_size, table_header,
&proc, 1, max_entries);
}
int __init
acpi_table_parse_entries_array(char *id,
unsigned long table_size,
int entry_id,
acpi_tbl_entry_handler handler,
struct acpi_subtable_proc *proc, int proc_num,
unsigned int max_entries)
{
struct acpi_table_header *table_header = NULL;
@ -287,7 +329,7 @@ acpi_table_parse_entries(char *id,
if (acpi_disabled)
return -ENODEV;
if (!id || !handler)
if (!id)
return -EINVAL;
if (!strncmp(id, ACPI_SIG_MADT, 4))
@ -299,13 +341,29 @@ acpi_table_parse_entries(char *id,
return -ENODEV;
}
count = acpi_parse_entries(id, table_size, handler, table_header,
entry_id, max_entries);
count = acpi_parse_entries_array(id, table_size, table_header,
proc, proc_num, max_entries);
early_acpi_os_unmap_memory((char *)table_header, tbl_size);
return count;
}
int __init
acpi_table_parse_entries(char *id,
unsigned long table_size,
int entry_id,
acpi_tbl_entry_handler handler,
unsigned int max_entries)
{
struct acpi_subtable_proc proc = {
.id = entry_id,
.handler = handler,
};
return acpi_table_parse_entries_array(id, table_size, &proc, 1,
max_entries);
}
int __init
acpi_table_parse_madt(enum acpi_madt_type id,
acpi_tbl_entry_handler handler, unsigned int max_entries)
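
A sketch of what acpi_table_parse_entries_array() buys its callers: one walk of the MADT that dispatches two subtable types to their handlers in the order the firmware laid them out, instead of one full pass per type. The handler bodies and the choice of subtable types are illustrative only.

#include <linux/acpi.h>
#include <linux/init.h>
#include <linux/kernel.h>

static int __init my_parse_lapic(struct acpi_subtable_header *header,
                                 const unsigned long end)
{
        /* Handle one ACPI_MADT_TYPE_LOCAL_APIC entry. */
        return 0;
}

static int __init my_parse_x2apic(struct acpi_subtable_header *header,
                                  const unsigned long end)
{
        /* Handle one ACPI_MADT_TYPE_LOCAL_X2APIC entry. */
        return 0;
}

static int __init my_parse_madt_cpus(void)
{
        struct acpi_subtable_proc proc[] = {
                { .id = ACPI_MADT_TYPE_LOCAL_APIC,   .handler = my_parse_lapic   },
                { .id = ACPI_MADT_TYPE_LOCAL_X2APIC, .handler = my_parse_x2apic },
        };
        int count;

        /* Single pass over the MADT; subtables are handled in table order. */
        count = acpi_table_parse_entries_array(ACPI_SIG_MADT,
                                               sizeof(struct acpi_table_madt),
                                               proc, ARRAY_SIZE(proc), 0);
        if (count < 0)
                return count;

        /* Per-type match counts end up in proc[0].count and proc[1].count. */
        return 0;
}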


@ -243,6 +243,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
},
/* Non win8 machines which need native backlight nevertheless */
{
/* https://bugzilla.redhat.com/show_bug.cgi?id=1201530 */
.callback = video_detect_force_native,
.ident = "Lenovo Ideapad S405",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_BOARD_NAME, "Lenovo IdeaPad S405"),
},
},
{
/* https://bugzilla.redhat.com/show_bug.cgi?id=1187004 */
.callback = video_detect_force_native,


@ -1,7 +1,7 @@
obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
obj-$(CONFIG_PM_OPP) += opp.o
obj-$(CONFIG_PM_OPP) += opp/
obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o


@ -17,7 +17,7 @@
#include <linux/err.h>
#include <linux/pm_runtime.h>
#ifdef CONFIG_PM
#ifdef CONFIG_PM_CLK
enum pce_status {
PCE_STATUS_NONE = 0,
@ -404,7 +404,7 @@ int pm_clk_runtime_resume(struct device *dev)
return pm_generic_runtime_resume(dev);
}
#else /* !CONFIG_PM */
#else /* !CONFIG_PM_CLK */
/**
* enable_clock - Enable a device clock.
@ -484,7 +484,7 @@ static int pm_clk_notify(struct notifier_block *nb,
return 0;
}
#endif /* !CONFIG_PM */
#endif /* !CONFIG_PM_CLK */
/**
* pm_clk_add_notifier - Add bus type notifier for power management clocks.


@ -34,43 +34,9 @@
__ret; \
})
#define GENPD_DEV_TIMED_CALLBACK(genpd, type, callback, dev, field, name) \
({ \
ktime_t __start = ktime_get(); \
type __retval = GENPD_DEV_CALLBACK(genpd, type, callback, dev); \
s64 __elapsed = ktime_to_ns(ktime_sub(ktime_get(), __start)); \
struct gpd_timing_data *__td = &dev_gpd_data(dev)->td; \
if (!__retval && __elapsed > __td->field) { \
__td->field = __elapsed; \
dev_dbg(dev, name " latency exceeded, new value %lld ns\n", \
__elapsed); \
genpd->max_off_time_changed = true; \
__td->constraint_changed = true; \
} \
__retval; \
})
static LIST_HEAD(gpd_list);
static DEFINE_MUTEX(gpd_list_lock);
static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
{
struct generic_pm_domain *genpd = NULL, *gpd;
if (IS_ERR_OR_NULL(domain_name))
return NULL;
mutex_lock(&gpd_list_lock);
list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
if (!strcmp(gpd->name, domain_name)) {
genpd = gpd;
break;
}
}
mutex_unlock(&gpd_list_lock);
return genpd;
}
/*
* Get the generic PM domain for a particular struct device.
* This validates the struct device pointer, the PM domain pointer,
@ -110,18 +76,12 @@ static struct generic_pm_domain *dev_to_genpd(struct device *dev)
static int genpd_stop_dev(struct generic_pm_domain *genpd, struct device *dev)
{
return GENPD_DEV_TIMED_CALLBACK(genpd, int, stop, dev,
stop_latency_ns, "stop");
return GENPD_DEV_CALLBACK(genpd, int, stop, dev);
}
static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev,
bool timed)
static int genpd_start_dev(struct generic_pm_domain *genpd, struct device *dev)
{
if (!timed)
return GENPD_DEV_CALLBACK(genpd, int, start, dev);
return GENPD_DEV_TIMED_CALLBACK(genpd, int, start, dev,
start_latency_ns, "start");
return GENPD_DEV_CALLBACK(genpd, int, start, dev);
}
static bool genpd_sd_counter_dec(struct generic_pm_domain *genpd)
@ -140,19 +100,6 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
smp_mb__after_atomic();
}
static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
{
s64 usecs64;
if (!genpd->cpuidle_data)
return;
usecs64 = genpd->power_on_latency_ns;
do_div(usecs64, NSEC_PER_USEC);
usecs64 += genpd->cpuidle_data->saved_exit_latency;
genpd->cpuidle_data->idle_state->exit_latency = usecs64;
}
static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
{
ktime_t time_start;
@ -176,7 +123,6 @@ static int genpd_power_on(struct generic_pm_domain *genpd, bool timed)
genpd->power_on_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
genpd_recalc_cpu_exit_latency(genpd);
pr_debug("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "on", elapsed_ns);
@ -213,10 +159,10 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool timed)
}
/**
* genpd_queue_power_off_work - Queue up the execution of pm_genpd_poweroff().
* genpd_queue_power_off_work - Queue up the execution of genpd_poweroff().
* @genpd: PM domain to power off.
*
* Queue up the execution of pm_genpd_poweroff() unless it's already been done
* Queue up the execution of genpd_poweroff() unless it's already been done
* before.
*/
static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
@ -224,14 +170,16 @@ static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
queue_work(pm_wq, &genpd->power_off_work);
}
static int genpd_poweron(struct generic_pm_domain *genpd);
/**
* __pm_genpd_poweron - Restore power to a given PM domain and its masters.
* __genpd_poweron - Restore power to a given PM domain and its masters.
* @genpd: PM domain to power up.
*
* Restore power to @genpd and all of its masters so that it is possible to
* resume a device belonging to it.
*/
static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
static int __genpd_poweron(struct generic_pm_domain *genpd)
{
struct gpd_link *link;
int ret = 0;
@ -240,13 +188,6 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
|| (genpd->prepared_count > 0 && genpd->suspend_power_off))
return 0;
if (genpd->cpuidle_data) {
cpuidle_pause_and_lock();
genpd->cpuidle_data->idle_state->disabled = true;
cpuidle_resume_and_unlock();
goto out;
}
/*
* The list is guaranteed not to change while the loop below is being
* executed, unless one of the masters' .power_on() callbacks fiddles
@ -255,7 +196,7 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
list_for_each_entry(link, &genpd->slave_links, slave_node) {
genpd_sd_counter_inc(link->master);
ret = pm_genpd_poweron(link->master);
ret = genpd_poweron(link->master);
if (ret) {
genpd_sd_counter_dec(link->master);
goto err;
@ -266,7 +207,6 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
if (ret)
goto err;
out:
genpd->status = GPD_STATE_ACTIVE;
return 0;
@ -282,46 +222,28 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
}
/**
* pm_genpd_poweron - Restore power to a given PM domain and its masters.
* genpd_poweron - Restore power to a given PM domain and its masters.
* @genpd: PM domain to power up.
*/
int pm_genpd_poweron(struct generic_pm_domain *genpd)
static int genpd_poweron(struct generic_pm_domain *genpd)
{
int ret;
mutex_lock(&genpd->lock);
ret = __pm_genpd_poweron(genpd);
ret = __genpd_poweron(genpd);
mutex_unlock(&genpd->lock);
return ret;
}
/**
* pm_genpd_name_poweron - Restore power to a given PM domain and its masters.
* @domain_name: Name of the PM domain to power up.
*/
int pm_genpd_name_poweron(const char *domain_name)
{
struct generic_pm_domain *genpd;
genpd = pm_genpd_lookup_name(domain_name);
return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
}
static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
{
return GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev,
save_state_latency_ns, "state save");
return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
}
static int genpd_restore_dev(struct generic_pm_domain *genpd,
struct device *dev, bool timed)
struct device *dev)
{
if (!timed)
return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
return GENPD_DEV_TIMED_CALLBACK(genpd, int, restore_state, dev,
restore_state_latency_ns,
"state restore");
return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
}
static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
@ -365,13 +287,14 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
}
/**
* pm_genpd_poweroff - Remove power from a given PM domain.
* genpd_poweroff - Remove power from a given PM domain.
* @genpd: PM domain to power down.
* @is_async: PM domain is powered down from a scheduled work
*
* If all of the @genpd's devices have been suspended and all of its subdomains
* have been powered down, remove power from @genpd.
*/
static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
{
struct pm_domain_data *pdd;
struct gpd_link *link;
@ -403,7 +326,7 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
not_suspended++;
}
if (not_suspended > genpd->in_progress)
if (not_suspended > 1 || (not_suspended == 1 && is_async))
return -EBUSY;
if (genpd->gov && genpd->gov->power_down_ok) {
@ -411,21 +334,6 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
return -EAGAIN;
}
if (genpd->cpuidle_data) {
/*
* If cpuidle_data is set, cpuidle should turn the domain off
* when the CPU in it is idle. In that case we don't decrement
* the subdomain counts of the master domains, so that power is
* not removed from the current domain prematurely as a result
* of cutting off the masters' power.
*/
genpd->status = GPD_STATE_POWER_OFF;
cpuidle_pause_and_lock();
genpd->cpuidle_data->idle_state->disabled = false;
cpuidle_resume_and_unlock();
return 0;
}
if (genpd->power_off) {
int ret;
@ -434,10 +342,10 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
/*
* If sd_count > 0 at this point, one of the subdomains hasn't
* managed to call pm_genpd_poweron() for the master yet after
* incrementing it. In that case pm_genpd_poweron() will wait
* managed to call genpd_poweron() for the master yet after
* incrementing it. In that case genpd_poweron() will wait
* for us to drop the lock, so we can call .power_off() and let
* the pm_genpd_poweron() restore power for us (this shouldn't
* the genpd_poweron() restore power for us (this shouldn't
* happen very often).
*/
ret = genpd_power_off(genpd, true);
@ -466,7 +374,7 @@ static void genpd_power_off_work_fn(struct work_struct *work)
genpd = container_of(work, struct generic_pm_domain, power_off_work);
mutex_lock(&genpd->lock);
pm_genpd_poweroff(genpd);
genpd_poweroff(genpd, true);
mutex_unlock(&genpd->lock);
}
@ -482,6 +390,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
{
struct generic_pm_domain *genpd;
bool (*stop_ok)(struct device *__dev);
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
ktime_t time_start;
s64 elapsed_ns;
int ret;
dev_dbg(dev, "%s()\n", __func__);
@ -494,16 +405,29 @@ static int pm_genpd_runtime_suspend(struct device *dev)
if (stop_ok && !stop_ok(dev))
return -EBUSY;
/* Measure suspend latency. */
time_start = ktime_get();
ret = genpd_save_dev(genpd, dev);
if (ret)
return ret;
ret = genpd_stop_dev(genpd, dev);
if (ret) {
genpd_restore_dev(genpd, dev, true);
genpd_restore_dev(genpd, dev);
return ret;
}
/* Update suspend latency value if the measured time exceeds it. */
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns > td->suspend_latency_ns) {
td->suspend_latency_ns = elapsed_ns;
dev_dbg(dev, "suspend latency exceeded, %lld ns\n",
elapsed_ns);
genpd->max_off_time_changed = true;
td->constraint_changed = true;
}
/*
* If power.irq_safe is set, this routine will be run with interrupts
* off, so it can't use mutexes.
@ -512,9 +436,7 @@ static int pm_genpd_runtime_suspend(struct device *dev)
return 0;
mutex_lock(&genpd->lock);
genpd->in_progress++;
pm_genpd_poweroff(genpd);
genpd->in_progress--;
genpd_poweroff(genpd, false);
mutex_unlock(&genpd->lock);
return 0;
@ -531,6 +453,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
static int pm_genpd_runtime_resume(struct device *dev)
{
struct generic_pm_domain *genpd;
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
ktime_t time_start;
s64 elapsed_ns;
int ret;
bool timed = true;
@ -547,15 +472,31 @@ static int pm_genpd_runtime_resume(struct device *dev)
}
mutex_lock(&genpd->lock);
ret = __pm_genpd_poweron(genpd);
ret = __genpd_poweron(genpd);
mutex_unlock(&genpd->lock);
if (ret)
return ret;
out:
genpd_start_dev(genpd, dev, timed);
genpd_restore_dev(genpd, dev, timed);
/* Measure resume latency. */
if (timed)
time_start = ktime_get();
genpd_start_dev(genpd, dev);
genpd_restore_dev(genpd, dev);
/* Update resume latency value if the measured time exceeds it. */
if (timed) {
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns > td->resume_latency_ns) {
td->resume_latency_ns = elapsed_ns;
dev_dbg(dev, "resume latency exceeded, %lld ns\n",
elapsed_ns);
genpd->max_off_time_changed = true;
td->constraint_changed = true;
}
}
return 0;
}
@@ -569,15 +510,15 @@ static int __init pd_ignore_unused_setup(char *__unused)
__setup("pd_ignore_unused", pd_ignore_unused_setup);
/**
- * pm_genpd_poweroff_unused - Power off all PM domains with no devices in use.
+ * genpd_poweroff_unused - Power off all PM domains with no devices in use.
*/
- void pm_genpd_poweroff_unused(void)
+ static int __init genpd_poweroff_unused(void)
{
struct generic_pm_domain *genpd;
if (pd_ignore_unused) {
pr_warn("genpd: Not disabling unused power domains\n");
- return;
+ return 0;
}
mutex_lock(&gpd_list_lock);
@@ -586,11 +527,7 @@ void pm_genpd_poweroff_unused(void)
genpd_queue_power_off_work(genpd);
mutex_unlock(&gpd_list_lock);
}
- static int __init genpd_poweroff_unused(void)
- {
- pm_genpd_poweroff_unused();
- return 0;
- }
late_initcall(genpd_poweroff_unused);
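The cleanup above now runs directly from the late initcall; as before, it can
be skipped for debugging by booting with the pd_ignore_unused parameter on the
kernel command line, e.g. (illustrative command line, only the last argument
is relevant here):

    console=ttyS0,115200 root=/dev/sda1 pd_ignore_unused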
@@ -764,7 +701,7 @@ static int pm_genpd_prepare(struct device *dev)
/*
* The PM domain must be in the GPD_STATE_ACTIVE state at this point,
- * so pm_genpd_poweron() will return immediately, but if the device
+ * so genpd_poweron() will return immediately, but if the device
* is suspended (e.g. it's been stopped by genpd_stop_dev()), we need
* to make it operational.
*/
@@ -890,7 +827,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
pm_genpd_sync_poweron(genpd, true);
genpd->suspended_count--;
- return genpd_start_dev(genpd, dev, true);
+ return genpd_start_dev(genpd, dev);
}
/**
@@ -1018,7 +955,8 @@ static int pm_genpd_thaw_noirq(struct device *dev)
if (IS_ERR(genpd))
return -EINVAL;
- return genpd->suspend_power_off ? 0 : genpd_start_dev(genpd, dev, true);
+ return genpd->suspend_power_off ?
+ 0 : genpd_start_dev(genpd, dev);
}
/**
@@ -1112,7 +1050,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
pm_genpd_sync_poweron(genpd, true);
- return genpd_start_dev(genpd, dev, true);
+ return genpd_start_dev(genpd, dev);
}
/**
@@ -1316,18 +1254,6 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
return ret;
}
/**
* __pm_genpd_name_add_device - Find I/O PM domain and add a device to it.
* @domain_name: Name of the PM domain to add the device to.
* @dev: Device to be added.
* @td: Set of PM QoS timing parameters to attach to the device.
*/
int __pm_genpd_name_add_device(const char *domain_name, struct device *dev,
struct gpd_timing_data *td)
{
return __pm_genpd_add_device(pm_genpd_lookup_name(domain_name), dev, td);
}
/**
* pm_genpd_remove_device - Remove a device from an I/O PM domain.
* @genpd: PM domain to remove the device from.
@@ -1428,35 +1354,6 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
return ret;
}
/**
* pm_genpd_add_subdomain_names - Add a subdomain to an I/O PM domain.
* @master_name: Name of the master PM domain to add the subdomain to.
* @subdomain_name: Name of the subdomain to be added.
*/
int pm_genpd_add_subdomain_names(const char *master_name,
const char *subdomain_name)
{
struct generic_pm_domain *master = NULL, *subdomain = NULL, *gpd;
if (IS_ERR_OR_NULL(master_name) || IS_ERR_OR_NULL(subdomain_name))
return -EINVAL;
mutex_lock(&gpd_list_lock);
list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
if (!master && !strcmp(gpd->name, master_name))
master = gpd;
if (!subdomain && !strcmp(gpd->name, subdomain_name))
subdomain = gpd;
if (master && subdomain)
break;
}
mutex_unlock(&gpd_list_lock);
return pm_genpd_add_subdomain(master, subdomain);
}
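The name-based subdomain helper above is dropped by this update, so platform
code that wants to build a domain hierarchy keeps the generic_pm_domain
pointers it registered and calls pm_genpd_add_subdomain() directly. A minimal
sketch, assuming <linux/pm_domain.h> and made-up foo_* names (not part of this
patch):

#include <linux/init.h>
#include <linux/pm_domain.h>

static struct generic_pm_domain foo_master_pd = { .name = "foo-master" };
static struct generic_pm_domain foo_sub_pd = { .name = "foo-sub" };

static int __init foo_pd_setup(void)
{
	/* Register both domains as initially powered on. */
	pm_genpd_init(&foo_master_pd, NULL, false);
	pm_genpd_init(&foo_sub_pd, NULL, false);

	/* Link them by pointer instead of looking them up by name. */
	return pm_genpd_add_subdomain(&foo_master_pd, &foo_sub_pd);
}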
/**
* pm_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain.
* @genpd: Master PM domain to remove the subdomain from.
@@ -1504,124 +1401,6 @@ out:
return ret;
}
/**
* pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle.
* @genpd: PM domain to be connected with cpuidle.
* @state: cpuidle state this domain can disable/enable.
*
* Make a PM domain behave as though it contained a CPU core, that is, instead
* of calling its power down routine it will enable the given cpuidle state so
* that the cpuidle subsystem can power it down (if possible and desirable).
*/
int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state)
{
struct cpuidle_driver *cpuidle_drv;
struct gpd_cpuidle_data *cpuidle_data;
struct cpuidle_state *idle_state;
int ret = 0;
if (IS_ERR_OR_NULL(genpd) || state < 0)
return -EINVAL;
mutex_lock(&genpd->lock);
if (genpd->cpuidle_data) {
ret = -EEXIST;
goto out;
}
cpuidle_data = kzalloc(sizeof(*cpuidle_data), GFP_KERNEL);
if (!cpuidle_data) {
ret = -ENOMEM;
goto out;
}
cpuidle_drv = cpuidle_driver_ref();
if (!cpuidle_drv) {
ret = -ENODEV;
goto err_drv;
}
if (cpuidle_drv->state_count <= state) {
ret = -EINVAL;
goto err;
}
idle_state = &cpuidle_drv->states[state];
if (!idle_state->disabled) {
ret = -EAGAIN;
goto err;
}
cpuidle_data->idle_state = idle_state;
cpuidle_data->saved_exit_latency = idle_state->exit_latency;
genpd->cpuidle_data = cpuidle_data;
genpd_recalc_cpu_exit_latency(genpd);
out:
mutex_unlock(&genpd->lock);
return ret;
err:
cpuidle_driver_unref();
err_drv:
kfree(cpuidle_data);
goto out;
}
/**
* pm_genpd_name_attach_cpuidle - Find PM domain and connect cpuidle to it.
* @name: Name of the domain to connect to cpuidle.
* @state: cpuidle state this domain can manipulate.
*/
int pm_genpd_name_attach_cpuidle(const char *name, int state)
{
return pm_genpd_attach_cpuidle(pm_genpd_lookup_name(name), state);
}
/**
* pm_genpd_detach_cpuidle - Remove the cpuidle connection from a PM domain.
* @genpd: PM domain to remove the cpuidle connection from.
*
* Remove the cpuidle connection set up by pm_genpd_attach_cpuidle() from the
* given PM domain.
*/
int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd)
{
struct gpd_cpuidle_data *cpuidle_data;
struct cpuidle_state *idle_state;
int ret = 0;
if (IS_ERR_OR_NULL(genpd))
return -EINVAL;
mutex_lock(&genpd->lock);
cpuidle_data = genpd->cpuidle_data;
if (!cpuidle_data) {
ret = -ENODEV;
goto out;
}
idle_state = cpuidle_data->idle_state;
if (!idle_state->disabled) {
ret = -EAGAIN;
goto out;
}
idle_state->exit_latency = cpuidle_data->saved_exit_latency;
cpuidle_driver_unref();
genpd->cpuidle_data = NULL;
kfree(cpuidle_data);
out:
mutex_unlock(&genpd->lock);
return ret;
}
/**
* pm_genpd_name_detach_cpuidle - Find PM domain and disconnect cpuidle from it.
* @name: Name of the domain to disconnect cpuidle from.
*/
int pm_genpd_name_detach_cpuidle(const char *name)
{
return pm_genpd_detach_cpuidle(pm_genpd_lookup_name(name));
}
/* Default device callbacks for generic PM domains. */
/**
@@ -1688,7 +1467,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
mutex_init(&genpd->lock);
genpd->gov = gov;
INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
- genpd->in_progress = 0;
atomic_set(&genpd->sd_count, 0);
genpd->status = is_off ? GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE;
genpd->device_count = 0;
@@ -2023,7 +1801,7 @@ int genpd_dev_pm_attach(struct device *dev)
dev->pm_domain->detach = genpd_dev_pm_detach;
dev->pm_domain->sync = genpd_dev_pm_sync;
- ret = pm_genpd_poweron(pd);
+ ret = genpd_poweron(pd);
out:
return ret ? -EPROBE_DEFER : 0;


@@ -77,10 +77,8 @@ static bool default_stop_ok(struct device *dev)
dev_update_qos_constraint);
if (constraint_ns > 0) {
- constraint_ns -= td->save_state_latency_ns +
- td->stop_latency_ns +
- td->start_latency_ns +
- td->restore_state_latency_ns;
+ constraint_ns -= td->suspend_latency_ns +
+ td->resume_latency_ns;
if (constraint_ns == 0)
return false;
}
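For a concrete illustration of the reworked budget check (numbers made up):
with an effective device PM QoS constraint of 100000 ns and measured suspend
and resume latencies of 30000 ns and 20000 ns, the remaining budget is
100000 - (30000 + 20000) = 50000 ns, so stopping the device is still allowed;
if the two measured latencies consumed the whole constraint, default_stop_ok()
would report that stopping the device is not acceptable.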


@@ -9,6 +9,7 @@
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/export.h>
#include <linux/suspend.h>
#ifdef CONFIG_PM
/**
@@ -296,11 +297,27 @@ void pm_generic_complete(struct device *dev)
if (drv && drv->pm && drv->pm->complete)
drv->pm->complete(dev);
/*
* Let runtime PM try to suspend devices that haven't been in use before
* going into the system-wide sleep state we're resuming from.
*/
pm_request_idle(dev);
}
/**
* pm_complete_with_resume_check - Complete a device power transition.
* @dev: Device to handle.
*
* Complete a device power transition during a system-wide power transition and
* optionally schedule a runtime resume of the device if the system resume in
* progress has been initiated by the platform firmware and the device had its
* power.direct_complete flag set.
*/
void pm_complete_with_resume_check(struct device *dev)
{
pm_generic_complete(dev);
/*
* If the device had been runtime-suspended before the system went into
* the sleep state it is going out of and it has never been resumed till
* now, resume it in case the firmware powered it up.
*/
if (dev->power.direct_complete && pm_resume_via_firmware())
pm_request_resume(dev);
}
EXPORT_SYMBOL_GPL(pm_complete_with_resume_check);
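A usage sketch for the new helper (hypothetical foo_* driver names, not taken
from this series): a driver that relies on the direct_complete optimization
can point its .complete callback straight at it:

#include <linux/pm.h>

static int foo_suspend(struct device *dev) { return 0; }	/* placeholder */
static int foo_resume(struct device *dev) { return 0; }		/* placeholder */

static const struct dev_pm_ops foo_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
	/* Runtime-resume the device if the firmware may have powered it up. */
	.complete = pm_complete_with_resume_check,
};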
#endif /* CONFIG_PM_SLEEP */
