Merge branch 'pm-core'

* pm-core: (29 commits)
  dmaengine: rcar-dmac: Make DMAC reinit during system resume explicit
  PM / runtime: Allow no callbacks in pm_runtime_force_suspend|resume()
  PM / runtime: Check ignore_children in pm_runtime_need_not_resume()
  PM / runtime: Rework pm_runtime_force_suspend/resume()
  PM / wakeup: Print warn if device gets enabled as wakeup source during sleep
  PM / core: Propagate wakeup_path status flag in __device_suspend_late()
  PM / core: Re-structure code for clearing the direct_complete flag
  PM: i2c-designware-platdrv: Optimize power management
  PM: i2c-designware-platdrv: Use DPM_FLAG_SMART_PREPARE
  PM / mfd: intel-lpss: Use DPM_FLAG_SMART_SUSPEND
  PCI / PM: Use SMART_SUSPEND and LEAVE_SUSPENDED flags for PCIe ports
  PM / wakeup: Add device_set_wakeup_path() helper to control wakeup path
  PM / core: Assign the wakeup_path status flag in __device_prepare()
  PM / wakeup: Do not fail dev_pm_attach_wake_irq() unnecessarily
  PM / core: Direct DPM_FLAG_LEAVE_SUSPENDED handling
  PM / core: Direct DPM_FLAG_SMART_SUSPEND optimization
  PM / core: Add helpers for subsystem callback selection
  PM / wakeup: Drop redundant check from device_init_wakeup()
  PM / wakeup: Drop redundant check from device_set_wakeup_enable()
  PM / wakeup: only recommend "call"ing device_init_wakeup() once
  ...
Merge commit 4b67157f04 by Rafael J. Wysocki, 2018-01-18 02:55:09 +01:00
17 changed files with 629 additions and 335 deletions

--- a/Documentation/driver-api/pm/devices.rst
+++ b/Documentation/driver-api/pm/devices.rst
@@ -777,17 +777,51 @@ The driver can indicate that by setting ``DPM_FLAG_SMART_SUSPEND`` in
 runtime suspend at the beginning of the ``suspend_late`` phase of system-wide
 suspend (or in the ``poweroff_late`` phase of hibernation), when runtime PM
 has been disabled for it, under the assumption that its state should not change
-after that point until the system-wide transition is over. If that happens, the
-driver's system-wide resume callbacks, if present, may still be invoked during
-the subsequent system-wide resume transition and the device's runtime power
-management status may be set to "active" before enabling runtime PM for it,
-so the driver must be prepared to cope with the invocation of its system-wide
-resume callbacks back-to-back with its ``->runtime_suspend`` one (without the
-intervening ``->runtime_resume`` and so on) and the final state of the device
-must reflect the "active" status for runtime PM in that case.
+after that point until the system-wide transition is over (the PM core itself
+does that for devices whose "noirq", "late" and "early" system-wide PM callbacks
+are executed directly by it). If that happens, the driver's system-wide resume
+callbacks, if present, may still be invoked during the subsequent system-wide
+resume transition and the device's runtime power management status may be set
+to "active" before enabling runtime PM for it, so the driver must be prepared to
+cope with the invocation of its system-wide resume callbacks back-to-back with
+its ``->runtime_suspend`` one (without the intervening ``->runtime_resume`` and
+so on) and the final state of the device must reflect the "active" runtime PM
+status in that case.
 
 During system-wide resume from a sleep state it's easiest to put devices into
 the full-power state, as explained in :file:`Documentation/power/runtime_pm.txt`.
-Refer to that document for more information regarding this particular issue as
+[Refer to that document for more information regarding this particular issue as
 well as for information on the device runtime power management framework in
-general.
+general.]
+
+However, it often is desirable to leave devices in suspend after system
+transitions to the working state, especially if those devices had been in
+runtime suspend before the preceding system-wide suspend (or analogous)
+transition. Device drivers can use the ``DPM_FLAG_LEAVE_SUSPENDED`` flag to
+indicate to the PM core (and middle-layer code) that they prefer the specific
+devices handled by them to be left suspended and they have no problems with
+skipping their system-wide resume callbacks for this reason. Whether or not the
+devices will actually be left in suspend may depend on their state before the
+given system suspend-resume cycle and on the type of the system transition under
+way. In particular, devices are not left suspended if that transition is a
+restore from hibernation, as device states are not guaranteed to be reflected
+by the information stored in the hibernation image in that case.
+
+The middle-layer code involved in the handling of the device is expected to
+indicate to the PM core if the device may be left in suspend by setting its
+:c:member:`power.may_skip_resume` status bit which is checked by the PM core
+during the "noirq" phase of the preceding system-wide suspend (or analogous)
+transition. The middle layer is then responsible for handling the device as
+appropriate in its "noirq" resume callback, which is executed regardless of
+whether or not the device is left suspended, but the other resume callbacks
+(except for ``->complete``) will be skipped automatically by the PM core if the
+device really can be left in suspend.
+
+For devices whose "noirq", "late" and "early" driver callbacks are invoked
+directly by the PM core, all of the system-wide resume callbacks are skipped if
+``DPM_FLAG_LEAVE_SUSPENDED`` is set and the device is in runtime suspend during
+the ``suspend_noirq`` (or analogous) phase or the transition under way is a
+proper system suspend (rather than anything related to hibernation) and the
+device's wakeup settings are suitable for runtime PM (that is, it cannot
+generate wakeup signals at all or it is allowed to wake up the system from
+sleep).
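
The flags documented above are set from driver code with the
dev_pm_set_driver_flags() helper (the i2c-designware and intel-lpss hunks
below do exactly that). As a minimal sketch of the opt-in, assuming a
hypothetical platform driver "foo" (the probe function and names are
illustrative, not part of this commit):

#include <linux/platform_device.h>
#include <linux/pm.h>

static int foo_probe(struct platform_device *pdev)
{
        /*
         * Let the device stay in runtime suspend across system suspend
         * (DPM_FLAG_SMART_SUSPEND) and be left in suspend after system
         * resume (DPM_FLAG_LEAVE_SUSPENDED), subject to the PM core and
         * middle-layer checks described above.
         */
        dev_pm_set_driver_flags(&pdev->dev,
                                DPM_FLAG_SMART_SUSPEND |
                                DPM_FLAG_LEAVE_SUSPENDED);

        return 0;
}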

--- a/Documentation/power/pci.txt
+++ b/Documentation/power/pci.txt
@@ -994,6 +994,17 @@ into D0 going forward), but if it is in runtime suspend in pci_pm_thaw_noirq(),
 the function will set the power.direct_complete flag for it (to make the PM core
 skip the subsequent "thaw" callbacks for it) and return.
 
+Setting the DPM_FLAG_LEAVE_SUSPENDED flag means that the driver prefers the
+device to be left in suspend after system-wide transitions to the working state.
+This flag is checked by the PM core, but the PCI bus type informs the PM core
+which devices may be left in suspend from its perspective (that happens during
+the "noirq" phase of system-wide suspend and analogous transitions) and next it
+uses the dev_pm_may_skip_resume() helper to decide whether or not to return from
+pci_pm_resume_noirq() early, as the PM core will skip the remaining resume
+callbacks for the device during the transition under way and will set its
+runtime PM status to "suspended" if dev_pm_may_skip_resume() returns "true" for
+it.
+
 3.2. Device Runtime Power Management
 ------------------------------------
 In addition to providing device power management callbacks PCI device drivers
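
For illustration, the "return early" pattern that pci_pm_resume_noirq()
follows can be sketched as below; the function name is hypothetical, and the
ACPI hunk further down implements the same check:

static int example_resume_noirq(struct device *dev)
{
        /*
         * If the device is to be left in suspend, the PM core will skip
         * the remaining resume callbacks for it and set its runtime PM
         * status to "suspended", so just return.
         */
        if (dev_pm_may_skip_resume(dev))
                return 0;

        /* ...otherwise bring the device back to full power here... */
        return pm_generic_resume_noirq(dev);
}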

--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -990,7 +990,7 @@ void acpi_subsys_complete(struct device *dev)
         * the sleep state it is going out of and it has never been resumed till
         * now, resume it in case the firmware powered it up.
         */
-       if (dev->power.direct_complete && pm_resume_via_firmware())
+       if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
                pm_request_resume(dev);
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_complete);
@@ -1039,10 +1039,28 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_late);
  */
 int acpi_subsys_suspend_noirq(struct device *dev)
 {
-       if (dev_pm_smart_suspend_and_suspended(dev))
-               return 0;
+       int ret;
 
-       return pm_generic_suspend_noirq(dev);
+       if (dev_pm_smart_suspend_and_suspended(dev)) {
+               dev->power.may_skip_resume = true;
+               return 0;
+       }
+
+       ret = pm_generic_suspend_noirq(dev);
+       if (ret)
+               return ret;
+
+       /*
+        * If the target system sleep state is suspend-to-idle, it is sufficient
+        * to check whether or not the device's wakeup settings are good for
+        * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
+        * acpi_subsys_complete() to take care of fixing up the device's state
+        * anyway, if need be.
+        */
+       dev->power.may_skip_resume = device_may_wakeup(dev) ||
+                                       !device_can_wakeup(dev);
+
+       return 0;
 }
 EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
@@ -1052,6 +1070,9 @@ EXPORT_SYMBOL_GPL(acpi_subsys_suspend_noirq);
  */
 int acpi_subsys_resume_noirq(struct device *dev)
 {
+       if (dev_pm_may_skip_resume(dev))
+               return 0;
+
        /*
         * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
         * during system suspend, so update their runtime PM status to "active"

--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -18,7 +18,6 @@
  */
 
 #include <linux/device.h>
-#include <linux/kallsyms.h>
 #include <linux/export.h>
 #include <linux/mutex.h>
 #include <linux/pm.h>
@@ -541,30 +540,41 @@ void dev_pm_skip_next_resume_phases(struct device *dev)
 }
 
 /**
- * device_resume_noirq - Execute a "noirq resume" callback for given device.
- * @dev: Device to handle.
- * @state: PM transition of the system being carried out.
- * @async: If true, the device is being resumed asynchronously.
- *
- * The driver of @dev will not receive interrupts while this function is being
- * executed.
+ * suspend_event - Return a "suspend" message for given "resume" one.
+ * @resume_msg: PM message representing a system-wide resume transition.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
+static pm_message_t suspend_event(pm_message_t resume_msg)
 {
-       pm_callback_t callback = NULL;
-       const char *info = NULL;
-       int error = 0;
+       switch (resume_msg.event) {
+       case PM_EVENT_RESUME:
+               return PMSG_SUSPEND;
+       case PM_EVENT_THAW:
+       case PM_EVENT_RESTORE:
+               return PMSG_FREEZE;
+       case PM_EVENT_RECOVER:
+               return PMSG_HIBERNATE;
+       }
+       return PMSG_ON;
+}
 
-       TRACE_DEVICE(dev);
-       TRACE_RESUME(0);
+/**
+ * dev_pm_may_skip_resume - System-wide device resume optimization check.
+ * @dev: Target device.
+ *
+ * Checks whether or not the device may be left in suspend after a system-wide
+ * transition to the working state.
+ */
+bool dev_pm_may_skip_resume(struct device *dev)
+{
+       return !dev->power.must_resume && pm_transition.event != PM_EVENT_RESTORE;
+}
 
-       if (dev->power.syscore || dev->power.direct_complete)
-               goto Out;
+static pm_callback_t dpm_subsys_resume_noirq_cb(struct device *dev,
+                                               pm_message_t state,
+                                               const char **info_p)
+{
+       pm_callback_t callback;
+       const char *info;
 
-       if (!dev->power.is_noirq_suspended)
-               goto Out;
-
-       dpm_wait_for_superior(dev, async);
-
        if (dev->pm_domain) {
                info = "noirq power domain ";
@@ -578,17 +588,106 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
        } else if (dev->bus && dev->bus->pm) {
                info = "noirq bus ";
                callback = pm_noirq_op(dev->bus->pm, state);
+       } else {
+               return NULL;
        }
 
-       if (!callback && dev->driver && dev->driver->pm) {
+       if (info_p)
+               *info_p = info;
+
+       return callback;
+}
+
+static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
+                                                pm_message_t state,
+                                                const char **info_p);
+
+static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
+                                               pm_message_t state,
+                                               const char **info_p);
+
+/**
+ * device_resume_noirq - Execute a "noirq resume" callback for given device.
+ * @dev: Device to handle.
+ * @state: PM transition of the system being carried out.
+ * @async: If true, the device is being resumed asynchronously.
+ *
+ * The driver of @dev will not receive interrupts while this function is being
+ * executed.
+ */
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
+{
+       pm_callback_t callback;
+       const char *info;
+       bool skip_resume;
+       int error = 0;
+
+       TRACE_DEVICE(dev);
+       TRACE_RESUME(0);
+
+       if (dev->power.syscore || dev->power.direct_complete)
+               goto Out;
+
+       if (!dev->power.is_noirq_suspended)
+               goto Out;
+
+       dpm_wait_for_superior(dev, async);
+
+       skip_resume = dev_pm_may_skip_resume(dev);
+
+       callback = dpm_subsys_resume_noirq_cb(dev, state, &info);
+       if (callback)
+               goto Run;
+
+       if (skip_resume)
+               goto Skip;
+
+       if (dev_pm_smart_suspend_and_suspended(dev)) {
+               pm_message_t suspend_msg = suspend_event(state);
+
+               /*
+                * If "freeze" callbacks have been skipped during a transition
+                * related to hibernation, the subsequent "thaw" callbacks must
+                * be skipped too or bad things may happen. Otherwise, resume
+                * callbacks are going to be run for the device, so its runtime
+                * PM status must be changed to reflect the new state after the
+                * transition under way.
+                */
+               if (!dpm_subsys_suspend_late_cb(dev, suspend_msg, NULL) &&
+                   !dpm_subsys_suspend_noirq_cb(dev, suspend_msg, NULL)) {
+                       if (state.event == PM_EVENT_THAW) {
+                               skip_resume = true;
+                               goto Skip;
+                       } else {
+                               pm_runtime_set_active(dev);
+                       }
+               }
+       }
+
+       if (dev->driver && dev->driver->pm) {
                info = "noirq driver ";
                callback = pm_noirq_op(dev->driver->pm, state);
        }
 
+Run:
        error = dpm_run_callback(callback, dev, state, info);
+
+Skip:
        dev->power.is_noirq_suspended = false;
 
- Out:
+       if (skip_resume) {
+               /*
+                * The device is going to be left in suspend, but it might not
+                * have been in runtime suspend before the system suspended, so
+                * its runtime PM status needs to be updated to avoid confusing
+                * the runtime PM framework when runtime PM is enabled for the
+                * device again.
+                */
+               pm_runtime_set_suspended(dev);
+               dev_pm_skip_next_resume_phases(dev);
+       }
+
+Out:
        complete_all(&dev->power.completion);
        TRACE_RESUME(error);
        return error;
@@ -681,30 +780,12 @@ void dpm_resume_noirq(pm_message_t state)
        dpm_noirq_end();
 }
 
-/**
- * device_resume_early - Execute an "early resume" callback for given device.
- * @dev: Device to handle.
- * @state: PM transition of the system being carried out.
- * @async: If true, the device is being resumed asynchronously.
- *
- * Runtime PM is disabled for @dev while this function is being executed.
- */
-static int device_resume_early(struct device *dev, pm_message_t state, bool async)
+static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
+                                               pm_message_t state,
+                                               const char **info_p)
 {
-       pm_callback_t callback = NULL;
-       const char *info = NULL;
-       int error = 0;
-
-       TRACE_DEVICE(dev);
-       TRACE_RESUME(0);
-
-       if (dev->power.syscore || dev->power.direct_complete)
-               goto Out;
-
-       if (!dev->power.is_late_suspended)
-               goto Out;
-
-       dpm_wait_for_superior(dev, async);
+       pm_callback_t callback;
+       const char *info;
 
        if (dev->pm_domain) {
                info = "early power domain ";
@@ -718,8 +799,43 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool async)
        } else if (dev->bus && dev->bus->pm) {
                info = "early bus ";
                callback = pm_late_early_op(dev->bus->pm, state);
+       } else {
+               return NULL;
        }
 
+       if (info_p)
+               *info_p = info;
+
+       return callback;
+}
+
+/**
+ * device_resume_early - Execute an "early resume" callback for given device.
+ * @dev: Device to handle.
+ * @state: PM transition of the system being carried out.
+ * @async: If true, the device is being resumed asynchronously.
+ *
+ * Runtime PM is disabled for @dev while this function is being executed.
+ */
+static int device_resume_early(struct device *dev, pm_message_t state, bool async)
+{
+       pm_callback_t callback;
+       const char *info;
+       int error = 0;
+
+       TRACE_DEVICE(dev);
+       TRACE_RESUME(0);
+
+       if (dev->power.syscore || dev->power.direct_complete)
+               goto Out;
+
+       if (!dev->power.is_late_suspended)
+               goto Out;
+
+       dpm_wait_for_superior(dev, async);
+
+       callback = dpm_subsys_resume_early_cb(dev, state, &info);
+
        if (!callback && dev->driver && dev->driver->pm) {
                info = "early driver ";
                callback = pm_late_early_op(dev->driver->pm, state);
@@ -1089,6 +1205,77 @@ static pm_message_t resume_event(pm_message_t sleep_state)
        return PMSG_ON;
 }
 
+static void dpm_superior_set_must_resume(struct device *dev)
+{
+       struct device_link *link;
+       int idx;
+
+       if (dev->parent)
+               dev->parent->power.must_resume = true;
+
+       idx = device_links_read_lock();
+
+       list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+               link->supplier->power.must_resume = true;
+
+       device_links_read_unlock(idx);
+}
+
+static pm_callback_t dpm_subsys_suspend_noirq_cb(struct device *dev,
+                                                pm_message_t state,
+                                                const char **info_p)
+{
+       pm_callback_t callback;
+       const char *info;
+
+       if (dev->pm_domain) {
+               info = "noirq power domain ";
+               callback = pm_noirq_op(&dev->pm_domain->ops, state);
+       } else if (dev->type && dev->type->pm) {
+               info = "noirq type ";
+               callback = pm_noirq_op(dev->type->pm, state);
+       } else if (dev->class && dev->class->pm) {
+               info = "noirq class ";
+               callback = pm_noirq_op(dev->class->pm, state);
+       } else if (dev->bus && dev->bus->pm) {
+               info = "noirq bus ";
+               callback = pm_noirq_op(dev->bus->pm, state);
+       } else {
+               return NULL;
+       }
+
+       if (info_p)
+               *info_p = info;
+
+       return callback;
+}
+
+static bool device_must_resume(struct device *dev, pm_message_t state,
+                              bool no_subsys_suspend_noirq)
+{
+       pm_message_t resume_msg = resume_event(state);
+
+       /*
+        * If all of the device driver's "noirq", "late" and "early" callbacks
+        * are invoked directly by the core, the decision to allow the device to
+        * stay in suspend can be based on its current runtime PM status and its
+        * wakeup settings.
+        */
+       if (no_subsys_suspend_noirq &&
+           !dpm_subsys_suspend_late_cb(dev, state, NULL) &&
+           !dpm_subsys_resume_early_cb(dev, resume_msg, NULL) &&
+           !dpm_subsys_resume_noirq_cb(dev, resume_msg, NULL))
+               return !pm_runtime_status_suspended(dev) &&
+                       (resume_msg.event != PM_EVENT_RESUME ||
+                       (device_can_wakeup(dev) && !device_may_wakeup(dev)));
+
+       /*
+        * The only safe strategy here is to require that if the device may not
+        * be left in suspend, resume callbacks must be invoked for it.
+        */
+       return !dev->power.may_skip_resume;
+}
+
 /**
  * __device_suspend_noirq - Execute a "noirq suspend" callback for given device.
  * @dev: Device to handle.
@@ -1100,8 +1287,9 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  */
 static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 {
-       pm_callback_t callback = NULL;
-       const char *info = NULL;
+       pm_callback_t callback;
+       const char *info;
+       bool no_subsys_cb = false;
        int error = 0;
 
        TRACE_DEVICE(dev);
@@ -1120,30 +1308,40 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
        if (dev->power.syscore || dev->power.direct_complete)
                goto Complete;
 
-       if (dev->pm_domain) {
-               info = "noirq power domain ";
-               callback = pm_noirq_op(&dev->pm_domain->ops, state);
-       } else if (dev->type && dev->type->pm) {
-               info = "noirq type ";
-               callback = pm_noirq_op(dev->type->pm, state);
-       } else if (dev->class && dev->class->pm) {
-               info = "noirq class ";
-               callback = pm_noirq_op(dev->class->pm, state);
-       } else if (dev->bus && dev->bus->pm) {
-               info = "noirq bus ";
-               callback = pm_noirq_op(dev->bus->pm, state);
-       }
+       callback = dpm_subsys_suspend_noirq_cb(dev, state, &info);
+       if (callback)
+               goto Run;
 
-       if (!callback && dev->driver && dev->driver->pm) {
+       no_subsys_cb = !dpm_subsys_suspend_late_cb(dev, state, NULL);
+
+       if (dev_pm_smart_suspend_and_suspended(dev) && no_subsys_cb)
+               goto Skip;
+
+       if (dev->driver && dev->driver->pm) {
                info = "noirq driver ";
                callback = pm_noirq_op(dev->driver->pm, state);
        }
 
+Run:
        error = dpm_run_callback(callback, dev, state, info);
-       if (!error)
-               dev->power.is_noirq_suspended = true;
-       else
+       if (error) {
                async_error = error;
+               goto Complete;
+       }
+
+Skip:
+       dev->power.is_noirq_suspended = true;
+
+       if (dev_pm_test_driver_flags(dev, DPM_FLAG_LEAVE_SUSPENDED)) {
+               dev->power.must_resume = dev->power.must_resume ||
+                                       atomic_read(&dev->power.usage_count) > 1 ||
+                                       device_must_resume(dev, state, no_subsys_cb);
+       } else {
+               dev->power.must_resume = true;
+       }
+
+       if (dev->power.must_resume)
+               dpm_superior_set_must_resume(dev);
+
 Complete:
        complete_all(&dev->power.completion);
@@ -1249,6 +1447,50 @@ int dpm_suspend_noirq(pm_message_t state)
        return ret;
 }
 
+static void dpm_propagate_wakeup_to_parent(struct device *dev)
+{
+       struct device *parent = dev->parent;
+
+       if (!parent)
+               return;
+
+       spin_lock_irq(&parent->power.lock);
+
+       if (dev->power.wakeup_path && !parent->power.ignore_children)
+               parent->power.wakeup_path = true;
+
+       spin_unlock_irq(&parent->power.lock);
+}
+
+static pm_callback_t dpm_subsys_suspend_late_cb(struct device *dev,
+                                               pm_message_t state,
+                                               const char **info_p)
+{
+       pm_callback_t callback;
+       const char *info;
+
+       if (dev->pm_domain) {
+               info = "late power domain ";
+               callback = pm_late_early_op(&dev->pm_domain->ops, state);
+       } else if (dev->type && dev->type->pm) {
+               info = "late type ";
+               callback = pm_late_early_op(dev->type->pm, state);
+       } else if (dev->class && dev->class->pm) {
+               info = "late class ";
+               callback = pm_late_early_op(dev->class->pm, state);
+       } else if (dev->bus && dev->bus->pm) {
+               info = "late bus ";
+               callback = pm_late_early_op(dev->bus->pm, state);
+       } else {
+               return NULL;
+       }
+
+       if (info_p)
+               *info_p = info;
+
+       return callback;
+}
+
 /**
  * __device_suspend_late - Execute a "late suspend" callback for given device.
  * @dev: Device to handle.
@@ -1259,8 +1501,8 @@ int dpm_suspend_noirq(pm_message_t state)
  */
 static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
 {
-       pm_callback_t callback = NULL;
-       const char *info = NULL;
+       pm_callback_t callback;
+       const char *info;
        int error = 0;
 
        TRACE_DEVICE(dev);
@@ -1281,30 +1523,29 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
        if (dev->power.syscore || dev->power.direct_complete)
                goto Complete;
 
-       if (dev->pm_domain) {
-               info = "late power domain ";
-               callback = pm_late_early_op(&dev->pm_domain->ops, state);
-       } else if (dev->type && dev->type->pm) {
-               info = "late type ";
-               callback = pm_late_early_op(dev->type->pm, state);
-       } else if (dev->class && dev->class->pm) {
-               info = "late class ";
-               callback = pm_late_early_op(dev->class->pm, state);
-       } else if (dev->bus && dev->bus->pm) {
-               info = "late bus ";
-               callback = pm_late_early_op(dev->bus->pm, state);
-       }
+       callback = dpm_subsys_suspend_late_cb(dev, state, &info);
+       if (callback)
+               goto Run;
 
-       if (!callback && dev->driver && dev->driver->pm) {
+       if (dev_pm_smart_suspend_and_suspended(dev) &&
+           !dpm_subsys_suspend_noirq_cb(dev, state, NULL))
+               goto Skip;
+
+       if (dev->driver && dev->driver->pm) {
                info = "late driver ";
                callback = pm_late_early_op(dev->driver->pm, state);
        }
 
+Run:
        error = dpm_run_callback(callback, dev, state, info);
-       if (!error)
-               dev->power.is_late_suspended = true;
-       else
+       if (error) {
                async_error = error;
+               goto Complete;
+       }
+       dpm_propagate_wakeup_to_parent(dev);
+
+Skip:
+       dev->power.is_late_suspended = true;
 
 Complete:
        TRACE_SUSPEND(error);
@@ -1435,11 +1676,17 @@ static int legacy_suspend(struct device *dev, pm_message_t state,
        return error;
 }
 
-static void dpm_clear_suppliers_direct_complete(struct device *dev)
+static void dpm_clear_superiors_direct_complete(struct device *dev)
 {
        struct device_link *link;
        int idx;
 
+       if (dev->parent) {
+               spin_lock_irq(&dev->parent->power.lock);
+               dev->parent->power.direct_complete = false;
+               spin_unlock_irq(&dev->parent->power.lock);
+       }
+
        idx = device_links_read_lock();
 
        list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
@@ -1500,6 +1747,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
                dev->power.direct_complete = false;
        }
 
+       dev->power.may_skip_resume = false;
+       dev->power.must_resume = false;
+
        dpm_watchdog_set(&wd, dev);
        device_lock(dev);
@@ -1543,20 +1793,12 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 
  End:
        if (!error) {
-               struct device *parent = dev->parent;
-
                dev->power.is_suspended = true;
-               if (parent) {
-                       spin_lock_irq(&parent->power.lock);
-
-                       dev->parent->power.direct_complete = false;
-                       if (dev->power.wakeup_path
-                           && !dev->parent->power.ignore_children)
-                               dev->parent->power.wakeup_path = true;
-
-                       spin_unlock_irq(&parent->power.lock);
-               }
-               dpm_clear_suppliers_direct_complete(dev);
+               if (device_may_wakeup(dev))
+                       dev->power.wakeup_path = true;
+
+               dpm_propagate_wakeup_to_parent(dev);
+               dpm_clear_superiors_direct_complete(dev);
        }
 
        device_unlock(dev);
@@ -1665,8 +1907,9 @@ static int device_prepare(struct device *dev, pm_message_t state)
        if (dev->power.syscore)
                return 0;
 
-       WARN_ON(dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND) &&
-               !pm_runtime_enabled(dev));
+       WARN_ON(!pm_runtime_enabled(dev) &&
+               dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
+                                             DPM_FLAG_LEAVE_SUSPENDED));
 
        /*
         * If a device's parent goes into runtime suspend at the wrong time,
@@ -1678,7 +1921,7 @@ static int device_prepare(struct device *dev, pm_message_t state)
 
        device_lock(dev);
 
-       dev->power.wakeup_path = device_may_wakeup(dev);
+       dev->power.wakeup_path = false;
 
        if (dev->power.no_pm_callbacks) {
                ret = 1;        /* Let device go direct_complete */

--- a/drivers/base/power/power.h
+++ b/drivers/base/power/power.h
@@ -41,20 +41,15 @@ extern void dev_pm_disable_wake_irq_check(struct device *dev);
 
 #ifdef CONFIG_PM_SLEEP
 
-extern int device_wakeup_attach_irq(struct device *dev,
-                                   struct wake_irq *wakeirq);
+extern void device_wakeup_attach_irq(struct device *dev, struct wake_irq *wakeirq);
 extern void device_wakeup_detach_irq(struct device *dev);
 extern void device_wakeup_arm_wake_irqs(void);
 extern void device_wakeup_disarm_wake_irqs(void);
 
 #else
 
-static inline int
-device_wakeup_attach_irq(struct device *dev,
-                        struct wake_irq *wakeirq)
-{
-       return 0;
-}
+static inline void device_wakeup_attach_irq(struct device *dev,
+                                           struct wake_irq *wakeirq) {}
 
 static inline void device_wakeup_detach_irq(struct device *dev)
 {

--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1613,22 +1613,34 @@ void pm_runtime_drop_link(struct device *dev)
        spin_unlock_irq(&dev->power.lock);
 }
 
+static bool pm_runtime_need_not_resume(struct device *dev)
+{
+       return atomic_read(&dev->power.usage_count) <= 1 &&
+               (atomic_read(&dev->power.child_count) == 0 ||
+                dev->power.ignore_children);
+}
+
 /**
  * pm_runtime_force_suspend - Force a device into suspend state if needed.
  * @dev: Device to suspend.
  *
  * Disable runtime PM so we safely can check the device's runtime PM status and
- * if it is active, invoke it's .runtime_suspend callback to bring it into
- * suspend state. Keep runtime PM disabled to preserve the state unless we
- * encounter errors.
+ * if it is active, invoke its ->runtime_suspend callback to suspend it and
+ * change its runtime PM status field to RPM_SUSPENDED. Also, if the device's
+ * usage and children counters don't indicate that the device was in use before
+ * the system-wide transition under way, decrement its parent's children counter
+ * (if there is a parent). Keep runtime PM disabled to preserve the state
+ * unless we encounter errors.
  *
  * Typically this function may be invoked from a system suspend callback to make
- * sure the device is put into low power state.
+ * sure the device is put into low power state and it should only be used during
+ * system-wide PM transitions to sleep states. It assumes that the analogous
+ * pm_runtime_force_resume() will be used to resume the device.
  */
 int pm_runtime_force_suspend(struct device *dev)
 {
        int (*callback)(struct device *);
-       int ret = 0;
+       int ret;
 
        pm_runtime_disable(dev);
        if (pm_runtime_status_suspended(dev))
@@ -1636,27 +1648,23 @@ int pm_runtime_force_suspend(struct device *dev)
 
        callback = RPM_GET_CALLBACK(dev, runtime_suspend);
 
-       if (!callback) {
-               ret = -ENOSYS;
-               goto err;
-       }
-
-       ret = callback(dev);
+       ret = callback ? callback(dev) : 0;
        if (ret)
                goto err;
 
        /*
-        * Increase the runtime PM usage count for the device's parent, in case
-        * when we find the device being used when system suspend was invoked.
-        * This informs pm_runtime_force_resume() to resume the parent
-        * immediately, which is needed to be able to resume its children,
-        * when not deferring the resume to be managed via runtime PM.
+        * If the device can stay in suspend after the system-wide transition
+        * to the working state that will follow, drop the children counter of
+        * its parent, but set its status to RPM_SUSPENDED anyway in case this
+        * function will be called again for it in the meantime.
         */
-       if (dev->parent && atomic_read(&dev->power.usage_count) > 1)
-               pm_runtime_get_noresume(dev->parent);
+       if (pm_runtime_need_not_resume(dev))
+               pm_runtime_set_suspended(dev);
+       else
+               __update_runtime_status(dev, RPM_SUSPENDED);
 
-       pm_runtime_set_suspended(dev);
        return 0;
+
 err:
        pm_runtime_enable(dev);
        return ret;
@@ -1669,13 +1677,9 @@ EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
  *
  * Prior invoking this function we expect the user to have brought the device
  * into low power state by a call to pm_runtime_force_suspend(). Here we reverse
- * those actions and brings the device into full power, if it is expected to be
- * used on system resume. To distinguish that, we check whether the runtime PM
- * usage count is greater than 1 (the PM core increases the usage count in the
- * system PM prepare phase), as that indicates a real user (such as a subsystem,
- * driver, userspace, etc.) is using it. If that is the case, the device is
- * expected to be used on system resume as well, so then we resume it. In the
- * other case, we defer the resume to be managed via runtime PM.
+ * those actions and bring the device into full power, if it is expected to be
+ * used on system resume. In the other case, we defer the resume to be managed
+ * via runtime PM.
  *
  * Typically this function may be invoked from a system resume callback.
  */
@@ -1684,32 +1688,18 @@ int pm_runtime_force_resume(struct device *dev)
        int (*callback)(struct device *);
        int ret = 0;
 
-       callback = RPM_GET_CALLBACK(dev, runtime_resume);
-
-       if (!callback) {
-               ret = -ENOSYS;
-               goto out;
-       }
-
-       if (!pm_runtime_status_suspended(dev))
+       if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
                goto out;
 
        /*
-        * Decrease the parent's runtime PM usage count, if we increased it
-        * during system suspend in pm_runtime_force_suspend().
+        * The value of the parent's children counter is correct already, so
+        * just update the status of the device.
         */
-       if (atomic_read(&dev->power.usage_count) > 1) {
-               if (dev->parent)
-                       pm_runtime_put_noidle(dev->parent);
-       } else {
-               goto out;
-       }
+       __update_runtime_status(dev, RPM_ACTIVE);
 
-       ret = pm_runtime_set_active(dev);
-       if (ret)
-               goto out;
+       callback = RPM_GET_CALLBACK(dev, runtime_resume);
 
-       ret = callback(dev);
+       ret = callback ? callback(dev) : 0;
        if (ret) {
                pm_runtime_set_suspended(dev);
                goto out;
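
The reworked helpers above are meant to be wired into a driver's dev_pm_ops
for the "late"/"early" phases of system-wide transitions, which is what the
rcar-dmac hunk below does. A minimal sketch, with hypothetical "foo" runtime
PM callbacks standing in for a real driver's:

static int foo_runtime_suspend(struct device *dev)
{
        /* Put the hardware into its low-power state here. */
        return 0;
}

static int foo_runtime_resume(struct device *dev)
{
        /* Bring the hardware back to full power here. */
        return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
        /* Reuse the runtime PM callbacks for system sleep transitions. */
        SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                     pm_runtime_force_resume)
        SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};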

--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -108,16 +108,10 @@ static ssize_t control_show(struct device *dev, struct device_attribute *attr,
 static ssize_t control_store(struct device * dev, struct device_attribute *attr,
                             const char * buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
-
        device_lock(dev);
-       if (len == sizeof ctrl_auto - 1 && strncmp(buf, ctrl_auto, len) == 0)
+       if (sysfs_streq(buf, ctrl_auto))
                pm_runtime_allow(dev);
-       else if (len == sizeof ctrl_on - 1 && strncmp(buf, ctrl_on, len) == 0)
+       else if (sysfs_streq(buf, ctrl_on))
                pm_runtime_forbid(dev);
        else
                n = -EINVAL;
@@ -125,9 +119,9 @@ static ssize_t control_store(struct device * dev, struct device_attribute *attr,
        return n;
 }
 
-static DEVICE_ATTR(control, 0644, control_show, control_store);
+static DEVICE_ATTR_RW(control);
 
-static ssize_t rtpm_active_time_show(struct device *dev,
+static ssize_t runtime_active_time_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        int ret;
@@ -138,9 +132,9 @@ static ssize_t rtpm_active_time_show(struct device *dev,
        return ret;
 }
 
-static DEVICE_ATTR(runtime_active_time, 0444, rtpm_active_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_active_time);
 
-static ssize_t rtpm_suspended_time_show(struct device *dev,
+static ssize_t runtime_suspended_time_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        int ret;
@@ -152,9 +146,9 @@ static ssize_t rtpm_suspended_time_show(struct device *dev,
        return ret;
 }
 
-static DEVICE_ATTR(runtime_suspended_time, 0444, rtpm_suspended_time_show, NULL);
+static DEVICE_ATTR_RO(runtime_suspended_time);
 
-static ssize_t rtpm_status_show(struct device *dev,
+static ssize_t runtime_status_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        const char *p;
@@ -184,7 +178,7 @@ static ssize_t rtpm_status_show(struct device *dev,
        return sprintf(buf, p);
 }
 
-static DEVICE_ATTR(runtime_status, 0444, rtpm_status_show, NULL);
+static DEVICE_ATTR_RO(runtime_status);
 
 static ssize_t autosuspend_delay_ms_show(struct device *dev,
                struct device_attribute *attr, char *buf)
@@ -211,26 +205,25 @@ static ssize_t autosuspend_delay_ms_store(struct device *dev,
        return n;
 }
 
-static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
-               autosuspend_delay_ms_store);
+static DEVICE_ATTR_RW(autosuspend_delay_ms);
 
-static ssize_t pm_qos_resume_latency_show(struct device *dev,
-                                         struct device_attribute *attr,
-                                         char *buf)
+static ssize_t pm_qos_resume_latency_us_show(struct device *dev,
+                                            struct device_attribute *attr,
+                                            char *buf)
 {
        s32 value = dev_pm_qos_requested_resume_latency(dev);
 
        if (value == 0)
                return sprintf(buf, "n/a\n");
-       else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
+       if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT)
                value = 0;
 
        return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_resume_latency_store(struct device *dev,
-                                          struct device_attribute *attr,
-                                          const char *buf, size_t n)
+static ssize_t pm_qos_resume_latency_us_store(struct device *dev,
+                                             struct device_attribute *attr,
+                                             const char *buf, size_t n)
 {
        s32 value;
        int ret;
@@ -245,7 +238,7 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
 
                if (value == 0)
                        value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
-       } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) {
+       } else if (sysfs_streq(buf, "n/a")) {
                value = 0;
        } else {
                return -EINVAL;
@@ -256,26 +249,25 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
-                  pm_qos_resume_latency_show, pm_qos_resume_latency_store);
+static DEVICE_ATTR_RW(pm_qos_resume_latency_us);
 
-static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
-                                            struct device_attribute *attr,
-                                            char *buf)
+static ssize_t pm_qos_latency_tolerance_us_show(struct device *dev,
+                                               struct device_attribute *attr,
+                                               char *buf)
 {
        s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
 
        if (value < 0)
                return sprintf(buf, "auto\n");
-       else if (value == PM_QOS_LATENCY_ANY)
+       if (value == PM_QOS_LATENCY_ANY)
                return sprintf(buf, "any\n");
 
        return sprintf(buf, "%d\n", value);
 }
 
-static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
-                                             struct device_attribute *attr,
-                                             const char *buf, size_t n)
+static ssize_t pm_qos_latency_tolerance_us_store(struct device *dev,
+                                                struct device_attribute *attr,
+                                                const char *buf, size_t n)
 {
        s32 value;
        int ret;
@@ -285,9 +277,9 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
                if (value < 0)
                        return -EINVAL;
        } else {
-               if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
+               if (sysfs_streq(buf, "auto"))
                        value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
-               else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
+               else if (sysfs_streq(buf, "any"))
                        value = PM_QOS_LATENCY_ANY;
                else
                        return -EINVAL;
@@ -296,8 +288,7 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
-                  pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
+static DEVICE_ATTR_RW(pm_qos_latency_tolerance_us);
 
 static ssize_t pm_qos_no_power_off_show(struct device *dev,
                                        struct device_attribute *attr,
@@ -323,49 +314,39 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
        return ret < 0 ? ret : n;
 }
 
-static DEVICE_ATTR(pm_qos_no_power_off, 0644,
-                  pm_qos_no_power_off_show, pm_qos_no_power_off_store);
+static DEVICE_ATTR_RW(pm_qos_no_power_off);
 
 #ifdef CONFIG_PM_SLEEP
 static const char _enabled[] = "enabled";
 static const char _disabled[] = "disabled";
 
-static ssize_t
-wake_show(struct device * dev, struct device_attribute *attr, char * buf)
+static ssize_t wakeup_show(struct device *dev, struct device_attribute *attr,
+                          char *buf)
 {
        return sprintf(buf, "%s\n", device_can_wakeup(dev)
                ? (device_may_wakeup(dev) ? _enabled : _disabled)
                : "");
 }
 
-static ssize_t
-wake_store(struct device * dev, struct device_attribute *attr,
-          const char * buf, size_t n)
+static ssize_t wakeup_store(struct device *dev, struct device_attribute *attr,
+                           const char *buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
        if (!device_can_wakeup(dev))
                return -EINVAL;
 
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
-       if (len == sizeof _enabled - 1
-                       && strncmp(buf, _enabled, sizeof _enabled - 1) == 0)
+       if (sysfs_streq(buf, _enabled))
                device_set_wakeup_enable(dev, 1);
-       else if (len == sizeof _disabled - 1
-                       && strncmp(buf, _disabled, sizeof _disabled - 1) == 0)
+       else if (sysfs_streq(buf, _disabled))
                device_set_wakeup_enable(dev, 0);
        else
                return -EINVAL;
 
        return n;
 }
 
-static DEVICE_ATTR(wakeup, 0644, wake_show, wake_store);
+static DEVICE_ATTR_RW(wakeup);
 
 static ssize_t wakeup_count_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -379,10 +360,11 @@ static ssize_t wakeup_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_count, 0444, wakeup_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_count);
 
 static ssize_t wakeup_active_count_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -396,11 +378,11 @@ static ssize_t wakeup_active_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active_count, 0444, wakeup_active_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active_count);
 
 static ssize_t wakeup_abort_count_show(struct device *dev,
                                       struct device_attribute *attr,
                                       char *buf)
 {
        unsigned long count = 0;
        bool enabled = false;
@@ -414,7 +396,7 @@ static ssize_t wakeup_abort_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_abort_count, 0444, wakeup_abort_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_abort_count);
 
 static ssize_t wakeup_expire_count_show(struct device *dev,
                                        struct device_attribute *attr,
@@ -432,10 +414,10 @@ static ssize_t wakeup_expire_count_show(struct device *dev,
        return enabled ? sprintf(buf, "%lu\n", count) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_expire_count, 0444, wakeup_expire_count_show, NULL);
+static DEVICE_ATTR_RO(wakeup_expire_count);
 
 static ssize_t wakeup_active_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
 {
        unsigned int active = 0;
        bool enabled = false;
@@ -449,10 +431,11 @@ static ssize_t wakeup_active_show(struct device *dev,
        return enabled ? sprintf(buf, "%u\n", active) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_active, 0444, wakeup_active_show, NULL);
+static DEVICE_ATTR_RO(wakeup_active);
 
-static ssize_t wakeup_total_time_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_total_time_ms_show(struct device *dev,
+                                        struct device_attribute *attr,
+                                        char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -466,10 +449,10 @@ static ssize_t wakeup_total_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_total_time_ms, 0444, wakeup_total_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_total_time_ms);
 
-static ssize_t wakeup_max_time_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_max_time_ms_show(struct device *dev,
+                                      struct device_attribute *attr, char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -483,10 +466,11 @@ static ssize_t wakeup_max_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_max_time_ms, 0444, wakeup_max_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_max_time_ms);
 
-static ssize_t wakeup_last_time_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t wakeup_last_time_ms_show(struct device *dev,
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -500,12 +484,12 @@ static ssize_t wakeup_last_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_last_time_ms, 0444, wakeup_last_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_last_time_ms);
 
 #ifdef CONFIG_PM_AUTOSLEEP
-static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
-                                             struct device_attribute *attr,
-                                             char *buf)
+static ssize_t wakeup_prevent_sleep_time_ms_show(struct device *dev,
+                                                struct device_attribute *attr,
+                                                char *buf)
 {
        s64 msec = 0;
        bool enabled = false;
@@ -519,40 +503,39 @@ static ssize_t wakeup_prevent_sleep_time_show(struct device *dev,
        return enabled ? sprintf(buf, "%lld\n", msec) : sprintf(buf, "\n");
 }
 
-static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
-                  wakeup_prevent_sleep_time_show, NULL);
+static DEVICE_ATTR_RO(wakeup_prevent_sleep_time_ms);
 #endif /* CONFIG_PM_AUTOSLEEP */
 #endif /* CONFIG_PM_SLEEP */
 
 #ifdef CONFIG_PM_ADVANCED_DEBUG
-static ssize_t rtpm_usagecount_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t runtime_usage_show(struct device *dev,
+                                 struct device_attribute *attr, char *buf)
 {
        return sprintf(buf, "%d\n", atomic_read(&dev->power.usage_count));
 }
+static DEVICE_ATTR_RO(runtime_usage);
 
-static ssize_t rtpm_children_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t runtime_active_kids_show(struct device *dev,
+                                       struct device_attribute *attr,
+                                       char *buf)
 {
        return sprintf(buf, "%d\n", dev->power.ignore_children ?
                0 : atomic_read(&dev->power.child_count));
 }
+static DEVICE_ATTR_RO(runtime_active_kids);
 
-static ssize_t rtpm_enabled_show(struct device *dev,
-                               struct device_attribute *attr, char *buf)
+static ssize_t runtime_enabled_show(struct device *dev,
+                                   struct device_attribute *attr, char *buf)
 {
-       if ((dev->power.disable_depth) && (dev->power.runtime_auto == false))
+       if (dev->power.disable_depth && (dev->power.runtime_auto == false))
                return sprintf(buf, "disabled & forbidden\n");
-       else if (dev->power.disable_depth)
+       if (dev->power.disable_depth)
                return sprintf(buf, "disabled\n");
-       else if (dev->power.runtime_auto == false)
+       if (dev->power.runtime_auto == false)
                return sprintf(buf, "forbidden\n");
        return sprintf(buf, "enabled\n");
 }
-
-static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
-static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
-static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
+static DEVICE_ATTR_RO(runtime_enabled);
 
 #ifdef CONFIG_PM_SLEEP
 static ssize_t async_show(struct device *dev, struct device_attribute *attr,
@@ -566,23 +549,16 @@ static ssize_t async_show(struct device *dev, struct device_attribute *attr,
 static ssize_t async_store(struct device *dev, struct device_attribute *attr,
                           const char *buf, size_t n)
 {
-       char *cp;
-       int len = n;
-
-       cp = memchr(buf, '\n', n);
-       if (cp)
-               len = cp - buf;
-       if (len == sizeof _enabled - 1 && strncmp(buf, _enabled, len) == 0)
+       if (sysfs_streq(buf, _enabled))
                device_enable_async_suspend(dev);
-       else if (len == sizeof _disabled - 1 &&
-                strncmp(buf, _disabled, len) == 0)
+       else if (sysfs_streq(buf, _disabled))
                device_disable_async_suspend(dev);
        else
                return -EINVAL;
        return n;
 }
-
-static DEVICE_ATTR(async, 0644, async_show, async_store);
+static DEVICE_ATTR_RW(async);
 
 #endif /* CONFIG_PM_SLEEP */
 #endif /* CONFIG_PM_ADVANCED_DEBUG */

--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -33,7 +33,6 @@ static int dev_pm_attach_wake_irq(struct device *dev, int irq,
                          struct wake_irq *wirq)
 {
        unsigned long flags;
-       int err;
 
        if (!dev || !wirq)
                return -EINVAL;
@@ -45,12 +44,11 @@ static int dev_pm_attach_wake_irq(struct device *dev, int irq,
                return -EEXIST;
        }
 
-       err = device_wakeup_attach_irq(dev, wirq);
-       if (!err)
-               dev->power.wakeirq = wirq;
+       dev->power.wakeirq = wirq;
+       device_wakeup_attach_irq(dev, wirq);
 
        spin_unlock_irqrestore(&dev->power.lock, flags);
-       return err;
+       return 0;
 }
 
 /**

--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -19,6 +19,11 @@
 
 #include "power.h"
 
+#ifndef CONFIG_SUSPEND
+suspend_state_t pm_suspend_target_state;
+#define pm_suspend_target_state        (PM_SUSPEND_ON)
+#endif
+
 /*
  * If set, the suspend/hibernate code will abort transitions to a sleep state
  * if wakeup events are registered during or immediately before the transition.
@@ -268,6 +273,9 @@ int device_wakeup_enable(struct device *dev)
        if (!dev || !dev->power.can_wakeup)
                return -EINVAL;
 
+       if (pm_suspend_target_state != PM_SUSPEND_ON)
+               dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__);
+
        ws = wakeup_source_register(dev_name(dev));
        if (!ws)
                return -ENOMEM;
@@ -291,22 +299,19 @@ EXPORT_SYMBOL_GPL(device_wakeup_enable);
  *
  * Call under the device's power.lock lock.
  */
-int device_wakeup_attach_irq(struct device *dev,
+void device_wakeup_attach_irq(struct device *dev,
                             struct wake_irq *wakeirq)
 {
        struct wakeup_source *ws;
 
        ws = dev->power.wakeup;
-       if (!ws) {
-               dev_err(dev, "forgot to call call device_init_wakeup?\n");
-               return -EINVAL;
-       }
+       if (!ws)
+               return;
 
        if (ws->wakeirq)
-               return -EEXIST;
+               dev_err(dev, "Leftover wakeup IRQ found, overriding\n");
 
        ws->wakeirq = wakeirq;
-       return 0;
 }
 
 /**
@@ -448,9 +453,7 @@ int device_init_wakeup(struct device *dev, bool enable)
                device_set_wakeup_capable(dev, true);
                ret = device_wakeup_enable(dev);
        } else {
-               if (dev->power.can_wakeup)
-                       device_wakeup_disable(dev);
-
+               device_wakeup_disable(dev);
                device_set_wakeup_capable(dev, false);
        }
 
@@ -464,9 +467,6 @@ EXPORT_SYMBOL_GPL(device_init_wakeup);
  */
 int device_set_wakeup_enable(struct device *dev, bool enable)
 {
-       if (!dev || !dev->power.can_wakeup)
-               return -EINVAL;
-
        return enable ? device_wakeup_enable(dev) : device_wakeup_disable(dev);
 }
 EXPORT_SYMBOL_GPL(device_set_wakeup_enable);

--- a/drivers/dma/sh/rcar-dmac.c
+++ b/drivers/dma/sh/rcar-dmac.c
@@ -1615,22 +1615,6 @@ static struct dma_chan *rcar_dmac_of_xlate(struct of_phandle_args *dma_spec,
  * Power management
  */
 
-#ifdef CONFIG_PM_SLEEP
-static int rcar_dmac_sleep_suspend(struct device *dev)
-{
-       /*
-        * TODO: Wait for the current transfer to complete and stop the device.
-        */
-       return 0;
-}
-
-static int rcar_dmac_sleep_resume(struct device *dev)
-{
-       /* TODO: Resume transfers, if any. */
-       return 0;
-}
-#endif
-
 #ifdef CONFIG_PM
 static int rcar_dmac_runtime_suspend(struct device *dev)
 {
@@ -1646,7 +1630,13 @@ static int rcar_dmac_runtime_resume(struct device *dev)
 #endif
 
 static const struct dev_pm_ops rcar_dmac_pm = {
-       SET_SYSTEM_SLEEP_PM_OPS(rcar_dmac_sleep_suspend, rcar_dmac_sleep_resume)
+       /*
+        * TODO for system sleep/resume:
+        *   - Wait for the current transfer to complete and stop the device,
+        *   - Resume transfers, if any.
+        */
+       SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+                                    pm_runtime_force_resume)
        SET_RUNTIME_PM_OPS(rcar_dmac_runtime_suspend, rcar_dmac_runtime_resume,
                           NULL)
 };

--- a/drivers/i2c/busses/i2c-designware-core.h
+++ b/drivers/i2c/busses/i2c-designware-core.h
@@ -280,8 +280,6 @@ struct dw_i2c_dev {
        int                     (*acquire_lock)(struct dw_i2c_dev *dev);
        void                    (*release_lock)(struct dw_i2c_dev *dev);
        bool                    pm_disabled;
-       bool                    suspended;
-       bool                    skip_resume;
        void                    (*disable)(struct dw_i2c_dev *dev);
        void                    (*disable_int)(struct dw_i2c_dev *dev);
        int                     (*init)(struct dw_i2c_dev *dev);

View File

@@ -42,6 +42,7 @@
 #include <linux/reset.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/suspend.h>
 
 #include "i2c-designware-core.h"
@@ -372,6 +373,11 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
 	ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev));
 	adap->dev.of_node = pdev->dev.of_node;
 
+	dev_pm_set_driver_flags(&pdev->dev,
+				DPM_FLAG_SMART_PREPARE |
+				DPM_FLAG_SMART_SUSPEND |
+				DPM_FLAG_LEAVE_SUSPENDED);
+
 	/* The code below assumes runtime PM to be disabled. */
 	WARN_ON(pm_runtime_enabled(&pdev->dev));
@@ -435,12 +441,24 @@ MODULE_DEVICE_TABLE(of, dw_i2c_of_match);
 #ifdef CONFIG_PM_SLEEP
 static int dw_i2c_plat_prepare(struct device *dev)
 {
-	return pm_runtime_suspended(dev);
+	/*
+	 * If the ACPI companion device object is present for this device, it
+	 * may be accessed during suspend and resume of other devices via I2C
+	 * operation regions, so tell the PM core and middle layers to avoid
+	 * skipping system suspend/resume callbacks for it in that case.
+	 */
+	return !has_acpi_companion(dev);
 }
 
 static void dw_i2c_plat_complete(struct device *dev)
 {
-	if (dev->power.direct_complete)
+	/*
+	 * The device can only be in runtime suspend at this point if it has not
+	 * been resumed throughout the ending system suspend/resume cycle, so if
+	 * the platform firmware might mess up with it, request the runtime PM
+	 * framework to resume it.
+	 */
+	if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
 		pm_request_resume(dev);
 }
 #else
@@ -453,16 +471,9 @@ static int dw_i2c_plat_suspend(struct device *dev)
 {
 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
 
-	if (i_dev->suspended) {
-		i_dev->skip_resume = true;
-		return 0;
-	}
-
 	i_dev->disable(i_dev);
 	i2c_dw_plat_prepare_clk(i_dev, false);
 
-	i_dev->suspended = true;
-
 	return 0;
 }
@@ -470,19 +481,9 @@ static int dw_i2c_plat_resume(struct device *dev)
 {
 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
 
-	if (!i_dev->suspended)
-		return 0;
-
-	if (i_dev->skip_resume) {
-		i_dev->skip_resume = false;
-		return 0;
-	}
-
 	i2c_dw_plat_prepare_clk(i_dev, true);
 	i_dev->init(i_dev);
 
-	i_dev->suspended = false;
-
 	return 0;
 }
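
Taken together, the i2c-designware-platdrv hunks replace the driver-local
suspended/skip_resume bookkeeping with flags that let the PM core do the
equivalent tracking. A condensed, hypothetical recap of the opt-in (foo_* names
assumed; the flags and helpers are the ones used above):

    #include <linux/acpi.h>
    #include <linux/pm.h>
    #include <linux/pm_runtime.h>
    #include <linux/suspend.h>

    /* Probe time: tell the PM core which optimizations are safe. */
    static int foo_probe(struct device *dev)
    {
            dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_PREPARE |
                                         DPM_FLAG_SMART_SUSPEND |
                                         DPM_FLAG_LEAVE_SUSPENDED);
            return 0;
    }

    /* ->prepare(): a positive return value permits direct-complete, so
     * return 0 when ACPI operation regions may need the device. */
    static int foo_prepare(struct device *dev)
    {
            return !has_acpi_companion(dev);
    }

    /* ->complete(): resume a still-suspended device only if platform
     * firmware took part in the resume and may have changed its state. */
    static void foo_complete(struct device *dev)
    {
            if (pm_runtime_suspended(dev) && pm_resume_via_firmware())
                    pm_request_resume(dev);
    }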

diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c

@@ -450,6 +450,8 @@ int intel_lpss_probe(struct device *dev,
 	if (ret)
 		goto err_remove_ltr;
 
+	dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
+
 	return 0;
 
 err_remove_ltr:
@@ -478,7 +480,9 @@ EXPORT_SYMBOL_GPL(intel_lpss_remove);
 static int resume_lpss_device(struct device *dev, void *data)
 {
-	pm_runtime_resume(dev);
+	if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
+		pm_runtime_resume(dev);
+
 	return 0;
 }

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c

@@ -699,7 +699,7 @@ static void pci_pm_complete(struct device *dev)
 	pm_generic_complete(dev);
 
 	/* Resume device if platform firmware has put it in reset-power-on */
-	if (dev->power.direct_complete && pm_resume_via_firmware()) {
+	if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) {
 		pci_power_t pre_sleep_state = pci_dev->current_state;
 
 		pci_update_current_state(pci_dev, pci_dev->current_state);
@@ -783,8 +783,10 @@ static int pci_pm_suspend_noirq(struct device *dev)
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 
-	if (dev_pm_smart_suspend_and_suspended(dev))
+	if (dev_pm_smart_suspend_and_suspended(dev)) {
+		dev->power.may_skip_resume = true;
 		return 0;
+	}
 
 	if (pci_has_legacy_pm_support(pci_dev))
 		return pci_legacy_suspend_late(dev, PMSG_SUSPEND);
@@ -838,6 +840,16 @@ static int pci_pm_suspend_noirq(struct device *dev)
 Fixup:
 	pci_fixup_device(pci_fixup_suspend_late, pci_dev);
 
+	/*
+	 * If the target system sleep state is suspend-to-idle, it is sufficient
+	 * to check whether or not the device's wakeup settings are good for
+	 * runtime PM. Otherwise, the pm_resume_via_firmware() check will cause
+	 * pci_pm_complete() to take care of fixing up the device's state
+	 * anyway, if need be.
+	 */
+	dev->power.may_skip_resume = device_may_wakeup(dev) ||
+					!device_can_wakeup(dev);
+
 	return 0;
 }
@@ -847,6 +859,9 @@ static int pci_pm_resume_noirq(struct device *dev)
 	struct device_driver *drv = dev->driver;
 	int error = 0;
 
+	if (dev_pm_may_skip_resume(dev))
+		return 0;
+
 	/*
 	 * Devices with DPM_FLAG_SMART_SUSPEND may be left in runtime suspend
 	 * during system suspend, so update their runtime PM status to "active"
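
The two pci-driver.c hunks above form the bus-level half of the LEAVE_SUSPENDED
handshake: ->suspend_noirq() sets power.may_skip_resume when it leaves the device
suspended (or when the device's wakeup settings make that safe), and
->resume_noirq() returns early if the PM core confirms the device may stay
suspended. Schematically, for a hypothetical bus type following the same
protocol (a sketch, not the PCI code itself):

    static int foo_bus_suspend_noirq(struct device *dev)
    {
            if (dev_pm_smart_suspend_and_suspended(dev)) {
                    dev->power.may_skip_resume = true;
                    return 0;
            }
            /* Otherwise run the driver's ->suspend_noirq() callback. */
            return 0;
    }

    static int foo_bus_resume_noirq(struct device *dev)
    {
            if (dev_pm_may_skip_resume(dev))
                    return 0;
            /* Otherwise reactivate the device and run the driver's callback. */
            return 0;
    }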

diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c

@@ -150,6 +150,9 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
 	pci_save_state(dev);
 
+	dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_SMART_SUSPEND |
+					   DPM_FLAG_LEAVE_SUSPENDED);
+
 	if (pci_bridge_d3_possible(dev)) {
 		/*
 		 * Keep the port resumed 100ms to make sure things like

diff --git a/include/linux/pm.h b/include/linux/pm.h

@@ -556,9 +556,10 @@ struct pm_subsys_data {
  * These flags can be set by device drivers at the probe time. They need not be
  * cleared by the drivers as the driver core will take care of that.
  *
- * NEVER_SKIP: Do not skip system suspend/resume callbacks for the device.
+ * NEVER_SKIP: Do not skip all system suspend/resume callbacks for the device.
  * SMART_PREPARE: Check the return value of the driver's ->prepare callback.
  * SMART_SUSPEND: No need to resume the device from runtime suspend.
+ * LEAVE_SUSPENDED: Avoid resuming the device during system resume if possible.
  *
  * Setting SMART_PREPARE instructs bus types and PM domains which may want
  * system suspend/resume callbacks to be skipped for the device to return 0 from
@@ -572,10 +573,14 @@ struct pm_subsys_data {
  * necessary from the driver's perspective. It also may cause them to skip
  * invocations of the ->suspend_late and ->suspend_noirq callbacks provided by
  * the driver if they decide to leave the device in runtime suspend.
+ *
+ * Setting LEAVE_SUSPENDED informs the PM core and middle-layer code that the
+ * driver prefers the device to be left in suspend after system resume.
  */
 #define DPM_FLAG_NEVER_SKIP		BIT(0)
 #define DPM_FLAG_SMART_PREPARE		BIT(1)
 #define DPM_FLAG_SMART_SUSPEND		BIT(2)
+#define DPM_FLAG_LEAVE_SUSPENDED	BIT(3)
 
 struct dev_pm_info {
 	pm_message_t		power_state;
@@ -597,6 +602,8 @@ struct dev_pm_info {
 	bool			wakeup_path:1;
 	bool			syscore:1;
 	bool			no_pm_callbacks:1;	/* Owned by the PM core */
+	unsigned int		must_resume:1;	/* Owned by the PM core */
+	unsigned int		may_skip_resume:1;	/* Set by subsystems */
 #else
 	unsigned int		should_wakeup:1;
 #endif
@@ -766,6 +773,7 @@ extern int pm_generic_poweroff(struct device *dev);
 extern void pm_generic_complete(struct device *dev);
 
 extern void dev_pm_skip_next_resume_phases(struct device *dev);
+extern bool dev_pm_may_skip_resume(struct device *dev);
 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev);
 
 #else /* !CONFIG_PM_SLEEP */
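
For reference, drivers set these bits once at probe time with
dev_pm_set_driver_flags() and bus or middle-layer code queries them with
dev_pm_test_driver_flags(); a minimal hypothetical pairing (foo_* names assumed):

    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    /* Driver side: opt in once; the driver core clears the flags when
     * the driver is unbound. */
    static int foo_probe(struct device *dev)
    {
            dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND |
                                         DPM_FLAG_LEAVE_SUSPENDED);
            return 0;
    }

    /* Middle-layer side: skip the runtime resume of devices that have
     * declared it unnecessary, as intel-lpss does above. */
    static int foo_resume_child(struct device *dev, void *data)
    {
            if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND))
                    pm_runtime_resume(dev);
            return 0;
    }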

diff --git a/include/linux/pm_wakeup.h b/include/linux/pm_wakeup.h

@@ -88,6 +88,11 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && !!dev->power.wakeup;
 }
 
+static inline void device_set_wakeup_path(struct device *dev)
+{
+	dev->power.wakeup_path = true;
+}
+
 /* drivers/base/power/wakeup.c */
 extern void wakeup_source_prepare(struct wakeup_source *ws, const char *name);
 extern struct wakeup_source *wakeup_source_create(const char *name);
@@ -174,6 +179,8 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && dev->power.should_wakeup;
 }
 
+static inline void device_set_wakeup_path(struct device *dev) {}
+
 static inline void __pm_stay_awake(struct wakeup_source *ws) {}
 static inline void pm_stay_awake(struct device *dev) {}
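
The new device_set_wakeup_path() helper gives drivers a sanctioned way to set
power.wakeup_path, which the PM core then propagates up the device hierarchy
during the "late" suspend phase (see the wakeup_path changes in this merge). A
hedged usage sketch, assuming a hypothetical foo driver:

    #include <linux/pm_wakeup.h>

    static int foo_suspend(struct device *dev)
    {
            /* Mark the device as part of the wakeup path so its ancestors
             * stay functional enough to deliver the wakeup signal. */
            if (device_may_wakeup(dev))
                    device_set_wakeup_path(dev);
            return 0;
    }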