ACPI and power management fixes for 3.18-rc3
Merge tag 'pm+acpi-3.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management fixes from Rafael Wysocki:
 "These are fixes received after my previous pull request plus one that
  has been in the works for quite a while, but its previous version
  caused problems to happen, so it's been deferred till now.

  Fixed are two recent regressions (MFD enumeration and cpufreq-dt), an
  ACPI EC regression introduced in 3.17, a system suspend error code
  path regression introduced in 3.15, an older bug related to recovery
  from a failing resume from hibernation, and a cpufreq-dt driver issue
  related to operating performance points.

  Specifics:

   - Fix a crash on r8a7791/koelsch during resume from system suspend
     caused by a recent cpufreq-dt commit (Geert Uytterhoeven).

   - Fix an MFD enumeration problem introduced by a recent commit adding
     ACPI support to the MFD subsystem, which exposed a weakness in the
     ACPI core that caused ACPI enumeration to be applied to all devices
     associated with one ACPI companion object, although it should be
     used for only one of them (Mika Westerberg).

   - Fix an ACPI EC regression introduced during the 3.17 cycle that
     caused some Samsung laptops to misbehave as a result of a
     workaround targeted at some Acer machines.  This includes a revert
     of a commit that went too far and a quirk for the Acer machines in
     question (Lv Zheng).

   - Fix a regression in the system suspend error code path introduced
     during the 3.15 cycle that caused it to fail to take errors from
     asynchronous execution of "late" suspend callbacks into account
     (Imre Deak).

   - Fix a long-standing bug in the hibernation resume error code path
     that failed to roll back everything correctly on "freeze" callback
     errors and left some devices in a "suspended" state, causing more
     breakage to happen subsequently (Imre Deak).

   - Make the cpufreq-dt driver disable operating performance points
     that are not supported by the voltage regulator connected to the
     CPU voltage plane with acceptable tolerance, instead of constantly
     failing voltage scaling later on (Lucas Stach)."

* tag 'pm+acpi-3.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI / EC: Fix regression due to conflicting firmware behavior between Samsung and Acer
  Revert "ACPI / EC: Add support to disallow QR_EC to be issued before completing previous QR_EC"
  cpufreq: cpufreq-dt: Restore default cpumask_setall(policy->cpus)
  PM / Sleep: fix recovery during resuming from hibernation
  PM / Sleep: fix async suspend_late/freeze_late error handling
  ACPI: Use ACPI companion to match only the first physical device
  cpufreq: cpufreq-dt: disable unsupported OPPs
commit ab01f963de
@@ -126,6 +126,7 @@ static int EC_FLAGS_MSI; /* Out-of-spec MSI controller */
 static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */
 static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */
 static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */
 
 /* --------------------------------------------------------------------------
  *                             Transaction Management
@@ -236,13 +237,8 @@ static bool advance_transaction(struct acpi_ec *ec)
 		}
 		return wakeup;
 	} else {
-		/*
-		 * There is firmware refusing to respond QR_EC when SCI_EVT
-		 * is not set, for which case, we complete the QR_EC
-		 * without issuing it to the firmware.
-		 * https://bugzilla.kernel.org/show_bug.cgi?id=86211
-		 */
-		if (!(status & ACPI_EC_FLAG_SCI) &&
+		if (EC_FLAGS_QUERY_HANDSHAKE &&
+		    !(status & ACPI_EC_FLAG_SCI) &&
 		    (t->command == ACPI_EC_COMMAND_QUERY)) {
 			t->flags |= ACPI_EC_COMMAND_POLL;
 			t->rdata[t->ri++] = 0x00;
@@ -334,13 +330,13 @@ static int acpi_ec_transaction_unlocked(struct acpi_ec *ec,
 	pr_debug("***** Command(%s) started *****\n",
 		 acpi_ec_cmd_string(t->command));
 	start_transaction(ec);
-	spin_unlock_irqrestore(&ec->lock, tmp);
-	ret = ec_poll(ec);
-	spin_lock_irqsave(&ec->lock, tmp);
 	if (ec->curr->command == ACPI_EC_COMMAND_QUERY) {
 		clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags);
 		pr_debug("***** Event stopped *****\n");
 	}
+	spin_unlock_irqrestore(&ec->lock, tmp);
+	ret = ec_poll(ec);
+	spin_lock_irqsave(&ec->lock, tmp);
 	pr_debug("***** Command(%s) stopped *****\n",
 		 acpi_ec_cmd_string(t->command));
 	ec->curr = NULL;
@@ -1011,6 +1007,18 @@ static int ec_enlarge_storm_threshold(const struct dmi_system_id *id)
 	return 0;
 }
 
+/*
+ * Acer EC firmware refuses to respond QR_EC when SCI_EVT is not set, for
+ * which case, we complete the QR_EC without issuing it to the firmware.
+ * https://bugzilla.kernel.org/show_bug.cgi?id=86211
+ */
+static int ec_flag_query_handshake(const struct dmi_system_id *id)
+{
+	pr_debug("Detected the EC firmware requiring QR_EC issued when SCI_EVT set\n");
+	EC_FLAGS_QUERY_HANDSHAKE = 1;
+	return 0;
+}
+
 /*
  * On some hardware it is necessary to clear events accumulated by the EC during
  * sleep. These ECs stop reporting GPEs until they are manually polled, if too
@@ -1085,6 +1093,9 @@ static struct dmi_system_id ec_dmi_table[] __initdata = {
 	{
 	ec_clear_on_resume, "Samsung hardware", {
 	DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
+	{
+	ec_flag_query_handshake, "Acer hardware", {
+	DMI_MATCH(DMI_SYS_VENDOR, "Acer"), }, NULL},
 	{},
 };
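For context: the two DMI entries above enable the new Acer-only handshake quirk alongside the existing Samsung clear-on-resume quirk. In the kernel such a table is handed to dmi_check_system(), which runs the callback of every matching entry. A rough, self-contained illustration of that table-plus-callback pattern follows; all names, vendor strings and the matching logic are invented for the sketch, and it is ordinary userspace C rather than kernel code:

    #include <stdio.h>
    #include <string.h>

    /* Quirk flags flipped by matching table entries (mirrors the EC_FLAGS_* idea). */
    static int quirk_clear_on_resume;
    static int quirk_query_handshake;

    struct quirk_entry {
        int (*callback)(void);   /* runs when the vendor string matches */
        const char *ident;       /* human-readable quirk name */
        const char *sys_vendor;  /* vendor substring to look for */
    };

    static int set_clear_on_resume(void) { quirk_clear_on_resume = 1; return 0; }
    static int set_query_handshake(void) { quirk_query_handshake = 1; return 0; }

    static const struct quirk_entry quirk_table[] = {
        { set_clear_on_resume, "Samsung hardware", "SAMSUNG ELECTRONICS CO., LTD." },
        { set_query_handshake, "Acer hardware",    "Acer" },
        { NULL, NULL, NULL },    /* terminator, like the {} entry in the kernel table */
    };

    /* Walk the table and fire the callback of every entry whose vendor matches. */
    static void apply_quirks(const char *sys_vendor)
    {
        for (const struct quirk_entry *e = quirk_table; e->callback; e++)
            if (strstr(sys_vendor, e->sys_vendor)) {
                printf("applying quirk: %s\n", e->ident);
                e->callback();
            }
    }

    int main(void)
    {
        apply_quirks("Acer");    /* pretend DMI reported this system vendor */
        printf("query_handshake=%d clear_on_resume=%d\n",
               quirk_query_handshake, quirk_clear_on_resume);
        return 0;
    }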
@@ -141,6 +141,53 @@ static int create_modalias(struct acpi_device *acpi_dev, char *modalias,
 	return len;
 }
 
+/*
+ * acpi_companion_match() - Can we match via ACPI companion device
+ * @dev: Device in question
+ *
+ * Check if the given device has an ACPI companion and if that companion has
+ * a valid list of PNP IDs, and if the device is the first (primary) physical
+ * device associated with it.
+ *
+ * If multiple physical devices are attached to a single ACPI companion, we need
+ * to be careful. The usage scenario for this kind of relationship is that all
+ * of the physical devices in question use resources provided by the ACPI
+ * companion. A typical case is an MFD device where all the sub-devices share
+ * the parent's ACPI companion. In such cases we can only allow the primary
+ * (first) physical device to be matched with the help of the companion's PNP
+ * IDs.
+ *
+ * Additional physical devices sharing the ACPI companion can still use
+ * resources available from it but they will be matched normally using functions
+ * provided by their bus types (and analogously for their modalias).
+ */
+static bool acpi_companion_match(const struct device *dev)
+{
+	struct acpi_device *adev;
+	bool ret;
+
+	adev = ACPI_COMPANION(dev);
+	if (!adev)
+		return false;
+
+	if (list_empty(&adev->pnp.ids))
+		return false;
+
+	mutex_lock(&adev->physical_node_lock);
+	if (list_empty(&adev->physical_node_list)) {
+		ret = false;
+	} else {
+		const struct acpi_device_physical_node *node;
+
+		node = list_first_entry(&adev->physical_node_list,
+					struct acpi_device_physical_node, node);
+		ret = node->dev == dev;
+	}
+	mutex_unlock(&adev->physical_node_lock);
+
+	return ret;
+}
+
 /*
  * Creates uevent modalias field for ACPI enumerated devices.
  * Because the other buses does not support ACPI HIDs & CIDs.
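The helper added above encodes the rule that only the first physical device registered against an ACPI companion is matched through the companion's PNP IDs; any further devices sharing that companion (the MFD sub-device case from the pull request text) fall back to their bus's own matching. A minimal userspace model of just that rule, using simplified stand-in structures rather than the ACPI core types:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_NODES 8

    /* Simplified stand-in for an ACPI companion and its physical node list. */
    struct companion {
        bool has_ids;                          /* companion exposes PNP IDs */
        const char *physical_nodes[MAX_NODES]; /* devices in registration order */
        int node_count;
    };

    /*
     * Mirror of the policy in acpi_companion_match(): a device may be matched
     * via its companion only if the companion has IDs and the device is the
     * first physical node attached to it.
     */
    static bool companion_match(const struct companion *adev, const char *dev)
    {
        if (!adev || !adev->has_ids || adev->node_count == 0)
            return false;
        return adev->physical_nodes[0] == dev;
    }

    int main(void)
    {
        const char *mfd_parent = "mfd-parent";
        const char *mfd_cell = "mfd-cell0";
        struct companion adev = {
            .has_ids = true,
            .physical_nodes = { mfd_parent, mfd_cell },
            .node_count = 2,
        };

        /* Only the parent (first node) is ACPI-matched; the sub-device is not. */
        printf("parent matched: %d\n", companion_match(&adev, mfd_parent));
        printf("cell matched:   %d\n", companion_match(&adev, mfd_cell));
        return 0;
    }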
@@ -149,20 +196,14 @@ static int create_modalias(struct acpi_device *acpi_dev, char *modalias,
  */
 int acpi_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)
 {
-	struct acpi_device *acpi_dev;
 	int len;
 
-	acpi_dev = ACPI_COMPANION(dev);
-	if (!acpi_dev)
-		return -ENODEV;
-
-	/* Fall back to bus specific way of modalias exporting */
-	if (list_empty(&acpi_dev->pnp.ids))
+	if (!acpi_companion_match(dev))
 		return -ENODEV;
 
 	if (add_uevent_var(env, "MODALIAS="))
 		return -ENOMEM;
-	len = create_modalias(acpi_dev, &env->buf[env->buflen - 1],
+	len = create_modalias(ACPI_COMPANION(dev), &env->buf[env->buflen - 1],
 			      sizeof(env->buf) - env->buflen);
 	if (len <= 0)
 		return len;
@@ -179,18 +220,12 @@ EXPORT_SYMBOL_GPL(acpi_device_uevent_modalias);
  */
 int acpi_device_modalias(struct device *dev, char *buf, int size)
 {
-	struct acpi_device *acpi_dev;
 	int len;
 
-	acpi_dev = ACPI_COMPANION(dev);
-	if (!acpi_dev)
+	if (!acpi_companion_match(dev))
 		return -ENODEV;
 
-	/* Fall back to bus specific way of modalias exporting */
-	if (list_empty(&acpi_dev->pnp.ids))
-		return -ENODEV;
-
-	len = create_modalias(acpi_dev, buf, size -1);
+	len = create_modalias(ACPI_COMPANION(dev), buf, size -1);
 	if (len <= 0)
 		return len;
 	buf[len++] = '\n';
@@ -853,6 +888,9 @@ const struct acpi_device_id *acpi_match_device(const struct acpi_device_id *ids,
 	if (!ids || !handle || acpi_bus_get_device(handle, &adev))
 		return NULL;
 
+	if (!acpi_companion_match(dev))
+		return NULL;
+
 	return __acpi_match_device(adev, ids);
 }
 EXPORT_SYMBOL_GPL(acpi_match_device);
@@ -1266,6 +1266,8 @@ int dpm_suspend_late(pm_message_t state)
 	}
 	mutex_unlock(&dpm_list_mtx);
 	async_synchronize_full();
+	if (!error)
+		error = async_error;
 	if (error) {
 		suspend_stats.failed_suspend_late++;
 		dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
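The two added lines are the substance of the suspend_late/freeze_late fix: devices suspended asynchronously report failures through the shared async_error variable rather than through the return value of the synchronous loop, so that value has to be folded into error once async_synchronize_full() has finished waiting. A toy, single-threaded model of the pattern (stand-in functions only, no real async execution):

    #include <stdio.h>

    static int async_error;          /* set by "async" callbacks when they fail */

    /* Pretend this device is suspended from an async work item and fails. */
    static void async_suspend_late(void)
    {
        async_error = -16;           /* e.g. -EBUSY reported by a late callback */
    }

    /* Pretend this device is suspended synchronously and succeeds. */
    static int sync_suspend_late(void)
    {
        return 0;
    }

    static int dpm_suspend_late_model(void)
    {
        int error = sync_suspend_late();

        async_suspend_late();        /* imagine this was queued and ran in parallel */
        /* async_synchronize_full() would wait for all async work here */

        if (!error)                  /* the fix: pick up failures from async devices */
            error = async_error;
        return error;
    }

    int main(void)
    {
        printf("suspend_late -> %d\n", dpm_suspend_late_model());
        return 0;
    }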
@@ -187,6 +187,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	struct device *cpu_dev;
 	struct regulator *cpu_reg;
 	struct clk *cpu_clk;
+	unsigned long min_uV = ~0, max_uV = 0;
 	unsigned int transition_latency;
 	int ret;
@@ -206,16 +207,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	/* OPPs might be populated at runtime, don't check for error here */
 	of_init_opp_table(cpu_dev);
 
-	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
-	if (ret) {
-		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
-		goto out_put_node;
-	}
-
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv) {
 		ret = -ENOMEM;
-		goto out_free_table;
+		goto out_put_node;
 	}
 
 	of_property_read_u32(np, "voltage-tolerance", &priv->voltage_tolerance);
@@ -224,30 +219,51 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	transition_latency = CPUFREQ_ETERNAL;
 
 	if (!IS_ERR(cpu_reg)) {
-		struct dev_pm_opp *opp;
-		unsigned long min_uV, max_uV;
-		int i;
+		unsigned long opp_freq = 0;
 
 		/*
-		 * OPP is maintained in order of increasing frequency, and
-		 * freq_table initialised from OPP is therefore sorted in the
-		 * same order.
+		 * Disable any OPPs where the connected regulator isn't able to
+		 * provide the specified voltage and record minimum and maximum
+		 * voltage levels.
 		 */
-		for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
-			;
-		rcu_read_lock();
-		opp = dev_pm_opp_find_freq_exact(cpu_dev,
-				freq_table[0].frequency * 1000, true);
-		min_uV = dev_pm_opp_get_voltage(opp);
-		opp = dev_pm_opp_find_freq_exact(cpu_dev,
-				freq_table[i-1].frequency * 1000, true);
-		max_uV = dev_pm_opp_get_voltage(opp);
-		rcu_read_unlock();
+		while (1) {
+			struct dev_pm_opp *opp;
+			unsigned long opp_uV, tol_uV;
+
+			rcu_read_lock();
+			opp = dev_pm_opp_find_freq_ceil(cpu_dev, &opp_freq);
+			if (IS_ERR(opp)) {
+				rcu_read_unlock();
+				break;
+			}
+			opp_uV = dev_pm_opp_get_voltage(opp);
+			rcu_read_unlock();
+
+			tol_uV = opp_uV * priv->voltage_tolerance / 100;
+			if (regulator_is_supported_voltage(cpu_reg, opp_uV,
+							   opp_uV + tol_uV)) {
+				if (opp_uV < min_uV)
+					min_uV = opp_uV;
+				if (opp_uV > max_uV)
+					max_uV = opp_uV;
+			} else {
+				dev_pm_opp_disable(cpu_dev, opp_freq);
+			}
+
+			opp_freq++;
+		}
+
 		ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
 		if (ret > 0)
 			transition_latency += ret * 1000;
 	}
 
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+	if (ret) {
+		pr_err("failed to init cpufreq table: %d\n", ret);
+		goto out_free_priv;
+	}
+
 	/*
 	 * For now, just loading the cooling device;
 	 * thermal DT code takes care of matching them.
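The new while (1) loop above walks the OPPs in ascending frequency order, disables every OPP whose voltage (plus the DT "voltage-tolerance" percentage) the regulator cannot provide, and records the minimum and maximum voltages that remain so regulator_set_voltage_time() is only asked about usable values. A compact userspace model of that filtering step, with fabricated OPP data and a deliberately simplified stand-in for regulator_is_supported_voltage():

    #include <stdbool.h>
    #include <stdio.h>

    struct opp {
        unsigned long freq_khz;
        unsigned long volt_uv;
        bool enabled;
    };

    /* Simplified rail model: pretend it can only deliver 900000..1200000 uV. */
    static bool regulator_supports(unsigned long lo_uv, unsigned long hi_uv)
    {
        return lo_uv >= 900000 && hi_uv <= 1200000;
    }

    int main(void)
    {
        struct opp opps[] = {              /* ascending frequency, like the OPP list */
            {  500000,  900000, true },
            { 1000000, 1100000, true },
            { 1500000, 1350000, true },    /* needs more than the rail can do */
        };
        unsigned long tolerance_pct = 5;   /* "voltage-tolerance" from DT */
        unsigned long min_uv = ~0UL, max_uv = 0;

        for (unsigned int i = 0; i < sizeof(opps) / sizeof(opps[0]); i++) {
            unsigned long uv = opps[i].volt_uv;
            unsigned long tol = uv * tolerance_pct / 100;

            if (regulator_supports(uv, uv + tol)) {
                if (uv < min_uv)
                    min_uv = uv;
                if (uv > max_uv)
                    max_uv = uv;
            } else {
                opps[i].enabled = false;   /* dev_pm_opp_disable() in the kernel */
            }
            printf("%lu kHz @ %lu uV -> %s\n", opps[i].freq_khz, uv,
                   opps[i].enabled ? "kept" : "disabled");
        }
        printf("regulator range used: %lu..%lu uV\n", min_uv, max_uv);
        return 0;
    }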
@@ -277,7 +293,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.transition_latency = transition_latency;
 
 	pd = cpufreq_get_driver_data();
-	if (pd && !pd->independent_clocks)
+	if (!pd || !pd->independent_clocks)
 		cpumask_setall(policy->cpus);
 
 	of_node_put(np);
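The one-line change above is the koelsch fix from the pull request text: with the old test, registering the driver without platform data (pd == NULL) silently skipped the "all CPUs share one clock" default. The intended behaviour is simply "no platform data means the shared-clock default applies", as this small stand-alone check illustrates (the struct here is a stand-in, not the real cpufreq-dt platform data):

    #include <stdbool.h>
    #include <stdio.h>

    struct platform_data { bool independent_clocks; };

    /* After the fix: missing platform data falls back to the shared-clock default. */
    static bool cpus_share_policy(const struct platform_data *pd)
    {
        return !pd || !pd->independent_clocks;
    }

    int main(void)
    {
        struct platform_data pd = { .independent_clocks = true };

        printf("no pdata:           share=%d\n", cpus_share_policy(NULL));
        printf("independent clocks: share=%d\n", cpus_share_policy(&pd));
        return 0;
    }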
@@ -286,9 +302,9 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 
 out_cooling_unregister:
 	cpufreq_cooling_unregister(priv->cdev);
-	kfree(priv);
-out_free_table:
 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+out_free_priv:
+	kfree(priv);
 out_put_node:
 	of_node_put(np);
 out_put_reg_clk:
@@ -502,8 +502,14 @@ int hibernation_restore(int platform_mode)
 	error = dpm_suspend_start(PMSG_QUIESCE);
 	if (!error) {
 		error = resume_target_kernel(platform_mode);
-		dpm_resume_end(PMSG_RECOVER);
+		/*
+		 * The above should either succeed and jump to the new kernel,
+		 * or return with an error. Otherwise things are just
+		 * undefined, so let's be paranoid.
+		 */
+		BUG_ON(!error);
 	}
+	dpm_resume_end(PMSG_RECOVER);
 	pm_restore_gfp_mask();
 	resume_console();
 	pm_restore_console();
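The reshuffle above makes the rollback unconditional: if dpm_suspend_start() fails partway through quiescing devices, the ones it already suspended still need dpm_resume_end() to be brought back, which the old code skipped; and if resume_target_kernel() really succeeds, control never returns here, hence the added BUG_ON(!error). A toy model of that partial-failure rollback (invented three-device list, plain C):

    #include <stdbool.h>
    #include <stdio.h>

    #define NDEV 3

    static bool suspended[NDEV];

    /* Quiesce devices in order; device 1 refuses, so we stop with an error. */
    static int suspend_start(void)
    {
        for (int i = 0; i < NDEV; i++) {
            if (i == 1)
                return -1;               /* a "freeze" callback failed */
            suspended[i] = true;
        }
        return 0;
    }

    /* Roll back whatever actually got suspended. */
    static void resume_end(void)
    {
        for (int i = 0; i < NDEV; i++)
            suspended[i] = false;
    }

    int main(void)
    {
        int error = suspend_start();

        /*
         * Old flow: resume_end() ran only when suspend_start() succeeded,
         * leaving device 0 stuck in the suspended state on error.
         * Fixed flow: always roll back, as below.
         */
        resume_end();
        printf("error=%d, dev0 still suspended=%d\n", error, suspended[0]);
        return 0;
    }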