Power management updates for 5.11-rc1
Merge tag 'pm-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These update cpufreq (core and drivers), cpuidle (polling state
  implementation and the PSCI driver), the OPP (operating performance
  points) framework, devfreq (core and drivers), the power capping RAPL
  (Running Average Power Limit) driver, the Energy Model support, the
  generic power domains (genpd) framework, the ACPI device power
  management, the core system-wide suspend code and power management
  utilities.

  Specifics:

   - Use local_clock() instead of jiffies in the cpufreq statistics to improve accuracy (Viresh Kumar).
   - Fix up OPP usage in the cpufreq-dt and qcom-cpufreq-nvmem cpufreq drivers (Viresh Kumar).
   - Clean up the cpufreq core, the intel_pstate driver and the schedutil cpufreq governor (Rafael Wysocki).
   - Fix up error code paths in the sti-cpufreq and mediatek cpufreq drivers (Yangtao Li, Qinglang Miao).
   - Fix cpufreq_online() to return error codes instead of success (0) in all cases when it fails (Wang ShaoBo).
   - Add mt8167 support to the mediatek cpufreq driver and blacklist mt8516 in the cpufreq-dt-platdev driver (Fabien Parent).
   - Modify the tegra194 cpufreq driver to always return values from the frequency table as the current frequency and clean up that driver (Sumit Gupta, Jon Hunter).
   - Modify the arm_scmi cpufreq driver to allow it to discover the power scale present in the performance protocol and provide this information to the Energy Model (Lukasz Luba).
   - Add missing MODULE_DEVICE_TABLE to several cpufreq drivers (Pali Rohár).
   - Clean up the CPPC cpufreq driver (Ionela Voinescu).
   - Fix NVMEM_IMX_OCOTP dependency in the imx cpufreq driver (Arnd Bergmann).
   - Rework the polling interval selection for the polling state in cpuidle (Mel Gorman).
   - Enable suspend-to-idle for PSCI OSI mode in the PSCI cpuidle driver (Ulf Hansson).
   - Modify the OPP framework to support empty (node-less) OPP tables in DT for passing dependency information (Nicola Mazzucato).
   - Fix potential lockdep issue in the OPP core and clean up the OPP core (Viresh Kumar).
   - Modify dev_pm_opp_put_regulators() to accept a NULL argument and update its users accordingly (Viresh Kumar).
   - Add frequency changes tracepoint to devfreq (Matthias Kaehlcke).
   - Add support for governor feature flags to devfreq, make devfreq sysfs file permissions depend on the governor and clean up the devfreq core (Chanwoo Choi).
   - Clean up the tegra20 devfreq driver and deprecate it to allow another driver based on EMC_STAT to be used instead of it (Dmitry Osipenko).
   - Add interconnect support to the tegra30 devfreq driver, allow it to take the interconnect and OPP information from DT and clean it up (Dmitry Osipenko).
   - Add interconnect support to the exynos-bus devfreq driver along with interconnect properties documentation (Sylwester Nawrocki).
   - Add support for AMD Fam17h and Fam19h processors to the RAPL power capping driver (Victor Ding, Kim Phillips).
   - Fix handling of overly long constraint names in the powercap framework (Lukasz Luba).
   - Fix the wakeup configuration handling for bridges in the ACPI device power management core (Rafael Wysocki).
   - Add support for using an abstract scale for power units in the Energy Model (EM) and document it (Lukasz Luba).
   - Add em_cpu_energy() micro-optimization to the EM (Pavankumar Kondeti).
   - Modify the generic power domains (genpd) framework to support suspend-to-idle (Ulf Hansson).
   - Fix creation of debugfs nodes in genpd (Thierry Strudel).
   - Clean up genpd (Lina Iyer).
   - Clean up the core system-wide suspend code and make it print driver flags for devices with debug enabled (Alex Shi, Patrice Chotard, Chen Yu).
   - Modify the ACPI system reboot code to make it prepare for system power off to avoid confusing the platform firmware (Kai-Heng Feng).
   - Update the pm-graph (multiple changes, mostly usability-related) and cpupower (online and offline CPU information support) PM utilities (Todd Brandt, Brahadambal Srinivasan)"

* tag 'pm-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (86 commits)
  cpufreq: Fix cpufreq_online() return value on errors
  cpufreq: Fix up several kerneldoc comments
  cpufreq: stats: Use local_clock() instead of jiffies
  cpufreq: schedutil: Simplify sugov_update_next_freq()
  cpufreq: intel_pstate: Simplify intel_cpufreq_update_pstate()
  PM: domains: create debugfs nodes when adding power domains
  opp: of: Allow empty opp-table with opp-shared
  dt-bindings: opp: Allow empty OPP tables
  media: venus: dev_pm_opp_put_*() accepts NULL argument
  drm/panfrost: dev_pm_opp_put_*() accepts NULL argument
  drm/lima: dev_pm_opp_put_*() accepts NULL argument
  PM / devfreq: exynos: dev_pm_opp_put_*() accepts NULL argument
  cpufreq: qcom-cpufreq-nvmem: dev_pm_opp_put_*() accepts NULL argument
  cpufreq: dt: dev_pm_opp_put_regulators() accepts NULL argument
  opp: Allow dev_pm_opp_put_*() APIs to accept NULL opp_table
  opp: Don't create an OPP table from dev_pm_opp_get_opp_table()
  cpufreq: dt: Don't (ab)use dev_pm_opp_get_opp_table() to create OPP table
  opp: Reduce the size of critical section in _opp_kref_release()
  PM / EM: Micro optimization in em_cpu_energy
  cpufreq: arm_scmi: Discover the power scale in performance protocol
  ...
commit b4ec805464
@@ -37,20 +37,6 @@ Description:
        The /sys/class/devfreq/.../target_freq shows the next governor
        predicted target frequency of the corresponding devfreq object.

What: /sys/class/devfreq/.../polling_interval
Date: September 2011
Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
Description:
        The /sys/class/devfreq/.../polling_interval shows and sets
        the requested polling interval of the corresponding devfreq
        object. The values are represented in ms. If the value is
        less than 1 jiffy, it is considered to be 0, which means
        no polling. This value is meaningless if the governor is
        not polling. If the governor is not using
        devfreq-provided central polling
        (/sys/class/devfreq/.../central_polling is 0), this value
        may be useless.

What: /sys/class/devfreq/.../trans_stat
Date: October 2012
Contact: MyungJoo Ham <myungjoo.ham@samsung.com>

@@ -66,14 +52,6 @@ Description:

        echo 0 > /sys/class/devfreq/.../trans_stat

What: /sys/class/devfreq/.../userspace/set_freq
Date: September 2011
Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
Description:
        The /sys/class/devfreq/.../userspace/set_freq shows and
        sets the requested frequency for the devfreq object if
        userspace governor is in effect.

What: /sys/class/devfreq/.../available_frequencies
Date: October 2012
Contact: Nishanth Menon <nm@ti.com>

@@ -110,6 +88,35 @@ Description:
        The max_freq overrides min_freq because max_freq may be
        used to throttle devices to avoid overheating.

What: /sys/class/devfreq/.../polling_interval
Date: September 2011
Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
Description:
        The /sys/class/devfreq/.../polling_interval shows and sets
        the requested polling interval of the corresponding devfreq
        object. The values are represented in ms. If the value is
        less than 1 jiffy, it is considered to be 0, which means
        no polling. This value is meaningless if the governor is
        not polling. If the governor is not using
        devfreq-provided central polling
        (/sys/class/devfreq/.../central_polling is 0), this value
        may be useless.

        A list of governors that support the node:
        - simple_ondemand
        - tegra_actmon

What: /sys/class/devfreq/.../userspace/set_freq
Date: September 2011
Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
Description:
        The /sys/class/devfreq/.../userspace/set_freq shows and
        sets the requested frequency for the devfreq object if
        userspace governor is in effect.

        A list of governors that support the node:
        - userspace

What: /sys/class/devfreq/.../timer
Date: July 2020
Contact: Chanwoo Choi <cw00.choi@samsung.com>

@@ -122,3 +129,6 @@ Description:

        echo deferrable > /sys/class/devfreq/.../timer
        echo delayed > /sys/class/devfreq/.../timer

        A list of governors that support the node:
        - simple_ondemand
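The nodes documented above are ordinary sysfs attributes, so they can be driven from userspace with plain file I/O. A minimal sketch (the device name "soc:gpu" is hypothetical; substitute a real entry from /sys/class/devfreq/):

    /*
     * Minimal sketch: request a devfreq polling interval and reset the
     * transition statistics via sysfs. Paths are illustrative.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int write_attr(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);

        if (fd < 0)
            return -1;
        if (write(fd, val, strlen(val)) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        /* Request 20 ms polling; values below 1 jiffy mean "no polling". */
        if (write_attr("/sys/class/devfreq/soc:gpu/polling_interval", "20"))
            perror("polling_interval");

        /* Writing 0 clears the trans_stat counters, as documented above. */
        if (write_attr("/sys/class/devfreq/soc:gpu/trans_stat", "0"))
            perror("trans_stat");

        return 0;
    }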
@@ -51,6 +51,19 @@ Optional properties only for parent bus device:
- exynos,saturation-ratio: the percentage value which is used to calibrate
  the performance count against total cycle count.

Optional properties for the interconnect functionality (QoS frequency
constraints):
- #interconnect-cells: should be 0.
- interconnects: as documented in ../interconnect.txt, describes a path at the
  higher level interconnects used by this interconnect provider.
  If this interconnect provider is directly linked to a top level interconnect
  provider the property contains only one phandle. The provider extends
  the interconnect graph by linking its node to a node registered by provider
  pointed to by first phandle in the 'interconnects' property.

- samsung,data-clock-ratio: ratio of the data throughput in B/s to minimum data
  clock frequency in Hz, default value is 8 when this property is missing.

Detailed correlation between sub-blocks and power line according to Exynos SoC:
- In case of Exynos3250, there are two power line as following:
  VDD_MIF |--- DMC

@@ -135,7 +148,7 @@ Detailed correlation between sub-blocks and power line according to Exynos SoC:
    |--- PERIC (Fixed clock rate)
    |--- FSYS (Fixed clock rate)

Example1:
Example 1:
    Show the AXI buses of Exynos3250 SoC. Exynos3250 divides the buses to
    power line (regulator). The MIF (Memory Interface) AXI bus is used to
    transfer data between DRAM and CPU and uses the VDD_MIF regulator.

@@ -184,7 +197,7 @@ Example1:
    |L5 |200000 |200000 |400000 |300000 | ||1000000 |
    ----------------------------------------------------------

Example2 :
Example 2:
    The bus of DMC (Dynamic Memory Controller) block in exynos3250.dtsi
    is listed below:

@@ -419,3 +432,57 @@ Example2 :
        devfreq = <&bus_leftbus>;
        status = "okay";
    };

Example 3:
    An interconnect path "bus_display -- bus_leftbus -- bus_dmc" on
    Exynos4412 SoC with video mixer as an interconnect consumer device.

    soc {
        bus_dmc: bus_dmc {
            compatible = "samsung,exynos-bus";
            clocks = <&clock CLK_DIV_DMC>;
            clock-names = "bus";
            operating-points-v2 = <&bus_dmc_opp_table>;
            samsung,data-clock-ratio = <4>;
            #interconnect-cells = <0>;
        };

        bus_leftbus: bus_leftbus {
            compatible = "samsung,exynos-bus";
            clocks = <&clock CLK_DIV_GDL>;
            clock-names = "bus";
            operating-points-v2 = <&bus_leftbus_opp_table>;
            #interconnect-cells = <0>;
            interconnects = <&bus_dmc>;
        };

        bus_display: bus_display {
            compatible = "samsung,exynos-bus";
            clocks = <&clock CLK_ACLK160>;
            clock-names = "bus";
            operating-points-v2 = <&bus_display_opp_table>;
            #interconnect-cells = <0>;
            interconnects = <&bus_leftbus &bus_dmc>;
        };

        bus_dmc_opp_table: opp_table1 {
            compatible = "operating-points-v2";
            /* ... */
        };

        bus_leftbus_opp_table: opp_table3 {
            compatible = "operating-points-v2";
            /* ... */
        };

        bus_display_opp_table: opp_table4 {
            compatible = "operating-points-v2";
            /* ... */
        };

        &mixer {
            compatible = "samsung,exynos4212-mixer";
            interconnects = <&bus_display &bus_dmc>;
            /* ... */
        };
    };
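For context, a consumer with an 'interconnects' property like the mixer in Example 3 would typically request bandwidth through the kernel's interconnect API. A hedged sketch of the consumer side (the bandwidth numbers are invented for illustration):

    /*
     * Sketch of an interconnect consumer matching Example 3; not taken
     * from the exynos mixer driver itself.
     */
    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/interconnect.h>

    static struct icc_path *mixer_path;

    static int mixer_request_bandwidth(struct device *dev)
    {
        int ret;

        /* NULL name selects the first path in 'interconnects'. */
        mixer_path = of_icc_get(dev, NULL);
        if (IS_ERR(mixer_path))
            return PTR_ERR(mixer_path);

        /* 250 MB/s average, 500 MB/s peak; illustrative values only. */
        ret = icc_set_bw(mixer_path, MBps_to_icc(250), MBps_to_icc(500));
        if (ret) {
            icc_put(mixer_path);
            mixer_path = NULL;
        }
        return ret;
    }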
@@ -65,7 +65,9 @@ Required properties:

- OPP nodes: One or more OPP nodes describing voltage-current-frequency
  combinations. Their name isn't significant but their phandle can be used to
  reference an OPP.
  reference an OPP. These are mandatory except for the case where the OPP table
  is present only to indicate dependency between devices using the opp-shared
  property.

Optional properties:
- opp-shared: Indicates that device nodes using this OPP Table Node's phandle

@@ -568,3 +570,53 @@ Example 6: opp-microvolt-<name>, opp-microamp-<name>:
        };
    };
};

Example 7: Single cluster Quad-core ARM cortex A53, OPP points from firmware,
distinct clock controls but two sets of clock/voltage/current lines.

/ {
    cpus {
        #address-cells = <2>;
        #size-cells = <0>;

        cpu@0 {
            compatible = "arm,cortex-a53";
            reg = <0x0 0x100>;
            next-level-cache = <&A53_L2>;
            clocks = <&dvfs_controller 0>;
            operating-points-v2 = <&cpu_opp0_table>;
        };
        cpu@1 {
            compatible = "arm,cortex-a53";
            reg = <0x0 0x101>;
            next-level-cache = <&A53_L2>;
            clocks = <&dvfs_controller 1>;
            operating-points-v2 = <&cpu_opp0_table>;
        };
        cpu@2 {
            compatible = "arm,cortex-a53";
            reg = <0x0 0x102>;
            next-level-cache = <&A53_L2>;
            clocks = <&dvfs_controller 2>;
            operating-points-v2 = <&cpu_opp1_table>;
        };
        cpu@3 {
            compatible = "arm,cortex-a53";
            reg = <0x0 0x103>;
            next-level-cache = <&A53_L2>;
            clocks = <&dvfs_controller 3>;
            operating-points-v2 = <&cpu_opp1_table>;
        };

    };

    cpu_opp0_table: opp0_table {
        compatible = "operating-points-v2";
        opp-shared;
    };

    cpu_opp1_table: opp1_table {
        compatible = "operating-points-v2";
        opp-shared;
    };
};
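One consumer of the opp-shared information (including the new empty-table case) is a cpufreq driver asking the OPP core which CPUs share a table. A rough sketch, assuming the standard OPP helper:

    #include <linux/cpumask.h>
    #include <linux/device.h>
    #include <linux/pm_opp.h>

    /* foo_ prefix is a placeholder; not a driver from this series. */
    static int foo_mark_shared_cpus(struct device *cpu_dev,
                                    struct cpumask *shared_cpus)
    {
        /*
         * Fills @shared_cpus with the CPUs whose operating-points-v2
         * tables carry opp-shared; an empty table like Example 7's
         * still conveys this sharing/dependency information.
         */
        return dev_pm_opp_of_get_sharing_cpus(cpu_dev, shared_cpus);
    }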
@@ -71,7 +71,9 @@ to the speed-grade of the silicon. `sustainable_power` is therefore
simply an estimate, and may be tuned to affect the aggressiveness of
the thermal ramp. For reference, the sustainable power of a 4" phone
is typically 2000mW, while on a 10" tablet is around 4500mW (may vary
depending on screen size).
depending on screen size). It is possible to have the power value
expressed in an abstract scale. The sustained power should be aligned
to the scale used by the related cooling devices.

If you are using device tree, do add it as a property of the
thermal-zone. For example::

@@ -269,3 +271,11 @@ won't be very good. Note that this is not particular to this
governor, step-wise will also misbehave if you call its throttle()
faster than the normal thermal framework tick (due to interrupts for
example) as it will overreact.

Energy Model requirements
=========================

Another important thing is the consistent scale of the power values
provided by the cooling devices. All of the cooling devices in a single
thermal zone should have power values reported either in milli-Watts
or scaled to the same 'abstract scale'.
@@ -20,6 +20,21 @@ possible source of information on its own, the EM framework intervenes as an
abstraction layer which standardizes the format of power cost tables in the
kernel, hence enabling to avoid redundant work.

The power values might be expressed in milli-Watts or in an 'abstract scale'.
Multiple subsystems might use the EM and it is up to the system integrator to
check that the requirements for the power value scale types are met. An example
can be found in the Energy-Aware Scheduler documentation
Documentation/scheduler/sched-energy.rst. For some subsystems like thermal or
powercap power values expressed in an 'abstract scale' might cause issues.
These subsystems are more interested in estimation of power used in the past,
thus the real milli-Watts might be needed. An example of these requirements can
be found in the Intelligent Power Allocation in
Documentation/driver-api/thermal/power_allocator.rst.
Kernel subsystems might implement automatic detection to check whether EM
registered devices have inconsistent scale (based on EM internal flag).
Important thing to keep in mind is that when the power values are expressed in
an 'abstract scale' deriving real energy in milli-Joules would not be possible.

The figure below depicts an example of drivers (Arm-specific here, but the
approach is applicable to any architecture) providing power costs to the EM
framework, and interested clients reading the data from it::

@@ -73,7 +88,7 @@ Drivers are expected to register performance domains into the EM framework by
calling the following API::

  int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
        struct em_data_callback *cb, cpumask_t *cpus);
        struct em_data_callback *cb, cpumask_t *cpus, bool milliwatts);

Drivers must provide a callback function returning <frequency, power> tuples
for each performance state. The callback function provided by the driver is free

@@ -81,6 +96,10 @@ to fetch data from any relevant location (DT, firmware, ...), and by any mean
deemed necessary. Only for CPU devices, drivers must specify the CPUs of the
performance domains using cpumask. For other devices than CPUs the last
argument must be set to NULL.
The last argument 'milliwatts' is important to set with correct value. Kernel
subsystems which use EM might rely on this flag to check if all EM devices use
the same scale. If there are different scales, these subsystems might decide
to: return warning/error, stop working or panic.
See Section 3. for an example of driver implementing this
callback, and kernel/power/energy_model.c for further documentation on this
API.

@@ -156,7 +175,8 @@ EM framework::
  37    nr_opp = foo_get_nr_opp(policy);
  38
  39    /* And register the new performance domain */
  40    em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
  41
  42    return 0;
  43  }
  40    em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
  41                                true);
  42
  43    return 0;
  44  }
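Putting the updated prototype together, a driver-side registration might look like the sketch below. The foo_ names are placeholders; the callback signature and EM_DATA_CB follow include/linux/energy_model.h as of this release:

    #include <linux/energy_model.h>

    /*
     * Hypothetical callback: report the power (true milli-Watts here)
     * and the rounded frequency for the state at or below *freq.
     */
    static int foo_active_power(unsigned long *power, unsigned long *freq,
                                struct device *dev)
    {
        /* ... look up a <frequency, power> pair from DT or firmware ... */
        return 0;
    }

    static struct em_data_callback em_cb = EM_DATA_CB(foo_active_power);

    static int foo_register_em(struct device *cpu_dev, unsigned int nr_opp,
                               cpumask_t *cpus)
    {
        /* Power values are real milli-Watts, hence milliwatts = true. */
        return em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb,
                                           cpus, true);
    }

A driver reporting values on an abstract scale would instead pass false, and subsystems such as thermal IPA can then detect the mismatch as described above.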
@@ -350,6 +350,11 @@ independent EM framework in Documentation/power/energy-model.rst.
Please also note that the scheduling domains need to be re-built after the
EM has been registered in order to start EAS.

EAS uses the EM to make a forecasting decision on energy usage and thus it is
more focused on the difference when checking possible options for task
placement. For EAS it doesn't matter whether the EM power values are expressed
in milli-Watts or in an 'abstract scale'.


6.3 - Energy Model complexity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -11438,7 +11438,6 @@ L: linux-pm@vger.kernel.org
L:  linux-tegra@vger.kernel.org
T:  git git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/linux.git
S:  Maintained
F:  drivers/devfreq/tegra20-devfreq.c
F:  drivers/devfreq/tegra30-devfreq.c

MEMORY MANAGEMENT
@@ -327,8 +327,9 @@
#define MSR_PP1_ENERGY_STATUS       0x00000641
#define MSR_PP1_POLICY              0x00000642

#define MSR_AMD_PKG_ENERGY_STATUS   0xc001029b
#define MSR_AMD_RAPL_POWER_UNIT     0xc0010299
#define MSR_AMD_CORE_ENERGY_STATUS  0xc001029a
#define MSR_AMD_PKG_ENERGY_STATUS   0xc001029b

/* Config TDP MSRs */
#define MSR_CONFIG_TDP_NOMINAL      0x00000648
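These are the MSRs the RAPL driver reads on AMD Fam17h/Fam19h. With the msr module loaded they can also be inspected from userspace; a hedged sketch follows (the energy-unit bit layout mirrors the Intel scheme and is an assumption here; raw counts must be scaled by the units MSR):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MSR_AMD_RAPL_POWER_UNIT   0xc0010299
    #define MSR_AMD_PKG_ENERGY_STATUS 0xc001029b

    static int rdmsr(int fd, uint32_t reg, uint64_t *val)
    {
        /* /dev/cpu/N/msr is addressable by MSR index via pread(). */
        return pread(fd, val, sizeof(*val), reg) == sizeof(*val) ? 0 : -1;
    }

    int main(void)
    {
        uint64_t unit, energy;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0 || rdmsr(fd, MSR_AMD_RAPL_POWER_UNIT, &unit) ||
            rdmsr(fd, MSR_AMD_PKG_ENERGY_STATUS, &energy)) {
            perror("msr");
            return 1;
        }
        /* Assumed: energy unit = 1/2^ESU joules, ESU in bits 12:8. */
        printf("raw pkg energy %llu (unit 2^-%llu J)\n",
               (unsigned long long)(energy & 0xffffffff),
               (unsigned long long)((unit >> 8) & 0x1f));
        close(fd);
        return 0;
    }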
@@ -749,7 +749,7 @@ static void acpi_pm_notify_work_func(struct acpi_device_wakeup_context *context)
static DEFINE_MUTEX(acpi_wakeup_lock);

static int __acpi_device_wakeup_enable(struct acpi_device *adev,
                                       u32 target_state, int max_count)
                                       u32 target_state)
{
    struct acpi_device_wakeup *wakeup = &adev->wakeup;
    acpi_status status;

@@ -757,16 +757,27 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,

    mutex_lock(&acpi_wakeup_lock);

    if (wakeup->enable_count >= max_count)
    /*
     * If the device wakeup power is already enabled, disable it and enable
     * it again in case it depends on the configuration of subordinate
     * devices and the conditions have changed since it was enabled last
     * time.
     */
    if (wakeup->enable_count > 0)
        acpi_disable_wakeup_device_power(adev);

    error = acpi_enable_wakeup_device_power(adev, target_state);
    if (error) {
        if (wakeup->enable_count > 0) {
            acpi_disable_gpe(wakeup->gpe_device, wakeup->gpe_number);
            wakeup->enable_count = 0;
        }
        goto out;
    }

    if (wakeup->enable_count > 0)
        goto inc;

    error = acpi_enable_wakeup_device_power(adev, target_state);
    if (error)
        goto out;

    status = acpi_enable_gpe(wakeup->gpe_device, wakeup->gpe_number);
    if (ACPI_FAILURE(status)) {
        acpi_disable_wakeup_device_power(adev);

@@ -778,7 +789,10 @@ static int __acpi_device_wakeup_enable(struct acpi_device *adev,
               (unsigned int)wakeup->gpe_number);

inc:
    wakeup->enable_count++;
    if (wakeup->enable_count < INT_MAX)
        wakeup->enable_count++;
    else
        acpi_handle_info(adev->handle, "Wakeup enable count out of bounds!\n");

out:
    mutex_unlock(&acpi_wakeup_lock);

@@ -799,7 +813,7 @@ out:
 */
static int acpi_device_wakeup_enable(struct acpi_device *adev, u32 target_state)
{
    return __acpi_device_wakeup_enable(adev, target_state, 1);
    return __acpi_device_wakeup_enable(adev, target_state);
}

/**

@@ -829,8 +843,12 @@ out:
    mutex_unlock(&acpi_wakeup_lock);
}

static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
                                       int max_count)
/**
 * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
 * @dev: Device to enable/disable to generate wakeup events.
 * @enable: Whether to enable or disable the wakeup functionality.
 */
int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
{
    struct acpi_device *adev;
    int error;

@@ -850,36 +868,14 @@ static int __acpi_pm_set_device_wakeup(struct device *dev, bool enable,
        return 0;
    }

    error = __acpi_device_wakeup_enable(adev, acpi_target_system_state(),
                                        max_count);
    error = __acpi_device_wakeup_enable(adev, acpi_target_system_state());
    if (!error)
        dev_dbg(dev, "Wakeup enabled by ACPI\n");

    return error;
}

/**
 * acpi_pm_set_device_wakeup - Enable/disable remote wakeup for given device.
 * @dev: Device to enable/disable to generate wakeup events.
 * @enable: Whether to enable or disable the wakeup functionality.
 */
int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
{
    return __acpi_pm_set_device_wakeup(dev, enable, 1);
}
EXPORT_SYMBOL_GPL(acpi_pm_set_device_wakeup);

/**
 * acpi_pm_set_bridge_wakeup - Enable/disable remote wakeup for given bridge.
 * @dev: Bridge device to enable/disable to generate wakeup events.
 * @enable: Whether to enable or disable the wakeup functionality.
 */
int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
{
    return __acpi_pm_set_device_wakeup(dev, enable, INT_MAX);
}
EXPORT_SYMBOL_GPL(acpi_pm_set_bridge_wakeup);

/**
 * acpi_dev_pm_low_power - Put ACPI device into a low-power state.
 * @dev: Device to put into a low-power state.
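With the bridge-specific entry point gone, callers are expected to go through acpi_pm_set_device_wakeup() alone, and the new reference counting makes repeated enables safe. An illustrative (not verbatim) caller:

    #include <linux/acpi.h>
    #include <linux/device.h>

    /* Illustrative only: each enable now takes a reference, so bridges
     * and their children can enable wakeup independently. */
    static int foo_prepare_wakeup(struct device *dev)
    {
        int error = acpi_pm_set_device_wakeup(dev, true);

        if (error)
            dev_warn(dev, "ACPI wakeup enable failed: %d\n", error);
        return error;
    }

    static void foo_teardown_wakeup(struct device *dev)
    {
        acpi_pm_set_device_wakeup(dev, false);  /* drops one reference */
    }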
@@ -21,6 +21,7 @@
#include <linux/suspend.h>
#include <linux/export.h>
#include <linux/cpu.h>
#include <linux/debugfs.h>

#include "power.h"


@@ -210,6 +211,18 @@ static void genpd_sd_counter_inc(struct generic_pm_domain *genpd)
}

#ifdef CONFIG_DEBUG_FS
static struct dentry *genpd_debugfs_dir;

static void genpd_debug_add(struct generic_pm_domain *genpd);

static void genpd_debug_remove(struct generic_pm_domain *genpd)
{
    struct dentry *d;

    d = debugfs_lookup(genpd->name, genpd_debugfs_dir);
    debugfs_remove(d);
}

static void genpd_update_accounting(struct generic_pm_domain *genpd)
{
    ktime_t delta, now;

@@ -234,6 +247,8 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
    genpd->accounting_time = now;
}
#else
static inline void genpd_debug_add(struct generic_pm_domain *genpd) {}
static inline void genpd_debug_remove(struct generic_pm_domain *genpd) {}
static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
#endif

@@ -1142,7 +1157,7 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
    if (ret)
        return ret;

    if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
    if (device_wakeup_path(dev) && genpd_is_active_wakeup(genpd))
        return 0;

    if (genpd->dev_ops.stop && genpd->dev_ops.start &&

@@ -1196,7 +1211,7 @@ static int genpd_resume_noirq(struct device *dev)
    if (IS_ERR(genpd))
        return -EINVAL;

    if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
    if (device_wakeup_path(dev) && genpd_is_active_wakeup(genpd))
        return pm_generic_resume_noirq(dev);

    genpd_lock(genpd);

@@ -1363,41 +1378,60 @@ static void genpd_complete(struct device *dev)
    genpd_unlock(genpd);
}

/**
 * genpd_syscore_switch - Switch power during system core suspend or resume.
 * @dev: Device that normally is marked as "always on" to switch power for.
 *
 * This routine may only be called during the system core (syscore) suspend or
 * resume phase for devices whose "always on" flags are set.
 */
static void genpd_syscore_switch(struct device *dev, bool suspend)
static void genpd_switch_state(struct device *dev, bool suspend)
{
    struct generic_pm_domain *genpd;
    bool use_lock;

    genpd = dev_to_genpd_safe(dev);
    if (!genpd)
        return;

    use_lock = genpd_is_irq_safe(genpd);

    if (use_lock)
        genpd_lock(genpd);

    if (suspend) {
        genpd->suspended_count++;
        genpd_sync_power_off(genpd, false, 0);
        genpd_sync_power_off(genpd, use_lock, 0);
    } else {
        genpd_sync_power_on(genpd, false, 0);
        genpd_sync_power_on(genpd, use_lock, 0);
        genpd->suspended_count--;
    }

    if (use_lock)
        genpd_unlock(genpd);
}

void pm_genpd_syscore_poweroff(struct device *dev)
/**
 * dev_pm_genpd_suspend - Synchronously try to suspend the genpd for @dev
 * @dev: The device that is attached to the genpd, that can be suspended.
 *
 * This routine should typically be called for a device that needs to be
 * suspended during the syscore suspend phase. It may also be called during
 * suspend-to-idle to suspend a corresponding CPU device that is attached to a
 * genpd.
 */
void dev_pm_genpd_suspend(struct device *dev)
{
    genpd_syscore_switch(dev, true);
    genpd_switch_state(dev, true);
}
EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweroff);
EXPORT_SYMBOL_GPL(dev_pm_genpd_suspend);

void pm_genpd_syscore_poweron(struct device *dev)
/**
 * dev_pm_genpd_resume - Synchronously try to resume the genpd for @dev
 * @dev: The device that is attached to the genpd, which needs to be resumed.
 *
 * This routine should typically be called for a device that needs to be resumed
 * during the syscore resume phase. It may also be called during suspend-to-idle
 * to resume a corresponding CPU device that is attached to a genpd.
 */
void dev_pm_genpd_resume(struct device *dev)
{
    genpd_syscore_switch(dev, false);
    genpd_switch_state(dev, false);
}
EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
EXPORT_SYMBOL_GPL(dev_pm_genpd_resume);

#else /* !CONFIG_PM_SLEEP */


@@ -1954,6 +1988,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,

    mutex_lock(&gpd_list_lock);
    list_add(&genpd->gpd_list_node, &gpd_list);
    genpd_debug_add(genpd);
    mutex_unlock(&gpd_list_lock);

    return 0;

@@ -1987,6 +2022,7 @@ static int genpd_remove(struct generic_pm_domain *genpd)
        kfree(link);
    }

    genpd_debug_remove(genpd);
    list_del(&genpd->gpd_list_node);
    genpd_unlock(genpd);
    cancel_work_sync(&genpd->power_off_work);

@@ -2249,7 +2285,7 @@ int of_genpd_add_provider_onecell(struct device_node *np,
         * Save table for faster processing while setting
         * performance state.
         */
        genpd->opp_table = dev_pm_opp_get_opp_table_indexed(&genpd->dev, i);
        genpd->opp_table = dev_pm_opp_get_opp_table(&genpd->dev);
        WARN_ON(IS_ERR(genpd->opp_table));
    }


@@ -2893,14 +2929,6 @@ core_initcall(genpd_bus_init);
/*** debugfs support ***/

#ifdef CONFIG_DEBUG_FS
#include <linux/pm.h>
#include <linux/device.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/kobject.h>
static struct dentry *genpd_debugfs_dir;

/*
 * TODO: This function is a slightly modified version of rtpm_status_show
 * from sysfs.c, so generalize it.

@@ -3177,9 +3205,34 @@ DEFINE_SHOW_ATTRIBUTE(total_idle_time);
DEFINE_SHOW_ATTRIBUTE(devices);
DEFINE_SHOW_ATTRIBUTE(perf_state);

static int __init genpd_debug_init(void)
static void genpd_debug_add(struct generic_pm_domain *genpd)
{
    struct dentry *d;

    if (!genpd_debugfs_dir)
        return;

    d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);

    debugfs_create_file("current_state", 0444,
                        d, genpd, &status_fops);
    debugfs_create_file("sub_domains", 0444,
                        d, genpd, &sub_domains_fops);
    debugfs_create_file("idle_states", 0444,
                        d, genpd, &idle_states_fops);
    debugfs_create_file("active_time", 0444,
                        d, genpd, &active_time_fops);
    debugfs_create_file("total_idle_time", 0444,
                        d, genpd, &total_idle_time_fops);
    debugfs_create_file("devices", 0444,
                        d, genpd, &devices_fops);
    if (genpd->set_performance_state)
        debugfs_create_file("perf_state", 0444,
                            d, genpd, &perf_state_fops);
}

static int __init genpd_debug_init(void)
{
    struct generic_pm_domain *genpd;

    genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);

@@ -3187,25 +3240,8 @@ static int __init genpd_debug_init(void)
    debugfs_create_file("pm_genpd_summary", S_IRUGO, genpd_debugfs_dir,
                        NULL, &summary_fops);

    list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
        d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);

        debugfs_create_file("current_state", 0444,
                            d, genpd, &status_fops);
        debugfs_create_file("sub_domains", 0444,
                            d, genpd, &sub_domains_fops);
        debugfs_create_file("idle_states", 0444,
                            d, genpd, &idle_states_fops);
        debugfs_create_file("active_time", 0444,
                            d, genpd, &active_time_fops);
        debugfs_create_file("total_idle_time", 0444,
                            d, genpd, &total_idle_time_fops);
        debugfs_create_file("devices", 0444,
                            d, genpd, &devices_fops);
        if (genpd->set_performance_state)
            debugfs_create_file("perf_state", 0444,
                                d, genpd, &perf_state_fops);
    }
    list_for_each_entry(genpd, &gpd_list, gpd_list_node)
        genpd_debug_add(genpd);

    return 0;
}
@@ -441,9 +441,9 @@ static pm_callback_t pm_noirq_op(const struct dev_pm_ops *ops, pm_message_t stat

static void pm_dev_dbg(struct device *dev, pm_message_t state, const char *info)
{
    dev_dbg(dev, "%s%s%s\n", info, pm_verb(state.event),
    dev_dbg(dev, "%s%s%s driver flags: %x\n", info, pm_verb(state.event),
        ((state.event & PM_EVENT_SLEEP) && device_may_wakeup(dev)) ?
        ", may wakeup" : "");
        ", may wakeup" : "", dev->power.driver_flags);
}

static void pm_dev_err(struct device *dev, pm_message_t state, const char *info,

@@ -1359,7 +1359,7 @@ static void dpm_propagate_wakeup_to_parent(struct device *dev)

    spin_lock_irq(&parent->power.lock);

    if (dev->power.wakeup_path && !parent->power.ignore_children)
    if (device_wakeup_path(dev) && !parent->power.ignore_children)
        parent->power.wakeup_path = true;

    spin_unlock_irq(&parent->power.lock);

@@ -1627,7 +1627,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
        goto Complete;

    /* Avoid direct_complete to let wakeup_path propagate. */
    if (device_may_wakeup(dev) || dev->power.wakeup_path)
    if (device_may_wakeup(dev) || device_wakeup_path(dev))
        dev->power.direct_complete = false;

    if (dev->power.direct_complete) {
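The flags that pm_dev_dbg() now prints are the ones a driver opts into with dev_pm_set_driver_flags(). A short sketch of how a driver might set them at probe time (the flag combination is illustrative, not taken from this series):

    #include <linux/platform_device.h>
    #include <linux/pm.h>

    static int foo_probe(struct platform_device *pdev)
    {
        /*
         * Illustrative flags: keep the device out of direct-complete
         * and let its runtime PM state be reused over system suspend.
         */
        dev_pm_set_driver_flags(&pdev->dev,
                                DPM_FLAG_NO_DIRECT_COMPLETE |
                                DPM_FLAG_SMART_SUSPEND);
        return 0;
    }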
@@ -13,6 +13,7 @@
#include <linux/clk-provider.h>
#include <linux/clk/tegra.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/slab.h>

@@ -235,6 +236,7 @@ void tegra20_clk_set_emc_round_callback(tegra20_clk_emc_round_cb *round_cb,
        emc->cb_arg = cb_arg;
    }
}
EXPORT_SYMBOL_GPL(tegra20_clk_set_emc_round_callback);

bool tegra20_clk_emc_driver_available(struct clk_hw *emc_hw)
{

@@ -291,3 +293,4 @@ int tegra20_clk_prepare_emc_mc_same_freq(struct clk *emc_clk, bool same)

    return 0;
}
EXPORT_SYMBOL_GPL(tegra20_clk_prepare_emc_mc_same_freq);
@@ -668,7 +668,7 @@ static void sh_cmt_clocksource_suspend(struct clocksource *cs)
        return;

    sh_cmt_stop(ch, FLAG_CLOCKSOURCE);
    pm_genpd_syscore_poweroff(&ch->cmt->pdev->dev);
    dev_pm_genpd_suspend(&ch->cmt->pdev->dev);
}

static void sh_cmt_clocksource_resume(struct clocksource *cs)

@@ -678,7 +678,7 @@ static void sh_cmt_clocksource_resume(struct clocksource *cs)
    if (!ch->cs_enabled)
        return;

    pm_genpd_syscore_poweron(&ch->cmt->pdev->dev);
    dev_pm_genpd_resume(&ch->cmt->pdev->dev);
    sh_cmt_start(ch, FLAG_CLOCKSOURCE);
}


@@ -770,7 +770,7 @@ static void sh_cmt_clock_event_suspend(struct clock_event_device *ced)
{
    struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);

    pm_genpd_syscore_poweroff(&ch->cmt->pdev->dev);
    dev_pm_genpd_suspend(&ch->cmt->pdev->dev);
    clk_unprepare(ch->cmt->clk);
}


@@ -779,7 +779,7 @@ static void sh_cmt_clock_event_resume(struct clock_event_device *ced)
    struct sh_cmt_channel *ch = ced_to_sh_cmt(ced);

    clk_prepare(ch->cmt->clk);
    pm_genpd_syscore_poweron(&ch->cmt->pdev->dev);
    dev_pm_genpd_resume(&ch->cmt->pdev->dev);
}

static int sh_cmt_register_clockevent(struct sh_cmt_channel *ch,


@@ -297,12 +297,12 @@ static int sh_mtu2_clock_event_set_periodic(struct clock_event_device *ced)

static void sh_mtu2_clock_event_suspend(struct clock_event_device *ced)
{
    pm_genpd_syscore_poweroff(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
    dev_pm_genpd_suspend(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
}

static void sh_mtu2_clock_event_resume(struct clock_event_device *ced)
{
    pm_genpd_syscore_poweron(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
    dev_pm_genpd_resume(&ced_to_sh_mtu2(ced)->mtu->pdev->dev);
}

static void sh_mtu2_register_clockevent(struct sh_mtu2_channel *ch,


@@ -292,7 +292,7 @@ static void sh_tmu_clocksource_suspend(struct clocksource *cs)

    if (--ch->enable_count == 0) {
        __sh_tmu_disable(ch);
        pm_genpd_syscore_poweroff(&ch->tmu->pdev->dev);
        dev_pm_genpd_suspend(&ch->tmu->pdev->dev);
    }
}


@@ -304,7 +304,7 @@ static void sh_tmu_clocksource_resume(struct clocksource *cs)
        return;

    if (ch->enable_count++ == 0) {
        pm_genpd_syscore_poweron(&ch->tmu->pdev->dev);
        dev_pm_genpd_resume(&ch->tmu->pdev->dev);
        __sh_tmu_enable(ch);
    }
}

@@ -394,12 +394,12 @@ static int sh_tmu_clock_event_next(unsigned long delta,

static void sh_tmu_clock_event_suspend(struct clock_event_device *ced)
{
    pm_genpd_syscore_poweroff(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
    dev_pm_genpd_suspend(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
}

static void sh_tmu_clock_event_resume(struct clock_event_device *ced)
{
    pm_genpd_syscore_poweron(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
    dev_pm_genpd_resume(&ced_to_sh_tmu(ced)->tmu->pdev->dev);
}

static void sh_tmu_register_clockevent(struct sh_tmu_channel *ch,
@@ -94,7 +94,7 @@ config ARM_IMX6Q_CPUFREQ
    tristate "Freescale i.MX6 cpufreq support"
    depends on ARCH_MXC
    depends on REGULATOR_ANATOP
    select NVMEM_IMX_OCOTP
    depends on NVMEM_IMX_OCOTP || COMPILE_TEST
    select PM_OPP
    help
      This adds cpufreq driver support for Freescale i.MX6 series SoCs.
@@ -204,6 +204,12 @@ static void __exit armada_8k_cpufreq_exit(void)
}
module_exit(armada_8k_cpufreq_exit);

static const struct of_device_id __maybe_unused armada_8k_cpufreq_of_match[] = {
    { .compatible = "marvell,ap806-cpu-clock" },
    { },
};
MODULE_DEVICE_TABLE(of, armada_8k_cpufreq_of_match);

MODULE_AUTHOR("Gregory Clement <gregory.clement@bootlin.com>");
MODULE_DESCRIPTION("Armada 8K cpufreq driver");
MODULE_LICENSE("GPL");
@@ -26,8 +26,8 @@
/* Minimum struct length needed for the DMI processor entry we want */
#define DMI_ENTRY_PROCESSOR_MIN_LENGTH  48

/* Offest in the DMI processor structure for the max frequency */
#define DMI_PROCESSOR_MAX_SPEED  0x14
/* Offset in the DMI processor structure for the max frequency */
#define DMI_PROCESSOR_MAX_SPEED  0x14

/*
 * These structs contain information parsed from per CPU

@@ -96,11 +96,11 @@ static u64 cppc_get_dmi_max_khz(void)
 * and extrapolate the rest
 * For perf/freq > Nominal, we use the ratio perf:freq at Nominal for conversion
 */
static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
                                             unsigned int perf)
static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu_data,
                                             unsigned int perf)
{
    struct cppc_perf_caps *caps = &cpu_data->perf_caps;
    static u64 max_khz;
    struct cppc_perf_caps *caps = &cpu->perf_caps;
    u64 mul, div;

    if (caps->lowest_freq && caps->nominal_freq) {

@@ -120,11 +120,11 @@ static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
    return (u64)perf * mul / div;
}

static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu,
                                             unsigned int freq)
static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
                                             unsigned int freq)
{
    struct cppc_perf_caps *caps = &cpu_data->perf_caps;
    static u64 max_khz;
    struct cppc_perf_caps *caps = &cpu->perf_caps;
    u64 mul, div;

    if (caps->lowest_freq && caps->nominal_freq) {
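The mapping the comment above describes is ratio arithmetic on the (perf, freq) capability pairs: above nominal it scales by the perf:freq ratio at nominal, below nominal it uses the slope between the lowest and nominal points. A standalone sketch of the above-nominal case with invented capability numbers:

    /* Standalone illustration of the ratio-based perf->kHz mapping;
     * the capability values are invented for the example. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long nominal_perf = 300, nominal_khz = 1500000;
        unsigned long perf = 350;   /* above nominal */

        /* perf > nominal: scale by the perf:freq ratio at nominal. */
        unsigned long khz = perf * nominal_khz / nominal_perf;

        printf("perf %lu -> %lu kHz\n", perf, khz);  /* 1750000 kHz */
        return 0;
    }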
@ -146,32 +146,30 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu,
|
|||
}
|
||||
|
||||
static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation)
|
||||
unsigned int target_freq,
|
||||
unsigned int relation)
|
||||
{
|
||||
struct cppc_cpudata *cpu;
|
||||
struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
|
||||
struct cpufreq_freqs freqs;
|
||||
u32 desired_perf;
|
||||
int ret = 0;
|
||||
|
||||
cpu = all_cpu_data[policy->cpu];
|
||||
|
||||
desired_perf = cppc_cpufreq_khz_to_perf(cpu, target_freq);
|
||||
desired_perf = cppc_cpufreq_khz_to_perf(cpu_data, target_freq);
|
||||
/* Return if it is exactly the same perf */
|
||||
if (desired_perf == cpu->perf_ctrls.desired_perf)
|
||||
if (desired_perf == cpu_data->perf_ctrls.desired_perf)
|
||||
return ret;
|
||||
|
||||
cpu->perf_ctrls.desired_perf = desired_perf;
|
||||
cpu_data->perf_ctrls.desired_perf = desired_perf;
|
||||
freqs.old = policy->cur;
|
||||
freqs.new = target_freq;
|
||||
|
||||
cpufreq_freq_transition_begin(policy, &freqs);
|
||||
ret = cppc_set_perf(cpu->cpu, &cpu->perf_ctrls);
|
||||
ret = cppc_set_perf(cpu_data->cpu, &cpu_data->perf_ctrls);
|
||||
cpufreq_freq_transition_end(policy, &freqs, ret != 0);
|
||||
|
||||
if (ret)
|
||||
pr_debug("Failed to set target on CPU:%d. ret:%d\n",
|
||||
cpu->cpu, ret);
|
||||
cpu_data->cpu, ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -184,28 +182,29 @@ static int cppc_verify_policy(struct cpufreq_policy_data *policy)
|
|||
|
||||
static void cppc_cpufreq_stop_cpu(struct cpufreq_policy *policy)
|
||||
{
|
||||
int cpu_num = policy->cpu;
|
||||
struct cppc_cpudata *cpu = all_cpu_data[cpu_num];
|
||||
struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
|
||||
struct cppc_perf_caps *caps = &cpu_data->perf_caps;
|
||||
unsigned int cpu = policy->cpu;
|
||||
int ret;
|
||||
|
||||
cpu->perf_ctrls.desired_perf = cpu->perf_caps.lowest_perf;
|
||||
cpu_data->perf_ctrls.desired_perf = caps->lowest_perf;
|
||||
|
||||
ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls);
|
||||
ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
|
||||
if (ret)
|
||||
pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
|
||||
cpu->perf_caps.lowest_perf, cpu_num, ret);
|
||||
caps->lowest_perf, cpu, ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* The PCC subspace describes the rate at which platform can accept commands
|
||||
* on the shared PCC channel (including READs which do not count towards freq
|
||||
* trasition requests), so ideally we need to use the PCC values as a fallback
|
||||
* transition requests), so ideally we need to use the PCC values as a fallback
|
||||
* if we don't have a platform specific transition_delay_us
|
||||
*/
|
||||
#ifdef CONFIG_ARM64
|
||||
#include <asm/cputype.h>
|
||||
|
||||
static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
|
||||
static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
|
||||
{
|
||||
unsigned long implementor = read_cpuid_implementor();
|
||||
unsigned long part_num = read_cpuid_part_number();
|
||||
|
@ -233,7 +232,7 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
|
|||
|
||||
#else
|
||||
|
||||
static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
|
||||
static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
|
||||
{
|
||||
return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
|
||||
}
|
||||
|
@ -241,54 +240,57 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)
|
|||
|
||||
static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct cppc_cpudata *cpu;
|
||||
unsigned int cpu_num = policy->cpu;
|
||||
struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
|
||||
struct cppc_perf_caps *caps = &cpu_data->perf_caps;
|
||||
unsigned int cpu = policy->cpu;
|
||||
int ret = 0;
|
||||
|
||||
cpu = all_cpu_data[policy->cpu];
|
||||
|
||||
cpu->cpu = cpu_num;
|
||||
ret = cppc_get_perf_caps(policy->cpu, &cpu->perf_caps);
|
||||
cpu_data->cpu = cpu;
|
||||
ret = cppc_get_perf_caps(cpu, caps);
|
||||
|
||||
if (ret) {
|
||||
pr_debug("Err reading CPU%d perf capabilities. ret:%d\n",
|
||||
cpu_num, ret);
|
||||
cpu, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* Convert the lowest and nominal freq from MHz to KHz */
|
||||
cpu->perf_caps.lowest_freq *= 1000;
|
||||
cpu->perf_caps.nominal_freq *= 1000;
|
||||
caps->lowest_freq *= 1000;
|
||||
caps->nominal_freq *= 1000;
|
||||
|
||||
/*
|
||||
* Set min to lowest nonlinear perf to avoid any efficiency penalty (see
|
||||
* Section 8.4.7.1.1.5 of ACPI 6.1 spec)
|
||||
*/
|
||||
policy->min = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_nonlinear_perf);
|
||||
policy->max = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
|
||||
policy->min = cppc_cpufreq_perf_to_khz(cpu_data,
|
||||
caps->lowest_nonlinear_perf);
|
||||
policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
|
||||
caps->nominal_perf);
|
||||
|
||||
/*
|
||||
* Set cpuinfo.min_freq to Lowest to make the full range of performance
|
||||
* available if userspace wants to use any perf between lowest & lowest
|
||||
* nonlinear perf
|
||||
*/
|
||||
policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.lowest_perf);
|
||||
policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu, cpu->perf_caps.nominal_perf);
|
||||
policy->cpuinfo.min_freq = cppc_cpufreq_perf_to_khz(cpu_data,
|
||||
caps->lowest_perf);
|
||||
policy->cpuinfo.max_freq = cppc_cpufreq_perf_to_khz(cpu_data,
|
||||
caps->nominal_perf);
|
||||
|
||||
policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num);
|
||||
policy->shared_type = cpu->shared_type;
|
||||
policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu);
|
||||
policy->shared_type = cpu_data->shared_type;
|
||||
|
||||
if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
|
||||
int i;
|
||||
|
||||
cpumask_copy(policy->cpus, cpu->shared_cpu_map);
|
||||
cpumask_copy(policy->cpus, cpu_data->shared_cpu_map);
|
||||
|
||||
for_each_cpu(i, policy->cpus) {
|
||||
if (unlikely(i == policy->cpu))
|
||||
if (unlikely(i == cpu))
|
||||
continue;
|
||||
|
||||
memcpy(&all_cpu_data[i]->perf_caps, &cpu->perf_caps,
|
||||
sizeof(cpu->perf_caps));
|
||||
memcpy(&all_cpu_data[i]->perf_caps, caps,
|
||||
sizeof(cpu_data->perf_caps));
|
||||
}
|
||||
} else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) {
|
||||
/* Support only SW_ANY for now. */
|
||||
|
@ -296,24 +298,23 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
|||
return -EFAULT;
|
||||
}
|
||||
|
||||
cpu->cur_policy = policy;
|
||||
cpu_data->cur_policy = policy;
|
||||
|
||||
/*
|
||||
* If 'highest_perf' is greater than 'nominal_perf', we assume CPU Boost
|
||||
* is supported.
|
||||
*/
|
||||
if (cpu->perf_caps.highest_perf > cpu->perf_caps.nominal_perf)
|
||||
if (caps->highest_perf > caps->nominal_perf)
|
||||
boost_supported = true;
|
||||
|
||||
/* Set policy->cur to max now. The governors will adjust later. */
|
||||
policy->cur = cppc_cpufreq_perf_to_khz(cpu,
|
||||
cpu->perf_caps.highest_perf);
|
||||
cpu->perf_ctrls.desired_perf = cpu->perf_caps.highest_perf;
|
||||
policy->cur = cppc_cpufreq_perf_to_khz(cpu_data, caps->highest_perf);
|
||||
cpu_data->perf_ctrls.desired_perf = caps->highest_perf;
|
||||
|
||||
ret = cppc_set_perf(cpu_num, &cpu->perf_ctrls);
|
||||
ret = cppc_set_perf(cpu, &cpu_data->perf_ctrls);
|
||||
if (ret)
|
||||
pr_debug("Err setting perf value:%d on CPU:%d. ret:%d\n",
|
||||
cpu->perf_caps.highest_perf, cpu_num, ret);
|
||||
caps->highest_perf, cpu, ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -326,7 +327,7 @@ static inline u64 get_delta(u64 t1, u64 t0)
|
|||
return (u32)t1 - (u32)t0;
|
||||
}
|
||||
|
||||
static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu,
|
||||
+static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu_data,
+				     struct cppc_perf_fb_ctrs fb_ctrs_t0,
+				     struct cppc_perf_fb_ctrs fb_ctrs_t1)
 {
@@ -345,33 +346,34 @@ static int cppc_get_rate_from_fbctrs(struct cppc_cpudata *cpu,
 		delivered_perf = (reference_perf * delta_delivered) /
 				 delta_reference;
 	else
-		delivered_perf = cpu->perf_ctrls.desired_perf;
+		delivered_perf = cpu_data->perf_ctrls.desired_perf;
 
-	return cppc_cpufreq_perf_to_khz(cpu, delivered_perf);
+	return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
 }
 
-static unsigned int cppc_cpufreq_get_rate(unsigned int cpunum)
+static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
 {
 	struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
-	struct cppc_cpudata *cpu = all_cpu_data[cpunum];
+	struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
 	int ret;
 
-	ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t0);
+	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
 	if (ret)
 		return ret;
 
 	udelay(2); /* 2usec delay between sampling */
 
-	ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t1);
+	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
 	if (ret)
 		return ret;
 
-	return cppc_get_rate_from_fbctrs(cpu, fb_ctrs_t0, fb_ctrs_t1);
+	return cppc_get_rate_from_fbctrs(cpu_data, fb_ctrs_t0, fb_ctrs_t1);
 }
 
 static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
 {
-	struct cppc_cpudata *cpudata;
+	struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];
+	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
 	int ret;
 
 	if (!boost_supported) {
@@ -379,13 +381,12 @@ static int cppc_cpufreq_set_boost(struct cpufreq_policy *policy, int state)
 		return -EINVAL;
 	}
 
-	cpudata = all_cpu_data[policy->cpu];
 	if (state)
-		policy->max = cppc_cpufreq_perf_to_khz(cpudata,
-					cpudata->perf_caps.highest_perf);
+		policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
+						       caps->highest_perf);
 	else
-		policy->max = cppc_cpufreq_perf_to_khz(cpudata,
-					cpudata->perf_caps.nominal_perf);
+		policy->max = cppc_cpufreq_perf_to_khz(cpu_data,
+						       caps->nominal_perf);
 	policy->cpuinfo.max_freq = policy->max;
 
 	ret = freq_qos_update_request(policy->max_freq_req, policy->max);
@@ -412,17 +413,17 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
  * platform specific mechanism. We reuse the desired performance register to
  * store the real performance calculated by the platform.
  */
-static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpunum)
+static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu)
 {
-	struct cppc_cpudata *cpudata = all_cpu_data[cpunum];
+	struct cppc_cpudata *cpu_data = all_cpu_data[cpu];
 	u64 desired_perf;
 	int ret;
 
-	ret = cppc_get_desired_perf(cpunum, &desired_perf);
+	ret = cppc_get_desired_perf(cpu, &desired_perf);
 	if (ret < 0)
 		return -EIO;
 
-	return cppc_cpufreq_perf_to_khz(cpudata, desired_perf);
+	return cppc_cpufreq_perf_to_khz(cpu_data, desired_perf);
 }
 
 static void cppc_check_hisi_workaround(void)
@@ -450,8 +451,8 @@ static void cppc_check_hisi_workaround(void)
 
 static int __init cppc_cpufreq_init(void)
 {
+	struct cppc_cpudata *cpu_data;
 	int i, ret = 0;
-	struct cppc_cpudata *cpu;
 
 	if (acpi_disabled)
 		return -ENODEV;
@@ -466,8 +467,8 @@ static int __init cppc_cpufreq_init(void)
 		if (!all_cpu_data[i])
 			goto out;
 
-		cpu = all_cpu_data[i];
-		if (!zalloc_cpumask_var(&cpu->shared_cpu_map, GFP_KERNEL))
+		cpu_data = all_cpu_data[i];
+		if (!zalloc_cpumask_var(&cpu_data->shared_cpu_map, GFP_KERNEL))
 			goto out;
 	}
 
@@ -487,11 +488,11 @@ static int __init cppc_cpufreq_init(void)
 
 out:
	for_each_possible_cpu(i) {
-		cpu = all_cpu_data[i];
-		if (!cpu)
+		cpu_data = all_cpu_data[i];
+		if (!cpu_data)
 			break;
-		free_cpumask_var(cpu->shared_cpu_map);
-		kfree(cpu);
+		free_cpumask_var(cpu_data->shared_cpu_map);
+		kfree(cpu_data);
 	}
 
 	kfree(all_cpu_data);
@@ -500,15 +501,15 @@ out:
 
 static void __exit cppc_cpufreq_exit(void)
 {
-	struct cppc_cpudata *cpu;
+	struct cppc_cpudata *cpu_data;
 	int i;
 
 	cpufreq_unregister_driver(&cppc_cpufreq_driver);
 
	for_each_possible_cpu(i) {
-		cpu = all_cpu_data[i];
-		free_cpumask_var(cpu->shared_cpu_map);
-		kfree(cpu);
+		cpu_data = all_cpu_data[i];
+		free_cpumask_var(cpu_data->shared_cpu_map);
+		kfree(cpu_data);
 	}
 
 	kfree(all_cpu_data);
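The estimate in cppc_get_rate_from_fbctrs() is simply the ratio of the
delivered and reference feedback counters scaled by the reference
performance level, with the last requested performance as a fallback when
the counters have not moved. A minimal userspace sketch of that arithmetic
(the numbers are illustrative, this is not kernel code):

	#include <stdio.h>

	static unsigned long long estimate_perf(unsigned long long ref_perf,
						unsigned long long delta_delivered,
						unsigned long long delta_reference,
						unsigned long long desired_perf)
	{
		/* No counter movement: fall back to the last requested level */
		if (!delta_reference || !delta_delivered)
			return desired_perf;

		return (ref_perf * delta_delivered) / delta_reference;
	}

	int main(void)
	{
		/* delivered counter advanced twice as fast as the reference */
		printf("%llu\n", estimate_perf(100, 2000, 1000, 80)); /* 200 */
		return 0;
	}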
@@ -119,10 +119,12 @@ static const struct of_device_id blacklist[] __initconst = {
 	{ .compatible = "mediatek,mt2712", },
 	{ .compatible = "mediatek,mt7622", },
 	{ .compatible = "mediatek,mt7623", },
+	{ .compatible = "mediatek,mt8167", },
 	{ .compatible = "mediatek,mt817x", },
 	{ .compatible = "mediatek,mt8173", },
 	{ .compatible = "mediatek,mt8176", },
 	{ .compatible = "mediatek,mt8183", },
+	{ .compatible = "mediatek,mt8516", },
 
 	{ .compatible = "nvidia,tegra20", },
 	{ .compatible = "nvidia,tegra30", },
@@ -30,7 +30,7 @@ struct private_data {
 	cpumask_var_t cpus;
 	struct device *cpu_dev;
 	struct opp_table *opp_table;
-	struct opp_table *reg_opp_table;
+	struct cpufreq_frequency_table *freq_table;
 	bool have_static_opps;
 };
 
@@ -102,7 +102,6 @@ node_put:
 
 static int cpufreq_init(struct cpufreq_policy *policy)
 {
-	struct cpufreq_frequency_table *freq_table;
 	struct private_data *priv;
 	struct device *cpu_dev;
 	struct clk *cpu_clk;
@@ -114,9 +113,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 		pr_err("failed to find data for cpu%d\n", policy->cpu);
 		return -ENODEV;
 	}
-
 	cpu_dev = priv->cpu_dev;
-	cpumask_copy(policy->cpus, priv->cpus);
 
 	cpu_clk = clk_get(cpu_dev, NULL);
 	if (IS_ERR(cpu_clk)) {
@@ -125,67 +122,32 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 		return ret;
 	}
 
-	/*
-	 * Initialize OPP tables for all policy->cpus. They will be shared by
-	 * all CPUs which have marked their CPUs shared with OPP bindings.
-	 *
-	 * For platforms not using operating-points-v2 bindings, we do this
-	 * before updating policy->cpus. Otherwise, we will end up creating
-	 * duplicate OPPs for policy->cpus.
-	 *
-	 * OPPs might be populated at runtime, don't check for error here
-	 */
-	if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
-		priv->have_static_opps = true;
-
-	/*
-	 * But we need OPP table to function so if it is not there let's
-	 * give platform code chance to provide it for us.
-	 */
-	ret = dev_pm_opp_get_opp_count(cpu_dev);
-	if (ret <= 0) {
-		dev_err(cpu_dev, "OPP table can't be empty\n");
-		ret = -ENODEV;
-		goto out_free_opp;
-	}
-
-	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
-	if (ret) {
-		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
-		goto out_free_opp;
-	}
+	transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
+	if (!transition_latency)
+		transition_latency = CPUFREQ_ETERNAL;
 
+	cpumask_copy(policy->cpus, priv->cpus);
 	policy->driver_data = priv;
 	policy->clk = cpu_clk;
-	policy->freq_table = freq_table;
-
+	policy->freq_table = priv->freq_table;
 	policy->suspend_freq = dev_pm_opp_get_suspend_opp_freq(cpu_dev) / 1000;
+	policy->cpuinfo.transition_latency = transition_latency;
+	policy->dvfs_possible_from_any_cpu = true;
 
 	/* Support turbo/boost mode */
 	if (policy_has_boost_freq(policy)) {
 		/* This gets disabled by core on driver unregister */
 		ret = cpufreq_enable_boost_support();
 		if (ret)
-			goto out_free_cpufreq_table;
+			goto out_clk_put;
 		cpufreq_dt_attr[1] = &cpufreq_freq_attr_scaling_boost_freqs;
 	}
 
-	transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
-	if (!transition_latency)
-		transition_latency = CPUFREQ_ETERNAL;
-
-	policy->cpuinfo.transition_latency = transition_latency;
-	policy->dvfs_possible_from_any_cpu = true;
-
 	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);
 
 	return 0;
 
-out_free_cpufreq_table:
-	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
-out_free_opp:
-	if (priv->have_static_opps)
-		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
+out_clk_put:
 	clk_put(cpu_clk);
 
 	return ret;
@@ -208,11 +170,6 @@ static int cpufreq_offline(struct cpufreq_policy *policy)
 
 static int cpufreq_exit(struct cpufreq_policy *policy)
 {
-	struct private_data *priv = policy->driver_data;
-
-	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
-	if (priv->have_static_opps)
-		dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
 	clk_put(policy->clk);
 	return 0;
 }
@@ -236,6 +193,7 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
 {
 	struct private_data *priv;
 	struct device *cpu_dev;
+	bool fallback = false;
 	const char *reg_name;
 	int ret;
 
@@ -254,68 +212,86 @@ static int dt_cpufreq_early_init(struct device *dev, int cpu)
 	if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL))
 		return -ENOMEM;
 
 	cpumask_set_cpu(cpu, priv->cpus);
 	priv->cpu_dev = cpu_dev;
 
-	/* Try to get OPP table early to ensure resources are available */
-	priv->opp_table = dev_pm_opp_get_opp_table(cpu_dev);
-	if (IS_ERR(priv->opp_table)) {
-		ret = PTR_ERR(priv->opp_table);
-		if (ret != -EPROBE_DEFER)
-			dev_err(cpu_dev, "failed to get OPP table: %d\n", ret);
-		goto free_cpumask;
-	}
-
 	/*
 	 * OPP layer will be taking care of regulators now, but it needs to know
 	 * the name of the regulator first.
 	 */
 	reg_name = find_supply_name(cpu_dev);
 	if (reg_name) {
-		priv->reg_opp_table = dev_pm_opp_set_regulators(cpu_dev,
-								&reg_name, 1);
-		if (IS_ERR(priv->reg_opp_table)) {
-			ret = PTR_ERR(priv->reg_opp_table);
+		priv->opp_table = dev_pm_opp_set_regulators(cpu_dev, &reg_name,
+							    1);
+		if (IS_ERR(priv->opp_table)) {
+			ret = PTR_ERR(priv->opp_table);
 			if (ret != -EPROBE_DEFER)
 				dev_err(cpu_dev, "failed to set regulators: %d\n",
 					ret);
-			goto put_table;
+			goto free_cpumask;
 		}
 	}
 
-	/* Find OPP sharing information so we can fill pri->cpus here */
+	/* Get OPP-sharing information from "operating-points-v2" bindings */
 	ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, priv->cpus);
 	if (ret) {
 		if (ret != -ENOENT)
-			goto put_reg;
+			goto out;
 
 		/*
 		 * operating-points-v2 not supported, fallback to all CPUs share
 		 * OPP for backward compatibility if the platform hasn't set
 		 * sharing CPUs.
 		 */
-		if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus)) {
-			cpumask_setall(priv->cpus);
-
-			/*
-			 * OPP tables are initialized only for cpu, do it for
-			 * others as well.
-			 */
-			ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus);
-			if (ret)
-				dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
-					__func__, ret);
-		}
-	}
+		if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus))
+			fallback = true;
+	}
+
+	/*
+	 * Initialize OPP tables for all priv->cpus. They will be shared by
+	 * all CPUs which have marked their CPUs shared with OPP bindings.
+	 *
+	 * For platforms not using operating-points-v2 bindings, we do this
+	 * before updating priv->cpus. Otherwise, we will end up creating
+	 * duplicate OPPs for the CPUs.
+	 *
+	 * OPPs might be populated at runtime, don't check for error here.
+	 */
+	if (!dev_pm_opp_of_cpumask_add_table(priv->cpus))
+		priv->have_static_opps = true;
+
+	/*
+	 * The OPP table must be initialized, statically or dynamically, by this
+	 * point.
+	 */
+	ret = dev_pm_opp_get_opp_count(cpu_dev);
+	if (ret <= 0) {
+		dev_err(cpu_dev, "OPP table can't be empty\n");
+		ret = -ENODEV;
+		goto out;
+	}
+
+	if (fallback) {
+		cpumask_setall(priv->cpus);
+		ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus);
+		if (ret)
+			dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+				__func__, ret);
+	}
+
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &priv->freq_table);
+	if (ret) {
+		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+		goto out;
+	}
 
 	list_add(&priv->node, &priv_list);
 	return 0;
 
-put_reg:
-	if (priv->reg_opp_table)
-		dev_pm_opp_put_regulators(priv->reg_opp_table);
-put_table:
-	dev_pm_opp_put_opp_table(priv->opp_table);
+out:
+	if (priv->have_static_opps)
+		dev_pm_opp_of_cpumask_remove_table(priv->cpus);
+	dev_pm_opp_put_regulators(priv->opp_table);
 free_cpumask:
 	free_cpumask_var(priv->cpus);
 	return ret;
@@ -326,9 +302,10 @@ static void dt_cpufreq_release(void)
 	struct private_data *priv, *tmp;
 
	list_for_each_entry_safe(priv, tmp, &priv_list, node) {
-		if (priv->reg_opp_table)
-			dev_pm_opp_put_regulators(priv->reg_opp_table);
-		dev_pm_opp_put_opp_table(priv->opp_table);
+		dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &priv->freq_table);
 		if (priv->have_static_opps)
 			dev_pm_opp_of_cpumask_remove_table(priv->cpus);
+		dev_pm_opp_put_regulators(priv->opp_table);
 		free_cpumask_var(priv->cpus);
 		list_del(&priv->node);
 	}
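The single out: unwind path above works because dev_pm_opp_put_regulators()
is now a no-op when handed the NULL pointer left in priv->opp_table when no
regulator was set up. A userspace analogue of the same idiom, leaning on
free(NULL) being defined to do nothing (purely illustrative, all names
hypothetical):

	#include <stdlib.h>

	struct opp_table { int dummy; };

	/* NULL-safe release, like dev_pm_opp_put_regulators(NULL) */
	static void put_regulators(struct opp_table *t)
	{
		free(t);
	}

	static int early_init(int want_regulator)
	{
		struct opp_table *opp_table = NULL;
		int ret;

		if (want_regulator) {
			opp_table = malloc(sizeof(*opp_table));
			if (!opp_table)
				return -1;
		}

		ret = -1;	/* pretend a later step failed */
		goto out;

	out:
		put_regulators(opp_table);	/* safe whether or not set */
		return ret;
	}

	int main(void)
	{
		early_init(1);
		return 0;
	}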
@@ -298,8 +298,10 @@ struct cpufreq_policy *cpufreq_cpu_acquire(unsigned int cpu)
  *            EXTERNALLY AFFECTING FREQUENCY CHANGES                 *
  *********************************************************************/
 
-/*
- * adjust_jiffies - adjust the system "loops_per_jiffy"
+/**
+ * adjust_jiffies - Adjust the system "loops_per_jiffy".
+ * @val: CPUFREQ_PRECHANGE or CPUFREQ_POSTCHANGE.
+ * @ci: Frequency change information.
  *
  * This function alters the system "loops_per_jiffy" for the clock
  * speed change. Note that loops_per_jiffy cannot be updated on SMP
@@ -331,14 +333,14 @@ static void adjust_jiffies(unsigned long val, struct cpufreq_freqs *ci)
 }
 
 /**
- * cpufreq_notify_transition - Notify frequency transition and adjust_jiffies.
+ * cpufreq_notify_transition - Notify frequency transition and adjust jiffies.
  * @policy: cpufreq policy to enable fast frequency switching for.
 * @freqs: contain details of the frequency update.
 * @state: set to CPUFREQ_PRECHANGE or CPUFREQ_POSTCHANGE.
 *
- * This function calls the transition notifiers and the "adjust_jiffies"
- * function. It is called twice on all CPU frequency changes that have
- * external effects.
+ * This function calls the transition notifiers and adjust_jiffies().
+ *
+ * It is called twice on all CPU frequency changes that have external effects.
 */
 static void cpufreq_notify_transition(struct cpufreq_policy *policy,
 				      struct cpufreq_freqs *freqs,
@@ -1391,8 +1393,10 @@ static int cpufreq_online(unsigned int cpu)
 
 	policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req),
 				       GFP_KERNEL);
-	if (!policy->min_freq_req)
+	if (!policy->min_freq_req) {
+		ret = -ENOMEM;
 		goto out_destroy_policy;
+	}
 
 	ret = freq_qos_add_request(&policy->constraints,
 				   policy->min_freq_req, FREQ_QOS_MIN,
@@ -1429,6 +1433,7 @@ static int cpufreq_online(unsigned int cpu)
 	if (cpufreq_driver->get && has_target()) {
 		policy->cur = cpufreq_driver->get(policy->cpu);
 		if (!policy->cur) {
+			ret = -EIO;
 			pr_err("%s: ->get() failed\n", __func__);
 			goto out_destroy_policy;
 		}
@@ -1646,13 +1651,12 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
 }
 
 /**
- * cpufreq_out_of_sync - If actual and saved CPU frequency differs, we're
- *	in deep trouble.
- * @policy: policy managing CPUs
- * @new_freq: CPU frequency the CPU actually runs at
+ * cpufreq_out_of_sync - Fix up actual and saved CPU frequency difference.
+ * @policy: Policy managing CPUs.
+ * @new_freq: New CPU frequency.
 *
- * We adjust to current frequency first, and need to clean up later.
- * So either call to cpufreq_update_policy() or schedule handle_update()).
+ * Adjust to the current frequency first and clean up later by either calling
+ * cpufreq_update_policy(), or scheduling handle_update().
 */
 static void cpufreq_out_of_sync(struct cpufreq_policy *policy,
 				unsigned int new_freq)
@@ -1832,7 +1836,7 @@ int cpufreq_generic_suspend(struct cpufreq_policy *policy)
 EXPORT_SYMBOL(cpufreq_generic_suspend);
 
 /**
- * cpufreq_suspend() - Suspend CPUFreq governors
+ * cpufreq_suspend() - Suspend CPUFreq governors.
 *
 * Called during system wide Suspend/Hibernate cycles for suspending governors
 * as some platforms can't change frequency after this point in suspend cycle.
@@ -1868,7 +1872,7 @@ suspend:
 }
 
 /**
- * cpufreq_resume() - Resume CPUFreq governors
+ * cpufreq_resume() - Resume CPUFreq governors.
 *
 * Called during system wide Suspend/Hibernate cycle for resuming governors that
 * are suspended with cpufreq_suspend().
@@ -1920,10 +1924,10 @@ bool cpufreq_driver_test_flags(u16 flags)
 }
 
 /**
- * cpufreq_get_current_driver - return current driver's name
+ * cpufreq_get_current_driver - Return the current driver's name.
 *
- * Return the name string of the currently loaded cpufreq driver
- * or NULL, if none.
+ * Return the name string of the currently registered cpufreq driver or NULL if
+ * none.
 */
 const char *cpufreq_get_current_driver(void)
 {
@@ -1935,10 +1939,10 @@ const char *cpufreq_get_current_driver(void)
 EXPORT_SYMBOL_GPL(cpufreq_get_current_driver);
 
 /**
- * cpufreq_get_driver_data - return current driver data
+ * cpufreq_get_driver_data - Return current driver data.
 *
- * Return the private data of the currently loaded cpufreq
- * driver, or NULL if no cpufreq driver is loaded.
+ * Return the private data of the currently registered cpufreq driver, or NULL
+ * if no cpufreq driver has been registered.
 */
 void *cpufreq_get_driver_data(void)
 {
@@ -1954,17 +1958,16 @@ EXPORT_SYMBOL_GPL(cpufreq_get_driver_data);
 *********************************************************************/
 
 /**
- * cpufreq_register_notifier - register a driver with cpufreq
- * @nb: notifier function to register
- * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
+ * cpufreq_register_notifier - Register a notifier with cpufreq.
+ * @nb: notifier function to register.
+ * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER.
 *
- * Add a driver to one of two lists: either a list of drivers that
- * are notified about clock rate changes (once before and once after
- * the transition), or a list of drivers that are notified about
- * changes in cpufreq policy.
+ * Add a notifier to one of two lists: either a list of notifiers that run on
+ * clock rate changes (once before and once after every transition), or a list
+ * of notifiers that run on cpufreq policy changes.
 *
- * This function may sleep, and has the same return conditions as
- * blocking_notifier_chain_register.
+ * This function may sleep and it has the same return values as
+ * blocking_notifier_chain_register().
 */
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
 {
@@ -2001,14 +2004,14 @@ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
 EXPORT_SYMBOL(cpufreq_register_notifier);
 
 /**
- * cpufreq_unregister_notifier - unregister a driver with cpufreq
- * @nb: notifier block to be unregistered
- * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER
+ * cpufreq_unregister_notifier - Unregister a notifier from cpufreq.
+ * @nb: notifier block to be unregistered.
+ * @list: CPUFREQ_TRANSITION_NOTIFIER or CPUFREQ_POLICY_NOTIFIER.
 *
- * Remove a driver from the CPU frequency notifier list.
+ * Remove a notifier from one of the cpufreq notifier lists.
 *
- * This function may sleep, and has the same return conditions as
- * blocking_notifier_chain_unregister.
+ * This function may sleep and it has the same return values as
+ * blocking_notifier_chain_unregister().
 */
 int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list)
 {
@@ -2123,7 +2126,7 @@ static int __target_intermediate(struct cpufreq_policy *policy,
 static int __target_index(struct cpufreq_policy *policy, int index)
 {
 	struct cpufreq_freqs freqs = {.old = policy->cur, .flags = 0};
-	unsigned int intermediate_freq = 0;
+	unsigned int restore_freq, intermediate_freq = 0;
 	unsigned int newfreq = policy->freq_table[index].frequency;
 	int retval = -EINVAL;
 	bool notify;
@@ -2131,6 +2134,9 @@ static int __target_index(struct cpufreq_policy *policy, int index)
 	if (newfreq == policy->cur)
 		return 0;
 
+	/* Save last value to restore later on errors */
+	restore_freq = policy->cur;
+
 	notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION);
 	if (notify) {
 		/* Handle switching to intermediate frequency */
@@ -2168,7 +2174,7 @@ static int __target_index(struct cpufreq_policy *policy, int index)
 	 */
 	if (unlikely(retval && intermediate_freq)) {
 		freqs.old = intermediate_freq;
-		freqs.new = policy->restore_freq;
+		freqs.new = restore_freq;
 		cpufreq_freq_transition_begin(policy, &freqs);
 		cpufreq_freq_transition_end(policy, &freqs, 0);
 	}
@@ -2203,9 +2209,6 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
 	    !(cpufreq_driver->flags & CPUFREQ_NEED_UPDATE_LIMITS))
 		return 0;
 
-	/* Save last value to restore later on errors */
-	policy->restore_freq = policy->cur;
-
 	if (cpufreq_driver->target)
 		return cpufreq_driver->target(policy, target_freq, relation);
 
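As a usage sketch for the notifier interface whose kerneldoc is polished
above, here is a minimal, hypothetical module that logs every completed
frequency transition (freqs->policy identifies the affected policy on
recent kernels; error handling kept minimal):

	#include <linux/cpufreq.h>
	#include <linux/module.h>

	static int freq_transition_cb(struct notifier_block *nb,
				      unsigned long state, void *data)
	{
		struct cpufreq_freqs *freqs = data;

		/* called once before (PRECHANGE) and once after (POSTCHANGE) */
		if (state == CPUFREQ_POSTCHANGE)
			pr_info("cpu%u: %u kHz -> %u kHz\n",
				freqs->policy->cpu, freqs->old, freqs->new);

		return NOTIFY_OK;
	}

	static struct notifier_block freq_nb = {
		.notifier_call = freq_transition_cb,
	};

	static int __init freq_watch_init(void)
	{
		return cpufreq_register_notifier(&freq_nb,
						 CPUFREQ_TRANSITION_NOTIFIER);
	}

	static void __exit freq_watch_exit(void)
	{
		cpufreq_unregister_notifier(&freq_nb,
					    CPUFREQ_TRANSITION_NOTIFIER);
	}

	module_init(freq_watch_init);
	module_exit(freq_watch_exit);
	MODULE_LICENSE("GPL");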
@@ -9,9 +9,9 @@
 #include <linux/cpu.h>
 #include <linux/cpufreq.h>
 #include <linux/module.h>
+#include <linux/sched/clock.h>
 #include <linux/slab.h>
 
 
 struct cpufreq_stats {
 	unsigned int total_trans;
 	unsigned long long last_time;
@@ -30,7 +30,7 @@ struct cpufreq_stats {
 static void cpufreq_stats_update(struct cpufreq_stats *stats,
 				 unsigned long long time)
 {
-	unsigned long long cur_time = get_jiffies_64();
+	unsigned long long cur_time = local_clock();
 
 	stats->time_in_state[stats->last_index] += cur_time - time;
 	stats->last_time = cur_time;
@@ -42,7 +42,7 @@ static void cpufreq_stats_reset_table(struct cpufreq_stats *stats)
 
 	memset(stats->time_in_state, 0, count * sizeof(u64));
 	memset(stats->trans_table, 0, count * count * sizeof(int));
-	stats->last_time = get_jiffies_64();
+	stats->last_time = local_clock();
 	stats->total_trans = 0;
 
 	/* Adjust for the time elapsed since reset was requested */
@@ -82,18 +82,18 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
 				 * before the reset_pending read above.
 				 */
 				smp_rmb();
-				time = get_jiffies_64() - READ_ONCE(stats->reset_time);
+				time = local_clock() - READ_ONCE(stats->reset_time);
 			} else {
 				time = 0;
 			}
 		} else {
 			time = stats->time_in_state[i];
 			if (i == stats->last_index)
-				time += get_jiffies_64() - stats->last_time;
+				time += local_clock() - stats->last_time;
 		}
 
 		len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
-			       jiffies_64_to_clock_t(time));
+			       nsec_to_clock_t(time));
 	}
 	return len;
 }
@@ -109,7 +109,7 @@ static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
 	 * Defer resetting of stats to cpufreq_stats_record_transition() to
 	 * avoid races.
 	 */
-	WRITE_ONCE(stats->reset_time, get_jiffies_64());
+	WRITE_ONCE(stats->reset_time, local_clock());
 	/*
 	 * The memory barrier below is to prevent the readers of reset_time from
 	 * seeing a stale or partially updated value.
@@ -249,7 +249,7 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
 		stats->freq_table[i++] = pos->frequency;
 
 	stats->state_num = i;
-	stats->last_time = get_jiffies_64();
+	stats->last_time = local_clock();
 	stats->last_index = freq_table_get_index(stats, policy->cur);
 
 	policy->stats = stats;
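Despite the internal switch from jiffies to local_clock(), the sysfs ABI is
unchanged: nsec_to_clock_t() converts the nanosecond counts back to USER_HZ
ticks, so existing readers keep working while the accounting gains
sub-jiffy resolution. A small userspace reader (assumes cpu0 has cpufreq
stats enabled):

	#include <stdio.h>

	int main(void)
	{
		const char *p =
			"/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
		unsigned int khz;
		unsigned long long ticks;
		FILE *f = fopen(p, "r");

		if (!f)
			return 1;
		/* each line: <frequency-kHz> <time in USER_HZ ticks> */
		while (fscanf(f, "%u %llu", &khz, &ticks) == 2)
			printf("%8u kHz: %llu ticks\n", khz, ticks);
		fclose(f);
		return 0;
	}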
@@ -101,6 +101,13 @@ out_put_node:
 }
 module_init(hb_cpufreq_driver_init);
 
+static const struct of_device_id __maybe_unused hb_cpufreq_of_match[] = {
+	{ .compatible = "calxeda,highbank" },
+	{ .compatible = "calxeda,ecx-2000" },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, hb_cpufreq_of_match);
+
 MODULE_AUTHOR("Mark Langsdorf <mark.langsdorf@calxeda.com>");
 MODULE_DESCRIPTION("Calxeda Highbank cpufreq driver");
 MODULE_LICENSE("GPL");
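MODULE_DEVICE_TABLE(of, ...) embeds the table's compatible strings in the
module's metadata as alias entries, which lets udev/kmod autoload the
driver when a matching device-tree node shows up; without it, a driver
like this one only probes when built in or loaded by hand. A minimal
sketch of the pattern (the compatible string is a placeholder):

	#include <linux/module.h>
	#include <linux/of.h>

	static const struct of_device_id example_of_match[] = {
		{ .compatible = "vendor,example-soc" },	/* placeholder */
		{ /* sentinel */ },
	};
	MODULE_DEVICE_TABLE(of, example_of_match);

	MODULE_DESCRIPTION("Example: DT-based module autoloading");
	MODULE_LICENSE("GPL");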
@@ -2569,14 +2569,13 @@ static int intel_cpufreq_update_pstate(struct cpufreq_policy *policy,
 	int old_pstate = cpu->pstate.current_pstate;
 
 	target_pstate = intel_pstate_prepare_request(cpu, target_pstate);
-	if (hwp_active) {
+	if (hwp_active)
 		intel_cpufreq_adjust_hwp(cpu, target_pstate,
 					 policy->strict_target, fast_switch);
-		cpu->pstate.current_pstate = target_pstate;
-	} else if (target_pstate != old_pstate) {
+	else if (target_pstate != old_pstate)
 		intel_cpufreq_adjust_perf_ctl(cpu, target_pstate, fast_switch);
-		cpu->pstate.current_pstate = target_pstate;
-	}
+
+	cpu->pstate.current_pstate = target_pstate;
 
 	intel_cpufreq_trace(cpu, fast_switch ? INTEL_PSTATE_TRACE_FAST_SWITCH :
 					       INTEL_PSTATE_TRACE_TARGET, old_pstate);
@@ -216,6 +216,7 @@ static struct platform_driver ls1x_cpufreq_platdrv = {
 
 module_platform_driver(ls1x_cpufreq_platdrv);
 
+MODULE_ALIAS("platform:ls1x-cpufreq");
 MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
 MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
 MODULE_LICENSE("GPL");
@@ -532,6 +532,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
 	{ .compatible = "mediatek,mt2712", },
 	{ .compatible = "mediatek,mt7622", },
 	{ .compatible = "mediatek,mt7623", },
+	{ .compatible = "mediatek,mt8167", },
 	{ .compatible = "mediatek,mt817x", },
 	{ .compatible = "mediatek,mt8173", },
 	{ .compatible = "mediatek,mt8176", },
@@ -540,6 +541,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
 
 	{ }
 };
+MODULE_DEVICE_TABLE(of, mtk_cpufreq_machines);
 
 static int __init mtk_cpufreq_driver_init(void)
 {
@@ -572,6 +574,7 @@ static int __init mtk_cpufreq_driver_init(void)
 	pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
 	if (IS_ERR(pdev)) {
 		pr_err("failed to register mtk-cpufreq platform device\n");
+		platform_driver_unregister(&mtk_cpufreq_platdrv);
 		return PTR_ERR(pdev);
 	}
 
@@ -397,19 +397,19 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 
 free_genpd_opp:
	for_each_possible_cpu(cpu) {
-		if (IS_ERR_OR_NULL(drv->genpd_opp_tables[cpu]))
+		if (IS_ERR(drv->genpd_opp_tables[cpu]))
 			break;
 		dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
 	}
 	kfree(drv->genpd_opp_tables);
 free_opp:
	for_each_possible_cpu(cpu) {
-		if (IS_ERR_OR_NULL(drv->names_opp_tables[cpu]))
+		if (IS_ERR(drv->names_opp_tables[cpu]))
 			break;
 		dev_pm_opp_put_prop_name(drv->names_opp_tables[cpu]);
 	}
	for_each_possible_cpu(cpu) {
-		if (IS_ERR_OR_NULL(drv->hw_opp_tables[cpu]))
+		if (IS_ERR(drv->hw_opp_tables[cpu]))
 			break;
 		dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
 	}
@@ -430,12 +430,9 @@ static int qcom_cpufreq_remove(struct platform_device *pdev)
 	platform_device_unregister(cpufreq_dt_pdev);
 
	for_each_possible_cpu(cpu) {
-		if (drv->names_opp_tables[cpu])
-			dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]);
-		if (drv->hw_opp_tables[cpu])
-			dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
-		if (drv->genpd_opp_tables[cpu])
-			dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
+		dev_pm_opp_put_supported_hw(drv->names_opp_tables[cpu]);
+		dev_pm_opp_put_supported_hw(drv->hw_opp_tables[cpu]);
+		dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
 	}
 
 	kfree(drv->names_opp_tables);
@@ -464,6 +461,7 @@ static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
 	{ .compatible = "qcom,msm8960", .data = &match_data_krait },
 	{},
 };
+MODULE_DEVICE_TABLE(of, qcom_cpufreq_match_list);
 
 /*
 * Since the driver depends on smem and nvmem drivers, which may
@@ -126,6 +126,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
 	struct scmi_data *priv;
 	struct cpufreq_frequency_table *freq_table;
 	struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
+	bool power_scale_mw;
 
 	cpu_dev = get_cpu_device(policy->cpu);
 	if (!cpu_dev) {
@@ -189,7 +190,9 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
 	policy->fast_switch_possible =
 		handle->perf_ops->fast_switch_possible(handle, cpu_dev);
 
-	em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus);
+	power_scale_mw = handle->perf_ops->power_scale_mw_get(handle);
+	em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, policy->cpus,
+				    power_scale_mw);
 
 	return 0;
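em_dev_register_perf_domain() grows a fifth argument in this cycle: a flag
telling the Energy Model whether the callback reports power in milliwatts
or in an abstract scale. A hedged sketch of a caller (the callback values
are made up; a real driver derives them from firmware or OPP data):

	#include <linux/cpumask.h>
	#include <linux/device.h>
	#include <linux/energy_model.h>

	/* Illustrative callback: power/frequency for one performance state */
	static int example_get_power(unsigned long *power, unsigned long *freq,
				     struct device *cpu_dev)
	{
		*freq = 1000000;	/* kHz, illustrative */
		*power = 500;		/* mW or abstract units */
		return 0;
	}

	static void example_register(struct device *cpu_dev, int nr_opp,
				     struct cpumask *cpus, bool scale_is_mw)
	{
		struct em_data_callback em_cb = EM_DATA_CB(example_get_power);

		/* last argument: true if the callback reports milliwatts */
		em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, cpus,
					    scale_is_mw);
	}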
@@ -233,6 +233,7 @@ static struct platform_driver scpi_cpufreq_platdrv = {
 };
 module_platform_driver(scpi_cpufreq_platdrv);
 
+MODULE_ALIAS("platform:scpi-cpufreq");
 MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
 MODULE_DESCRIPTION("ARM SCPI CPUFreq interface driver");
 MODULE_LICENSE("GPL v2");
@@ -223,7 +223,8 @@ use_defaults:
 	opp_table = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
 	if (IS_ERR(opp_table)) {
 		dev_err(dev, "Failed to set supported hardware\n");
-		return PTR_ERR(opp_table);
+		ret = PTR_ERR(opp_table);
+		goto err_put_prop_name;
 	}
 
 	dev_dbg(dev, "pcode: %d major: %d minor: %d substrate: %d\n",
@@ -232,6 +233,10 @@ use_defaults:
 		version[0], version[1], version[2]);
 
 	return 0;
+
+err_put_prop_name:
+	dev_pm_opp_put_prop_name(opp_table);
+	return ret;
 }
 
 static int sti_cpufreq_fetch_syscon_registers(void)
@@ -292,6 +297,13 @@ register_cpufreq_dt:
 }
 module_init(sti_cpufreq_init);
 
+static const struct of_device_id __maybe_unused sti_cpufreq_of_match[] = {
+	{ .compatible = "st,stih407" },
+	{ .compatible = "st,stih410" },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, sti_cpufreq_of_match);
+
 MODULE_DESCRIPTION("STMicroelectronics CPUFreq/OPP driver");
 MODULE_AUTHOR("Ajitpal Singh <ajitpal.singh@st.com>");
 MODULE_AUTHOR("Lee Jones <lee.jones@linaro.org>");
@@ -167,6 +167,7 @@ static const struct of_device_id sun50i_cpufreq_match_list[] = {
 	{ .compatible = "allwinner,sun50i-h6" },
 	{}
 };
+MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
 
 static const struct of_device_id *sun50i_cpufreq_match_node(void)
 {
@@ -12,35 +12,52 @@
 #include <soc/tegra/bpmp.h>
 #include <soc/tegra/bpmp-abi.h>
 
-#define EDVD_CORE_VOLT_FREQ(core)	(0x20 + (core) * 0x4)
-#define EDVD_CORE_VOLT_FREQ_F_SHIFT	0
-#define EDVD_CORE_VOLT_FREQ_F_MASK	0xffff
-#define EDVD_CORE_VOLT_FREQ_V_SHIFT	16
+#define TEGRA186_NUM_CLUSTERS		2
+#define EDVD_OFFSET_A57(core)		((SZ_64K * 6) + (0x20 + (core) * 0x4))
+#define EDVD_OFFSET_DENVER(core)	((SZ_64K * 7) + (0x20 + (core) * 0x4))
+#define EDVD_CORE_VOLT_FREQ_F_SHIFT	0
+#define EDVD_CORE_VOLT_FREQ_F_MASK	0xffff
+#define EDVD_CORE_VOLT_FREQ_V_SHIFT	16
 
-struct tegra186_cpufreq_cluster_info {
-	unsigned long offset;
-	int cpus[4];
+struct tegra186_cpufreq_cpu {
 	unsigned int bpmp_cluster_id;
+	unsigned int edvd_offset;
 };
 
-#define NO_CPU -1
-static const struct tegra186_cpufreq_cluster_info tegra186_clusters[] = {
-	/* Denver cluster */
+static const struct tegra186_cpufreq_cpu tegra186_cpus[] = {
+	/* CPU0 - A57 Cluster */
 	{
-		.offset = SZ_64K * 7,
-		.cpus = { 1, 2, NO_CPU, NO_CPU },
-		.bpmp_cluster_id = 0,
-	},
-	/* A57 cluster */
-	{
-		.offset = SZ_64K * 6,
-		.cpus = { 0, 3, 4, 5 },
 		.bpmp_cluster_id = 1,
+		.edvd_offset = EDVD_OFFSET_A57(0)
 	},
+	/* CPU1 - Denver Cluster */
+	{
+		.bpmp_cluster_id = 0,
+		.edvd_offset = EDVD_OFFSET_DENVER(0)
+	},
+	/* CPU2 - Denver Cluster */
+	{
+		.bpmp_cluster_id = 0,
+		.edvd_offset = EDVD_OFFSET_DENVER(1)
+	},
+	/* CPU3 - A57 Cluster */
+	{
+		.bpmp_cluster_id = 1,
+		.edvd_offset = EDVD_OFFSET_A57(1)
+	},
+	/* CPU4 - A57 Cluster */
+	{
+		.bpmp_cluster_id = 1,
+		.edvd_offset = EDVD_OFFSET_A57(2)
+	},
+	/* CPU5 - A57 Cluster */
+	{
+		.bpmp_cluster_id = 1,
+		.edvd_offset = EDVD_OFFSET_A57(3)
+	},
 };
 
 struct tegra186_cpufreq_cluster {
-	const struct tegra186_cpufreq_cluster_info *info;
 	struct cpufreq_frequency_table *table;
 	u32 ref_clk_khz;
 	u32 div;
@@ -48,36 +65,18 @@ struct tegra186_cpufreq_cluster {
 
 struct tegra186_cpufreq_data {
 	void __iomem *regs;
-
-	size_t num_clusters;
 	struct tegra186_cpufreq_cluster *clusters;
+	const struct tegra186_cpufreq_cpu *cpus;
 };
 
 static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
 {
 	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
-	unsigned int i;
-
-	for (i = 0; i < data->num_clusters; i++) {
-		struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
-		const struct tegra186_cpufreq_cluster_info *info =
-			cluster->info;
-		int core;
-
-		for (core = 0; core < ARRAY_SIZE(info->cpus); core++) {
-			if (info->cpus[core] == policy->cpu)
-				break;
-		}
-		if (core == ARRAY_SIZE(info->cpus))
-			continue;
-
-		policy->driver_data =
-			data->regs + info->offset + EDVD_CORE_VOLT_FREQ(core);
-		policy->freq_table = cluster->table;
-		break;
-	}
+	unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id;
 
+	policy->freq_table = data->clusters[cluster].table;
 	policy->cpuinfo.transition_latency = 300 * 1000;
+	policy->driver_data = NULL;
 
 	return 0;
 }
@@ -85,11 +84,12 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
 static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
 				       unsigned int index)
 {
+	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
 	struct cpufreq_frequency_table *tbl = policy->freq_table + index;
-	void __iomem *edvd_reg = policy->driver_data;
+	unsigned int edvd_offset = data->cpus[policy->cpu].edvd_offset;
 	u32 edvd_val = tbl->driver_data;
 
-	writel(edvd_val, edvd_reg);
+	writel(edvd_val, data->regs + edvd_offset);
 
 	return 0;
 }
@@ -97,35 +97,22 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
 static unsigned int tegra186_cpufreq_get(unsigned int cpu)
 {
 	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
+	struct tegra186_cpufreq_cluster *cluster;
 	struct cpufreq_policy *policy;
-	void __iomem *edvd_reg;
-	unsigned int i, freq = 0;
+	unsigned int edvd_offset, cluster_id;
 	u32 ndiv;
 
 	policy = cpufreq_cpu_get(cpu);
 	if (!policy)
 		return 0;
 
-	edvd_reg = policy->driver_data;
-	ndiv = readl(edvd_reg) & EDVD_CORE_VOLT_FREQ_F_MASK;
-
-	for (i = 0; i < data->num_clusters; i++) {
-		struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
-		int core;
-
-		for (core = 0; core < ARRAY_SIZE(cluster->info->cpus); core++) {
-			if (cluster->info->cpus[core] != policy->cpu)
-				continue;
-
-			freq = (cluster->ref_clk_khz * ndiv) / cluster->div;
-			goto out;
-		}
-	}
-
-out:
+	edvd_offset = data->cpus[policy->cpu].edvd_offset;
+	ndiv = readl(data->regs + edvd_offset) & EDVD_CORE_VOLT_FREQ_F_MASK;
+	cluster_id = data->cpus[policy->cpu].bpmp_cluster_id;
+	cluster = &data->clusters[cluster_id];
 	cpufreq_cpu_put(policy);
 
-	return freq;
+	return (cluster->ref_clk_khz * ndiv) / cluster->div;
 }
 
 static struct cpufreq_driver tegra186_cpufreq_driver = {
@@ -141,7 +128,7 @@ static struct cpufreq_driver tegra186_cpufreq_driver = {
 
 static struct cpufreq_frequency_table *init_vhint_table(
 	struct platform_device *pdev, struct tegra_bpmp *bpmp,
-	struct tegra186_cpufreq_cluster *cluster)
+	struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id)
 {
 	struct cpufreq_frequency_table *table;
 	struct mrq_cpu_vhint_request req;
@@ -160,7 +147,7 @@ static struct cpufreq_frequency_table *init_vhint_table(
 
 	memset(&req, 0, sizeof(req));
 	req.addr = phys;
-	req.cluster_id = cluster->info->bpmp_cluster_id;
+	req.cluster_id = cluster_id;
 
 	memset(&msg, 0, sizeof(msg));
 	msg.mrq = MRQ_CPU_VHINT;
@@ -234,12 +221,12 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 	if (!data)
 		return -ENOMEM;
 
-	data->clusters = devm_kcalloc(&pdev->dev, ARRAY_SIZE(tegra186_clusters),
+	data->clusters = devm_kcalloc(&pdev->dev, TEGRA186_NUM_CLUSTERS,
 				      sizeof(*data->clusters), GFP_KERNEL);
 	if (!data->clusters)
 		return -ENOMEM;
 
-	data->num_clusters = ARRAY_SIZE(tegra186_clusters);
+	data->cpus = tegra186_cpus;
 
 	bpmp = tegra_bpmp_get(&pdev->dev);
 	if (IS_ERR(bpmp))
@@ -251,11 +238,10 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 		goto put_bpmp;
 	}
 
-	for (i = 0; i < data->num_clusters; i++) {
+	for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) {
 		struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
 
-		cluster->info = &tegra186_clusters[i];
-		cluster->table = init_vhint_table(pdev, bpmp, cluster);
+		cluster->table = init_vhint_table(pdev, bpmp, cluster, i);
 		if (IS_ERR(cluster->table)) {
 			err = PTR_ERR(cluster->table);
 			goto put_bpmp;
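The per-CPU table replaces a nested cluster/core search with a direct
lookup: each logical CPU maps straight to an EDVD register offset inside
its cluster's 64 KiB window. A quick userspace check of the offset math
(constants copied from the defines above):

	#include <stdio.h>

	#define SZ_64K 0x10000UL
	#define EDVD_OFFSET_A57(core)    ((SZ_64K * 6) + (0x20 + (core) * 0x4))
	#define EDVD_OFFSET_DENVER(core) ((SZ_64K * 7) + (0x20 + (core) * 0x4))

	int main(void)
	{
		printf("A57 core0:    0x%lx\n", EDVD_OFFSET_A57(0));	/* 0x60020 */
		printf("Denver core1: 0x%lx\n", EDVD_OFFSET_DENVER(1));	/* 0x70024 */
		return 0;
	}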
@@ -21,7 +21,6 @@
 #define KHZ                     1000
 #define REF_CLK_MHZ             408 /* 408 MHz */
 #define US_DELAY                500
-#define US_DELAY_MIN            2
 #define CPUFREQ_TBL_STEP_HZ     (50 * KHZ * KHZ)
 #define MAX_CNT                 ~0U
 
@@ -44,7 +43,6 @@ struct tegra194_cpufreq_data {
 
 struct tegra_cpu_ctr {
 	u32 cpu;
-	u32 delay;
 	u32 coreclk_cnt, last_coreclk_cnt;
 	u32 refclk_cnt, last_refclk_cnt;
 };
@@ -112,7 +110,7 @@ static void tegra_read_counters(struct work_struct *work)
 	val = read_freq_feedback();
 	c->last_refclk_cnt = lower_32_bits(val);
 	c->last_coreclk_cnt = upper_32_bits(val);
-	udelay(c->delay);
+	udelay(US_DELAY);
 	val = read_freq_feedback();
 	c->refclk_cnt = lower_32_bits(val);
 	c->coreclk_cnt = upper_32_bits(val);
@@ -139,7 +137,7 @@ static void tegra_read_counters(struct work_struct *work)
 * @cpu - logical cpu whose freq to be updated
 * Returns freq in KHz on success, 0 if cpu is offline
 */
-static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
+static unsigned int tegra194_calculate_speed(u32 cpu)
 {
 	struct read_counters_work read_counters_work;
 	struct tegra_cpu_ctr c;
@@ -153,7 +151,6 @@ static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
 	 * interrupts enabled.
 	 */
 	read_counters_work.c.cpu = cpu;
-	read_counters_work.c.delay = delay;
 	INIT_WORK_ONSTACK(&read_counters_work.work, tegra_read_counters);
 	queue_work_on(cpu, read_counters_wq, &read_counters_work.work);
 	flush_work(&read_counters_work.work);
@@ -180,9 +177,61 @@ static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
 	return (rate_mhz * KHZ); /* in KHz */
 }
 
+static void get_cpu_ndiv(void *ndiv)
+{
+	u64 ndiv_val;
+
+	asm volatile("mrs %0, s3_0_c15_c0_4" : "=r" (ndiv_val) : );
+
+	*(u64 *)ndiv = ndiv_val;
+}
+
+static void set_cpu_ndiv(void *data)
+{
+	struct cpufreq_frequency_table *tbl = data;
+	u64 ndiv_val = (u64)tbl->driver_data;
+
+	asm volatile("msr s3_0_c15_c0_4, %0" : : "r" (ndiv_val));
+}
+
 static unsigned int tegra194_get_speed(u32 cpu)
 {
-	return tegra194_get_speed_common(cpu, US_DELAY);
+	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+	struct cpufreq_frequency_table *pos;
+	unsigned int rate;
+	u64 ndiv;
+	int ret;
+	u32 cl;
+
+	smp_call_function_single(cpu, get_cpu_cluster, &cl, true);
+
+	/* reconstruct actual cpu freq using counters */
+	rate = tegra194_calculate_speed(cpu);
+
+	/* get last written ndiv value */
+	ret = smp_call_function_single(cpu, get_cpu_ndiv, &ndiv, true);
+	if (WARN_ON_ONCE(ret))
+		return rate;
+
+	/*
+	 * If the reconstructed frequency has acceptable delta from
+	 * the last written value, then return freq corresponding
+	 * to the last written ndiv value from freq_table. This is
+	 * done to return consistent value.
+	 */
+	cpufreq_for_each_valid_entry(pos, data->tables[cl]) {
+		if (pos->driver_data != ndiv)
+			continue;
+
+		if (abs(pos->frequency - rate) > 115200) {
+			pr_warn("cpufreq: cpu%d,cur:%u,set:%u,set ndiv:%llu\n",
+				cpu, rate, pos->frequency, ndiv);
+		} else {
+			rate = pos->frequency;
+		}
+		break;
+	}
+
+	return rate;
 }
 
 static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
@@ -196,9 +245,6 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 	if (cl >= data->num_clusters)
 		return -EINVAL;
 
-	/* boot freq */
-	policy->cur = tegra194_get_speed_common(policy->cpu, US_DELAY_MIN);
-
 	/* set same policy for all cpus in a cluster */
	for (cpu = (cl * 2); cpu < ((cl + 1) * 2); cpu++)
 		cpumask_set_cpu(cpu, policy->cpus);
@@ -209,14 +255,6 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static void set_cpu_ndiv(void *data)
-{
-	struct cpufreq_frequency_table *tbl = data;
-	u64 ndiv_val = (u64)tbl->driver_data;
-
-	asm volatile("msr s3_0_c15_c0_4, %0" : : "r" (ndiv_val));
-}
-
 static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
 				       unsigned int index)
 {
@@ -591,6 +591,7 @@ static struct platform_driver ve_spc_cpufreq_platdrv = {
 };
 module_platform_driver(ve_spc_cpufreq_platdrv);
 
+MODULE_ALIAS("platform:vexpress-spc-cpufreq");
 MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
 MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
 MODULE_DESCRIPTION("Vexpress SPC ARM big LITTLE cpufreq driver");
@@ -327,6 +327,8 @@ struct device *psci_dt_attach_cpu(int cpu)
 	if (cpu_online(cpu))
 		pm_runtime_get_sync(dev);
 
+	dev_pm_syscore_device(dev, true);
+
 	return dev;
 }
 
@@ -19,6 +19,7 @@
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/psci.h>
+#include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -52,8 +53,9 @@ static inline int psci_enter_state(int idx, u32 state)
 	return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter, idx, state);
 }
 
-static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
-					struct cpuidle_driver *drv, int idx)
+static int __psci_enter_domain_idle_state(struct cpuidle_device *dev,
+					  struct cpuidle_driver *drv, int idx,
+					  bool s2idle)
 {
 	struct psci_cpuidle_data *data = this_cpu_ptr(&psci_cpuidle_data);
 	u32 *states = data->psci_states;
@@ -66,7 +68,12 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
 		return -1;
 
 	/* Do runtime PM to manage a hierarchical CPU toplogy. */
-	RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev));
+	rcu_irq_enter_irqson();
+	if (s2idle)
+		dev_pm_genpd_suspend(pd_dev);
+	else
+		pm_runtime_put_sync_suspend(pd_dev);
+	rcu_irq_exit_irqson();
 
 	state = psci_get_domain_state();
 	if (!state)
@@ -74,7 +81,12 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
 
 	ret = psci_cpu_suspend_enter(state) ? -1 : idx;
 
-	RCU_NONIDLE(pm_runtime_get_sync(pd_dev));
+	rcu_irq_enter_irqson();
+	if (s2idle)
+		dev_pm_genpd_resume(pd_dev);
+	else
+		pm_runtime_get_sync(pd_dev);
+	rcu_irq_exit_irqson();
 
 	cpu_pm_exit();
 
@@ -83,6 +95,19 @@ static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
 	return ret;
 }
 
+static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
+					struct cpuidle_driver *drv, int idx)
+{
+	return __psci_enter_domain_idle_state(dev, drv, idx, false);
+}
+
+static int psci_enter_s2idle_domain_idle_state(struct cpuidle_device *dev,
+					       struct cpuidle_driver *drv,
+					       int idx)
+{
+	return __psci_enter_domain_idle_state(dev, drv, idx, true);
+}
+
 static int psci_idle_cpuhp_up(unsigned int cpu)
 {
 	struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
@@ -170,6 +195,7 @@ static int psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
 	 * deeper states.
 	 */
 	drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
+	drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state;
 	psci_cpuidle_use_cpuhp = true;
 
 	return 0;
@@ -368,6 +368,19 @@ void cpuidle_reflect(struct cpuidle_device *dev, int index)
 	cpuidle_curr_governor->reflect(dev, index);
 }
 
+/*
+ * Min polling interval of 10usec is a guess. It is assuming that
+ * for most users, the time for a single ping-pong workload like
+ * perf bench pipe would generally complete within 10usec but
+ * this is hardware dependent. Actual time can be estimated with
+ *
+ * perf bench sched pipe -l 10000
+ *
+ * Run multiple times to avoid cpufreq effects.
+ */
+#define CPUIDLE_POLL_MIN 10000
+#define CPUIDLE_POLL_MAX (TICK_NSEC / 16)
+
 /**
 * cpuidle_poll_time - return amount of time to poll for,
 *                     governors can override dev->poll_limit_ns if necessary
@@ -382,15 +395,23 @@ u64 cpuidle_poll_time(struct cpuidle_driver *drv,
 	int i;
 	u64 limit_ns;
 
+	BUILD_BUG_ON(CPUIDLE_POLL_MIN > CPUIDLE_POLL_MAX);
+
 	if (dev->poll_limit_ns)
 		return dev->poll_limit_ns;
 
-	limit_ns = TICK_NSEC;
+	limit_ns = CPUIDLE_POLL_MAX;
	for (i = 1; i < drv->state_count; i++) {
+		u64 state_limit;
+
 		if (dev->states_usage[i].disable)
 			continue;
 
-		limit_ns = drv->states[i].target_residency_ns;
+		state_limit = drv->states[i].target_residency_ns;
+		if (state_limit < CPUIDLE_POLL_MIN)
+			continue;
+
+		limit_ns = min_t(u64, state_limit, CPUIDLE_POLL_MAX);
 		break;
 	}
 
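The net effect of the rework: instead of polling for up to a full tick, the
poll limit becomes the target residency of the shallowest enabled C-state
that is at least 10 usec, clamped to TICK_NSEC/16. A userspace walk-through
of the same selection with illustrative residencies (assuming HZ=1000):

	#include <stdio.h>

	#define CPUIDLE_POLL_MIN 10000ULL		/* 10 usec */
	#define TICK_NSEC 1000000ULL			/* 1 ms tick */
	#define CPUIDLE_POLL_MAX (TICK_NSEC / 16)	/* 62.5 usec */

	int main(void)
	{
		/* target residencies (ns) of states 1..3; state 0 polls */
		unsigned long long residency[] = { 2000, 20000, 600000 };
		unsigned long long limit = CPUIDLE_POLL_MAX;

		for (int i = 0; i < 3; i++) {
			if (residency[i] < CPUIDLE_POLL_MIN)
				continue;	/* too shallow, keep looking */
			limit = residency[i] < CPUIDLE_POLL_MAX ?
				residency[i] : CPUIDLE_POLL_MAX;
			break;
		}
		printf("poll limit: %llu ns\n", limit);	/* prints 20000 */
		return 0;
	}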
@@ -121,16 +121,6 @@ config ARM_TEGRA_DEVFREQ
 	  It reads ACTMON counters of memory controllers and adjusts the
 	  operating frequencies and voltages with OPP support.
 
-config ARM_TEGRA20_DEVFREQ
-	tristate "NVIDIA Tegra20 DEVFREQ Driver"
-	depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST
-	depends on COMMON_CLK
-	select DEVFREQ_GOV_SIMPLE_ONDEMAND
-	help
-	  This adds the DEVFREQ driver for the Tegra20 family of SoCs.
-	  It reads Memory Controller counters and adjusts the operating
-	  frequencies and voltages with OPP support.
-
 config ARM_RK3399_DMC_DEVFREQ
 	tristate "ARM RK3399 DMC DEVFREQ Driver"
 	depends on (ARCH_ROCKCHIP && HAVE_ARM_SMCCC) || \
@@ -13,7 +13,6 @@ obj-$(CONFIG_ARM_IMX_BUS_DEVFREQ)	+= imx-bus.o
 obj-$(CONFIG_ARM_IMX8M_DDRC_DEVFREQ)	+= imx8m-ddrc.o
 obj-$(CONFIG_ARM_RK3399_DMC_DEVFREQ)	+= rk3399_dmc.o
 obj-$(CONFIG_ARM_TEGRA_DEVFREQ)		+= tegra30-devfreq.o
-obj-$(CONFIG_ARM_TEGRA20_DEVFREQ)	+= tegra20-devfreq.o
 
 # DEVFREQ Event Drivers
 obj-$(CONFIG_PM_DEVFREQ_EVENT)		+= event/
@@ -31,6 +31,8 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/devfreq.h>
 
+#define IS_SUPPORTED_FLAG(f, name) ((f & DEVFREQ_GOV_FLAG_##name) ? true : false)
+#define IS_SUPPORTED_ATTR(f, name) ((f & DEVFREQ_GOV_ATTR_##name) ? true : false)
 #define HZ_PER_KHZ	1000
 
 static struct class *devfreq_class;
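These macros test governor capability bits: per-governor booleans such as
->interrupt_driven and ->immutable become flags packed into a single word on
struct devfreq_governor. A small standalone demonstration of the flag test
(bit positions as in drivers/devfreq/governor.h for this cycle; treat them
as illustrative):

	#include <stdio.h>

	#define BIT(n) (1U << (n))

	#define DEVFREQ_GOV_FLAG_IMMUTABLE	BIT(0)
	#define DEVFREQ_GOV_FLAG_IRQ_DRIVEN	BIT(1)

	#define IS_SUPPORTED_FLAG(f, name) ((f & DEVFREQ_GOV_FLAG_##name) ? 1 : 0)

	int main(void)
	{
		/* e.g. the passive governor is immutable but not IRQ driven */
		unsigned int passive_flags = DEVFREQ_GOV_FLAG_IMMUTABLE;

		printf("immutable: %d, irq-driven: %d\n",
		       IS_SUPPORTED_FLAG(passive_flags, IMMUTABLE),
		       IS_SUPPORTED_FLAG(passive_flags, IRQ_DRIVEN));
		return 0;
	}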
@@ -367,6 +369,14 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
 		return err;
 	}
 
+	/*
+	 * Print devfreq_frequency trace information between DEVFREQ_PRECHANGE
+	 * and DEVFREQ_POSTCHANGE to show the correct ordering of frequency
+	 * changes between a devfreq device and its passive devfreq devices.
+	 */
+	if (trace_devfreq_frequency_enabled() && new_freq != cur_freq)
+		trace_devfreq_frequency(devfreq, new_freq, cur_freq);
+
 	freqs.new = new_freq;
 	devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE);
 
@@ -382,18 +392,19 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
 	return err;
 }
 
-/* Load monitoring helper functions for governors use */
-
 /**
- * update_devfreq() - Reevaluate the device and configure frequency.
+ * devfreq_update_target() - Reevaluate the device and configure frequency
+ *			     at the final stage.
 * @devfreq:	the devfreq instance.
+ * @freq:	the new frequency of the parent device. This argument
+ *		is only used for devfreq devices using the passive governor.
 *
- * Note: Lock devfreq->lock before calling update_devfreq
- * This function is exported for governors.
+ * Note: Lock devfreq->lock before calling devfreq_update_target. This function
+ *	 should only be used by update_devfreq() and devfreq governors.
 */
-int update_devfreq(struct devfreq *devfreq)
+int devfreq_update_target(struct devfreq *devfreq, unsigned long freq)
 {
-	unsigned long freq, min_freq, max_freq;
+	unsigned long min_freq, max_freq;
 	int err = 0;
 	u32 flags = 0;
 
@@ -418,7 +429,21 @@ int update_devfreq(struct devfreq *devfreq)
 	}
 
 	return devfreq_set_target(devfreq, freq, flags);
+}
+EXPORT_SYMBOL(devfreq_update_target);
+
+/* Load monitoring helper functions for governors use */
+
+/**
+ * update_devfreq() - Reevaluate the device and configure frequency.
+ * @devfreq:	the devfreq instance.
+ *
+ * Note: Lock devfreq->lock before calling update_devfreq
+ * This function is exported for governors.
+ */
+int update_devfreq(struct devfreq *devfreq)
+{
+	return devfreq_update_target(devfreq, 0L);
 }
 EXPORT_SYMBOL(update_devfreq);
 
@@ -456,7 +481,7 @@ static void devfreq_monitor(struct work_struct *work)
 */
 void devfreq_monitor_start(struct devfreq *devfreq)
 {
-	if (devfreq->governor->interrupt_driven)
+	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
 		return;
 
	switch (devfreq->profile->timer) {
@@ -486,7 +511,7 @@ EXPORT_SYMBOL(devfreq_monitor_start);
 */
 void devfreq_monitor_stop(struct devfreq *devfreq)
 {
-	if (devfreq->governor->interrupt_driven)
+	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
 		return;
 
 	cancel_delayed_work_sync(&devfreq->work);
@@ -517,7 +542,7 @@ void devfreq_monitor_suspend(struct devfreq *devfreq)
 	devfreq->stop_polling = true;
 	mutex_unlock(&devfreq->lock);
 
-	if (devfreq->governor->interrupt_driven)
+	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
 		return;
 
 	cancel_delayed_work_sync(&devfreq->work);
@@ -537,12 +562,13 @@ void devfreq_monitor_resume(struct devfreq *devfreq)
 	unsigned long freq;
 
 	mutex_lock(&devfreq->lock);
+
+	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
+		goto out_update;
+
 	if (!devfreq->stop_polling)
 		goto out;
 
-	if (devfreq->governor->interrupt_driven)
-		goto out_update;
-
 	if (!delayed_work_pending(&devfreq->work) &&
 	    devfreq->profile->polling_ms)
 		queue_delayed_work(devfreq_wq, &devfreq->work,
@@ -577,10 +603,10 @@ void devfreq_update_interval(struct devfreq *devfreq, unsigned int *delay)
 	mutex_lock(&devfreq->lock);
 	devfreq->profile->polling_ms = new_delay;
 
-	if (devfreq->stop_polling)
+	if (IS_SUPPORTED_FLAG(devfreq->governor->flags, IRQ_DRIVEN))
 		goto out;
 
-	if (devfreq->governor->interrupt_driven)
+	if (devfreq->stop_polling)
 		goto out;
 
 	/* if new delay is zero, stop polling */
@@ -735,6 +761,11 @@ static void devfreq_dev_release(struct device *dev)
 	kfree(devfreq);
 }
 
+static void create_sysfs_files(struct devfreq *devfreq,
+			       const struct devfreq_governor *gov);
+static void remove_sysfs_files(struct devfreq *devfreq,
+			       const struct devfreq_governor *gov);
+
 /**
 * devfreq_add_device() - Add devfreq feature to the device
 * @dev:	the device to add devfreq feature.
@@ -780,7 +811,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	devfreq->dev.release = devfreq_dev_release;
 	INIT_LIST_HEAD(&devfreq->node);
 	devfreq->profile = profile;
-	strscpy(devfreq->governor_name, governor_name, DEVFREQ_NAME_LEN);
 	devfreq->previous_freq = profile->initial_freq;
 	devfreq->last_status.current_frequency = profile->initial_freq;
 	devfreq->data = data;
@@ -876,7 +906,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
 
 	mutex_lock(&devfreq_list_lock);
 
-	governor = try_then_request_governor(devfreq->governor_name);
+	governor = try_then_request_governor(governor_name);
 	if (IS_ERR(governor)) {
 		dev_err(dev, "%s: Unable to find governor for the device\n",
 			__func__);
@@ -892,6 +922,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
 			__func__);
 		goto err_init;
 	}
+	create_sysfs_files(devfreq, devfreq->governor);
 
 	list_add(&devfreq->node, &devfreq_list);
 
@@ -922,9 +953,12 @@ int devfreq_remove_device(struct devfreq *devfreq)
 	if (!devfreq)
 		return -EINVAL;
 
-	if (devfreq->governor)
+	if (devfreq->governor) {
 		devfreq->governor->event_handler(devfreq,
 						 DEVFREQ_GOV_STOP, NULL);
+		remove_sysfs_files(devfreq, devfreq->governor);
+	}
 
 	device_unregister(&devfreq->dev);
 
 	return 0;
@@ -1214,7 +1248,7 @@ int devfreq_add_governor(struct devfreq_governor *governor)
 		int ret = 0;
 		struct device *dev = devfreq->dev.parent;
 
-		if (!strncmp(devfreq->governor_name, governor->name,
+		if (!strncmp(devfreq->governor->name, governor->name,
 			     DEVFREQ_NAME_LEN)) {
 			/* The following should never occur */
 			if (devfreq->governor) {
@@ -1276,7 +1310,7 @@ int devfreq_remove_governor(struct devfreq_governor *governor)
 		int ret;
 		struct device *dev = devfreq->dev.parent;
 
-		if (!strncmp(devfreq->governor_name, governor->name,
+		if (!strncmp(devfreq->governor->name, governor->name,
 			     DEVFREQ_NAME_LEN)) {
 			/* we should have a devfreq governor! */
 			if (!devfreq->governor) {
@@ -1347,36 +1381,53 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
 	if (df->governor == governor) {
 		ret = 0;
 		goto out;
-	} else if (df->governor->immutable || governor->immutable) {
+	} else if (IS_SUPPORTED_FLAG(df->governor->flags, IMMUTABLE)
+		|| IS_SUPPORTED_FLAG(governor->flags, IMMUTABLE)) {
 		ret = -EINVAL;
 		goto out;
 	}
 
+	/*
+	 * Stop the current governor and remove the specific sysfs files
+	 * which depend on the current governor.
+	 */
 	ret = df->governor->event_handler(df, DEVFREQ_GOV_STOP, NULL);
 	if (ret) {
 		dev_warn(dev, "%s: Governor %s not stopped(%d)\n",
 			 __func__, df->governor->name, ret);
 		goto out;
 	}
+	remove_sysfs_files(df, df->governor);
 
+	/*
+	 * Start the new governor and create the specific sysfs files
+	 * which depend on the new governor.
+	 */
 	prev_governor = df->governor;
 	df->governor = governor;
-	strncpy(df->governor_name, governor->name, DEVFREQ_NAME_LEN);
 	ret = df->governor->event_handler(df, DEVFREQ_GOV_START, NULL);
 	if (ret) {
 		dev_warn(dev, "%s: Governor %s not started(%d)\n",
 			 __func__, df->governor->name, ret);
 
 		/* Restore previous governor */
 		df->governor = prev_governor;
-		strncpy(df->governor_name, prev_governor->name,
-			DEVFREQ_NAME_LEN);
 		ret = df->governor->event_handler(df, DEVFREQ_GOV_START, NULL);
 		if (ret) {
 			dev_err(dev,
 				"%s: reverting to Governor %s failed (%d)\n",
-				__func__, df->governor_name, ret);
+				__func__, prev_governor->name, ret);
 			df->governor = NULL;
+			goto out;
 		}
 	}
 
+	/*
+	 * Create the sysfs files for the new governor. If the new governor
+	 * failed to start, this restores the sysfs files of the previous
+	 * governor instead.
	 */
+	create_sysfs_files(df, df->governor);
+
 out:
 	mutex_unlock(&devfreq_list_lock);
 
@@ -1402,9 +1453,9 @@ static ssize_t available_governors_show(struct device *d,
 	 * The devfreq with immutable governor (e.g., passive) shows
 	 * only own governor.
 	 */
-	if (df->governor->immutable) {
+	if (IS_SUPPORTED_FLAG(df->governor->flags, IMMUTABLE)) {
 		count = scnprintf(&buf[count], DEVFREQ_NAME_LEN,
-				  "%s ", df->governor_name);
+				  "%s ", df->governor->name);
 	/*
 	 * The devfreq device shows the registered governor except for
 	 * immutable governors such as passive governor .
@@ -1413,7 +1464,7 @@ static ssize_t available_governors_show(struct device *d,
 		struct devfreq_governor *governor;
 
		list_for_each_entry(governor, &devfreq_governor_list, node) {
-			if (governor->immutable)
+			if (IS_SUPPORTED_FLAG(governor->flags, IMMUTABLE))
 				continue;
 			count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
 					   "%s ", governor->name);
@@ -1458,39 +1509,6 @@ static ssize_t target_freq_show(struct device *dev,
 }
 static DEVICE_ATTR_RO(target_freq);
 
-static ssize_t polling_interval_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
-{
-	struct devfreq *df = to_devfreq(dev);
-
-	if (!df->profile)
-		return -EINVAL;
-
-	return sprintf(buf, "%d\n", df->profile->polling_ms);
-}
-
-static ssize_t polling_interval_store(struct device *dev,
-				      struct device_attribute *attr,
-				      const char *buf, size_t count)
-{
-	struct devfreq *df = to_devfreq(dev);
-	unsigned int value;
-	int ret;
-
-	if (!df->governor)
-		return -EINVAL;
-
-	ret = sscanf(buf, "%u", &value);
-	if (ret != 1)
-		return -EINVAL;
-
-	df->governor->event_handler(df, DEVFREQ_GOV_UPDATE_INTERVAL, &value);
-	ret = count;
-
-	return ret;
-}
-static DEVICE_ATTR_RW(polling_interval);
-
 static ssize_t min_freq_store(struct device *dev, struct device_attribute *attr,
 			      const char *buf, size_t count)
 {
@@ -1698,6 +1716,53 @@ static ssize_t trans_stat_store(struct device *dev,
 }
 static DEVICE_ATTR_RW(trans_stat);
 
+static struct attribute *devfreq_attrs[] = {
+	&dev_attr_name.attr,
+	&dev_attr_governor.attr,
+	&dev_attr_available_governors.attr,
|
||||
&dev_attr_cur_freq.attr,
|
||||
&dev_attr_available_frequencies.attr,
|
||||
&dev_attr_target_freq.attr,
|
||||
&dev_attr_min_freq.attr,
|
||||
&dev_attr_max_freq.attr,
|
||||
&dev_attr_trans_stat.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(devfreq);
|
||||
|
||||
static ssize_t polling_interval_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct devfreq *df = to_devfreq(dev);
|
||||
|
||||
if (!df->profile)
|
||||
return -EINVAL;
|
||||
|
||||
return sprintf(buf, "%d\n", df->profile->polling_ms);
|
||||
}
|
||||
|
||||
static ssize_t polling_interval_store(struct device *dev,
|
||||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct devfreq *df = to_devfreq(dev);
|
||||
unsigned int value;
|
||||
int ret;
|
||||
|
||||
if (!df->governor)
|
||||
return -EINVAL;
|
||||
|
||||
ret = sscanf(buf, "%u", &value);
|
||||
if (ret != 1)
|
||||
return -EINVAL;
|
||||
|
||||
df->governor->event_handler(df, DEVFREQ_GOV_UPDATE_INTERVAL, &value);
|
||||
ret = count;
|
||||
|
||||
return ret;
|
||||
}
|
||||
static DEVICE_ATTR_RW(polling_interval);
|
||||
|
||||
static ssize_t timer_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
|
@ -1761,21 +1826,36 @@ out:
|
|||
}
|
||||
static DEVICE_ATTR_RW(timer);
|
||||
|
||||
static struct attribute *devfreq_attrs[] = {
|
||||
&dev_attr_name.attr,
|
||||
&dev_attr_governor.attr,
|
||||
&dev_attr_available_governors.attr,
|
||||
&dev_attr_cur_freq.attr,
|
||||
&dev_attr_available_frequencies.attr,
|
||||
&dev_attr_target_freq.attr,
|
||||
&dev_attr_polling_interval.attr,
|
||||
&dev_attr_min_freq.attr,
|
||||
&dev_attr_max_freq.attr,
|
||||
&dev_attr_trans_stat.attr,
|
||||
&dev_attr_timer.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(devfreq);
|
||||
#define CREATE_SYSFS_FILE(df, name) \
|
||||
{ \
|
||||
int ret; \
|
||||
ret = sysfs_create_file(&df->dev.kobj, &dev_attr_##name.attr); \
|
||||
if (ret < 0) { \
|
||||
dev_warn(&df->dev, \
|
||||
"Unable to create attr(%s)\n", "##name"); \
|
||||
} \
|
||||
} \
|
||||
|
||||
/* Create the specific sysfs files which depend on each governor. */
|
||||
static void create_sysfs_files(struct devfreq *devfreq,
|
||||
const struct devfreq_governor *gov)
|
||||
{
|
||||
if (IS_SUPPORTED_ATTR(gov->attrs, POLLING_INTERVAL))
|
||||
CREATE_SYSFS_FILE(devfreq, polling_interval);
|
||||
if (IS_SUPPORTED_ATTR(gov->attrs, TIMER))
|
||||
CREATE_SYSFS_FILE(devfreq, timer);
|
||||
}
|
||||
|
||||
/* Remove the specific sysfs files which depend on each governor. */
|
||||
static void remove_sysfs_files(struct devfreq *devfreq,
|
||||
const struct devfreq_governor *gov)
|
||||
{
|
||||
if (IS_SUPPORTED_ATTR(gov->attrs, POLLING_INTERVAL))
|
||||
sysfs_remove_file(&devfreq->dev.kobj,
|
||||
&dev_attr_polling_interval.attr);
|
||||
if (IS_SUPPORTED_ATTR(gov->attrs, TIMER))
|
||||
sysfs_remove_file(&devfreq->dev.kobj, &dev_attr_timer.attr);
|
||||
}
|
||||
|
||||
/**
|
||||
* devfreq_summary_show() - Show the summary of the devfreq devices
|
||||
|
@ -1818,7 +1898,7 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
|
|||
|
||||
list_for_each_entry_reverse(devfreq, &devfreq_list, node) {
|
||||
#if IS_ENABLED(CONFIG_DEVFREQ_GOV_PASSIVE)
|
||||
if (!strncmp(devfreq->governor_name, DEVFREQ_GOV_PASSIVE,
|
||||
if (!strncmp(devfreq->governor->name, DEVFREQ_GOV_PASSIVE,
|
||||
DEVFREQ_NAME_LEN)) {
|
||||
struct devfreq_passive_data *data = devfreq->data;
|
||||
|
||||
|
@ -1832,15 +1912,19 @@ static int devfreq_summary_show(struct seq_file *s, void *data)
|
|||
mutex_lock(&devfreq->lock);
|
||||
cur_freq = devfreq->previous_freq;
|
||||
get_freq_range(devfreq, &min_freq, &max_freq);
|
||||
polling_ms = devfreq->profile->polling_ms;
|
||||
timer = devfreq->profile->timer;
|
||||
|
||||
if (IS_SUPPORTED_ATTR(devfreq->governor->attrs, POLLING_INTERVAL))
|
||||
polling_ms = devfreq->profile->polling_ms;
|
||||
else
|
||||
polling_ms = 0;
|
||||
mutex_unlock(&devfreq->lock);
|
||||
|
||||
seq_printf(s,
|
||||
"%-30s %-30s %-15s %-10s %10d %12ld %12ld %12ld\n",
|
||||
dev_name(&devfreq->dev),
|
||||
p_devfreq ? dev_name(&p_devfreq->dev) : "null",
|
||||
devfreq->governor_name,
|
||||
devfreq->governor->name,
|
||||
polling_ms ? timer_name[timer] : "null",
|
||||
polling_ms,
|
||||
cur_freq,
|
||||
|
|
|
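
A note on the helpers above: the IS_SUPPORTED_FLAG() and IS_SUPPORTED_ATTR() macros the devfreq core now uses live in drivers/devfreq/governor.h and are not part of this excerpt. They are plain bit tests against the new capability words; a minimal sketch of their shape (the exact upstream spelling may differ):

	/* Sketch: test a governor capability bit declared in its
	 * ->flags or ->attrs word (see the governor.h hunks below). */
	#define IS_SUPPORTED_FLAG(f, name) ((f) & DEVFREQ_GOV_FLAG_##name)
	#define IS_SUPPORTED_ATTR(f, name) ((f) & DEVFREQ_GOV_ATTR_##name)
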

@@ -24,6 +24,7 @@
 struct exynos_bus {
 	struct device *dev;
+	struct platform_device *icc_pdev;

 	struct devfreq *devfreq;
 	struct devfreq_event_dev **edev;
@@ -156,18 +157,20 @@ static void exynos_bus_exit(struct device *dev)
 	if (ret < 0)
 		dev_warn(dev, "failed to disable the devfreq-event devices\n");

+	platform_device_unregister(bus->icc_pdev);
+
 	dev_pm_opp_of_remove_table(dev);
 	clk_disable_unprepare(bus->clk);
-	if (bus->opp_table) {
-		dev_pm_opp_put_regulators(bus->opp_table);
-		bus->opp_table = NULL;
-	}
+	dev_pm_opp_put_regulators(bus->opp_table);
+	bus->opp_table = NULL;
 }

 static void exynos_bus_passive_exit(struct device *dev)
 {
 	struct exynos_bus *bus = dev_get_drvdata(dev);

+	platform_device_unregister(bus->icc_pdev);
+
 	dev_pm_opp_of_remove_table(dev);
 	clk_disable_unprepare(bus->clk);
 }
@@ -432,6 +435,18 @@ static int exynos_bus_probe(struct platform_device *pdev)
 	if (ret < 0)
 		goto err;

+	/* Create child platform device for the interconnect provider */
+	if (of_get_property(dev->of_node, "#interconnect-cells", NULL)) {
+		bus->icc_pdev = platform_device_register_data(
+						dev, "exynos-generic-icc",
+						PLATFORM_DEVID_AUTO, NULL, 0);
+
+		if (IS_ERR(bus->icc_pdev)) {
+			ret = PTR_ERR(bus->icc_pdev);
+			goto err;
+		}
+	}
+
 	max_state = bus->devfreq->profile->max_state;
 	min_freq = (bus->devfreq->profile->freq_table[0] / 1000);
 	max_freq = (bus->devfreq->profile->freq_table[max_state - 1] / 1000);
@@ -444,10 +459,8 @@ err:
 	dev_pm_opp_of_remove_table(dev);
 	clk_disable_unprepare(bus->clk);
 err_reg:
-	if (!passive) {
-		dev_pm_opp_put_regulators(bus->opp_table);
-		bus->opp_table = NULL;
-	}
+	dev_pm_opp_put_regulators(bus->opp_table);
+	bus->opp_table = NULL;

 	return ret;
 }

@@ -13,6 +13,8 @@
 #include <linux/devfreq.h>

+#define DEVFREQ_NAME_LEN			16
+
 #define to_devfreq(DEV)	container_of((DEV), struct devfreq, dev)

 /* Devfreq events */
@@ -25,14 +27,32 @@
 #define DEVFREQ_MIN_FREQ			0
 #define DEVFREQ_MAX_FREQ			ULONG_MAX

+/*
+ * Definition of the governor feature flags
+ * - DEVFREQ_GOV_FLAG_IMMUTABLE
+ *   : This governor is never changeable to other governors.
+ * - DEVFREQ_GOV_FLAG_IRQ_DRIVEN
+ *   : The devfreq won't schedule the work for this governor.
+ */
+#define DEVFREQ_GOV_FLAG_IMMUTABLE			BIT(0)
+#define DEVFREQ_GOV_FLAG_IRQ_DRIVEN			BIT(1)
+
+/*
+ * Definition of governor attribute flags except for common sysfs attributes
+ * - DEVFREQ_GOV_ATTR_POLLING_INTERVAL
+ *   : Indicate polling_interval sysfs attribute
+ * - DEVFREQ_GOV_ATTR_TIMER
+ *   : Indicate timer sysfs attribute
+ */
+#define DEVFREQ_GOV_ATTR_POLLING_INTERVAL		BIT(0)
+#define DEVFREQ_GOV_ATTR_TIMER				BIT(1)
+
 /**
  * struct devfreq_governor - Devfreq policy governor
  * @node:		list node - contains registered devfreq governors
  * @name:		Governor's name
- * @immutable:		Immutable flag for governor. If the value is 1,
- *			this governor is never changeable to other governor.
- * @interrupt_driven:	Devfreq core won't schedule polling work for this
- *			governor if value is set to 1.
+ * @attrs:		Governor's sysfs attribute flags
+ * @flags:		Governor's feature flags
  * @get_target_freq:	Returns desired operating frequency for the device.
  *			Basically, get_target_freq will run
  *			devfreq_dev_profile.get_dev_status() to get the
@@ -50,8 +70,8 @@ struct devfreq_governor {
 	struct list_head node;

 	const char name[DEVFREQ_NAME_LEN];
-	const unsigned int immutable;
-	const unsigned int interrupt_driven;
+	const u64 attrs;
+	const u64 flags;
 	int (*get_target_freq)(struct devfreq *this, unsigned long *freq);
 	int (*event_handler)(struct devfreq *devfreq,
 				unsigned int event, void *data);
@@ -67,6 +87,7 @@ int devfreq_add_governor(struct devfreq_governor *governor);
 int devfreq_remove_governor(struct devfreq_governor *governor);

 int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
+int devfreq_update_target(struct devfreq *devfreq, unsigned long freq);

 static inline int devfreq_update_stats(struct devfreq *df)
 {
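
Putting the new fields together: a governor now declares its capabilities declaratively instead of via the old immutable/interrupt_driven integers. A hypothetical governor (the "example" names are invented for illustration) would look like this; the real conversions for the passive, simple_ondemand and tegra_actmon governors follow below:

	static struct devfreq_governor example_governor = {
		.name = "example",
		/* expose the polling_interval sysfs attribute */
		.attrs = DEVFREQ_GOV_ATTR_POLLING_INTERVAL,
		/* equivalent of the old .immutable = 1 */
		.flags = DEVFREQ_GOV_FLAG_IMMUTABLE,
		.get_target_freq = example_get_target_freq,
		.event_handler = example_event_handler,
	};
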

@@ -92,36 +92,6 @@ out:
 	return ret;
 }

-static int update_devfreq_passive(struct devfreq *devfreq, unsigned long freq)
-{
-	int ret;
-
-	if (!devfreq->governor)
-		return -EINVAL;
-
-	mutex_lock_nested(&devfreq->lock, SINGLE_DEPTH_NESTING);
-
-	ret = devfreq->governor->get_target_freq(devfreq, &freq);
-	if (ret < 0)
-		goto out;
-
-	ret = devfreq->profile->target(devfreq->dev.parent, &freq, 0);
-	if (ret < 0)
-		goto out;
-
-	if (devfreq->profile->freq_table
-		&& (devfreq_update_status(devfreq, freq)))
-		dev_err(&devfreq->dev,
-			"Couldn't update frequency transition information.\n");
-
-	devfreq->previous_freq = freq;
-
-out:
-	mutex_unlock(&devfreq->lock);
-
-	return 0;
-}
-
 static int devfreq_passive_notifier_call(struct notifier_block *nb,
 				unsigned long event, void *ptr)
 {
@@ -131,17 +101,25 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
 	struct devfreq *parent = (struct devfreq *)data->parent;
 	struct devfreq_freqs *freqs = (struct devfreq_freqs *)ptr;
 	unsigned long freq = freqs->new;
+	int ret = 0;

+	mutex_lock_nested(&devfreq->lock, SINGLE_DEPTH_NESTING);
 	switch (event) {
 	case DEVFREQ_PRECHANGE:
 		if (parent->previous_freq > freq)
-			update_devfreq_passive(devfreq, freq);
+			ret = devfreq_update_target(devfreq, freq);

 		break;
 	case DEVFREQ_POSTCHANGE:
 		if (parent->previous_freq < freq)
-			update_devfreq_passive(devfreq, freq);
+			ret = devfreq_update_target(devfreq, freq);
 		break;
 	}
+	mutex_unlock(&devfreq->lock);
+
+	if (ret < 0)
+		dev_warn(&devfreq->dev,
+			"failed to update devfreq using passive governor\n");

 	return NOTIFY_DONE;
 }
@@ -180,7 +158,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
 static struct devfreq_governor devfreq_passive = {
 	.name = DEVFREQ_GOV_PASSIVE,
-	.immutable = 1,
+	.flags = DEVFREQ_GOV_FLAG_IMMUTABLE,
 	.get_target_freq = devfreq_passive_get_target_freq,
 	.event_handler = devfreq_passive_event_handler,
 };
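
The mutex_lock_nested(&devfreq->lock, SINGLE_DEPTH_NESTING) call, which moves from the deleted helper into the notifier itself, is a lockdep annotation rather than a different lock type: the passive device's lock is acquired while the parent devfreq's lock is already held, and both belong to the same lock class, so lockdep must be told that one level of nesting is intentional. Roughly (a sketch of the locking order, not literal driver code):

	/* Parent's lock is held by the devfreq core when the transition
	 * notifier fires; the child's lock is the same lock class. */
	mutex_lock(&parent->lock);
	mutex_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);
	ret = devfreq_update_target(child, freq);
	mutex_unlock(&child->lock);
	mutex_unlock(&parent->lock);
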

@@ -117,6 +117,8 @@ static int devfreq_simple_ondemand_handler(struct devfreq *devfreq,
 static struct devfreq_governor devfreq_simple_ondemand = {
 	.name = DEVFREQ_GOV_SIMPLE_ONDEMAND,
+	.attrs = DEVFREQ_GOV_ATTR_POLLING_INTERVAL
+		| DEVFREQ_GOV_ATTR_TIMER,
 	.get_target_freq = devfreq_simple_ondemand_func,
 	.event_handler = devfreq_simple_ondemand_handler,
 };

@@ -1,212 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * NVIDIA Tegra20 devfreq driver
- *
- * Copyright (C) 2019 GRATE-DRIVER project
- */
-
-#include <linux/clk.h>
-#include <linux/devfreq.h>
-#include <linux/io.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/of_device.h>
-#include <linux/platform_device.h>
-#include <linux/pm_opp.h>
-#include <linux/slab.h>
-
-#include <soc/tegra/mc.h>
-
-#include "governor.h"
-
-#define MC_STAT_CONTROL				0x90
-#define MC_STAT_EMC_CLOCK_LIMIT			0xa0
-#define MC_STAT_EMC_CLOCKS			0xa4
-#define MC_STAT_EMC_CONTROL			0xa8
-#define MC_STAT_EMC_COUNT			0xb8
-
-#define EMC_GATHER_CLEAR			(1 << 8)
-#define EMC_GATHER_ENABLE			(3 << 8)
-
-struct tegra_devfreq {
-	struct devfreq *devfreq;
-	struct clk *emc_clock;
-	void __iomem *regs;
-};
-
-static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
-				u32 flags)
-{
-	struct tegra_devfreq *tegra = dev_get_drvdata(dev);
-	struct devfreq *devfreq = tegra->devfreq;
-	struct dev_pm_opp *opp;
-	unsigned long rate;
-	int err;
-
-	opp = devfreq_recommended_opp(dev, freq, flags);
-	if (IS_ERR(opp))
-		return PTR_ERR(opp);
-
-	rate = dev_pm_opp_get_freq(opp);
-	dev_pm_opp_put(opp);
-
-	err = clk_set_min_rate(tegra->emc_clock, rate);
-	if (err)
-		return err;
-
-	err = clk_set_rate(tegra->emc_clock, 0);
-	if (err)
-		goto restore_min_rate;
-
-	return 0;
-
-restore_min_rate:
-	clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
-
-	return err;
-}
-
-static int tegra_devfreq_get_dev_status(struct device *dev,
-					struct devfreq_dev_status *stat)
-{
-	struct tegra_devfreq *tegra = dev_get_drvdata(dev);
-
-	/*
-	 * EMC_COUNT returns number of memory events, that number is lower
-	 * than the number of clocks. Conversion ratio of 1/8 results in a
-	 * bit higher bandwidth than actually needed, it is good enough for
-	 * the time being because drivers don't support requesting minimum
-	 * needed memory bandwidth yet.
-	 *
-	 * TODO: adjust the ratio value once relevant drivers will support
-	 * memory bandwidth management.
-	 */
-	stat->busy_time = readl_relaxed(tegra->regs + MC_STAT_EMC_COUNT);
-	stat->total_time = readl_relaxed(tegra->regs + MC_STAT_EMC_CLOCKS) / 8;
-	stat->current_frequency = clk_get_rate(tegra->emc_clock);
-
-	writel_relaxed(EMC_GATHER_CLEAR, tegra->regs + MC_STAT_CONTROL);
-	writel_relaxed(EMC_GATHER_ENABLE, tegra->regs + MC_STAT_CONTROL);
-
-	return 0;
-}
-
-static struct devfreq_dev_profile tegra_devfreq_profile = {
-	.polling_ms = 500,
-	.target = tegra_devfreq_target,
-	.get_dev_status = tegra_devfreq_get_dev_status,
-};
-
-static struct tegra_mc *tegra_get_memory_controller(void)
-{
-	struct platform_device *pdev;
-	struct device_node *np;
-	struct tegra_mc *mc;
-
-	np = of_find_compatible_node(NULL, NULL, "nvidia,tegra20-mc-gart");
-	if (!np)
-		return ERR_PTR(-ENOENT);
-
-	pdev = of_find_device_by_node(np);
-	of_node_put(np);
-	if (!pdev)
-		return ERR_PTR(-ENODEV);
-
-	mc = platform_get_drvdata(pdev);
-	if (!mc)
-		return ERR_PTR(-EPROBE_DEFER);
-
-	return mc;
-}
-
-static int tegra_devfreq_probe(struct platform_device *pdev)
-{
-	struct tegra_devfreq *tegra;
-	struct tegra_mc *mc;
-	unsigned long max_rate;
-	unsigned long rate;
-	int err;
-
-	mc = tegra_get_memory_controller();
-	if (IS_ERR(mc)) {
-		err = PTR_ERR(mc);
-		dev_err(&pdev->dev, "failed to get memory controller: %d\n",
-			err);
-		return err;
-	}
-
-	tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
-	if (!tegra)
-		return -ENOMEM;
-
-	/* EMC is a system-critical clock that is always enabled */
-	tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
-	if (IS_ERR(tegra->emc_clock)) {
-		err = PTR_ERR(tegra->emc_clock);
-		dev_err(&pdev->dev, "failed to get emc clock: %d\n", err);
-		return err;
-	}
-
-	tegra->regs = mc->regs;
-
-	max_rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
-
-	for (rate = 0; rate <= max_rate; rate++) {
-		rate = clk_round_rate(tegra->emc_clock, rate);
-
-		err = dev_pm_opp_add(&pdev->dev, rate, 0);
-		if (err) {
-			dev_err(&pdev->dev, "failed to add opp: %d\n", err);
-			goto remove_opps;
-		}
-	}
-
-	/*
-	 * Reset statistic gathers state, select global bandwidth for the
-	 * statistics collection mode and set clocks counter saturation
-	 * limit to maximum.
-	 */
-	writel_relaxed(0x00000000, tegra->regs + MC_STAT_CONTROL);
-	writel_relaxed(0x00000000, tegra->regs + MC_STAT_EMC_CONTROL);
-	writel_relaxed(0xffffffff, tegra->regs + MC_STAT_EMC_CLOCK_LIMIT);
-
-	platform_set_drvdata(pdev, tegra);
-
-	tegra->devfreq = devfreq_add_device(&pdev->dev, &tegra_devfreq_profile,
-					    DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
-	if (IS_ERR(tegra->devfreq)) {
-		err = PTR_ERR(tegra->devfreq);
-		goto remove_opps;
-	}
-
-	return 0;
-
-remove_opps:
-	dev_pm_opp_remove_all_dynamic(&pdev->dev);
-
-	return err;
-}
-
-static int tegra_devfreq_remove(struct platform_device *pdev)
-{
-	struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
-
-	devfreq_remove_device(tegra->devfreq);
-	dev_pm_opp_remove_all_dynamic(&pdev->dev);
-
-	return 0;
-}
-
-static struct platform_driver tegra_devfreq_driver = {
-	.probe = tegra_devfreq_probe,
-	.remove = tegra_devfreq_remove,
-	.driver = {
-		.name = "tegra20-devfreq",
-	},
-};
-module_platform_driver(tegra_devfreq_driver);
-
-MODULE_ALIAS("platform:tegra20-devfreq");
-MODULE_AUTHOR("Dmitry Osipenko <digetx@gmail.com>");
-MODULE_DESCRIPTION("NVIDIA Tegra20 devfreq driver");
-MODULE_LICENSE("GPL v2");

@@ -19,6 +19,8 @@
 #include <linux/reset.h>
 #include <linux/workqueue.h>

+#include <soc/tegra/fuse.h>
+
 #include "governor.h"

 #define ACTMON_GLB_STATUS			0x0
@@ -55,13 +57,6 @@
 #define ACTMON_BELOW_WMARK_WINDOW		3
 #define ACTMON_BOOST_FREQ_STEP			16000

-/*
- * Activity counter is incremented every 256 memory transactions, and each
- * transaction takes 4 EMC clocks for Tegra124; So the COUNT_WEIGHT is
- * 4 * 256 = 1024.
- */
-#define ACTMON_COUNT_WEIGHT			0x400
-
 /*
  * ACTMON_AVERAGE_WINDOW_LOG2: default value for @DEV_CTRL_K_VAL, which
  * translates to 2 ^ (K_VAL + 1). ex: 2 ^ (6 + 1) = 128
@@ -109,7 +104,7 @@ enum tegra_actmon_device {
 	MCCPU,
 };

-static const struct tegra_devfreq_device_config actmon_device_configs[] = {
+static const struct tegra_devfreq_device_config tegra124_device_configs[] = {
 	{
 		/* MCALL: All memory accesses (including from the CPUs) */
 		.offset = 0x1c0,
@@ -131,6 +126,28 @@ static const struct tegra_devfreq_device_config actmon_device_configs[] = {
 	},
 };

+static const struct tegra_devfreq_device_config tegra30_device_configs[] = {
+	{
+		/* MCALL: All memory accesses (including from the CPUs) */
+		.offset = 0x1c0,
+		.irq_mask = 1 << 26,
+		.boost_up_coeff = 200,
+		.boost_down_coeff = 50,
+		.boost_up_threshold = 20,
+		.boost_down_threshold = 10,
+	},
+	{
+		/* MCCPU: memory accesses from the CPUs */
+		.offset = 0x200,
+		.irq_mask = 1 << 25,
+		.boost_up_coeff = 800,
+		.boost_down_coeff = 40,
+		.boost_up_threshold = 27,
+		.boost_down_threshold = 10,
+		.avg_dependency_threshold = 16000, /* 16MHz in kHz units */
+	},
+};
+
 /**
  * struct tegra_devfreq_device - state specific to an ACTMON device
  *
@@ -153,8 +170,15 @@ struct tegra_devfreq_device {
 	unsigned long target_freq;
 };

+struct tegra_devfreq_soc_data {
+	const struct tegra_devfreq_device_config *configs;
+	/* Weight value for count measurements */
+	unsigned int count_weight;
+};
+
 struct tegra_devfreq {
 	struct devfreq *devfreq;
+	struct opp_table *opp_table;

 	struct reset_control *reset;
 	struct clk *clock;
@@ -168,11 +192,13 @@ struct tegra_devfreq {
 	struct delayed_work cpufreq_update_work;
 	struct notifier_block cpu_rate_change_nb;

-	struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)];
+	struct tegra_devfreq_device devices[2];

 	unsigned int irq;

 	bool started;
+
+	const struct tegra_devfreq_soc_data *soc;
 };

 struct tegra_actmon_emc_ratio {
@@ -485,7 +511,7 @@ static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
 	tegra_devfreq_update_avg_wmark(tegra, dev);
 	tegra_devfreq_update_wmark(tegra, dev);

-	device_writel(dev, ACTMON_COUNT_WEIGHT, ACTMON_DEV_COUNT_WEIGHT);
+	device_writel(dev, tegra->soc->count_weight, ACTMON_DEV_COUNT_WEIGHT);
 	device_writel(dev, ACTMON_INTR_STATUS_CLEAR, ACTMON_DEV_INTR_STATUS);

 	val |= ACTMON_DEV_CTRL_ENB_PERIODIC;
@@ -612,34 +638,19 @@ static void tegra_actmon_stop(struct tegra_devfreq *tegra)
 static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
 				u32 flags)
 {
-	struct tegra_devfreq *tegra = dev_get_drvdata(dev);
-	struct devfreq *devfreq = tegra->devfreq;
 	struct dev_pm_opp *opp;
-	unsigned long rate;
-	int err;
+	int ret;

 	opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(opp)) {
 		dev_err(dev, "Failed to find opp for %lu Hz\n", *freq);
 		return PTR_ERR(opp);
 	}
-	rate = dev_pm_opp_get_freq(opp);
+
+	ret = dev_pm_opp_set_bw(dev, opp);
 	dev_pm_opp_put(opp);

-	err = clk_set_min_rate(tegra->emc_clock, rate * KHZ);
-	if (err)
-		return err;
-
-	err = clk_set_rate(tegra->emc_clock, 0);
-	if (err)
-		goto restore_min_rate;
-
-	return 0;
-
-restore_min_rate:
-	clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
-
-	return err;
+	return ret;
 }

 static int tegra_devfreq_get_dev_status(struct device *dev,
@@ -655,7 +666,7 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
 	stat->private_data = tegra;

 	/* The below are to be used by the other governors */
-	stat->current_frequency = cur_freq;
+	stat->current_frequency = cur_freq * KHZ;

 	actmon_dev = &tegra->devices[MCALL];
@@ -705,7 +716,12 @@ static int tegra_governor_get_target(struct devfreq *devfreq,
 		target_freq = max(target_freq, dev->target_freq);
 	}

-	*freq = target_freq;
+	/*
+	 * tegra-devfreq driver operates with KHz units, while OPP table
+	 * entries use Hz units. Hence we need to convert the units for the
+	 * devfreq core.
+	 */
+	*freq = target_freq * KHZ;

 	return 0;
 }
@@ -765,14 +781,16 @@ static int tegra_governor_event_handler(struct devfreq *devfreq,
 static struct devfreq_governor tegra_devfreq_governor = {
 	.name = "tegra_actmon",
+	.attrs = DEVFREQ_GOV_ATTR_POLLING_INTERVAL,
+	.flags = DEVFREQ_GOV_FLAG_IMMUTABLE
+		| DEVFREQ_GOV_FLAG_IRQ_DRIVEN,
 	.get_target_freq = tegra_governor_get_target,
 	.event_handler = tegra_governor_event_handler,
-	.immutable = true,
-	.interrupt_driven = true,
 };

 static int tegra_devfreq_probe(struct platform_device *pdev)
 {
+	u32 hw_version = BIT(tegra_sku_info.soc_speedo_id);
 	struct tegra_devfreq_device *dev;
 	struct tegra_devfreq *tegra;
 	struct devfreq *devfreq;
@@ -784,6 +802,8 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
 	if (!tegra)
 		return -ENOMEM;

+	tegra->soc = of_device_get_match_data(&pdev->dev);
+
 	tegra->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(tegra->regs))
 		return PTR_ERR(tegra->regs);
@@ -801,10 +821,9 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
 	}

 	tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
-	if (IS_ERR(tegra->emc_clock)) {
-		dev_err(&pdev->dev, "Failed to get emc clock\n");
-		return PTR_ERR(tegra->emc_clock);
-	}
+	if (IS_ERR(tegra->emc_clock))
+		return dev_err_probe(&pdev->dev, PTR_ERR(tegra->emc_clock),
+				     "Failed to get emc clock\n");

 	err = platform_get_irq(pdev, 0);
 	if (err < 0)
@@ -822,11 +841,25 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
 		return err;
 	}

+	tegra->opp_table = dev_pm_opp_set_supported_hw(&pdev->dev,
+						       &hw_version, 1);
+	err = PTR_ERR_OR_ZERO(tegra->opp_table);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to set supported HW: %d\n", err);
+		return err;
+	}
+
+	err = dev_pm_opp_of_add_table(&pdev->dev);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to add OPP table: %d\n", err);
+		goto put_hw;
+	}
+
 	err = clk_prepare_enable(tegra->clock);
 	if (err) {
 		dev_err(&pdev->dev,
 			"Failed to prepare and enable ACTMON clock\n");
-		return err;
+		goto remove_table;
 	}

 	err = reset_control_reset(tegra->reset);
@@ -844,29 +877,12 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
 	tegra->max_freq = rate / KHZ;

-	for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
+	for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
 		dev = tegra->devices + i;
-		dev->config = actmon_device_configs + i;
+		dev->config = tegra->soc->configs + i;
 		dev->regs = tegra->regs + dev->config->offset;
 	}

-	for (rate = 0; rate <= tegra->max_freq * KHZ; rate++) {
-		rate = clk_round_rate(tegra->emc_clock, rate);
-
-		if (rate < 0) {
-			dev_err(&pdev->dev,
-				"Failed to round clock rate: %ld\n", rate);
-			err = rate;
-			goto remove_opps;
-		}
-
-		err = dev_pm_opp_add(&pdev->dev, rate / KHZ, 0);
-		if (err) {
-			dev_err(&pdev->dev, "Failed to add OPP: %d\n", err);
-			goto remove_opps;
-		}
-	}
-
 	platform_set_drvdata(pdev, tegra);

 	tegra->clk_rate_change_nb.notifier_call = tegra_actmon_clk_notify_cb;
@@ -882,7 +898,6 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
 	}

 	tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock);
-	tegra_devfreq_profile.initial_freq /= KHZ;

 	devfreq = devfreq_add_device(&pdev->dev, &tegra_devfreq_profile,
 				     "tegra_actmon", NULL);
@@ -902,6 +917,10 @@ remove_opps:
 	reset_control_reset(tegra->reset);
 disable_clk:
 	clk_disable_unprepare(tegra->clock);
+remove_table:
+	dev_pm_opp_of_remove_table(&pdev->dev);
+put_hw:
+	dev_pm_opp_put_supported_hw(tegra->opp_table);

 	return err;
 }
@@ -913,17 +932,33 @@ static int tegra_devfreq_remove(struct platform_device *pdev)
 	devfreq_remove_device(tegra->devfreq);
 	devfreq_remove_governor(&tegra_devfreq_governor);

-	dev_pm_opp_remove_all_dynamic(&pdev->dev);
-
 	reset_control_reset(tegra->reset);
 	clk_disable_unprepare(tegra->clock);

+	dev_pm_opp_of_remove_table(&pdev->dev);
+	dev_pm_opp_put_supported_hw(tegra->opp_table);
+
 	return 0;
 }

+static const struct tegra_devfreq_soc_data tegra124_soc = {
+	.configs = tegra124_device_configs,
+
+	/*
+	 * Activity counter is incremented every 256 memory transactions,
+	 * and each transaction takes 4 EMC clocks.
+	 */
+	.count_weight = 4 * 256,
+};
+
+static const struct tegra_devfreq_soc_data tegra30_soc = {
+	.configs = tegra30_device_configs,
+	.count_weight = 2 * 256,
+};
+
 static const struct of_device_id tegra_devfreq_of_match[] = {
-	{ .compatible = "nvidia,tegra30-actmon" },
-	{ .compatible = "nvidia,tegra124-actmon" },
+	{ .compatible = "nvidia,tegra30-actmon", .data = &tegra30_soc, },
+	{ .compatible = "nvidia,tegra124-actmon", .data = &tegra124_soc, },
 	{ },
 };
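
Two unit conventions meet in this driver and are worth spelling out. Internally tegra30-devfreq still works in kHz, while the devfreq core and the OPP entries it now consumes use Hz, hence the `* KHZ` and `/ KHZ` conversions above (KHZ is 1000 here). The count_weight values also encode simple arithmetic: on Tegra124 the activity counter ticks once per 256 memory transactions and each transaction costs 4 EMC clocks, so 4 * 256 = 1024 reproduces the old hard-coded ACTMON_COUNT_WEIGHT of 0x400; the Tegra30 value of 2 * 256 = 512 suggests two EMC clocks per transaction there. A sketch of the conversions:

	#define KHZ 1000

	/* driver-internal kHz -> Hz for the devfreq core / OPP table */
	*freq = target_freq * KHZ;	/* e.g. 204000 kHz -> 204000000 Hz */

	/* clk API Hz -> driver-internal kHz */
	tegra->max_freq = rate / KHZ;
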

@@ -750,6 +750,13 @@ static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
 	return dom->fc_info && dom->fc_info->level_set_addr;
 }

+static bool scmi_power_scale_mw_get(const struct scmi_handle *handle)
+{
+	struct scmi_perf_info *pi = handle->perf_priv;
+
+	return pi->power_scale_mw;
+}
+
 static const struct scmi_perf_ops perf_ops = {
 	.limits_set = scmi_perf_limits_set,
 	.limits_get = scmi_perf_limits_get,
@@ -762,6 +769,7 @@ static const struct scmi_perf_ops perf_ops = {
 	.freq_get = scmi_dvfs_freq_get,
 	.est_power_get = scmi_dvfs_est_power_get,
 	.fast_switch_possible = scmi_fast_switch_possible,
+	.power_scale_mw_get = scmi_power_scale_mw_get,
 };

 static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
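
This hook is how the arm_scmi cpufreq driver can discover whether the performance protocol reports power in milliwatts or in an abstract scale, and forward that to the Energy Model. A sketch of the consumer side (assuming the 5.11 em_dev_register_perf_domain() signature, which gained a boolean "milliwatts" argument in this same series; variable names are illustrative):

	/* Sketch: cpufreq driver registering an EM perf domain with the
	 * power scale discovered from the SCMI performance protocol. */
	bool power_scale_mw = handle->perf_ops->power_scale_mw_get(handle);

	ret = em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb,
					  policy->cpus, power_scale_mw);
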

@@ -102,15 +102,10 @@ void lima_devfreq_fini(struct lima_device *ldev)
 	dev_pm_opp_of_remove_table(ldev->dev);

-	if (devfreq->regulators_opp_table) {
-		dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
-		devfreq->regulators_opp_table = NULL;
-	}
-
-	if (devfreq->clkname_opp_table) {
-		dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
-		devfreq->clkname_opp_table = NULL;
-	}
+	dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
+	dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
+	devfreq->regulators_opp_table = NULL;
+	devfreq->clkname_opp_table = NULL;
 }

 int lima_devfreq_init(struct lima_device *ldev)

@@ -165,10 +165,8 @@ void panfrost_devfreq_fini(struct panfrost_device *pfdev)
 		pfdevfreq->opp_of_table_added = false;
 	}

-	if (pfdevfreq->regulators_opp_table) {
-		dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table);
-		pfdevfreq->regulators_opp_table = NULL;
-	}
+	dev_pm_opp_put_regulators(pfdevfreq->regulators_opp_table);
+	pfdevfreq->regulators_opp_table = NULL;
 }

 void panfrost_devfreq_resume(struct panfrost_device *pfdev)

@@ -2322,7 +2322,7 @@ static int stm32f7_i2c_suspend(struct device *dev)
 	i2c_mark_adapter_suspended(&i2c_dev->adap);

-	if (!device_may_wakeup(dev) && !dev->power.wakeup_path) {
+	if (!device_may_wakeup(dev) && !device_wakeup_path(dev)) {
 		ret = stm32f7_i2c_regs_backup(i2c_dev);
 		if (ret < 0) {
 			i2c_mark_adapter_resumed(&i2c_dev->adap);
@@ -2341,7 +2341,7 @@ static int stm32f7_i2c_resume(struct device *dev)
 	struct stm32f7_i2c_dev *i2c_dev = dev_get_drvdata(dev);
 	int ret;

-	if (!device_may_wakeup(dev) && !dev->power.wakeup_path) {
+	if (!device_may_wakeup(dev) && !device_wakeup_path(dev)) {
 		ret = pm_runtime_force_resume(dev);
 		if (ret < 0)
 			return ret;
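
The stm32f7 hunks replace direct reads of dev->power.wakeup_path with the device_wakeup_path() accessor introduced in this cycle. The helper is just a thin wrapper; a sketch of its shape (the upstream definition in include/linux/pm_wakeup.h is guarded by CONFIG_PM_SLEEP, with a stub returning false otherwise):

	static inline bool device_wakeup_path(struct device *dev)
	{
		return dev->power.wakeup_path;
	}
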

@@ -908,8 +908,7 @@ static void core_put_v4(struct device *dev)
 	if (core->has_opp_table)
 		dev_pm_opp_of_remove_table(dev);
-	if (core->opp_table)
-		dev_pm_opp_put_clkname(core->opp_table);
+	dev_pm_opp_put_clkname(core->opp_table);

 }
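
The lima, panfrost and venus hunks above are all instances of the same cleanup: once the OPP put helpers tolerate a NULL (never-assigned) table pointer — see the "if (unlikely(!opp_table)) return;" additions in the OPP core below — callers no longer need conditional guards. The idiom in miniature ("foo" stands in for a driver's private struct):

	/* Before: every caller guards the put itself. */
	if (foo->opp_table)
		dev_pm_opp_put_regulators(foo->opp_table);
	foo->opp_table = NULL;

	/* After: the callee checks, callers put unconditionally. */
	dev_pm_opp_put_regulators(foo->opp_table);
	foo->opp_table = NULL;
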

@@ -29,32 +29,32 @@
 LIST_HEAD(opp_tables);
+
 /* Lock to allow exclusive modification to the device and opp lists */
 DEFINE_MUTEX(opp_table_lock);
+/* Flag indicating that opp_tables list is being updated at the moment */
+static bool opp_tables_busy;

-static struct opp_device *_find_opp_dev(const struct device *dev,
-					struct opp_table *opp_table)
+static bool _find_opp_dev(const struct device *dev, struct opp_table *opp_table)
 {
 	struct opp_device *opp_dev;
+	bool found = false;

+	mutex_lock(&opp_table->lock);
 	list_for_each_entry(opp_dev, &opp_table->dev_list, node)
-		if (opp_dev->dev == dev)
-			return opp_dev;
+		if (opp_dev->dev == dev) {
+			found = true;
+			break;
+		}

-	return NULL;
+	mutex_unlock(&opp_table->lock);
+
+	return found;
 }

 static struct opp_table *_find_opp_table_unlocked(struct device *dev)
 {
 	struct opp_table *opp_table;
-	bool found;

 	list_for_each_entry(opp_table, &opp_tables, node) {
-		mutex_lock(&opp_table->lock);
-		found = !!_find_opp_dev(dev, opp_table);
-		mutex_unlock(&opp_table->lock);
-
-		if (found) {
+		if (_find_opp_dev(dev, opp_table)) {
 			_get_opp_table_kref(opp_table);
 			return opp_table;
 		}
 	}
@@ -1036,8 +1036,8 @@ static void _remove_opp_dev(struct opp_device *opp_dev,
 	kfree(opp_dev);
 }

-static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
-						struct opp_table *opp_table)
+struct opp_device *_add_opp_dev(const struct device *dev,
+				struct opp_table *opp_table)
 {
 	struct opp_device *opp_dev;
@@ -1048,7 +1048,9 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
 	/* Initialize opp-dev */
 	opp_dev->dev = dev;

+	mutex_lock(&opp_table->lock);
 	list_add(&opp_dev->node, &opp_table->dev_list);
+	mutex_unlock(&opp_table->lock);

 	/* Create debugfs entries for the opp_table */
 	opp_debug_register(opp_dev, opp_table);
@@ -1056,18 +1058,6 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
 	return opp_dev;
 }

-struct opp_device *_add_opp_dev(const struct device *dev,
-				struct opp_table *opp_table)
-{
-	struct opp_device *opp_dev;
-
-	mutex_lock(&opp_table->lock);
-	opp_dev = _add_opp_dev_unlocked(dev, opp_table);
-	mutex_unlock(&opp_table->lock);
-
-	return opp_dev;
-}
-
 static struct opp_table *_allocate_opp_table(struct device *dev, int index)
 {
 	struct opp_table *opp_table;
@@ -1121,8 +1111,6 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
 	INIT_LIST_HEAD(&opp_table->opp_list);
 	kref_init(&opp_table->kref);

-	/* Secure the device table modification */
-	list_add(&opp_table->node, &opp_tables);
 	return opp_table;

 err:
@@ -1135,27 +1123,64 @@ void _get_opp_table_kref(struct opp_table *opp_table)
 	kref_get(&opp_table->kref);
 }

-static struct opp_table *_opp_get_opp_table(struct device *dev, int index)
+/*
+ * We need to make sure that the OPP table for a device doesn't get added twice,
+ * if this routine gets called in parallel with the same device pointer.
+ *
+ * The simplest way to enforce that is to perform everything (find existing
+ * table and if not found, create a new one) under the opp_table_lock, so only
+ * one creator gets access to the same. But that expands the critical section
+ * under the lock and may end up causing circular dependencies with frameworks
+ * like debugfs, interconnect or clock framework as they may be direct or
+ * indirect users of OPP core.
+ *
+ * And for that reason we have to go for a bit tricky implementation here, which
+ * uses the opp_tables_busy flag to indicate if another creator is in the middle
+ * of adding an OPP table and others should wait for it to finish.
+ */
+struct opp_table *_add_opp_table_indexed(struct device *dev, int index)
 {
 	struct opp_table *opp_table;

-	/* Hold our table modification lock here */
+again:
 	mutex_lock(&opp_table_lock);

 	opp_table = _find_opp_table_unlocked(dev);
 	if (!IS_ERR(opp_table))
 		goto unlock;

+	/*
+	 * The opp_tables list or an OPP table's dev_list is getting updated by
+	 * another user, wait for it to finish.
+	 */
+	if (unlikely(opp_tables_busy)) {
+		mutex_unlock(&opp_table_lock);
+		cpu_relax();
+		goto again;
+	}
+
+	opp_tables_busy = true;
 	opp_table = _managed_opp(dev, index);
+
+	/* Drop the lock to reduce the size of critical section */
+	mutex_unlock(&opp_table_lock);
+
 	if (opp_table) {
-		if (!_add_opp_dev_unlocked(dev, opp_table)) {
+		if (!_add_opp_dev(dev, opp_table)) {
 			dev_pm_opp_put_opp_table(opp_table);
 			opp_table = ERR_PTR(-ENOMEM);
 		}
-		goto unlock;
+
+		mutex_lock(&opp_table_lock);
+	} else {
+		opp_table = _allocate_opp_table(dev, index);
+
+		mutex_lock(&opp_table_lock);
+		if (!IS_ERR(opp_table))
+			list_add(&opp_table->node, &opp_tables);
 	}

-	opp_table = _allocate_opp_table(dev, index);
+	opp_tables_busy = false;

 unlock:
 	mutex_unlock(&opp_table_lock);
@@ -1163,18 +1188,17 @@ unlock:
 	return opp_table;
 }

+struct opp_table *_add_opp_table(struct device *dev)
+{
+	return _add_opp_table_indexed(dev, 0);
+}
+
 struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
 {
-	return _opp_get_opp_table(dev, 0);
+	return _find_opp_table(dev);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_table);

-struct opp_table *dev_pm_opp_get_opp_table_indexed(struct device *dev,
-						   int index)
-{
-	return _opp_get_opp_table(dev, index);
-}
-
 static void _opp_table_kref_release(struct kref *kref)
 {
 	struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
@@ -1227,9 +1251,14 @@ void _opp_free(struct dev_pm_opp *opp)
 	kfree(opp);
 }

-static void _opp_kref_release(struct dev_pm_opp *opp,
-			      struct opp_table *opp_table)
+static void _opp_kref_release(struct kref *kref)
 {
+	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
+	struct opp_table *opp_table = opp->opp_table;
+
+	list_del(&opp->node);
+	mutex_unlock(&opp_table->lock);
+
 	/*
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
@@ -1237,27 +1266,9 @@ static void _opp_kref_release(struct dev_pm_opp *opp,
 	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp);
 	_of_opp_free_required_opps(opp_table, opp);
 	opp_debug_remove_one(opp);
-	list_del(&opp->node);
 	kfree(opp);
 }

-static void _opp_kref_release_unlocked(struct kref *kref)
-{
-	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
-	struct opp_table *opp_table = opp->opp_table;
-
-	_opp_kref_release(opp, opp_table);
-}
-
-static void _opp_kref_release_locked(struct kref *kref)
-{
-	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
-	struct opp_table *opp_table = opp->opp_table;
-
-	_opp_kref_release(opp, opp_table);
-	mutex_unlock(&opp_table->lock);
-}
-
 void dev_pm_opp_get(struct dev_pm_opp *opp)
 {
 	kref_get(&opp->kref);
@@ -1265,16 +1276,10 @@ void dev_pm_opp_get(struct dev_pm_opp *opp)
 void dev_pm_opp_put(struct dev_pm_opp *opp)
 {
-	kref_put_mutex(&opp->kref, _opp_kref_release_locked,
-		       &opp->opp_table->lock);
+	kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put);

-static void dev_pm_opp_put_unlocked(struct dev_pm_opp *opp)
-{
-	kref_put(&opp->kref, _opp_kref_release_unlocked);
-}
-
 /**
  * dev_pm_opp_remove() - Remove an OPP from OPP table
  * @dev:	device for which we do this operation
@@ -1318,30 +1323,49 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove);

+static struct dev_pm_opp *_opp_get_next(struct opp_table *opp_table,
+					bool dynamic)
+{
+	struct dev_pm_opp *opp = NULL, *temp;
+
+	mutex_lock(&opp_table->lock);
+	list_for_each_entry(temp, &opp_table->opp_list, node) {
+		if (dynamic == temp->dynamic) {
+			opp = temp;
+			break;
+		}
+	}
+
+	mutex_unlock(&opp_table->lock);
+
+	return opp;
+}
+
 bool _opp_remove_all_static(struct opp_table *opp_table)
 {
-	struct dev_pm_opp *opp, *tmp;
-	bool ret = true;
+	struct dev_pm_opp *opp;

 	mutex_lock(&opp_table->lock);

 	if (!opp_table->parsed_static_opps) {
-		ret = false;
-		goto unlock;
+		mutex_unlock(&opp_table->lock);
+		return false;
 	}

-	if (--opp_table->parsed_static_opps)
-		goto unlock;
-
-	list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
-		if (!opp->dynamic)
-			dev_pm_opp_put_unlocked(opp);
+	if (--opp_table->parsed_static_opps) {
+		mutex_unlock(&opp_table->lock);
+		return true;
 	}

-unlock:
 	mutex_unlock(&opp_table->lock);

-	return ret;
+	/*
+	 * Can't remove the OPP from under the lock, debugfs removal needs to
+	 * happen lock less to avoid circular dependency issues.
+	 */
+	while ((opp = _opp_get_next(opp_table, false)))
+		dev_pm_opp_put(opp);
+
+	return true;
 }
@@ -1353,21 +1377,21 @@ unlock:
 void dev_pm_opp_remove_all_dynamic(struct device *dev)
 {
 	struct opp_table *opp_table;
-	struct dev_pm_opp *opp, *temp;
+	struct dev_pm_opp *opp;
 	int count = 0;

 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return;

-	mutex_lock(&opp_table->lock);
-	list_for_each_entry_safe(opp, temp, &opp_table->opp_list, node) {
-		if (opp->dynamic) {
-			dev_pm_opp_put_unlocked(opp);
-			count++;
-		}
+	/*
+	 * Can't remove the OPP from under the lock, debugfs removal needs to
+	 * happen lock less to avoid circular dependency issues.
+	 */
+	while ((opp = _opp_get_next(opp_table, true))) {
+		dev_pm_opp_put(opp);
+		count++;
 	}
-	mutex_unlock(&opp_table->lock);

 	/* Drop the references taken by dev_pm_opp_add() */
 	while (count--)
@@ -1602,7 +1626,7 @@ struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
 {
 	struct opp_table *opp_table;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -1636,6 +1660,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_supported_hw);
 */
 void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
 {
+	if (unlikely(!opp_table))
+		return;
+
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1661,7 +1688,7 @@ struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 {
 	struct opp_table *opp_table;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -1692,6 +1719,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_prop_name);
 */
 void dev_pm_opp_put_prop_name(struct opp_table *opp_table)
 {
+	if (unlikely(!opp_table))
+		return;
+
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1754,7 +1784,7 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
 	struct regulator *reg;
 	int ret, i;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -1820,6 +1850,9 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
 {
 	int i;

+	if (unlikely(!opp_table))
+		return;
+
 	if (!opp_table->regulators)
 		goto put_opp_table;
@@ -1862,7 +1895,7 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
 	struct opp_table *opp_table;
 	int ret;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -1902,6 +1935,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_clkname);
 */
 void dev_pm_opp_put_clkname(struct opp_table *opp_table)
 {
+	if (unlikely(!opp_table))
+		return;
+
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1930,7 +1966,7 @@ struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
 	if (!set_opp)
 		return ERR_PTR(-EINVAL);

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -1957,6 +1993,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);
 */
 void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table)
 {
+	if (unlikely(!opp_table))
+		return;
+
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -2014,7 +2053,7 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
 	int index = 0, ret = -EINVAL;
 	const char **name = names;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return opp_table;
@@ -2085,6 +2124,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_attach_genpd);
 */
 void dev_pm_opp_detach_genpd(struct opp_table *opp_table)
 {
+	if (unlikely(!opp_table))
+		return;
+
 	/*
 	 * Acquire genpd_virt_dev_lock to make sure virt_dev isn't getting
 	 * used in parallel.
@@ -2179,7 +2221,7 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 	struct opp_table *opp_table;
 	int ret;

-	opp_table = dev_pm_opp_get_opp_table(dev);
+	opp_table = _add_opp_table(dev);
 	if (IS_ERR(opp_table))
 		return PTR_ERR(opp_table);
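
The opp_tables_busy dance in _add_opp_table_indexed() is the heart of the lockdep fix mentioned in the summary: the global lock must not be held across table allocation, because debugfs, clk and interconnect calls made during allocation can recurse into the OPP core. The structure of the pattern, stripped of the OPP specifics (all names below are illustrative only):

	/* Sketch of the busy-flag pattern used above. */
	again:
		mutex_lock(&table_lock);
		entry = find_entry(dev);	/* fast path: already registered */
		if (entry)
			goto unlock;

		if (busy) {			/* another creator is mid-flight */
			mutex_unlock(&table_lock);
			cpu_relax();
			goto again;		/* retry until it finishes */
		}

		busy = true;
		mutex_unlock(&table_lock);	/* allocation may recurse into us */

		entry = create_entry(dev);

		mutex_lock(&table_lock);
		if (entry)
			list_add(&entry->node, &entries);
		busy = false;
	unlock:
		mutex_unlock(&table_lock);
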

@@ -112,8 +112,6 @@ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np)
 	struct opp_table *opp_table;
 	struct device_node *opp_table_np;

-	lockdep_assert_held(&opp_table_lock);
-
 	opp_table_np = of_get_parent(opp_np);
 	if (!opp_table_np)
 		goto err;
@@ -121,12 +119,15 @@ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np)
 	/* It is safe to put the node now as all we need now is its address */
 	of_node_put(opp_table_np);

+	mutex_lock(&opp_table_lock);
 	list_for_each_entry(opp_table, &opp_tables, node) {
 		if (opp_table_np == opp_table->np) {
 			_get_opp_table_kref(opp_table);
+			mutex_unlock(&opp_table_lock);
 			return opp_table;
 		}
 	}
+	mutex_unlock(&opp_table_lock);

 err:
 	return ERR_PTR(-ENODEV);
@@ -169,7 +170,8 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table,
 	/* Traversing the first OPP node is all we need */
 	np = of_get_next_available_child(opp_np, NULL);
 	if (!np) {
-		dev_err(dev, "Empty OPP table\n");
+		dev_warn(dev, "Empty OPP table\n");
+
 		return;
 	}
@@ -377,7 +379,9 @@ int dev_pm_opp_of_find_icc_paths(struct device *dev,
 	struct icc_path **paths;

 	ret = _bandwidth_supported(dev, opp_table);
-	if (ret <= 0)
+	if (ret == -EINVAL)
+		return 0; /* Empty OPP table is a valid corner-case, let's not fail */
+	else if (ret <= 0)
 		return ret;

 	ret = 0;
@@ -974,7 +978,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
 	struct opp_table *opp_table;
 	int ret;

-	opp_table = dev_pm_opp_get_opp_table_indexed(dev, 0);
+	opp_table = _add_opp_table_indexed(dev, 0);
 	if (IS_ERR(opp_table))
 		return PTR_ERR(opp_table);
@@ -1029,7 +1033,7 @@ int dev_pm_opp_of_add_table_indexed(struct device *dev, int index)
 		index = 0;
 	}

-	opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
+	opp_table = _add_opp_table_indexed(dev, index);
 	if (IS_ERR(opp_table))
 		return PTR_ERR(opp_table);
@@ -1335,7 +1339,7 @@ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus)
 		goto failed;
 	}

-	ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus);
+	ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus, true);
 	if (ret)
 		goto failed;
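
The one-line change to dev_pm_opp_of_register_em() tracks the Energy Model API change in this series: em_dev_register_perf_domain() now takes a boolean stating whether the supplied power numbers are real milliwatts or an abstract scale, and the OPP helper passes true because its callback computes power in milliwatts. A sketch of the updated prototype (as of 5.11; see include/linux/energy_model.h for the authoritative declaration):

	int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
					struct em_data_callback *cb, cpumask_t *cpus,
					bool milliwatts);
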

@@ -224,6 +224,7 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *o
 int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
 void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, int last_cpu);
 struct opp_table *_add_opp_table(struct device *dev);
+struct opp_table *_add_opp_table_indexed(struct device *dev, int index);
 void _put_opp_list_kref(struct opp_table *opp_table);

 #ifdef CONFIG_OF

@@ -1060,7 +1060,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
 {
 	while (bus->parent) {
 		if (acpi_pm_device_can_wakeup(&bus->self->dev))
-			return acpi_pm_set_bridge_wakeup(&bus->self->dev, enable);
+			return acpi_pm_set_device_wakeup(&bus->self->dev, enable);

 		bus = bus->parent;
 	}
@@ -1068,7 +1068,7 @@ static int acpi_pci_propagate_wakeup(struct pci_bus *bus, bool enable)
 	/* We have reached the root bus. */
 	if (bus->bridge) {
 		if (acpi_pm_device_can_wakeup(bus->bridge))
-			return acpi_pm_set_bridge_wakeup(bus->bridge, enable);
+			return acpi_pm_set_device_wakeup(bus->bridge, enable);
 	}
 	return 0;
 }

@@ -1011,6 +1011,10 @@ static const struct rapl_defaults rapl_defaults_cht = {
 	.compute_time_window = rapl_compute_time_window_atom,
 };

+static const struct rapl_defaults rapl_defaults_amd = {
+	.check_unit = rapl_check_unit_core,
+};
+
 static const struct x86_cpu_id rapl_ids[] __initconst = {
 	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE,		&rapl_defaults_core),
 	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X,	&rapl_defaults_core),
@@ -1061,6 +1065,9 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
 	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL,	&rapl_defaults_hsw_server),
 	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM,	&rapl_defaults_hsw_server),

+	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
+	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, rapl_ids);

@@ -31,7 +31,9 @@
 #define MSR_VR_CURRENT_CONFIG		0x00000601

 /* private data for RAPL MSR Interface */
-static struct rapl_if_priv rapl_msr_priv = {
+static struct rapl_if_priv *rapl_msr_priv;
+
+static struct rapl_if_priv rapl_msr_priv_intel = {
 	.reg_unit = MSR_RAPL_POWER_UNIT,
 	.regs[RAPL_DOMAIN_PACKAGE] = {
 		MSR_PKG_POWER_LIMIT, MSR_PKG_ENERGY_STATUS, MSR_PKG_PERF_STATUS, 0, MSR_PKG_POWER_INFO },
@@ -47,6 +49,14 @@ static struct rapl_if_priv rapl_msr_priv = {
 	.limits[RAPL_DOMAIN_PLATFORM] = 2,
 };

+static struct rapl_if_priv rapl_msr_priv_amd = {
+	.reg_unit = MSR_AMD_RAPL_POWER_UNIT,
+	.regs[RAPL_DOMAIN_PACKAGE] = {
+		0, MSR_AMD_PKG_ENERGY_STATUS, 0, 0, 0 },
+	.regs[RAPL_DOMAIN_PP0] = {
+		0, MSR_AMD_CORE_ENERGY_STATUS, 0, 0, 0 },
+};
+
 /* Handles CPU hotplug on multi-socket systems.
  * If a CPU goes online as the first CPU of the physical package
  * we add the RAPL package to the system. Similarly, when the last
@@ -58,9 +68,9 @@ static int rapl_cpu_online(unsigned int cpu)
 {
 	struct rapl_package *rp;

-	rp = rapl_find_package_domain(cpu, &rapl_msr_priv);
+	rp = rapl_find_package_domain(cpu, rapl_msr_priv);
 	if (!rp) {
-		rp = rapl_add_package(cpu, &rapl_msr_priv);
+		rp = rapl_add_package(cpu, rapl_msr_priv);
 		if (IS_ERR(rp))
 			return PTR_ERR(rp);
 	}
@@ -73,7 +83,7 @@ static int rapl_cpu_down_prep(unsigned int cpu)
 	struct rapl_package *rp;
 	int lead_cpu;

-	rp = rapl_find_package_domain(cpu, &rapl_msr_priv);
+	rp = rapl_find_package_domain(cpu, rapl_msr_priv);
 	if (!rp)
 		return 0;
@@ -136,40 +146,51 @@ static int rapl_msr_probe(struct platform_device *pdev)
 	const struct x86_cpu_id *id = x86_match_cpu(pl4_support_ids);
 	int ret;

-	rapl_msr_priv.read_raw = rapl_msr_read_raw;
-	rapl_msr_priv.write_raw = rapl_msr_write_raw;
+	switch (boot_cpu_data.x86_vendor) {
+	case X86_VENDOR_INTEL:
+		rapl_msr_priv = &rapl_msr_priv_intel;
+		break;
+	case X86_VENDOR_AMD:
+		rapl_msr_priv = &rapl_msr_priv_amd;
+		break;
+	default:
+		pr_err("intel-rapl does not support CPU vendor %d\n", boot_cpu_data.x86_vendor);
+		return -ENODEV;
+	}
+	rapl_msr_priv->read_raw = rapl_msr_read_raw;
+	rapl_msr_priv->write_raw = rapl_msr_write_raw;

 	if (id) {
-		rapl_msr_priv.limits[RAPL_DOMAIN_PACKAGE] = 3;
-		rapl_msr_priv.regs[RAPL_DOMAIN_PACKAGE][RAPL_DOMAIN_REG_PL4] =
+		rapl_msr_priv->limits[RAPL_DOMAIN_PACKAGE] = 3;
+		rapl_msr_priv->regs[RAPL_DOMAIN_PACKAGE][RAPL_DOMAIN_REG_PL4] =
 			MSR_VR_CURRENT_CONFIG;
 		pr_info("PL4 support detected.\n");
 	}

-	rapl_msr_priv.control_type = powercap_register_control_type(NULL, "intel-rapl", NULL);
-	if (IS_ERR(rapl_msr_priv.control_type)) {
+	rapl_msr_priv->control_type = powercap_register_control_type(NULL, "intel-rapl", NULL);
+	if (IS_ERR(rapl_msr_priv->control_type)) {
 		pr_debug("failed to register powercap control_type.\n");
-		return PTR_ERR(rapl_msr_priv.control_type);
+		return PTR_ERR(rapl_msr_priv->control_type);
 	}

 	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powercap/rapl:online",
 				rapl_cpu_online, rapl_cpu_down_prep);
 	if (ret < 0)
 		goto out;
-	rapl_msr_priv.pcap_rapl_online = ret;
+	rapl_msr_priv->pcap_rapl_online = ret;

 	return 0;

 out:
 	if (ret)
-		powercap_unregister_control_type(rapl_msr_priv.control_type);
+		powercap_unregister_control_type(rapl_msr_priv->control_type);
 	return ret;
 }

 static int rapl_msr_remove(struct platform_device *pdev)
 {
-	cpuhp_remove_state(rapl_msr_priv.pcap_rapl_online);
-	powercap_unregister_control_type(rapl_msr_priv.control_type);
+	cpuhp_remove_state(rapl_msr_priv->pcap_rapl_online);
+	powercap_unregister_control_type(rapl_msr_priv->control_type);
 	return 0;
 }
@@ -170,9 +170,8 @@ static ssize_t show_constraint_name(struct device *dev,
 	if (pconst && pconst->ops && pconst->ops->get_name) {
 		name = pconst->ops->get_name(power_zone, id);
 		if (name) {
-			snprintf(buf, POWERCAP_CONSTRAINT_NAME_LEN,
-				"%s\n", name);
-			buf[POWERCAP_CONSTRAINT_NAME_LEN] = '\0';
+			sprintf(buf, "%.*s\n", POWERCAP_CONSTRAINT_NAME_LEN - 1,
+				name);
 			len = strlen(buf);
 		}
 	}

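The rewrite above fixes two problems at once: snprintf() with a size of POWERCAP_CONSTRAINT_NAME_LEN silently dropped the trailing newline for overly long names, and the explicit buf[POWERCAP_CONSTRAINT_NAME_LEN] = '\0' wrote one byte past the window it was supposed to bound. The "%.*s" precision form caps only the name while keeping the newline. A small standalone illustration (buffer length shortened for the example; this is not the kernel code itself):

    #include <stdio.h>

    #define NAME_LEN 8 /* stand-in for POWERCAP_CONSTRAINT_NAME_LEN */

    int main(void)
    {
        char buf[64];
        const char *name = "a_very_long_constraint_name";

        /* old style: the newline is lost once the name fills the window */
        snprintf(buf, NAME_LEN, "%s\n", name);
        printf("snprintf : [%s]\n", buf);   /* prints [a_very_], no newline */

        /* new style: name capped at NAME_LEN - 1, newline preserved */
        sprintf(buf, "%.*s\n", NAME_LEN - 1, name);
        printf("precision: [%s]", buf);     /* prints [a_very_] plus newline */
        return 0;
    }
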
@@ -3,6 +3,7 @@
  * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
  */

+#include <linux/export.h>
 #include <linux/kernel.h>
 #include <linux/of.h>
 #include <linux/of_address.h>

@@ -90,6 +91,7 @@ u32 tegra_read_ram_code(void)

 	return straps >> PMC_STRAPPING_OPT_A_RAM_CODE_SHIFT;
 }
+EXPORT_SYMBOL_GPL(tegra_read_ram_code);

 static const struct of_device_id apbmisc_match[] __initconst = {
 	{ .compatible = "nvidia,tegra20-apbmisc", },

@@ -620,7 +620,6 @@ acpi_status acpi_remove_pm_notifier(struct acpi_device *adev);
 bool acpi_pm_device_can_wakeup(struct device *dev);
 int acpi_pm_device_sleep_state(struct device *, int *, int);
 int acpi_pm_set_device_wakeup(struct device *dev, bool enable);
-int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable);
 #else
 static inline void acpi_pm_wakeup_event(struct device *dev)
 {

@@ -651,10 +650,6 @@ static inline int acpi_pm_set_device_wakeup(struct device *dev, bool enable)
 {
 	return -ENODEV;
 }
-static inline int acpi_pm_set_bridge_wakeup(struct device *dev, bool enable)
-{
-	return -ENODEV;
-}
 #endif

 #ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT

@@ -65,7 +65,6 @@ struct cpufreq_policy {
 	unsigned int		max;	/* in kHz */
 	unsigned int		cur;	/* in kHz, only needed if cpufreq
 					 * governors are used */
-	unsigned int		restore_freq;	/* = policy->cur before transition */
 	unsigned int		suspend_freq;	/* freq to set during suspend */

 	unsigned int		policy;	/* see above */

@@ -314,10 +313,6 @@ struct cpufreq_driver {
 	/* define one out of two */
 	int		(*setpolicy)(struct cpufreq_policy *policy);

-	/*
-	 * On failure, should always restore frequency to policy->restore_freq
-	 * (i.e. old freq).
-	 */
 	int		(*target)(struct cpufreq_policy *policy,
 				  unsigned int target_freq,
 				  unsigned int relation);	/* Deprecated */

@@ -15,8 +15,6 @@
 #include <linux/pm_opp.h>
 #include <linux/pm_qos.h>

-#define DEVFREQ_NAME_LEN 16
-
 /* DEVFREQ governor name */
 #define DEVFREQ_GOV_SIMPLE_ONDEMAND	"simple_ondemand"
 #define DEVFREQ_GOV_PERFORMANCE		"performance"

@@ -139,7 +137,6 @@ struct devfreq_stats {
  *		using devfreq.
  * @profile:	device-specific devfreq profile
  * @governor:	method how to choose frequency based on the usage.
- * @governor_name:	devfreq governor name for use with this devfreq
  * @nb:		notifier block used to notify devfreq object that it should
  *		reevaluate operable frequencies. Devfreq users may use
  *		devfreq.nb to the corresponding register notifier call chain.

@@ -176,7 +173,6 @@ struct devfreq {
 	struct device dev;
 	struct devfreq_dev_profile *profile;
 	const struct devfreq_governor *governor;
-	char governor_name[DEVFREQ_NAME_LEN];
 	struct notifier_block nb;
 	struct delayed_work work;

@@ -13,9 +13,8 @@
 /**
  * em_perf_state - Performance state of a performance domain
  * @frequency:	The frequency in KHz, for consistency with CPUFreq
- * @power:	The power consumed at this level, in milli-watts (by 1 CPU or
- *		by a registered device). It can be a total power: static and
- *		dynamic.
+ * @power:	The power consumed at this level (by 1 CPU or by a registered
+ *		device). It can be a total power: static and dynamic.
  * @cost:	The cost coefficient associated with this level, used during
  *		energy calculation. Equal to: power * max_frequency / frequency
  */

@@ -29,6 +28,8 @@ struct em_perf_state {
  * em_perf_domain - Performance domain
  * @table:		List of performance states, in ascending order
  * @nr_perf_states:	Number of performance states
+ * @milliwatts:		Flag indicating the power values are in milli-Watts
+ *			or some other scale.
  * @cpus:		Cpumask covering the CPUs of the domain. It's here
  *			for performance reasons to avoid potential cache
  *			misses during energy calculations in the scheduler

@@ -43,6 +44,7 @@ struct em_perf_state {
 struct em_perf_domain {
 	struct em_perf_state *table;
 	int nr_perf_states;
+	int milliwatts;
 	unsigned long cpus[];
 };

@@ -55,7 +57,7 @@ struct em_data_callback {
 	/**
 	 * active_power() - Provide power at the next performance state of
 	 *		a device
-	 * @power	: Active power at the performance state in mW
+	 * @power	: Active power at the performance state
 	 *		(modified)
	 * @freq	: Frequency at the performance state in kHz
 	 *		(modified)

@@ -66,8 +68,8 @@ struct em_data_callback {
 	 * and frequency.
 	 *
 	 * In case of CPUs, the power is the one of a single CPU in the domain,
-	 * expressed in milli-watts. It is expected to fit in the
-	 * [0, EM_MAX_POWER] range.
+	 * expressed in milli-Watts or an abstract scale. It is expected to
+	 * fit in the [0, EM_MAX_POWER] range.
 	 *
 	 * Return 0 on success.
 	 */

@@ -79,7 +81,8 @@ struct em_data_callback {
 struct em_perf_domain *em_cpu_get(int cpu);
 struct em_perf_domain *em_pd_get(struct device *dev);
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
-				struct em_data_callback *cb, cpumask_t *span);
+				struct em_data_callback *cb, cpumask_t *span,
+				bool milliwatts);
 void em_dev_unregister_perf_domain(struct device *dev);

 /**

@@ -103,6 +106,9 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 	struct em_perf_state *ps;
 	int i, cpu;

+	if (!sum_util)
+		return 0;
+
 	/*
 	 * In order to predict the performance state, map the utilization of
 	 * the most utilized CPU of the performance domain to a requested

@@ -186,7 +192,8 @@ struct em_data_callback {};

 static inline
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
-				struct em_data_callback *cb, cpumask_t *span)
+				struct em_data_callback *cb, cpumask_t *span,
+				bool milliwatts)
 {
 	return -EINVAL;
 }

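To see what the extra parameter means for users of this header, here is a hedged sketch of a registration call after the change; the callback values are illustrative. Passing true declares that active_power() reports real milli-Watts, false declares an abstract, driver-defined scale:

    static int example_active_power(unsigned long *power, unsigned long *freq,
                                    struct device *dev)
    {
        *freq = 1000000;  /* kHz */
        *power = 600;     /* mW here, because we register with 'true' below */
        return 0;
    }

    static int example_register(struct device *dev, cpumask_t *cpus)
    {
        struct em_data_callback em_cb = EM_DATA_CB(example_active_power);

        /* the trailing 'true' is the new milliwatts flag */
        return em_dev_register_perf_domain(dev, 4, &em_cb, cpus, true);
    }
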
@@ -255,24 +255,24 @@ static inline int pm_genpd_init(struct generic_pm_domain *genpd,
 }
 static inline int pm_genpd_remove(struct generic_pm_domain *genpd)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 static inline int dev_pm_genpd_set_performance_state(struct device *dev,
						      unsigned int state)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 static inline int dev_pm_genpd_add_notifier(struct device *dev,
					     struct notifier_block *nb)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 static inline int dev_pm_genpd_remove_notifier(struct device *dev)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 #define simple_qos_governor (*(struct dev_power_governor *)(NULL))

@@ -280,11 +280,11 @@ static inline int dev_pm_genpd_remove_notifier(struct device *dev)
 #endif

 #ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP
-void pm_genpd_syscore_poweroff(struct device *dev);
-void pm_genpd_syscore_poweron(struct device *dev);
+void dev_pm_genpd_suspend(struct device *dev);
+void dev_pm_genpd_resume(struct device *dev);
 #else
-static inline void pm_genpd_syscore_poweroff(struct device *dev) {}
-static inline void pm_genpd_syscore_poweron(struct device *dev) {}
+static inline void dev_pm_genpd_suspend(struct device *dev) {}
+static inline void dev_pm_genpd_resume(struct device *dev) {}
 #endif

 /* OF PM domain providers */

@@ -325,13 +325,13 @@ struct device *genpd_dev_pm_attach_by_name(struct device *dev,
 static inline int of_genpd_add_provider_simple(struct device_node *np,
					       struct generic_pm_domain *genpd)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 static inline int of_genpd_add_provider_onecell(struct device_node *np,
						struct genpd_onecell_data *data)
 {
-	return -ENOTSUPP;
+	return -EOPNOTSUPP;
 }

 static inline void of_genpd_del_provider(struct device_node *np) {}

@@ -387,7 +387,7 @@ static inline struct device *genpd_dev_pm_attach_by_name(struct device *dev,
 static inline
 struct generic_pm_domain *of_genpd_remove_last(struct device_node *np)
 {
-	return ERR_PTR(-ENOTSUPP);
+	return ERR_PTR(-EOPNOTSUPP);
 }
 #endif /* CONFIG_PM_GENERIC_DOMAINS_OF */

@@ -90,7 +90,6 @@ struct dev_pm_set_opp_data {
 #if defined(CONFIG_PM_OPP)

 struct opp_table *dev_pm_opp_get_opp_table(struct device *dev);
-struct opp_table *dev_pm_opp_get_opp_table_indexed(struct device *dev, int index);
 void dev_pm_opp_put_opp_table(struct opp_table *opp_table);

 unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);

@@ -84,6 +84,11 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && !!dev->power.wakeup;
 }

+static inline bool device_wakeup_path(struct device *dev)
+{
+	return dev->power.wakeup_path;
+}
+
 static inline void device_set_wakeup_path(struct device *dev)
 {
 	dev->power.wakeup_path = true;

@@ -174,6 +179,11 @@ static inline bool device_may_wakeup(struct device *dev)
 	return dev->power.can_wakeup && dev->power.should_wakeup;
 }

+static inline bool device_wakeup_path(struct device *dev)
+{
+	return false;
+}
+
 static inline void device_set_wakeup_path(struct device *dev) {}

 static inline void __pm_stay_awake(struct wakeup_source *ws) {}

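The new helper pairs with the existing device_set_wakeup_path(): the PM core marks devices that sit in a wakeup path during suspend, and bus or driver code can now query the flag without touching dev->power directly. A hedged sketch of a consumer (the callback is illustrative, not an in-tree user):

    static int example_suspend_noirq(struct device *dev)
    {
        if (device_wakeup_path(dev)) {
            /* a wakeup-capable device depends on us: keep the path live */
            return 0;
        }
        /* ... otherwise power down completely ... */
        return 0;
    }
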
@@ -121,6 +121,7 @@ struct scmi_perf_ops {
 			    unsigned long *rate, unsigned long *power);
 	bool (*fast_switch_possible)(const struct scmi_handle *handle,
 				     struct device *dev);
+	bool (*power_scale_mw_get)(const struct scmi_handle *handle);
 };

 /**

@@ -56,7 +56,11 @@ u32 tegra_read_straps(void);
 u32 tegra_read_ram_code(void);
 int tegra_fuse_readl(unsigned long offset, u32 *value);

+#ifdef CONFIG_ARCH_TEGRA
 extern struct tegra_sku_info tegra_sku_info;
+#else
+static struct tegra_sku_info tegra_sku_info __maybe_unused;
+#endif

 struct device *tegra_soc_device_register(void);

@@ -8,6 +8,34 @@
 #include <linux/devfreq.h>
 #include <linux/tracepoint.h>

+TRACE_EVENT(devfreq_frequency,
+	TP_PROTO(struct devfreq *devfreq, unsigned long freq,
+		 unsigned long prev_freq),
+
+	TP_ARGS(devfreq, freq, prev_freq),
+
+	TP_STRUCT__entry(
+		__string(dev_name, dev_name(&devfreq->dev))
+		__field(unsigned long, freq)
+		__field(unsigned long, prev_freq)
+		__field(unsigned long, busy_time)
+		__field(unsigned long, total_time)
+	),
+
+	TP_fast_assign(
+		__assign_str(dev_name, dev_name(&devfreq->dev));
+		__entry->freq = freq;
+		__entry->prev_freq = prev_freq;
+		__entry->busy_time = devfreq->last_status.busy_time;
+		__entry->total_time = devfreq->last_status.total_time;
+	),
+
+	TP_printk("dev_name=%-30s freq=%-12lu prev_freq=%-12lu load=%-2lu",
+		__get_str(dev_name), __entry->freq, __entry->prev_freq,
+		__entry->total_time == 0 ? 0 :
+			(100 * __entry->busy_time) / __entry->total_time)
+);
+
 TRACE_EVENT(devfreq_monitor,
 	TP_PROTO(struct devfreq *devfreq),

@@ -29,7 +57,7 @@ TRACE_EVENT(devfreq_monitor,
 		__assign_str(dev_name, dev_name(&devfreq->dev));
 	),

-	TP_printk("dev_name=%s freq=%lu polling_ms=%u load=%lu",
+	TP_printk("dev_name=%-30s freq=%-12lu polling_ms=%-3u load=%-2lu",
 		__get_str(dev_name), __entry->freq, __entry->polling_ms,
 		__entry->total_time == 0 ? 0 :
 			(100 * __entry->busy_time) / __entry->total_time)

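For context, the producer side of this event sits in the devfreq core; roughly, after a successful profile->target() transition the core emits the event with the new and previous frequencies. A sketch of that call site (illustrative, not the exact core hunk from this series):

    static int example_set_target(struct devfreq *devfreq,
                                  unsigned long new_freq,
                                  unsigned long cur_freq)
    {
        int err = devfreq->profile->target(devfreq->dev.parent,
                                           &new_freq, 0);
        if (err)
            return err;

        /* generated by the TRACE_EVENT(devfreq_frequency, ...) above */
        trace_devfreq_frequency(devfreq, new_freq, cur_freq);
        return 0;
    }
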
@@ -52,6 +52,17 @@ static int em_debug_cpus_show(struct seq_file *s, void *unused)
 }
 DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);

+static int em_debug_units_show(struct seq_file *s, void *unused)
+{
+	struct em_perf_domain *pd = s->private;
+	char *units = pd->milliwatts ? "milliWatts" : "bogoWatts";
+
+	seq_printf(s, "%s\n", units);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(em_debug_units);
+
 static void em_debug_create_pd(struct device *dev)
 {
 	struct dentry *d;

@@ -64,6 +75,8 @@ static void em_debug_create_pd(struct device *dev)
 		debugfs_create_file("cpus", 0444, d, dev->em_pd->cpus,
 				    &em_debug_cpus_fops);

+	debugfs_create_file("units", 0444, d, dev->em_pd, &em_debug_units_fops);
+
 	/* Create a sub-directory for each performance state */
 	for (i = 0; i < dev->em_pd->nr_perf_states; i++)
 		em_debug_create_ps(&dev->em_pd->table[i], d);

@@ -130,7 +143,7 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,

 		/*
 		 * The power returned by active_state() is expected to be
-		 * positive, in milli-watts and to fit into 16 bits.
+		 * positive and to fit into 16 bits.
 		 */
 		if (!power || power > EM_MAX_POWER) {
 			dev_err(dev, "EM: invalid power: %lu\n",

@@ -250,17 +263,24 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
 * @cpus	: Pointer to cpumask_t, which in case of a CPU device is
 *		obligatory. It can be taken from i.e. 'policy->cpus'. For other
 *		type of devices this should be set to NULL.
+ * @milliwatts	: Flag indicating that the power values are in milliWatts or
+ *		in some other scale. It must be set properly.
 *
 * Create Energy Model tables for a performance domain using the callbacks
 * defined in cb.
 *
+ * The @milliwatts is important to set with correct value. Some kernel
+ * sub-systems might rely on this flag and check if all devices in the EM are
+ * using the same scale.
+ *
 * If multiple clients register the same performance domain, all but the first
 * registration will be ignored.
 *
 * Return 0 on success
 */
 int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
-				struct em_data_callback *cb, cpumask_t *cpus)
+				struct em_data_callback *cb, cpumask_t *cpus,
+				bool milliwatts)
 {
 	unsigned long cap, prev_cap = 0;
 	int cpu, ret;

@@ -313,6 +333,8 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 	if (ret)
 		goto unlock;

+	dev->em_pd->milliwatts = milliwatts;
+
 	em_debug_create_pd(dev);
 	dev_info(dev, "EM: created perf domain\n");

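The net effect is visible from userspace: each performance domain's debugfs directory gains a units node reporting the declared scale. A minimal reader, assuming debugfs is mounted at /sys/kernel/debug and a cpu0 domain exists (the exact directory name is the device name, inferred from em_debug_create_pd() above):

    #include <stdio.h>

    int main(void)
    {
        char line[32];
        FILE *f = fopen("/sys/kernel/debug/energy_model/cpu0/units", "r");

        if (!f)
            return 1;  /* debugfs not mounted, or no such domain */
        if (fgets(line, sizeof(line), f))
            printf("EM power scale: %s", line);  /* milliWatts or bogoWatts */
        fclose(f);
        return 0;
    }
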
@@ -224,6 +224,7 @@ EXPORT_SYMBOL_GPL(suspend_set_ops);

 /**
  * suspend_valid_only_mem - Generic memory-only valid callback.
+ * @state: Target system sleep state.
  *
  * Platform drivers that implement mem suspend only and only need to check for
  * that in their .valid() callback can use this instead of rolling their own

@@ -335,6 +336,7 @@ static int suspend_test(int level)

 /**
  * suspend_prepare - Prepare for entering system sleep state.
+ * @state: Target system sleep state.
  *
  * Common code run for every system sleep state that can be entered (except for
  * hibernation). Run suspend notifiers, allocate the "suspend" console and

@@ -244,6 +244,8 @@ void migrate_to_reboot_cpu(void)
 void kernel_restart(char *cmd)
 {
 	kernel_restart_prepare(cmd);
+	if (pm_power_off_prepare)
+		pm_power_off_prepare();
 	migrate_to_reboot_cpu();
 	syscore_shutdown();
 	if (!cmd)

@@ -102,12 +102,10 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
 				   unsigned int next_freq)
 {
-	if (!sg_policy->need_freq_update) {
-		if (sg_policy->next_freq == next_freq)
-			return false;
-	} else {
-		sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
-	}
+	if (sg_policy->need_freq_update)
+		sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
+	else if (sg_policy->next_freq == next_freq)
+		return false;

 	sg_policy->next_freq = next_freq;
 	sg_policy->last_freq_update_time = time;

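The flattened logic is behavior-preserving: an update is skipped only when no limits update is pending and the requested frequency matches the cached one. A condensed standalone rendering of the new decision (names shortened from the hunk above, purely for illustration):

    #include <stdbool.h>

    static bool should_commit(bool *need_freq_update, bool limits_flag,
                              unsigned int next_freq, unsigned int cached)
    {
        if (*need_freq_update)
            *need_freq_update = limits_flag;  /* CPUFREQ_NEED_UPDATE_LIMITS set? */
        else if (next_freq == cached)
            return false;                     /* nothing to do */

        return true;                          /* commit next_freq */
    }
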
@@ -315,6 +315,7 @@ int cmd_freq_set(int argc, char **argv)
 		}
 	}

+	get_cpustate();

 	/* loop over CPUs */
 	for (cpu = bitmask_first(cpus_chosen);

@@ -332,5 +333,7 @@ int cmd_freq_set(int argc, char **argv)
 		}
 	}

+	print_offline_cpus();
+
 	return 0;
 }

@@ -95,6 +95,8 @@ int cmd_idle_set(int argc, char **argv)
 		exit(EXIT_FAILURE);
 	}

+	get_cpustate();
+
 	/* Default is: set all CPUs */
 	if (bitmask_isallclear(cpus_chosen))
 		bitmask_setall(cpus_chosen);

@@ -181,5 +183,7 @@ int cmd_idle_set(int argc, char **argv)
 			break;
 		}
 	}

+	print_offline_cpus();
+
 	return EXIT_SUCCESS;
 }

@@ -34,6 +34,8 @@ int run_as_root;
 int base_cpu;
 /* Affected cpus chosen by -c/--cpu param */
 struct bitmask *cpus_chosen;
+struct bitmask *online_cpus;
+struct bitmask *offline_cpus;

 #ifdef DEBUG
 int be_verbose;

@@ -178,6 +180,8 @@ int main(int argc, const char *argv[])
 	char pathname[32];

 	cpus_chosen = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
+	online_cpus = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));
+	offline_cpus = bitmask_alloc(sysconf(_SC_NPROCESSORS_CONF));

 	argc--;
 	argv += 1;

@@ -230,6 +234,10 @@ int main(int argc, const char *argv[])
 			ret = p->main(argc, argv);
 			if (cpus_chosen)
 				bitmask_free(cpus_chosen);
+			if (online_cpus)
+				bitmask_free(online_cpus);
+			if (offline_cpus)
+				bitmask_free(offline_cpus);
 			return ret;
 		}
 	print_help();

@@ -94,6 +94,8 @@ struct cpupower_cpu_info {
  */
 extern int get_cpu_info(struct cpupower_cpu_info *cpu_info);
 extern struct cpupower_cpu_info cpupower_cpu_info;
+
+
 /* cpuid and cpuinfo helpers  **************************/

 /* X86 ONLY ****************************************/

@@ -171,4 +173,14 @@ static inline unsigned int cpuid_ecx(unsigned int op) { return 0; };
 static inline unsigned int cpuid_edx(unsigned int op) { return 0; };
 #endif /* defined(__i386__) || defined(__x86_64__) */

+/*
+ * CPU State related functions
+ */
+extern struct bitmask *online_cpus;
+extern struct bitmask *offline_cpus;
+
+void get_cpustate(void);
+void print_online_cpus(void);
+void print_offline_cpus(void);
+
 #endif /* __CPUPOWERUTILS_HELPERS__ */

@@ -4,11 +4,11 @@
 #include <errno.h>
 #include <stdlib.h>

-#if defined(__i386__) || defined(__x86_64__)
-
 #include "helpers/helpers.h"
+#include "helpers/sysfs.h"
+
+#if defined(__i386__) || defined(__x86_64__)

 #include "cpupower_intern.h"

 #define MSR_AMD_HWCR 0xc0010015

@@ -89,3 +89,63 @@ int cpupower_intel_set_perf_bias(unsigned int cpu, unsigned int val)
 }

 #endif /* #if defined(__i386__) || defined(__x86_64__) */
+
+/* get_cpustate
+ *
+ * Gather the information of all online CPUs into bitmask struct
+ */
+void get_cpustate(void)
+{
+	unsigned int cpu = 0;
+
+	bitmask_clearall(online_cpus);
+	bitmask_clearall(offline_cpus);
+
+	for (cpu = bitmask_first(cpus_chosen);
+		cpu <= bitmask_last(cpus_chosen); cpu++) {
+
+		if (cpupower_is_cpu_online(cpu) == 1)
+			bitmask_setbit(online_cpus, cpu);
+		else
+			bitmask_setbit(offline_cpus, cpu);
+
+		continue;
+	}
+}
+
+/* print_online_cpus
+ *
+ * Print the CPU numbers of all CPUs that are online currently
+ */
+void print_online_cpus(void)
+{
+	int str_len = 0;
+	char *online_cpus_str = NULL;
+
+	str_len = online_cpus->size * 5;
+	online_cpus_str = (void *)malloc(sizeof(char) * str_len);
+
+	if (!bitmask_isallclear(online_cpus)) {
+		bitmask_displaylist(online_cpus_str, str_len, online_cpus);
+		printf(_("Following CPUs are online:\n%s\n"), online_cpus_str);
+	}
+}
+
+/* print_offline_cpus
+ *
+ * Print the CPU numbers of all CPUs that are offline currently
+ */
+void print_offline_cpus(void)
+{
+	int str_len = 0;
+	char *offline_cpus_str = NULL;
+
+	str_len = offline_cpus->size * 5;
+	offline_cpus_str = (void *)malloc(sizeof(char) * str_len);
+
+	if (!bitmask_isallclear(offline_cpus)) {
+		bitmask_displaylist(offline_cpus_str, str_len, offline_cpus);
+		printf(_("Following CPUs are offline:\n%s\n"), offline_cpus_str);
+		printf(_("cpupower set operation was not performed on them\n"));
+	}
+}

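Putting the cpupower pieces together, the intended calling pattern for a subcommand is: snapshot the chosen CPUs' online state, apply the setting per CPU, then report which CPUs were offline and therefore skipped. A sketch with an illustrative loop body (the function name and body are assumptions, not tool code):

    int example_cmd(void)
    {
        unsigned int cpu;

        get_cpustate();  /* fill the online/offline bitmasks */

        for (cpu = bitmask_first(cpus_chosen);
             cpu <= bitmask_last(cpus_chosen); cpu++) {
            /* ... apply the requested setting to 'cpu' ... */
        }

        print_offline_cpus();  /* tell the user what was skipped */
        return 0;
    }
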
@@ -6,7 +6,7 @@
     |_|      |___/          |_|

    pm-graph: suspend/resume/boot timing analysis tools
-    Version: 5.7
+    Version: 5.8
     Author: Todd Brandt <todd.e.brandt@intel.com>
     Home Page: https://01.org/pm-graph

@@ -61,7 +61,7 @@
 	- runs with python2 or python3, choice is made by /usr/bin/python link
 	- python
 	- python-configparser (for python2 sleepgraph)
-	- python-requests (for googlesheet.py)
+	- python-requests (for stresstester.py)
 	- linux-tools-common (for turbostat usage in sleepgraph)

 Ubuntu:

@@ -81,7 +81,7 @@ def ascii(text):
 # store system values and test parameters
 class SystemValues:
     title = 'SleepGraph'
-    version = '5.7'
+    version = '5.8'
     ansi = False
     rs = 0
     display = ''

@@ -92,8 +92,9 @@ class SystemValues:
     testlog = True
     dmesglog = True
     ftracelog = False
+    acpidebug = True
     tstat = True
-    mindevlen = 0.0
+    mindevlen = 0.0001
     mincglen = 0.0
     cgphase = ''
     cgtest = -1

@@ -115,6 +116,7 @@ class SystemValues:
     fpdtpath = '/sys/firmware/acpi/tables/FPDT'
     epath = '/sys/kernel/debug/tracing/events/power/'
     pmdpath = '/sys/power/pm_debug_messages'
+    acpipath='/sys/module/acpi/parameters/debug_level'
     traceevents = [
         'suspend_resume',
         'wakeup_source_activate',

@@ -162,16 +164,16 @@ class SystemValues:
     devdump = False
     mixedphaseheight = True
     devprops = dict()
+    cfgdef = dict()
     platinfo = []
     predelay = 0
     postdelay = 0
-    pmdebug = ''
     tmstart = 'SUSPEND START %Y%m%d-%H:%M:%S.%f'
     tmend = 'RESUME COMPLETE %Y%m%d-%H:%M:%S.%f'
     tracefuncs = {
         'sys_sync': {},
         'ksys_sync': {},
-        '__pm_notifier_call_chain': {},
+        'pm_notifier_call_chain_robust': {},
         'pm_prepare_console': {},
         'pm_notifier_call_chain': {},
         'freeze_processes': {},

@@ -490,9 +492,9 @@ class SystemValues:
         call('echo 0 > %s/wakealarm' % self.rtcpath, shell=True)
     def initdmesg(self):
         # get the latest time stamp from the dmesg log
-        fp = Popen('dmesg', stdout=PIPE).stdout
+        lines = Popen('dmesg', stdout=PIPE).stdout.readlines()
         ktime = '0'
-        for line in fp:
+        for line in reversed(lines):
             line = ascii(line).replace('\r\n', '')
             idx = line.find('[')
             if idx > 1:

@@ -500,7 +502,7 @@ class SystemValues:
             m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
             if(m):
                 ktime = m.group('ktime')
-        fp.close()
+                break
         self.dmesgstart = float(ktime)
     def getdmesg(self, testdata):
         op = self.writeDatafileHeader(self.dmesgfile, testdata)

@@ -715,8 +717,6 @@ class SystemValues:
         self.fsetVal('0', 'events/kprobes/enable')
         self.fsetVal('', 'kprobe_events')
         self.fsetVal('1024', 'buffer_size_kb')
-        if self.pmdebug:
-            self.setVal(self.pmdebug, self.pmdpath)
     def setupAllKprobes(self):
         for name in self.tracefuncs:
             self.defaultKprobe(name, self.tracefuncs[name])

@@ -740,11 +740,7 @@ class SystemValues:
         # turn trace off
         self.fsetVal('0', 'tracing_on')
         self.cleanupFtrace()
-        # pm debug messages
-        pv = self.getVal(self.pmdpath)
-        if pv != '1':
-            self.setVal('1', self.pmdpath)
-            self.pmdebug = pv
+        self.testVal(self.pmdpath, 'basic', '1')
         # set the trace clock to global
         self.fsetVal('global', 'trace_clock')
         self.fsetVal('nop', 'current_tracer')

@@ -900,6 +896,14 @@ class SystemValues:
         if isgz:
             return gzip.open(filename, mode+'t')
         return open(filename, mode)
+    def putlog(self, filename, text):
+        with self.openlog(filename, 'a') as fp:
+            fp.write(text)
+            fp.close()
+    def dlog(self, text):
+        self.putlog(self.dmesgfile, '# %s\n' % text)
+    def flog(self, text):
+        self.putlog(self.ftracefile, text)
     def b64unzip(self, data):
         try:
             out = codecs.decode(base64.b64decode(data), 'zlib').decode()

@@ -992,9 +996,7 @@ class SystemValues:
         # add a line for each of these commands with their outputs
         for name, cmdline, info in cmdafter:
             footer += '# platform-%s: %s | %s\n' % (name, cmdline, self.b64zip(info))
-
-        with self.openlog(self.ftracefile, 'a') as fp:
-            fp.write(footer)
+        self.flog(footer)
         return True
     def commonPrefix(self, list):
         if len(list) < 2:

@@ -1034,6 +1036,7 @@ class SystemValues:
             cmdline, cmdpath = ' '.join(cargs[2:]), self.getExec(cargs[2])
             if not cmdpath or (begin and not delta):
                 continue
+            self.dlog('[%s]' % cmdline)
             try:
                 fp = Popen([cmdpath]+cargs[3:], stdout=PIPE, stderr=PIPE).stdout
                 info = ascii(fp.read()).strip()

@@ -1060,6 +1063,29 @@ class SystemValues:
         else:
             out.append((name, cmdline, '\tnothing' if not info else info))
         return out
+    def testVal(self, file, fmt='basic', value=''):
+        if file == 'restoreall':
+            for f in self.cfgdef:
+                if os.path.exists(f):
+                    fp = open(f, 'w')
+                    fp.write(self.cfgdef[f])
+                    fp.close()
+            self.cfgdef = dict()
+        elif value and os.path.exists(file):
+            fp = open(file, 'r+')
+            if fmt == 'radio':
+                m = re.match('.*\[(?P<v>.*)\].*', fp.read())
+                if m:
+                    self.cfgdef[file] = m.group('v')
+            elif fmt == 'acpi':
+                line = fp.read().strip().split('\n')[-1]
+                m = re.match('.* (?P<v>[0-9A-Fx]*) .*', line)
+                if m:
+                    self.cfgdef[file] = m.group('v')
+            else:
+                self.cfgdef[file] = fp.read().strip()
+            fp.write(value)
+            fp.close()
     def haveTurbostat(self):
         if not self.tstat:
             return False

@@ -1201,6 +1227,57 @@ class SystemValues:
                 self.multitest[sz] *= 1440
             elif unit == 'h':
                 self.multitest[sz] *= 60
+    def displayControl(self, cmd):
+        xset, ret = 'timeout 10 xset -d :0.0 {0}', 0
+        if self.sudouser:
+            xset = 'sudo -u %s %s' % (self.sudouser, xset)
+        if cmd == 'init':
+            ret = call(xset.format('dpms 0 0 0'), shell=True)
+            if not ret:
+                ret = call(xset.format('s off'), shell=True)
+        elif cmd == 'reset':
+            ret = call(xset.format('s reset'), shell=True)
+        elif cmd in ['on', 'off', 'standby', 'suspend']:
+            b4 = self.displayControl('stat')
+            ret = call(xset.format('dpms force %s' % cmd), shell=True)
+            if not ret:
+                curr = self.displayControl('stat')
+                self.vprint('Display Switched: %s -> %s' % (b4, curr))
+                if curr != cmd:
+                    self.vprint('WARNING: Display failed to change to %s' % cmd)
+            if ret:
+                self.vprint('WARNING: Display failed to change to %s with xset' % cmd)
+                return ret
+        elif cmd == 'stat':
+            fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout
+            ret = 'unknown'
+            for line in fp:
+                m = re.match('[\s]*Monitor is (?P<m>.*)', ascii(line))
+                if(m and len(m.group('m')) >= 2):
+                    out = m.group('m').lower()
+                    ret = out[3:] if out[0:2] == 'in' else out
+                    break
+            fp.close()
+        return ret
+    def setRuntimeSuspend(self, before=True):
+        if before:
+            # runtime suspend disable or enable
+            if self.rs > 0:
+                self.rstgt, self.rsval, self.rsdir = 'on', 'auto', 'enabled'
+            else:
+                self.rstgt, self.rsval, self.rsdir = 'auto', 'on', 'disabled'
+            pprint('CONFIGURING RUNTIME SUSPEND...')
+            self.rslist = deviceInfo(self.rstgt)
+            for i in self.rslist:
+                self.setVal(self.rsval, i)
+            pprint('runtime suspend %s on all devices (%d changed)' % (self.rsdir, len(self.rslist)))
+            pprint('waiting 5 seconds...')
+            time.sleep(5)
+        else:
+            # runtime suspend re-enable or re-disable
            for i in self.rslist:
                self.setVal(self.rstgt, i)
            pprint('runtime suspend settings restored on %d devices' % len(self.rslist))

 sysvals = SystemValues()
 switchvalues = ['enable', 'disable', 'on', 'off', 'true', 'false', '1', '0']

@@ -1640,15 +1717,20 @@ class Data:
             if 'resume_machine' in phase and 'suspend_machine' in lp:
                 tS, tR = self.dmesg[lp]['end'], self.dmesg[phase]['start']
                 tL = tR - tS
-                if tL > 0:
-                    left = True if tR > tZero else False
-                    self.trimTime(tS, tL, left)
-                    if 'trying' in self.dmesg[lp] and self.dmesg[lp]['trying'] >= 0.001:
-                        tTry = round(self.dmesg[lp]['trying'] * 1000)
-                        text = '%.0f (-%.0f waking)' % (tL * 1000, tTry)
-                    else:
-                        text = '%.0f' % (tL * 1000)
-                    self.tLow.append(text)
+                if tL <= 0:
+                    continue
+                left = True if tR > tZero else False
+                self.trimTime(tS, tL, left)
+                if 'waking' in self.dmesg[lp]:
+                    tCnt = self.dmesg[lp]['waking'][0]
+                    if self.dmesg[lp]['waking'][1] >= 0.001:
+                        tTry = '-%.0f' % (round(self.dmesg[lp]['waking'][1] * 1000))
+                    else:
+                        tTry = '-%.3f' % (self.dmesg[lp]['waking'][1] * 1000)
+                    text = '%.0f (%s ms waking %d times)' % (tL * 1000, tTry, tCnt)
+                else:
+                    text = '%.0f' % (tL * 1000)
+                self.tLow.append(text)
             lp = phase
     def getMemTime(self):
         if not self.hwstart or not self.hwend:

@@ -1921,7 +2003,7 @@ class Data:
         for dev in list:
             length = (list[dev]['end'] - list[dev]['start']) * 1000
             width = widfmt % (((list[dev]['end']-list[dev]['start'])*100)/tTotal)
-            if width != '0.000000' and length >= mindevlen:
+            if length >= mindevlen:
                 devlist.append(dev)
         self.tdevlist[phase] = devlist
     def addHorizontalDivider(self, devname, devend):

@@ -3316,9 +3398,10 @@ def parseTraceLog(live=False):
                     # trim out s2idle loops, track time trying to freeze
                     llp = data.lastPhase(2)
                     if llp.startswith('suspend_machine'):
-                        if 'trying' not in data.dmesg[llp]:
-                            data.dmesg[llp]['trying'] = 0
-                        data.dmesg[llp]['trying'] += \
+                        if 'waking' not in data.dmesg[llp]:
+                            data.dmesg[llp]['waking'] = [0, 0.0]
+                        data.dmesg[llp]['waking'][0] += 1
+                        data.dmesg[llp]['waking'][1] += \
                             t.time - data.dmesg[lp]['start']
                     data.currphase = ''
                     del data.dmesg[lp]

@@ -4555,7 +4638,7 @@ def createHTML(testruns, testfail):
             # draw the devices for this phase
             phaselist = data.dmesg[b]['list']
             for d in sorted(data.tdevlist[b]):
-                dname = d if '[' not in d else d.split('[')[0]
+                dname = d if ('[' not in d or 'CPU' in d) else d.split('[')[0]
                 name, dev = dname, phaselist[d]
                 drv = xtraclass = xtrainfo = xtrastyle = ''
                 if 'htmlclass' in dev:

@@ -5194,156 +5277,146 @@ def addScriptCode(hf, testruns):
     '</script>\n'
     hf.write(script_code);

-def setRuntimeSuspend(before=True):
-    global sysvals
-    sv = sysvals
-    if sv.rs == 0:
-        return
-    if before:
-        # runtime suspend disable or enable
-        if sv.rs > 0:
-            sv.rstgt, sv.rsval, sv.rsdir = 'on', 'auto', 'enabled'
-        else:
-            sv.rstgt, sv.rsval, sv.rsdir = 'auto', 'on', 'disabled'
-        pprint('CONFIGURING RUNTIME SUSPEND...')
-        sv.rslist = deviceInfo(sv.rstgt)
-        for i in sv.rslist:
-            sv.setVal(sv.rsval, i)
-        pprint('runtime suspend %s on all devices (%d changed)' % (sv.rsdir, len(sv.rslist)))
-        pprint('waiting 5 seconds...')
-        time.sleep(5)
-    else:
-        # runtime suspend re-enable or re-disable
-        for i in sv.rslist:
-            sv.setVal(sv.rstgt, i)
-        pprint('runtime suspend settings restored on %d devices' % len(sv.rslist))
-
 # Function: executeSuspend
 # Description:
 #	 Execute system suspend through the sysfs interface, then copy the output
 #	 dmesg and ftrace files to the test output directory.
 def executeSuspend(quiet=False):
-    pm = ProcessMonitor()
-    tp = sysvals.tpath
-    if sysvals.wifi:
-        wifi = sysvals.checkWifi()
+    sv, tp, pm = sysvals, sysvals.tpath, ProcessMonitor()
+    if sv.wifi:
+        wifi = sv.checkWifi()
+        sv.dlog('wifi check, connected device is "%s"' % wifi)
     testdata = []
     # run these commands to prepare the system for suspend
-    if sysvals.display:
+    if sv.display:
         if not quiet:
-            pprint('SET DISPLAY TO %s' % sysvals.display.upper())
-        displayControl(sysvals.display)
+            pprint('SET DISPLAY TO %s' % sv.display.upper())
+        ret = sv.displayControl(sv.display)
+        sv.dlog('xset display %s, ret = %d' % (sv.display, ret))
         time.sleep(1)
-    if sysvals.sync:
+    if sv.sync:
         if not quiet:
            pprint('SYNCING FILESYSTEMS')
+        sv.dlog('syncing filesystems')
        call('sync', shell=True)
    # mark the start point in the kernel ring buffer just as we start
-    sysvals.initdmesg()
+    sv.dlog('read dmesg')
+    sv.initdmesg()
    # start ftrace
-    if(sysvals.usecallgraph or sysvals.usetraceevents):
+    if(sv.usecallgraph or sv.usetraceevents):
        if not quiet:
            pprint('START TRACING')
-        sysvals.fsetVal('1', 'tracing_on')
-        if sysvals.useprocmon:
+        sv.dlog('start ftrace tracing')
+        sv.fsetVal('1', 'tracing_on')
+        if sv.useprocmon:
+            sv.dlog('start the process monitor')
            pm.start()
-    sysvals.cmdinfo(True)
+    sv.dlog('run the cmdinfo list before')
+    sv.cmdinfo(True)
    # execute however many s/r runs requested
-    for count in range(1,sysvals.execcount+1):
+    for count in range(1,sv.execcount+1):
        # x2delay in between test runs
-        if(count > 1 and sysvals.x2delay > 0):
-            sysvals.fsetVal('WAIT %d' % sysvals.x2delay, 'trace_marker')
-            time.sleep(sysvals.x2delay/1000.0)
-            sysvals.fsetVal('WAIT END', 'trace_marker')
+        if(count > 1 and sv.x2delay > 0):
+            sv.fsetVal('WAIT %d' % sv.x2delay, 'trace_marker')
+            time.sleep(sv.x2delay/1000.0)
+            sv.fsetVal('WAIT END', 'trace_marker')
        # start message
-        if sysvals.testcommand != '':
+        if sv.testcommand != '':
            pprint('COMMAND START')
        else:
-            if(sysvals.rtcwake):
+            if(sv.rtcwake):
                pprint('SUSPEND START')
            else:
                pprint('SUSPEND START (press a key to resume)')
        # set rtcwake
-        if(sysvals.rtcwake):
+        if(sv.rtcwake):
            if not quiet:
-                pprint('will issue an rtcwake in %d seconds' % sysvals.rtcwaketime)
-            sysvals.rtcWakeAlarmOn()
+                pprint('will issue an rtcwake in %d seconds' % sv.rtcwaketime)
+            sv.dlog('enable RTC wake alarm')
+            sv.rtcWakeAlarmOn()
        # start of suspend trace marker
-        if(sysvals.usecallgraph or sysvals.usetraceevents):
-            sysvals.fsetVal(datetime.now().strftime(sysvals.tmstart), 'trace_marker')
+        if(sv.usecallgraph or sv.usetraceevents):
+            sv.fsetVal(datetime.now().strftime(sv.tmstart), 'trace_marker')
        # predelay delay
-        if(count == 1 and sysvals.predelay > 0):
-            sysvals.fsetVal('WAIT %d' % sysvals.predelay, 'trace_marker')
-            time.sleep(sysvals.predelay/1000.0)
-            sysvals.fsetVal('WAIT END', 'trace_marker')
+        if(count == 1 and sv.predelay > 0):
+            sv.fsetVal('WAIT %d' % sv.predelay, 'trace_marker')
+            time.sleep(sv.predelay/1000.0)
+            sv.fsetVal('WAIT END', 'trace_marker')
        # initiate suspend or command
+        sv.dlog('system executing a suspend')
        tdata = {'error': ''}
-        if sysvals.testcommand != '':
-            res = call(sysvals.testcommand+' 2>&1', shell=True);
+        if sv.testcommand != '':
+            res = call(sv.testcommand+' 2>&1', shell=True);
            if res != 0:
                tdata['error'] = 'cmd returned %d' % res
        else:
-            mode = sysvals.suspendmode
-            if sysvals.memmode and os.path.exists(sysvals.mempowerfile):
+            mode = sv.suspendmode
+            if sv.memmode and os.path.exists(sv.mempowerfile):
                mode = 'mem'
-                pf = open(sysvals.mempowerfile, 'w')
-                pf.write(sysvals.memmode)
-                pf.close()
-            if sysvals.diskmode and os.path.exists(sysvals.diskpowerfile):
+                sv.testVal(sv.mempowerfile, 'radio', sv.memmode)
+            if sv.diskmode and os.path.exists(sv.diskpowerfile):
                mode = 'disk'
-                pf = open(sysvals.diskpowerfile, 'w')
-                pf.write(sysvals.diskmode)
-                pf.close()
-            if mode == 'freeze' and sysvals.haveTurbostat():
+                sv.testVal(sv.diskpowerfile, 'radio', sv.diskmode)
+            if sv.acpidebug:
+                sv.testVal(sv.acpipath, 'acpi', '0xe')
+            if mode == 'freeze' and sv.haveTurbostat():
                # execution will pause here
-                turbo = sysvals.turbostat()
+                turbo = sv.turbostat()
                if turbo:
                    tdata['turbo'] = turbo
            else:
-                pf = open(sysvals.powerfile, 'w')
+                pf = open(sv.powerfile, 'w')
                pf.write(mode)
                # execution will pause here
                try:
                    pf.close()
                except Exception as e:
                    tdata['error'] = str(e)
-        if(sysvals.rtcwake):
-            sysvals.rtcWakeAlarmOff()
+        sv.dlog('system returned from resume')
+        # reset everything
+        sv.testVal('restoreall')
+        if(sv.rtcwake):
+            sv.dlog('disable RTC wake alarm')
+            sv.rtcWakeAlarmOff()
        # postdelay delay
-        if(count == sysvals.execcount and sysvals.postdelay > 0):
-            sysvals.fsetVal('WAIT %d' % sysvals.postdelay, 'trace_marker')
-            time.sleep(sysvals.postdelay/1000.0)
-            sysvals.fsetVal('WAIT END', 'trace_marker')
+        if(count == sv.execcount and sv.postdelay > 0):
+            sv.fsetVal('WAIT %d' % sv.postdelay, 'trace_marker')
+            time.sleep(sv.postdelay/1000.0)
+            sv.fsetVal('WAIT END', 'trace_marker')
        # return from suspend
        pprint('RESUME COMPLETE')
-        if(sysvals.usecallgraph or sysvals.usetraceevents):
-            sysvals.fsetVal(datetime.now().strftime(sysvals.tmend), 'trace_marker')
-        if sysvals.wifi and wifi:
-            tdata['wifi'] = sysvals.pollWifi(wifi)
-        if(sysvals.suspendmode == 'mem' or sysvals.suspendmode == 'command'):
+        if(sv.usecallgraph or sv.usetraceevents):
+            sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')
+        if sv.wifi and wifi:
+            tdata['wifi'] = sv.pollWifi(wifi)
+            sv.dlog('wifi check, %s' % tdata['wifi'])
+        if(sv.suspendmode == 'mem' or sv.suspendmode == 'command'):
+            sv.dlog('read the ACPI FPDT')
            tdata['fw'] = getFPDT(False)
        testdata.append(tdata)
-    cmdafter = sysvals.cmdinfo(False)
+    sv.dlog('run the cmdinfo list after')
+    cmdafter = sv.cmdinfo(False)
    # stop ftrace
-    if(sysvals.usecallgraph or sysvals.usetraceevents):
-        if sysvals.useprocmon:
+    if(sv.usecallgraph or sv.usetraceevents):
+        if sv.useprocmon:
+            sv.dlog('stop the process monitor')
            pm.stop()
-        sysvals.fsetVal('0', 'tracing_on')
+        sv.fsetVal('0', 'tracing_on')
    # grab a copy of the dmesg output
    if not quiet:
        pprint('CAPTURING DMESG')
-    sysvals.getdmesg(testdata)
+    sysvals.dlog('EXECUTION TRACE END')
+    sv.getdmesg(testdata)
    # grab a copy of the ftrace output
-    if(sysvals.usecallgraph or sysvals.usetraceevents):
+    if(sv.usecallgraph or sv.usetraceevents):
        if not quiet:
            pprint('CAPTURING TRACE')
-        op = sysvals.writeDatafileHeader(sysvals.ftracefile, testdata)
+        op = sv.writeDatafileHeader(sv.ftracefile, testdata)
        fp = open(tp+'trace', 'r')
        for line in fp:
            op.write(line)
        op.close()
-        sysvals.fsetVal('', 'trace')
-        sysvals.platforminfo(cmdafter)
+        sv.fsetVal('', 'trace')
+        sv.platforminfo(cmdafter)

 def readFile(file):
     if os.path.islink(file):

@@ -5586,39 +5659,6 @@ def dmidecode(mempath, fatal=False):
             count += 1
     return out

-def displayControl(cmd):
-    xset, ret = 'timeout 10 xset -d :0.0 {0}', 0
-    if sysvals.sudouser:
-        xset = 'sudo -u %s %s' % (sysvals.sudouser, xset)
-    if cmd == 'init':
-        ret = call(xset.format('dpms 0 0 0'), shell=True)
-        if not ret:
-            ret = call(xset.format('s off'), shell=True)
-    elif cmd == 'reset':
-        ret = call(xset.format('s reset'), shell=True)
-    elif cmd in ['on', 'off', 'standby', 'suspend']:
-        b4 = displayControl('stat')
-        ret = call(xset.format('dpms force %s' % cmd), shell=True)
-        if not ret:
-            curr = displayControl('stat')
-            sysvals.vprint('Display Switched: %s -> %s' % (b4, curr))
-            if curr != cmd:
-                sysvals.vprint('WARNING: Display failed to change to %s' % cmd)
-        if ret:
-            sysvals.vprint('WARNING: Display failed to change to %s with xset' % cmd)
-            return ret
-    elif cmd == 'stat':
-        fp = Popen(xset.format('q').split(' '), stdout=PIPE).stdout
-        ret = 'unknown'
-        for line in fp:
-            m = re.match('[\s]*Monitor is (?P<m>.*)', ascii(line))
-            if(m and len(m.group('m')) >= 2):
-                out = m.group('m').lower()
-                ret = out[3:] if out[0:2] == 'in' else out
-                break
-        fp.close()
-    return ret
-
 # Function: getFPDT
 # Description:
 #	 Read the acpi bios tables and pull out FPDT, the firmware data

@@ -6001,8 +6041,19 @@ def rerunTest(htmlfile=''):
 # execute a suspend/resume, gather the logs, and generate the output
 def runTest(n=0, quiet=False):
     # prepare for the test
-    sysvals.initFtrace(quiet)
     sysvals.initTestOutput('suspend')
+    op = sysvals.writeDatafileHeader(sysvals.dmesgfile, [])
+    op.write('# EXECUTION TRACE START\n')
+    op.close()
+    if n <= 1:
+        if sysvals.rs != 0:
+            sysvals.dlog('%sabling runtime suspend' % ('en' if sysvals.rs > 0 else 'dis'))
+            sysvals.setRuntimeSuspend(True)
+        if sysvals.display:
+            ret = sysvals.displayControl('init')
+            sysvals.dlog('xset display init, ret = %d' % ret)
+    sysvals.dlog('initialize ftrace')
+    sysvals.initFtrace(quiet)

     # execute the test
     executeSuspend(quiet)

@@ -6098,8 +6149,16 @@ def data_from_html(file, outpath, issues, fulldetail=False):
     if wifi:
         extra['wifi'] = wifi
     low = find_in_html(html, 'freeze time: <b>', ' ms</b>')
-    if low and 'waking' in low:
-        issue = 'FREEZEWAKE'
+    for lowstr in ['waking', '+']:
+        if not low:
+            break
+        if lowstr not in low:
+            continue
+        if lowstr == '+':
+            issue = 'S2LOOPx%d' % len(low.split('+'))
+        else:
+            m = re.match('.*waking *(?P<n>[0-9]*) *times.*', low)
+            issue = 'S2WAKEx%s' % m.group('n') if m else 'S2WAKExNaN'
         match = [i for i in issues if i['match'] == issue]
         if len(match) > 0:
             match[0]['count'] += 1

@@ -6605,6 +6664,11 @@ if __name__ == '__main__':
                 val = next(args)
             except:
                 doError('-info requires one string argument', True)
+        elif(arg == '-desc'):
+            try:
+                val = next(args)
+            except:
+                doError('-desc requires one string argument', True)
         elif(arg == '-rs'):
             try:
                 val = next(args)

@@ -6814,9 +6878,9 @@ if __name__ == '__main__':
             runSummary(sysvals.outdir, True, genhtml)
         elif(cmd in ['xon', 'xoff', 'xstandby', 'xsuspend', 'xinit', 'xreset']):
             sysvals.verbose = True
-            ret = displayControl(cmd[1:])
+            ret = sysvals.displayControl(cmd[1:])
         elif(cmd == 'xstat'):
-            pprint('Display Status: %s' % displayControl('stat').upper())
+            pprint('Display Status: %s' % sysvals.displayControl('stat').upper())
         elif(cmd == 'wificheck'):
             dev = sysvals.checkWifi()
             if dev:

@@ -6854,12 +6918,8 @@ if __name__ == '__main__':
     if mode.startswith('disk-'):
         sysvals.diskmode = mode.split('-', 1)[-1]
         sysvals.suspendmode = 'disk'
-
     sysvals.systemInfo(dmidecode(sysvals.mempath))

-    setRuntimeSuspend(True)
-    if sysvals.display:
-        displayControl('init')
     failcnt, ret = 0, 0
     if sysvals.multitest['run']:
         # run multiple tests in a separate subdirectory

@@ -6900,7 +6960,10 @@ if __name__ == '__main__':
         sysvals.testdir = sysvals.outdir
         # run the test in the current directory
         ret = runTest()
+
+    # reset to default values after testing
     if sysvals.display:
-        displayControl('reset')
-    setRuntimeSuspend(False)
+        sysvals.displayControl('reset')
+    if sysvals.rs != 0:
+        sysvals.setRuntimeSuspend(False)
     sys.exit(ret)