Merge tag 'driver-core-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
 "Here is the big driver core patchset for 5.1-rc1

  More patches than "normal" here this merge window, due to some work in
  the driver core by Alexander Duyck to rework the async probe
   functionality to work better for a number of devices, and independent
  work from Rafael for the device link functionality to make it work
  "correctly".

  Also in here is:

   - lots of BUS_ATTR() removals, the macro is about to go away

   - firmware test fixups

   - ihex fixups and simplification

   - component additions (also includes i915 patches)

   - lots of minor coding style fixups and cleanups.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'driver-core-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (65 commits)
  driver core: platform: remove misleading err_alloc label
  platform: set of_node in platform_device_register_full()
  firmware: hardcode the debug message for -ENOENT
  driver core: Add missing description of new struct device_link field
  driver core: Fix PM-runtime for links added during consumer probe
  drivers/component: kerneldoc polish
  async: Add cmdline option to specify drivers to be async probed
  driver core: Fix possible supplier PM-usage counter imbalance
  PM-runtime: Fix __pm_runtime_set_status() race with runtime resume
  driver: platform: Support parsing GpioInt 0 in platform_get_irq()
  selftests: firmware: fix verify_reqs() return value
  Revert "selftests: firmware: remove use of non-standard diff -Z option"
  Revert "selftests: firmware: add CONFIG_FW_LOADER_USER_HELPER_FALLBACK to config"
  device: Fix comment for driver_data in struct device
  kernfs: Allocating memory for kernfs_iattrs with kmem_cache.
  sysfs: remove unused include of kernfs-internal.h
  driver core: Postpone DMA tear-down until after devres release
  driver core: Document limitation related to DL_FLAG_RPM_ACTIVE
  PM-runtime: Take suppliers into account in __pm_runtime_set_status()
  device.h: Add __cold to dev_<level> logging functions
  ...
Linus Torvalds 2019-03-06 14:52:48 -08:00
commit e431f2d74e
52 changed files with 1096 additions and 413 deletions


@ -915,6 +915,10 @@
The filter can be disabled or changed to another
driver later using sysfs.
driver_async_probe= [KNL]
List of driver names to be probed asynchronously.
Format: <driver_name1>,<driver_name2>...
drm.edid_firmware=[<connector>:]<file>[,[<connector>:]<file>]
Broken monitors, graphic adapters, KVMs and EDIDless
panels may send no or incorrect EDID data sets.
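
For illustration, the new option is a plain comma-separated list on the kernel
command line; the driver names below are hypothetical examples of the format:

    driver_async_probe=test_async_driver,test_driver

Drivers listed this way are probed asynchronously even if they did not set
PROBE_PREFER_ASYNCHRONOUS themselves, unless they force synchronous probing
via PROBE_FORCE_SYNCHRONOUS.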


@ -28,8 +28,8 @@ suspend/resume and shutdown ordering.
Device links allow representation of such dependencies in the driver core.
In its standard form, a device link combines *both* dependency types:
It guarantees correct suspend/resume and shutdown ordering between a
In its standard or *managed* form, a device link combines *both* dependency
types: It guarantees correct suspend/resume and shutdown ordering between a
"supplier" device and its "consumer" devices, and it guarantees driver
presence on the supplier. The consumer devices are not probed before the
supplier is bound to a driver, and they're unbound before the supplier
@ -62,18 +62,24 @@ device ``->probe`` callback or a boot-time PCI quirk.
Another example for an inconsistent state would be a device link that
represents a driver presence dependency, yet is added from the consumer's
``->probe`` callback while the supplier hasn't probed yet: Had the driver
core known about the device link earlier, it wouldn't have probed the
``->probe`` callback while the supplier hasn't started to probe yet: Had the
driver core known about the device link earlier, it wouldn't have probed the
consumer in the first place. The onus is thus on the consumer to check
presence of the supplier after adding the link, and defer probing on
non-presence.
non-presence. [Note that it is valid to create a link from the consumer's
``->probe`` callback while the supplier is still probing, but the consumer must
know that the supplier is functional already at the link creation time (that is
the case, for instance, if the consumer has just acquired some resources that
would not have been available had the supplier not been functional then).]
If a device link is added in the ``->probe`` callback of the supplier or
consumer driver, it is typically deleted in its ``->remove`` callback for
symmetry. That way, if the driver is compiled as a module, the device
link is added on module load and orderly deleted on unload. The same
restrictions that apply to device link addition (e.g. exclusion of a
parallel suspend/resume transition) apply equally to deletion.
If a device link with ``DL_FLAG_STATELESS`` set (i.e. a stateless device link)
is added in the ``->probe`` callback of the supplier or consumer driver, it is
typically deleted in its ``->remove`` callback for symmetry. That way, if the
driver is compiled as a module, the device link is added on module load and
orderly deleted on unload. The same restrictions that apply to device link
addition (e.g. exclusion of a parallel suspend/resume transition) apply equally
to deletion. Device links with ``DL_FLAG_STATELESS`` unset (i.e. managed
device links) are deleted automatically by the driver core.
Several flags may be specified on device link addition, two of which
have already been mentioned above: ``DL_FLAG_STATELESS`` to express that no
@ -83,25 +89,55 @@ integration is desired.
Two other flags are specifically targeted at use cases where the device
link is added from the consumer's ``->probe`` callback: ``DL_FLAG_RPM_ACTIVE``
can be specified to runtime resume the supplier upon addition of the
device link. ``DL_FLAG_AUTOREMOVE_CONSUMER`` causes the device link to be
automatically purged when the consumer fails to probe or later unbinds.
This obviates the need to explicitly delete the link in the ``->remove``
callback or in the error path of the ``->probe`` callback.
can be specified to runtime resume the supplier and prevent it from suspending
before the consumer is runtime suspended. ``DL_FLAG_AUTOREMOVE_CONSUMER``
causes the device link to be automatically purged when the consumer fails to
probe or later unbinds.
Similarly, when the device link is added from supplier's ``->probe`` callback,
``DL_FLAG_AUTOREMOVE_SUPPLIER`` causes the device link to be automatically
purged when the supplier fails to probe or later unbinds.
If neither ``DL_FLAG_AUTOREMOVE_CONSUMER`` nor ``DL_FLAG_AUTOREMOVE_SUPPLIER``
is set, ``DL_FLAG_AUTOPROBE_CONSUMER`` can be used to request the driver core
to probe for a driver for the consumer device on the link automatically after
a driver has been bound to the supplier device.
Note, however, that any combinations of ``DL_FLAG_AUTOREMOVE_CONSUMER``,
``DL_FLAG_AUTOREMOVE_SUPPLIER`` or ``DL_FLAG_AUTOPROBE_CONSUMER`` with
``DL_FLAG_STATELESS`` are invalid and cannot be used.
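
To make the flag discussion concrete, here is a minimal sketch of a consumer
driver creating a managed device link from its ``->probe`` callback (the
supplier pointer is assumed to have been looked up elsewhere; error handling
is trimmed):

    struct device_link *link;

    /* Managed link: ``DL_FLAG_STATELESS`` is not set. */
    link = device_link_add(consumer_dev, supplier_dev,
                           DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER);
    if (!link)
        return -EINVAL;
    if (!supplier_dev->driver)
        return -EPROBE_DEFER;   /* defer probing on non-presence */

With ``DL_FLAG_AUTOREMOVE_CONSUMER`` the driver core purges the link when the
consumer fails to probe or later unbinds, so no explicit ``->remove``-time
cleanup is needed.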
Limitations
===========
Driver authors should be aware that a driver presence dependency (i.e. when
``DL_FLAG_STATELESS`` is not specified on link addition) may cause probing of
the consumer to be deferred indefinitely. This can become a problem if the
consumer is required to probe before a certain initcall level is reached.
Worse, if the supplier driver is blacklisted or missing, the consumer will
never be probed.
Driver authors should be aware that a driver presence dependency for managed
device links (i.e. when ``DL_FLAG_STATELESS`` is not specified on link addition)
may cause probing of the consumer to be deferred indefinitely. This can become
a problem if the consumer is required to probe before a certain initcall level
is reached. Worse, if the supplier driver is blacklisted or missing, the
consumer will never be probed.
Moreover, managed device links cannot be deleted directly. They are deleted
by the driver core when they are not necessary any more in accordance with the
``DL_FLAG_AUTOREMOVE_CONSUMER`` and ``DL_FLAG_AUTOREMOVE_SUPPLIER`` flags.
However, stateless device links (i.e. device links with ``DL_FLAG_STATELESS``
set) are expected to be removed by whoever called :c:func:`device_link_add()`
to add them with the help of either :c:func:`device_link_del()` or
:c:func:`device_link_remove()`.
Passing ``DL_FLAG_RPM_ACTIVE`` along with ``DL_FLAG_STATELESS`` to
:c:func:`device_link_add()` may cause the PM-runtime usage counter of the
supplier device to remain nonzero after a subsequent invocation of either
:c:func:`device_link_del()` or :c:func:`device_link_remove()` to remove the
device link returned by it. This happens if :c:func:`device_link_add()` is
called twice in a row for the same consumer-supplier pair without removing the
link between these calls, in which case allowing the PM-runtime usage counter
of the supplier to drop on an attempt to remove the link may cause it to be
suspended while the consumer is still PM-runtime-active and that has to be
avoided. [To work around this limitation it is sufficient to let the consumer
runtime suspend at least once, or call :c:func:`pm_runtime_set_suspended()` for
it with PM-runtime disabled, between the :c:func:`device_link_add()` and
:c:func:`device_link_del()` or :c:func:`device_link_remove()` calls.]
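
A matching sketch of the stateless case, including the workaround from the
note above (assuming the consumer device has PM-runtime support):

    link = device_link_add(consumer_dev, supplier_dev,
                           DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME |
                           DL_FLAG_RPM_ACTIVE);

    /* ... consumer uses the supplier ... */

    /* Let the supplier's usage counter drop before deleting the link. */
    pm_runtime_disable(consumer_dev);
    pm_runtime_set_suspended(consumer_dev);
    pm_runtime_enable(consumer_dev);
    device_link_del(link);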
Sometimes drivers depend on optional resources. They are able to operate
in a degraded mode (reduced feature set or performance) when those resources
@ -285,4 +321,4 @@ API
===
.. kernel-doc:: drivers/base/core.c
:functions: device_link_add device_link_del
:functions: device_link_add device_link_del device_link_remove


@ -583,7 +583,7 @@ export KBUILD_MODULES KBUILD_BUILTIN
ifeq ($(KBUILD_EXTMOD),)
# Objects we will link into vmlinux / subdirs we need to visit
init-y := init/
drivers-y := drivers/ sound/ firmware/
drivers-y := drivers/ sound/
net-y := net/
libs-y := lib/
core-y := usr/


@ -261,8 +261,7 @@ static char *ibmebus_chomp(const char *in, size_t count)
return out;
}
static ssize_t ibmebus_store_probe(struct bus_type *bus,
const char *buf, size_t count)
static ssize_t probe_store(struct bus_type *bus, const char *buf, size_t count)
{
struct device_node *dn = NULL;
struct device *dev;
@ -298,10 +297,9 @@ out:
return rc;
return count;
}
static BUS_ATTR(probe, 0200, NULL, ibmebus_store_probe);
static BUS_ATTR_WO(probe);
static ssize_t ibmebus_store_remove(struct bus_type *bus,
const char *buf, size_t count)
static ssize_t remove_store(struct bus_type *bus, const char *buf, size_t count)
{
struct device *dev;
char *path;
@ -325,7 +323,7 @@ static ssize_t ibmebus_store_remove(struct bus_type *bus,
return -ENODEV;
}
}
static BUS_ATTR(remove, 0200, NULL, ibmebus_store_remove);
static BUS_ATTR_WO(remove);
static struct attribute *ibmbus_bus_attrs[] = {
&bus_attr_probe.attr,
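
For context, the renames above are what make the conversion possible:
BUS_ATTR_WO(name) wires the attribute to a store function that must be named
name_store(). Roughly, and glossing over the exact macro definition:

    /* BUS_ATTR_WO(probe) is approximately equivalent to: */
    static struct bus_attribute bus_attr_probe =
        __ATTR(probe, 0200, NULL, probe_store);

which is why ibmebus_store_probe() and ibmebus_store_remove() had to become
probe_store() and remove_store() first.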


@ -60,12 +60,17 @@ struct driver_private {
* @knode_parent - node in sibling list
* @knode_driver - node in driver list
* @knode_bus - node in bus list
* @knode_class - node in class list
* @deferred_probe - entry in deferred_probe_list which is used to retry the
* binding of drivers which were unable to get all the resources needed by
* the device; typically because it depends on another driver getting
* probed first.
* @async_driver - pointer to device driver awaiting probe via async_probe
* @device - pointer back to the struct device that this structure is
* associated with.
* @dead - This device is currently either in the process of or has been
* removed from the system. Any asynchronous events scheduled for this
* device should exit without taking any action.
*
* Nothing outside of the driver core should ever touch these fields.
*/
@ -74,8 +79,11 @@ struct device_private {
struct klist_node knode_parent;
struct klist_node knode_driver;
struct klist_node knode_bus;
struct klist_node knode_class;
struct list_head deferred_probe;
struct device_driver *async_driver;
struct device *device;
u8 dead:1;
};
#define to_device_private_parent(obj) \
container_of(obj, struct device_private, knode_parent)
@ -83,6 +91,8 @@ struct device_private {
container_of(obj, struct device_private, knode_driver)
#define to_device_private_bus(obj) \
container_of(obj, struct device_private, knode_bus)
#define to_device_private_class(obj) \
container_of(obj, struct device_private, knode_class)
/* initialisation functions */
extern int devices_init(void);
@ -124,6 +134,8 @@ extern int driver_add_groups(struct device_driver *drv,
const struct attribute_group **groups);
extern void driver_remove_groups(struct device_driver *drv,
const struct attribute_group **groups);
int device_driver_attach(struct device_driver *drv, struct device *dev);
void device_driver_detach(struct device *dev);
extern char *make_class_name(const char *name, struct kobject *kobj);


@ -187,11 +187,7 @@ static ssize_t unbind_store(struct device_driver *drv, const char *buf,
dev = bus_find_device_by_name(bus, NULL, buf);
if (dev && dev->driver == drv) {
if (dev->parent && dev->bus->need_parent_lock)
device_lock(dev->parent);
device_release_driver(dev);
if (dev->parent && dev->bus->need_parent_lock)
device_unlock(dev->parent);
device_driver_detach(dev);
err = count;
}
put_device(dev);
@ -214,13 +210,7 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf,
dev = bus_find_device_by_name(bus, NULL, buf);
if (dev && dev->driver == NULL && driver_match_device(drv, dev)) {
if (dev->parent && bus->need_parent_lock)
device_lock(dev->parent);
device_lock(dev);
err = driver_probe_device(drv, dev);
device_unlock(dev);
if (dev->parent && bus->need_parent_lock)
device_unlock(dev->parent);
err = device_driver_attach(drv, dev);
if (err > 0) {
/* success */
@ -236,12 +226,12 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf,
}
static DRIVER_ATTR_IGNORE_LOCKDEP(bind, S_IWUSR, NULL, bind_store);
static ssize_t show_drivers_autoprobe(struct bus_type *bus, char *buf)
static ssize_t drivers_autoprobe_show(struct bus_type *bus, char *buf)
{
return sprintf(buf, "%d\n", bus->p->drivers_autoprobe);
}
static ssize_t store_drivers_autoprobe(struct bus_type *bus,
static ssize_t drivers_autoprobe_store(struct bus_type *bus,
const char *buf, size_t count)
{
if (buf[0] == '0')
@ -251,7 +241,7 @@ static ssize_t store_drivers_autoprobe(struct bus_type *bus,
return count;
}
static ssize_t store_drivers_probe(struct bus_type *bus,
static ssize_t drivers_probe_store(struct bus_type *bus,
const char *buf, size_t count)
{
struct device *dev;
@ -586,9 +576,8 @@ static void remove_bind_files(struct device_driver *drv)
driver_remove_file(drv, &driver_attr_unbind);
}
static BUS_ATTR(drivers_probe, S_IWUSR, NULL, store_drivers_probe);
static BUS_ATTR(drivers_autoprobe, S_IWUSR | S_IRUGO,
show_drivers_autoprobe, store_drivers_autoprobe);
static BUS_ATTR_WO(drivers_probe);
static BUS_ATTR_RW(drivers_autoprobe);
static int add_probe_files(struct bus_type *bus)
{
@ -621,17 +610,6 @@ static ssize_t uevent_store(struct device_driver *drv, const char *buf,
}
static DRIVER_ATTR_WO(uevent);
static void driver_attach_async(void *_drv, async_cookie_t cookie)
{
struct device_driver *drv = _drv;
int ret;
ret = driver_attach(drv);
pr_debug("bus: '%s': driver %s async attach completed: %d\n",
drv->bus->name, drv->name, ret);
}
/**
* bus_add_driver - Add a driver to the bus.
* @drv: driver.
@ -664,15 +642,9 @@ int bus_add_driver(struct device_driver *drv)
klist_add_tail(&priv->knode_bus, &bus->p->klist_drivers);
if (drv->bus->p->drivers_autoprobe) {
if (driver_allows_async_probing(drv)) {
pr_debug("bus: '%s': probing driver %s asynchronously\n",
drv->bus->name, drv->name);
async_schedule(driver_attach_async, drv);
} else {
error = driver_attach(drv);
if (error)
goto out_unregister;
}
error = driver_attach(drv);
if (error)
goto out_unregister;
}
module_add_driver(drv->owner, drv);
@ -774,13 +746,8 @@ EXPORT_SYMBOL_GPL(bus_rescan_devices);
*/
int device_reprobe(struct device *dev)
{
if (dev->driver) {
if (dev->parent && dev->bus->need_parent_lock)
device_lock(dev->parent);
device_release_driver(dev);
if (dev->parent && dev->bus->need_parent_lock)
device_unlock(dev->parent);
}
if (dev->driver)
device_driver_detach(dev);
return bus_rescan_devices_helper(dev, NULL);
}
EXPORT_SYMBOL_GPL(device_reprobe);
@ -838,7 +805,14 @@ static ssize_t bus_uevent_store(struct bus_type *bus,
rc = kobject_synth_uevent(&bus->p->subsys.kobj, buf, count);
return rc ? rc : count;
}
static BUS_ATTR(uevent, S_IWUSR, NULL, bus_uevent_store);
/*
* "open code" the old BUS_ATTR() macro here. We want to use BUS_ATTR_WO()
here, but cannot use it as earlier in the file we have
DRIVER_ATTR_WO(uevent), which would cause a clash with the store
* function name.
*/
static struct bus_attribute bus_attr_uevent = __ATTR(uevent, S_IWUSR, NULL,
bus_uevent_store);
/**
* bus_register - register a driver-core subsystem


@ -117,16 +117,22 @@ static void class_put(struct class *cls)
kset_put(&cls->p->subsys);
}
static struct device *klist_class_to_dev(struct klist_node *n)
{
struct device_private *p = to_device_private_class(n);
return p->device;
}
static void klist_class_dev_get(struct klist_node *n)
{
struct device *dev = container_of(n, struct device, knode_class);
struct device *dev = klist_class_to_dev(n);
get_device(dev);
}
static void klist_class_dev_put(struct klist_node *n)
{
struct device *dev = container_of(n, struct device, knode_class);
struct device *dev = klist_class_to_dev(n);
put_device(dev);
}
@ -277,7 +283,7 @@ void class_dev_iter_init(struct class_dev_iter *iter, struct class *class,
struct klist_node *start_knode = NULL;
if (start)
start_knode = &start->knode_class;
start_knode = &start->p->knode_class;
klist_iter_init_node(&class->p->klist_devices, &iter->ki, start_knode);
iter->type = type;
}
@ -304,7 +310,7 @@ struct device *class_dev_iter_next(struct class_dev_iter *iter)
knode = klist_next(&iter->ki);
if (!knode)
return NULL;
dev = container_of(knode, struct device, knode_class);
dev = klist_class_to_dev(knode);
if (!iter->type || iter->type == dev->type)
return dev;
}


@ -27,7 +27,7 @@
* helper fills the niche of aggregate drivers for specific hardware, where
* further standardization into a subsystem would not be practical. The common
* example is when a logical device (e.g. a DRM display driver) is spread around
* the SoC on various component (scanout engines, blending blocks, transcoders
* the SoC on various components (scanout engines, blending blocks, transcoders
* for various outputs and so on).
*
* The component helper also doesn't solve runtime dependencies, e.g. for system
@ -378,7 +378,7 @@ static void __component_match_add(struct device *master,
}
/**
* component_match_add_release - add a component match with release callback
* component_match_add_release - add a component match entry with release callback
* @master: device with the aggregate driver
* @matchptr: pointer to the list of component matches
* @release: release function for @compare_data
@ -408,7 +408,7 @@ void component_match_add_release(struct device *master,
EXPORT_SYMBOL(component_match_add_release);
/**
* component_match_add_typed - add a compent match for a typed component
* component_match_add_typed - add a component match entry for a typed component
* @master: device with the aggregate driver
* @matchptr: pointer to the list of component matches
* @compare_typed: compare function to match against all typed components
@ -537,11 +537,11 @@ static void component_unbind(struct component *component,
}
/**
* component_unbind_all - unbind all component to an aggregate driver
* component_unbind_all - unbind all components of an aggregate driver
* @master_dev: device with the aggregate driver
* @data: opaque pointer, passed to all components
*
* Unbinds all components to the aggregate @dev by passing @data to their
* Unbinds all components of the aggregate @dev by passing @data to their
* &component_ops.unbind functions. Should be called from
* &component_master_ops.unbind.
*/
@ -619,11 +619,11 @@ static int component_bind(struct component *component, struct master *master,
}
/**
* component_bind_all - bind all component to an aggregate driver
* component_bind_all - bind all components of an aggregate driver
* @master_dev: device with the aggregate driver
* @data: opaque pointer, passed to all components
*
* Binds all components to the aggregate @dev by passing @data to their
* Binds all components of the aggregate @dev by passing @data to their
* &component_ops.bind functions. Should be called from
* &component_master_ops.bind.
*/


@ -179,10 +179,31 @@ void device_pm_move_to_tail(struct device *dev)
* of the link. If DL_FLAG_PM_RUNTIME is not set, DL_FLAG_RPM_ACTIVE will be
* ignored.
*
* If the DL_FLAG_AUTOREMOVE_CONSUMER is set, the link will be removed
* automatically when the consumer device driver unbinds from it.
* The combination of both DL_FLAG_AUTOREMOVE_CONSUMER and DL_FLAG_STATELESS
* set is invalid and will cause NULL to be returned.
* If DL_FLAG_STATELESS is set in @flags, the link is not going to be managed by
* the driver core and, in particular, the caller of this function is expected
* to drop the reference to the link acquired by it directly.
*
* If that flag is not set, however, the caller of this function is handing the
* management of the link over to the driver core entirely and its return value
* can only be used to check whether or not the link is present. In that case,
* the DL_FLAG_AUTOREMOVE_CONSUMER and DL_FLAG_AUTOREMOVE_SUPPLIER device link
* flags can be used to indicate to the driver core when the link can be safely
* deleted. Namely, setting one of them in @flags indicates to the driver core
* that the link is not going to be used (by the given caller of this function)
* after unbinding the consumer or supplier driver, respectively, from its
* device, so the link can be deleted at that point. If none of them is set,
* the link will be maintained until one of the devices pointed to by it (either
* the consumer or the supplier) is unregistered.
*
* Also, if DL_FLAG_STATELESS, DL_FLAG_AUTOREMOVE_CONSUMER and
* DL_FLAG_AUTOREMOVE_SUPPLIER are not set in @flags (that is, a persistent
* managed device link is being added), the DL_FLAG_AUTOPROBE_CONSUMER flag can
be used to request the driver core to automatically probe for a consumer
* driver after successfully binding a driver to the supplier device.
*
* The combination of DL_FLAG_STATELESS and either DL_FLAG_AUTOREMOVE_CONSUMER
* or DL_FLAG_AUTOREMOVE_SUPPLIER set in @flags at the same time is invalid and
* will cause NULL to be returned upfront.
*
* A side effect of the link creation is re-ordering of dpm_list and the
* devices_kset list by moving the consumer device and all devices depending
@ -199,10 +220,22 @@ struct device_link *device_link_add(struct device *consumer,
struct device_link *link;
if (!consumer || !supplier ||
((flags & DL_FLAG_STATELESS) &&
(flags & DL_FLAG_AUTOREMOVE_CONSUMER)))
(flags & DL_FLAG_STATELESS &&
flags & (DL_FLAG_AUTOREMOVE_CONSUMER |
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_AUTOPROBE_CONSUMER)) ||
(flags & DL_FLAG_AUTOPROBE_CONSUMER &&
flags & (DL_FLAG_AUTOREMOVE_CONSUMER |
DL_FLAG_AUTOREMOVE_SUPPLIER)))
return NULL;
if (flags & DL_FLAG_PM_RUNTIME && flags & DL_FLAG_RPM_ACTIVE) {
if (pm_runtime_get_sync(supplier) < 0) {
pm_runtime_put_noidle(supplier);
return NULL;
}
}
device_links_write_lock();
device_pm_lock();
@ -217,35 +250,71 @@ struct device_link *device_link_add(struct device *consumer,
goto out;
}
list_for_each_entry(link, &supplier->links.consumers, s_node)
if (link->consumer == consumer) {
/*
* DL_FLAG_AUTOREMOVE_SUPPLIER indicates that the link will be needed
* longer than for DL_FLAG_AUTOREMOVE_CONSUMER and setting them both
* together doesn't make sense, so prefer DL_FLAG_AUTOREMOVE_SUPPLIER.
*/
if (flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
flags &= ~DL_FLAG_AUTOREMOVE_CONSUMER;
list_for_each_entry(link, &supplier->links.consumers, s_node) {
if (link->consumer != consumer)
continue;
/*
* Don't return a stateless link if the caller wants a stateful
* one and vice versa.
*/
if (WARN_ON((flags & DL_FLAG_STATELESS) != (link->flags & DL_FLAG_STATELESS))) {
link = NULL;
goto out;
}
if (flags & DL_FLAG_PM_RUNTIME) {
if (!(link->flags & DL_FLAG_PM_RUNTIME)) {
pm_runtime_new_link(consumer);
link->flags |= DL_FLAG_PM_RUNTIME;
}
if (flags & DL_FLAG_RPM_ACTIVE)
refcount_inc(&link->rpm_active);
}
if (flags & DL_FLAG_STATELESS) {
kref_get(&link->kref);
goto out;
}
/*
* If the life time of the link following from the new flags is
* longer than indicated by the flags of the existing link,
* update the existing link to stay around longer.
*/
if (flags & DL_FLAG_AUTOREMOVE_SUPPLIER) {
if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER) {
link->flags &= ~DL_FLAG_AUTOREMOVE_CONSUMER;
link->flags |= DL_FLAG_AUTOREMOVE_SUPPLIER;
}
} else if (!(flags & DL_FLAG_AUTOREMOVE_CONSUMER)) {
link->flags &= ~(DL_FLAG_AUTOREMOVE_CONSUMER |
DL_FLAG_AUTOREMOVE_SUPPLIER);
}
goto out;
}
link = kzalloc(sizeof(*link), GFP_KERNEL);
if (!link)
goto out;
refcount_set(&link->rpm_active, 1);
if (flags & DL_FLAG_PM_RUNTIME) {
if (flags & DL_FLAG_RPM_ACTIVE) {
if (pm_runtime_get_sync(supplier) < 0) {
pm_runtime_put_noidle(supplier);
kfree(link);
link = NULL;
goto out;
}
link->rpm_active = true;
}
if (flags & DL_FLAG_RPM_ACTIVE)
refcount_inc(&link->rpm_active);
pm_runtime_new_link(consumer);
/*
* If the link is being added by the consumer driver at probe
* time, balance the decrementation of the supplier's runtime PM
* usage counter after consumer probe in driver_probe_device().
*/
if (consumer->links.status == DL_DEV_PROBING)
pm_runtime_get_noresume(supplier);
}
get_device(supplier);
link->supplier = supplier;
INIT_LIST_HEAD(&link->s_node);
@ -260,17 +329,26 @@ struct device_link *device_link_add(struct device *consumer,
link->status = DL_STATE_NONE;
} else {
switch (supplier->links.status) {
case DL_DEV_DRIVER_BOUND:
case DL_DEV_PROBING:
switch (consumer->links.status) {
case DL_DEV_PROBING:
/*
* Some callers expect the link creation during
* consumer driver probe to resume the supplier
* even without DL_FLAG_RPM_ACTIVE.
* A consumer driver can create a link to a
* supplier that has not completed its probing
* yet as long as it knows that the supplier is
* already functional (for example, it has just
* acquired some resources from the supplier).
*/
if (flags & DL_FLAG_PM_RUNTIME)
pm_runtime_resume(supplier);
link->status = DL_STATE_CONSUMER_PROBE;
break;
default:
link->status = DL_STATE_DORMANT;
break;
}
break;
case DL_DEV_DRIVER_BOUND:
switch (consumer->links.status) {
case DL_DEV_PROBING:
link->status = DL_STATE_CONSUMER_PROBE;
break;
case DL_DEV_DRIVER_BOUND:
@ -290,6 +368,14 @@ struct device_link *device_link_add(struct device *consumer,
}
}
/*
* Some callers expect the link creation during consumer driver probe to
* resume the supplier even without DL_FLAG_RPM_ACTIVE.
*/
if (link->status == DL_STATE_CONSUMER_PROBE &&
flags & DL_FLAG_PM_RUNTIME)
pm_runtime_resume(supplier);
/*
* Move the consumer and all of the devices depending on it to the end
* of dpm_list and the devices_kset list.
@ -302,17 +388,24 @@ struct device_link *device_link_add(struct device *consumer,
list_add_tail_rcu(&link->s_node, &supplier->links.consumers);
list_add_tail_rcu(&link->c_node, &consumer->links.suppliers);
dev_info(consumer, "Linked as a consumer to %s\n", dev_name(supplier));
dev_dbg(consumer, "Linked as a consumer to %s\n", dev_name(supplier));
out:
device_pm_unlock();
device_links_write_unlock();
if ((flags & DL_FLAG_PM_RUNTIME && flags & DL_FLAG_RPM_ACTIVE) && !link)
pm_runtime_put(supplier);
return link;
}
EXPORT_SYMBOL_GPL(device_link_add);
static void device_link_free(struct device_link *link)
{
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
put_device(link->consumer);
put_device(link->supplier);
kfree(link);
@ -328,8 +421,8 @@ static void __device_link_del(struct kref *kref)
{
struct device_link *link = container_of(kref, struct device_link, kref);
dev_info(link->consumer, "Dropping the link to %s\n",
dev_name(link->supplier));
dev_dbg(link->consumer, "Dropping the link to %s\n",
dev_name(link->supplier));
if (link->flags & DL_FLAG_PM_RUNTIME)
pm_runtime_drop_link(link->consumer);
@ -355,8 +448,16 @@ static void __device_link_del(struct kref *kref)
}
#endif /* !CONFIG_SRCU */
static void device_link_put_kref(struct device_link *link)
{
if (link->flags & DL_FLAG_STATELESS)
kref_put(&link->kref, __device_link_del);
else
WARN(1, "Unable to drop a managed device link reference\n");
}
/**
* device_link_del - Delete a link between two devices.
* device_link_del - Delete a stateless link between two devices.
* @link: Device link to delete.
*
* The caller must ensure proper synchronization of this function with runtime
@ -368,14 +469,14 @@ void device_link_del(struct device_link *link)
{
device_links_write_lock();
device_pm_lock();
kref_put(&link->kref, __device_link_del);
device_link_put_kref(link);
device_pm_unlock();
device_links_write_unlock();
}
EXPORT_SYMBOL_GPL(device_link_del);
/**
* device_link_remove - remove a link between two devices.
* device_link_remove - Delete a stateless link between two devices.
* @consumer: Consumer end of the link.
* @supplier: Supplier end of the link.
*
@ -394,7 +495,7 @@ void device_link_remove(void *consumer, struct device *supplier)
list_for_each_entry(link, &supplier->links.consumers, s_node) {
if (link->consumer == consumer) {
kref_put(&link->kref, __device_link_del);
device_link_put_kref(link);
break;
}
}
@ -474,8 +575,21 @@ void device_links_driver_bound(struct device *dev)
if (link->flags & DL_FLAG_STATELESS)
continue;
/*
* Links created during consumer probe may be in the "consumer
* probe" state to start with if the supplier is still probing
* when they are created and they may become "active" if the
* consumer probe returns first. Skip them here.
*/
if (link->status == DL_STATE_CONSUMER_PROBE ||
link->status == DL_STATE_ACTIVE)
continue;
WARN_ON(link->status != DL_STATE_DORMANT);
WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
if (link->flags & DL_FLAG_AUTOPROBE_CONSUMER)
driver_deferred_probe_add(link->consumer);
}
list_for_each_entry(link, &dev->links.suppliers, c_node) {
@ -512,18 +626,49 @@ static void __device_links_no_driver(struct device *dev)
continue;
if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER)
kref_put(&link->kref, __device_link_del);
else if (link->status != DL_STATE_SUPPLIER_UNBIND)
__device_link_del(&link->kref);
else if (link->status == DL_STATE_CONSUMER_PROBE ||
link->status == DL_STATE_ACTIVE)
WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
}
dev->links.status = DL_DEV_NO_DRIVER;
}
/**
* device_links_no_driver - Update links after failing driver probe.
* @dev: Device whose driver has just failed to probe.
*
* Clean up leftover links to consumers for @dev and invoke
* %__device_links_no_driver() to update links to suppliers for it as
* appropriate.
*
* Links with the DL_FLAG_STATELESS flag set are ignored.
*/
void device_links_no_driver(struct device *dev)
{
struct device_link *link;
device_links_write_lock();
list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS)
continue;
/*
* The probe has failed, so if the status of the link is
* "consumer probe" or "active", it must have been added by
* a probing consumer while this device was still probing.
* Change its state to "dormant", as it represents a valid
* relationship, but it is not functionally meaningful.
*/
if (link->status == DL_STATE_CONSUMER_PROBE ||
link->status == DL_STATE_ACTIVE)
WRITE_ONCE(link->status, DL_STATE_DORMANT);
}
__device_links_no_driver(dev);
device_links_write_unlock();
}
@ -539,11 +684,11 @@ void device_links_no_driver(struct device *dev)
*/
void device_links_driver_cleanup(struct device *dev)
{
struct device_link *link;
struct device_link *link, *ln;
device_links_write_lock();
list_for_each_entry(link, &dev->links.consumers, s_node) {
list_for_each_entry_safe(link, ln, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS)
continue;
@ -557,7 +702,7 @@ void device_links_driver_cleanup(struct device *dev)
*/
if (link->status == DL_STATE_SUPPLIER_UNBIND &&
link->flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
kref_put(&link->kref, __device_link_del);
__device_link_del(&link->kref);
WRITE_ONCE(link->status, DL_STATE_DORMANT);
}
@ -1966,7 +2111,7 @@ int device_add(struct device *dev)
if (dev->class) {
mutex_lock(&dev->class->p->mutex);
/* tie the class to the device */
klist_add_tail(&dev->knode_class,
klist_add_tail(&dev->p->knode_class,
&dev->class->p->klist_devices);
/* notify any interfaces that the device is here */
@ -2080,6 +2225,17 @@ void device_del(struct device *dev)
struct kobject *glue_dir = NULL;
struct class_interface *class_intf;
/*
* Hold the device lock and set the "dead" flag to guarantee that
* the update behavior is consistent with the other bitfields near
* it and that we cannot have an asynchronous probe routine trying
* to run while we are tearing out the bus/class/sysfs from
* underneath the device.
*/
device_lock(dev);
dev->p->dead = true;
device_unlock(dev);
/* Notify clients of device removal. This call must come
* before dpm_sysfs_remove().
*/
@ -2105,7 +2261,7 @@ void device_del(struct device *dev)
if (class_intf->remove_dev)
class_intf->remove_dev(dev, class_intf);
/* remove the device from the class list */
klist_del(&dev->knode_class);
klist_del(&dev->p->knode_class);
mutex_unlock(&dev->class->p->mutex);
}
device_remove_file(dev, &dev_attr_uevent);


@ -409,6 +409,7 @@ static void device_create_release(struct device *dev)
kfree(dev);
}
__printf(4, 0)
static struct device *
__cpu_device_create(struct device *parent, void *drvdata,
const struct attribute_group **groups,


@ -57,6 +57,10 @@ static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
static struct dentry *deferred_devices;
static bool initcalls_done;
/* Save the async probe drivers' names from the kernel cmdline */
#define ASYNC_DRV_NAMES_MAX_LEN 256
static char async_probe_drv_names[ASYNC_DRV_NAMES_MAX_LEN];
/*
* In some cases, like suspend to RAM or hibernation, it might be reasonable
* to prohibit probing of devices as it could be unsafe.
@ -116,7 +120,7 @@ static void deferred_probe_work_func(struct work_struct *work)
}
static DECLARE_WORK(deferred_probe_work, deferred_probe_work_func);
static void driver_deferred_probe_add(struct device *dev)
void driver_deferred_probe_add(struct device *dev)
{
mutex_lock(&deferred_probe_mutex);
if (list_empty(&dev->p->deferred_probe)) {
@ -674,6 +678,23 @@ int driver_probe_device(struct device_driver *drv, struct device *dev)
return ret;
}
static inline bool cmdline_requested_async_probing(const char *drv_name)
{
return parse_option_str(async_probe_drv_names, drv_name);
}
/* The option format is "driver_async_probe=drv_name1,drv_name2,..." */
static int __init save_async_options(char *buf)
{
if (strlen(buf) >= ASYNC_DRV_NAMES_MAX_LEN)
printk(KERN_WARNING
"Too long list of driver names for 'driver_async_probe'!\n");
strlcpy(async_probe_drv_names, buf, ASYNC_DRV_NAMES_MAX_LEN);
return 0;
}
__setup("driver_async_probe=", save_async_options);
bool driver_allows_async_probing(struct device_driver *drv)
{
switch (drv->probe_type) {
@ -684,6 +705,9 @@ bool driver_allows_async_probing(struct device_driver *drv)
return false;
default:
if (cmdline_requested_async_probing(drv->name))
return true;
if (module_requested_async_probing(drv->owner))
return true;
@ -731,15 +755,6 @@ static int __device_attach_driver(struct device_driver *drv, void *_data)
bool async_allowed;
int ret;
/*
* Check if device has already been claimed. This may
* happen with driver loading, device discovery/registration,
* and deferred probe processing happens all at once with
* multiple threads.
*/
if (dev->driver)
return -EBUSY;
ret = driver_match_device(drv, dev);
if (ret == 0) {
/* no match */
@ -774,6 +789,15 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
device_lock(dev);
/*
* Check if device has already been removed or claimed. This may
* happen with driver loading, device discovery/registration,
* and deferred probe processing happens all at once with
* multiple threads.
*/
if (dev->p->dead || dev->driver)
goto out_unlock;
if (dev->parent)
pm_runtime_get_sync(dev->parent);
@ -784,7 +808,7 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
if (dev->parent)
pm_runtime_put(dev->parent);
out_unlock:
device_unlock(dev);
put_device(dev);
@ -829,7 +853,7 @@ static int __device_attach(struct device *dev, bool allow_async)
*/
dev_dbg(dev, "scheduling asynchronous probe\n");
get_device(dev);
async_schedule(__device_attach_async_helper, dev);
async_schedule_dev(__device_attach_async_helper, dev);
} else {
pm_request_idle(dev);
}
@ -867,6 +891,88 @@ void device_initial_probe(struct device *dev)
__device_attach(dev, true);
}
/*
* __device_driver_lock - acquire locks needed to manipulate dev->drv
* @dev: Device we will update driver info for
* @parent: Parent device. Needed if the bus requires parent lock
*
* This function will take the required locks for manipulating dev->drv.
* Normally this will just be the @dev lock, but when called for a USB
* interface, @parent lock will be held as well.
*/
static void __device_driver_lock(struct device *dev, struct device *parent)
{
if (parent && dev->bus->need_parent_lock)
device_lock(parent);
device_lock(dev);
}
/*
* __device_driver_unlock - release locks needed to manipulate dev->drv
* @dev: Device we will update driver info for
* @parent: Parent device. Needed if the bus requires parent lock
*
* This function will release the required locks for manipulating dev->drv.
* Normally this will just be the @dev lock, but when called for a
* USB interface, @parent lock will be released as well.
*/
static void __device_driver_unlock(struct device *dev, struct device *parent)
{
device_unlock(dev);
if (parent && dev->bus->need_parent_lock)
device_unlock(parent);
}
/**
* device_driver_attach - attach a specific driver to a specific device
* @drv: Driver to attach
* @dev: Device to attach it to
*
* Manually attach driver to a device. Will acquire both @dev lock and
* @dev->parent lock if needed.
*/
int device_driver_attach(struct device_driver *drv, struct device *dev)
{
int ret = 0;
__device_driver_lock(dev, dev->parent);
/*
* If device has been removed or someone has already successfully
* bound a driver before us just skip the driver probe call.
*/
if (!dev->p->dead && !dev->driver)
ret = driver_probe_device(drv, dev);
__device_driver_unlock(dev, dev->parent);
return ret;
}
static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
{
struct device *dev = _dev;
struct device_driver *drv;
int ret = 0;
__device_driver_lock(dev, dev->parent);
drv = dev->p->async_driver;
/*
* If device has been removed or someone has already successfully
* bound a driver before us just skip the driver probe call.
*/
if (!dev->p->dead && !dev->driver)
ret = driver_probe_device(drv, dev);
__device_driver_unlock(dev, dev->parent);
dev_dbg(dev, "driver %s async attach completed: %d\n", drv->name, ret);
put_device(dev);
}
static int __driver_attach(struct device *dev, void *data)
{
struct device_driver *drv = data;
@ -894,14 +1000,26 @@ static int __driver_attach(struct device *dev, void *data)
return ret;
} /* ret > 0 means positive match */
if (dev->parent && dev->bus->need_parent_lock)
device_lock(dev->parent);
device_lock(dev);
if (!dev->driver)
driver_probe_device(drv, dev);
device_unlock(dev);
if (dev->parent && dev->bus->need_parent_lock)
device_unlock(dev->parent);
if (driver_allows_async_probing(drv)) {
/*
* Instead of probing the device synchronously we will
* probe it asynchronously to allow for more parallelism.
*
* We only take the device lock here in order to guarantee
* that the dev->driver and async_driver fields are protected
*/
dev_dbg(dev, "probing driver %s asynchronously\n", drv->name);
device_lock(dev);
if (!dev->driver) {
get_device(dev);
dev->p->async_driver = drv;
async_schedule_dev(__driver_attach_async_helper, dev);
}
device_unlock(dev);
return 0;
}
device_driver_attach(drv, dev);
return 0;
}
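
For reference, a driver opts into this asynchronous path through its
probe_type field; a minimal sketch with placeholder names:

    static struct platform_driver foo_driver = {
        .probe = foo_probe,
        .driver = {
            .name = "foo",
            .probe_type = PROBE_PREFER_ASYNCHRONOUS,
        },
    };

Drivers that must probe synchronously can set PROBE_FORCE_SYNCHRONOUS, which
also overrides a driver_async_probe= request from the command line.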
@ -932,15 +1050,11 @@ static void __device_release_driver(struct device *dev, struct device *parent)
drv = dev->driver;
if (drv) {
while (device_links_busy(dev)) {
device_unlock(dev);
if (parent && dev->bus->need_parent_lock)
device_unlock(parent);
__device_driver_unlock(dev, parent);
device_links_unbind_consumers(dev);
if (parent && dev->bus->need_parent_lock)
device_lock(parent);
device_lock(dev);
__device_driver_lock(dev, parent);
/*
* A concurrent invocation of the same function might
* have released the driver successfully while this one
@ -968,9 +1082,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
drv->remove(dev);
device_links_driver_cleanup(dev);
arch_teardown_dma_ops(dev);
devres_release_all(dev);
arch_teardown_dma_ops(dev);
dev->driver = NULL;
dev_set_drvdata(dev, NULL);
if (dev->pm_domain && dev->pm_domain->dismiss)
@ -993,16 +1107,12 @@ void device_release_driver_internal(struct device *dev,
struct device_driver *drv,
struct device *parent)
{
if (parent && dev->bus->need_parent_lock)
device_lock(parent);
__device_driver_lock(dev, parent);
device_lock(dev);
if (!drv || drv == dev->driver)
__device_release_driver(dev, parent);
device_unlock(dev);
if (parent && dev->bus->need_parent_lock)
device_unlock(parent);
__device_driver_unlock(dev, parent);
}
/**
@ -1027,6 +1137,18 @@ void device_release_driver(struct device *dev)
}
EXPORT_SYMBOL_GPL(device_release_driver);
/**
* device_driver_detach - detach driver from a specific device
* @dev: device to detach driver from
*
* Detach driver from device. Will acquire both @dev lock and @dev->parent
* lock if needed.
*/
void device_driver_detach(struct device *dev)
{
device_release_driver_internal(dev, NULL, dev->parent);
}
/**
* driver_detach - detach driver from all devices it controls.
* @drv: driver.


@ -1,7 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
# Makefile for the Linux firmware loader
obj-y := fallback_table.o
obj-$(CONFIG_FW_LOADER_USER_HELPER) += fallback_table.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
firmware_class-objs := main.o
firmware_class-$(CONFIG_FW_LOADER_USER_HELPER) += fallback.o
obj-y += builtin/


@ -16,9 +16,6 @@
* firmware fallback configuration table
*/
/* Module or buit-in */
#ifdef CONFIG_FW_LOADER_USER_HELPER
static unsigned int zero;
static unsigned int one = 1;
@ -51,5 +48,3 @@ struct ctl_table firmware_config_table[] = {
{ }
};
EXPORT_SYMBOL_GPL(firmware_config_table);
#endif


@ -328,12 +328,12 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv)
rc = kernel_read_file_from_path(path, &fw_priv->data, &size,
msize, id);
if (rc) {
if (rc == -ENOENT)
dev_dbg(device, "loading %s failed with error %d\n",
path, rc);
else
if (rc != -ENOENT)
dev_warn(device, "loading %s failed with error %d\n",
path, rc);
else
dev_dbg(device, "loading %s failed for no such file or directory.\n",
path);
continue;
}
dev_dbg(device, "direct-loading %s\n", fw_priv->fw_name);


@ -127,7 +127,20 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
irqd_set_trigger_type(irqd, r->flags & IORESOURCE_BITS);
}
return r ? r->start : -ENXIO;
if (r)
return r->start;
/*
* For the index 0 interrupt, allow falling back to GpioInt
* resources. While a device could have both Interrupt and GpioInt
* resources, making this fallback ambiguous, in many common cases
* the device will only expose one IRQ, and this fallback
* allows a common code path across either kind of resource.
*/
if (num == 0 && has_acpi_companion(&dev->dev))
return acpi_dev_gpio_irq_get(ACPI_COMPANION(&dev->dev), num);
return -ENXIO;
#endif
}
EXPORT_SYMBOL_GPL(platform_get_irq);
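
Callers need no changes to benefit from the fallback; a typical probe path
(names are placeholders) still looks like:

    static int foo_probe(struct platform_device *pdev)
    {
        int irq = platform_get_irq(pdev, 0); /* may now resolve via GpioInt */

        if (irq < 0)
            return irq;
        return devm_request_irq(&pdev->dev, irq, foo_isr, 0, "foo", NULL);
    }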
@ -508,10 +521,12 @@ struct platform_device *platform_device_register_full(
pdev = platform_device_alloc(pdevinfo->name, pdevinfo->id);
if (!pdev)
goto err_alloc;
return ERR_PTR(-ENOMEM);
pdev->dev.parent = pdevinfo->parent;
pdev->dev.fwnode = pdevinfo->fwnode;
pdev->dev.of_node = of_node_get(to_of_node(pdev->dev.fwnode));
pdev->dev.of_node_reused = pdevinfo->of_node_reused;
if (pdevinfo->dma_mask) {
/*
@ -553,8 +568,6 @@ struct platform_device *platform_device_register_full(
err:
ACPI_COMPANION_SET(&pdev->dev, NULL);
kfree(pdev->dev.dma_mask);
err_alloc:
platform_device_put(pdev);
return ERR_PTR(ret);
}
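
With the of_node line added above, a caller that only has a fwnode now gets
dev.of_node populated as well; a sketch (assuming fwnode was obtained
elsewhere):

    struct platform_device_info info = {
        .name   = "foo",              /* placeholder */
        .id     = PLATFORM_DEVID_AUTO,
        .fwnode = fwnode,
    };
    struct platform_device *pdev = platform_device_register_full(&info);

The of_node_get() taken here is balanced by an of_node_put() when the device
is released.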


@ -734,7 +734,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
async_schedule(async_resume_noirq, dev);
async_schedule_dev(async_resume_noirq, dev);
}
}
@ -891,7 +891,7 @@ void dpm_resume_early(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
async_schedule(async_resume_early, dev);
async_schedule_dev(async_resume_early, dev);
}
}
@ -1055,7 +1055,7 @@ void dpm_resume(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
async_schedule(async_resume, dev);
async_schedule_dev(async_resume, dev);
}
}
@ -1375,7 +1375,7 @@ static int device_suspend_noirq(struct device *dev)
if (is_async(dev)) {
get_device(dev);
async_schedule(async_suspend_noirq, dev);
async_schedule_dev(async_suspend_noirq, dev);
return 0;
}
return __device_suspend_noirq(dev, pm_transition, false);
@ -1578,7 +1578,7 @@ static int device_suspend_late(struct device *dev)
if (is_async(dev)) {
get_device(dev);
async_schedule(async_suspend_late, dev);
async_schedule_dev(async_suspend_late, dev);
return 0;
}
@ -1844,7 +1844,7 @@ static int device_suspend(struct device *dev)
if (is_async(dev)) {
get_device(dev);
async_schedule(async_suspend, dev);
async_schedule_dev(async_suspend, dev);
return 0;
}
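
Each hunk above is the same one-line conversion: async_schedule_dev() takes
the device itself, so the reworked async core (from earlier in this series)
can schedule the callback near the device's NUMA node:

    /* before */
    async_schedule(async_resume, dev);
    /* after: NUMA-aware, otherwise the same semantics */
    async_schedule_dev(async_resume, dev);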


@ -282,11 +282,8 @@ static int rpm_get_suppliers(struct device *dev)
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
int retval;
if (!(link->flags & DL_FLAG_PM_RUNTIME))
continue;
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND ||
link->rpm_active)
if (!(link->flags & DL_FLAG_PM_RUNTIME) ||
READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
continue;
retval = pm_runtime_get_sync(link->supplier);
@ -295,7 +292,7 @@ static int rpm_get_suppliers(struct device *dev)
pm_runtime_put_noidle(link->supplier);
return retval;
}
link->rpm_active = true;
refcount_inc(&link->rpm_active);
}
return 0;
}
@ -304,12 +301,13 @@ static void rpm_put_suppliers(struct device *dev)
{
struct device_link *link;
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
if (link->rpm_active &&
READ_ONCE(link->status) != DL_STATE_SUPPLIER_UNBIND) {
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
continue;
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
link->rpm_active = false;
}
}
}
/**
@ -1114,24 +1112,57 @@ EXPORT_SYMBOL_GPL(pm_runtime_get_if_in_use);
* and the device parent's counter of unsuspended children is modified to
* reflect the new status. If the new status is RPM_SUSPENDED, an idle
* notification request for the parent is submitted.
*
* If @dev has any suppliers (as reflected by device links to them), and @status
* is RPM_ACTIVE, they will be activated upfront and if the activation of one
* of them fails, the status of @dev will be changed to RPM_SUSPENDED (instead
of the @status value) and the suppliers will be deactivated on exit. The
* error returned by the failing supplier activation will be returned in that
* case.
*/
int __pm_runtime_set_status(struct device *dev, unsigned int status)
{
struct device *parent = dev->parent;
unsigned long flags;
bool notify_parent = false;
int error = 0;
if (status != RPM_ACTIVE && status != RPM_SUSPENDED)
return -EINVAL;
spin_lock_irqsave(&dev->power.lock, flags);
spin_lock_irq(&dev->power.lock);
if (!dev->power.runtime_error && !dev->power.disable_depth) {
/*
* Prevent PM-runtime from being enabled for the device or return an
* error if it is enabled already and working.
*/
if (dev->power.runtime_error || dev->power.disable_depth)
dev->power.disable_depth++;
else
error = -EAGAIN;
goto out;
spin_unlock_irq(&dev->power.lock);
if (error)
return error;
/*
* If the new status is RPM_ACTIVE, the suppliers can be activated
* upfront regardless of the current status, because next time
* rpm_put_suppliers() runs, the rpm_active refcounts of the links
* involved will be dropped down to one anyway.
*/
if (status == RPM_ACTIVE) {
int idx = device_links_read_lock();
error = rpm_get_suppliers(dev);
if (error)
status = RPM_SUSPENDED;
device_links_read_unlock(idx);
}
spin_lock_irq(&dev->power.lock);
if (dev->power.runtime_status == status || !parent)
goto out_set;
@ -1159,19 +1190,33 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
spin_unlock(&parent->power.lock);
if (error)
if (error) {
status = RPM_SUSPENDED;
goto out;
}
}
out_set:
__update_runtime_status(dev, status);
dev->power.runtime_error = 0;
if (!error)
dev->power.runtime_error = 0;
out:
spin_unlock_irqrestore(&dev->power.lock, flags);
spin_unlock_irq(&dev->power.lock);
if (notify_parent)
pm_request_idle(parent);
if (status == RPM_SUSPENDED) {
int idx = device_links_read_lock();
rpm_put_suppliers(dev);
device_links_read_unlock(idx);
}
pm_runtime_enable(dev);
return error;
}
EXPORT_SYMBOL_GPL(__pm_runtime_set_status);
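
Most drivers reach __pm_runtime_set_status() through the
pm_runtime_set_active() and pm_runtime_set_suspended() wrappers, so a common
probe-time sequence such as:

    /* hardware is known to be powered up at this point */
    pm_runtime_set_active(dev);  /* now also runtime-resumes linked suppliers */
    pm_runtime_enable(dev);

now activates the device's suppliers over device links before marking the
device itself active.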
@ -1569,7 +1614,7 @@ void pm_runtime_remove(struct device *dev)
*
* Check links from this device to any consumers and if any of them have active
* runtime PM references to the device, drop the usage counter of the device
* (once per link).
* (as many times as needed).
*
* Links with the DL_FLAG_STATELESS flag set are ignored.
*
@ -1591,10 +1636,8 @@ void pm_runtime_clean_up_links(struct device *dev)
if (link->flags & DL_FLAG_STATELESS)
continue;
if (link->rpm_active) {
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put_noidle(dev);
link->rpm_active = false;
}
}
device_links_read_unlock(idx);
@ -1612,8 +1655,11 @@ void pm_runtime_get_suppliers(struct device *dev)
idx = device_links_read_lock();
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
if (link->flags & DL_FLAG_PM_RUNTIME)
if (link->flags & DL_FLAG_PM_RUNTIME) {
link->supplier_preactivated = true;
refcount_inc(&link->rpm_active);
pm_runtime_get_sync(link->supplier);
}
device_links_read_unlock(idx);
}
@ -1630,8 +1676,11 @@ void pm_runtime_put_suppliers(struct device *dev)
idx = device_links_read_lock();
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
if (link->flags & DL_FLAG_PM_RUNTIME)
pm_runtime_put(link->supplier);
if (link->supplier_preactivated) {
link->supplier_preactivated = false;
if (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
}
device_links_read_unlock(idx);
}
@ -1645,8 +1694,6 @@ void pm_runtime_new_link(struct device *dev)
void pm_runtime_drop_link(struct device *dev)
{
rpm_put_suppliers(dev);
spin_lock_irq(&dev->power.lock);
WARN_ON(dev->power.links_count == 0);
dev->power.links_count--;


@ -11,16 +11,47 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/time.h>
#include <linux/numa.h>
#include <linux/nodemask.h>
#include <linux/topology.h>
#define TEST_PROBE_DELAY (5 * 1000) /* 5 sec */
#define TEST_PROBE_THRESHOLD (TEST_PROBE_DELAY / 2)
static atomic_t warnings, errors, timeout, async_completed;
static int test_probe(struct platform_device *pdev)
{
dev_info(&pdev->dev, "sleeping for %d msecs in probe\n",
TEST_PROBE_DELAY);
msleep(TEST_PROBE_DELAY);
dev_info(&pdev->dev, "done sleeping\n");
struct device *dev = &pdev->dev;
/*
* Determine if we have hit the "timeout" limit for the test. If we
* have, then report it as an error; otherwise we will sleep for the
* required amount of time and then report completion.
*/
if (atomic_read(&timeout)) {
dev_err(dev, "async probe took too long\n");
atomic_inc(&errors);
} else {
dev_dbg(&pdev->dev, "sleeping for %d msecs in probe\n",
TEST_PROBE_DELAY);
msleep(TEST_PROBE_DELAY);
dev_dbg(&pdev->dev, "done sleeping\n");
}
/*
* Report NUMA mismatch if device node is set and we are not
* performing an async init on that node.
*/
if (dev->driver->probe_type == PROBE_PREFER_ASYNCHRONOUS) {
if (dev_to_node(dev) != numa_node_id()) {
dev_warn(dev, "NUMA node mismatch %d != %d\n",
dev_to_node(dev), numa_node_id());
atomic_inc(&warnings);
}
atomic_inc(&async_completed);
}
return 0;
}
@ -41,31 +72,64 @@ static struct platform_driver sync_driver = {
.probe = test_probe,
};
static struct platform_device *async_dev_1, *async_dev_2;
static struct platform_device *sync_dev_1;
static struct platform_device *async_dev[NR_CPUS * 2];
static struct platform_device *sync_dev[2];
static struct platform_device *
test_platform_device_register_node(char *name, int id, int nid)
{
struct platform_device *pdev;
int ret;
pdev = platform_device_alloc(name, id);
if (!pdev)
return NULL;
if (nid != NUMA_NO_NODE)
set_dev_node(&pdev->dev, nid);
ret = platform_device_add(pdev);
if (ret) {
platform_device_put(pdev);
return ERR_PTR(ret);
}
return pdev;
}
static int __init test_async_probe_init(void)
{
ktime_t calltime, delta;
struct platform_device **pdev = NULL;
int async_id = 0, sync_id = 0;
unsigned long long duration;
int error;
ktime_t calltime, delta;
int err, nid, cpu;
pr_info("registering first asynchronous device...\n");
pr_info("registering first set of asynchronous devices...\n");
async_dev_1 = platform_device_register_simple("test_async_driver", 1,
NULL, 0);
if (IS_ERR(async_dev_1)) {
error = PTR_ERR(async_dev_1);
pr_err("failed to create async_dev_1: %d\n", error);
return error;
for_each_online_cpu(cpu) {
nid = cpu_to_node(cpu);
pdev = &async_dev[async_id];
*pdev = test_platform_device_register_node("test_async_driver",
async_id,
nid);
if (IS_ERR(*pdev)) {
err = PTR_ERR(*pdev);
*pdev = NULL;
pr_err("failed to create async_dev: %d\n", err);
goto err_unregister_async_devs;
}
async_id++;
}
pr_info("registering asynchronous driver...\n");
calltime = ktime_get();
error = platform_driver_register(&async_driver);
if (error) {
pr_err("Failed to register async_driver: %d\n", error);
goto err_unregister_async_dev_1;
err = platform_driver_register(&async_driver);
if (err) {
pr_err("Failed to register async_driver: %d\n", err);
goto err_unregister_async_devs;
}
delta = ktime_sub(ktime_get(), calltime);
@ -73,86 +137,163 @@ static int __init test_async_probe_init(void)
pr_info("registration took %lld msecs\n", duration);
if (duration > TEST_PROBE_THRESHOLD) {
pr_err("test failed: probe took too long\n");
error = -ETIMEDOUT;
err = -ETIMEDOUT;
goto err_unregister_async_driver;
}
pr_info("registering second asynchronous device...\n");
pr_info("registering second set of asynchronous devices...\n");
calltime = ktime_get();
async_dev_2 = platform_device_register_simple("test_async_driver", 2,
NULL, 0);
if (IS_ERR(async_dev_2)) {
error = PTR_ERR(async_dev_2);
pr_err("failed to create async_dev_2: %d\n", error);
goto err_unregister_async_driver;
for_each_online_cpu(cpu) {
nid = cpu_to_node(cpu);
pdev = &sync_dev[sync_id];
*pdev = test_platform_device_register_node("test_async_driver",
async_id,
nid);
if (IS_ERR(*pdev)) {
err = PTR_ERR(*pdev);
*pdev = NULL;
pr_err("failed to create async_dev: %d\n", err);
goto err_unregister_async_driver;
}
async_id++;
}
delta = ktime_sub(ktime_get(), calltime);
duration = (unsigned long long) ktime_to_ms(delta);
pr_info("registration took %lld msecs\n", duration);
dev_info(&(*pdev)->dev,
"registration took %lld msecs\n", duration);
if (duration > TEST_PROBE_THRESHOLD) {
pr_err("test failed: probe took too long\n");
error = -ETIMEDOUT;
goto err_unregister_async_dev_2;
dev_err(&(*pdev)->dev,
"test failed: probe took too long\n");
err = -ETIMEDOUT;
goto err_unregister_async_driver;
}
pr_info("registering first synchronous device...\n");
pdev = &sync_dev[sync_id];
*pdev = test_platform_device_register_node("test_sync_driver",
sync_id,
NUMA_NO_NODE);
if (IS_ERR(*pdev)) {
err = PTR_ERR(*pdev);
*pdev = NULL;
pr_err("failed to create sync_dev: %d\n", err);
goto err_unregister_async_driver;
}
sync_id++;
pr_info("registering synchronous driver...\n");
error = platform_driver_register(&sync_driver);
if (error) {
pr_err("Failed to register async_driver: %d\n", error);
goto err_unregister_async_dev_2;
}
pr_info("registering synchronous device...\n");
calltime = ktime_get();
sync_dev_1 = platform_device_register_simple("test_sync_driver", 1,
NULL, 0);
if (IS_ERR(sync_dev_1)) {
error = PTR_ERR(sync_dev_1);
pr_err("failed to create sync_dev_1: %d\n", error);
goto err_unregister_sync_driver;
err = platform_driver_register(&sync_driver);
if (err) {
pr_err("Failed to register async_driver: %d\n", err);
goto err_unregister_sync_devs;
}
delta = ktime_sub(ktime_get(), calltime);
duration = (unsigned long long) ktime_to_ms(delta);
pr_info("registration took %lld msecs\n", duration);
if (duration < TEST_PROBE_THRESHOLD) {
pr_err("test failed: probe was too quick\n");
error = -ETIMEDOUT;
goto err_unregister_sync_dev_1;
dev_err(&(*pdev)->dev,
"test failed: probe was too quick\n");
err = -ETIMEDOUT;
goto err_unregister_sync_driver;
}
pr_info("completed successfully");
pr_info("registering second synchronous device...\n");
pdev = &sync_dev[sync_id];
calltime = ktime_get();
return 0;
*pdev = test_platform_device_register_node("test_sync_driver",
sync_id,
NUMA_NO_NODE);
if (IS_ERR(*pdev)) {
err = PTR_ERR(*pdev);
*pdev = NULL;
pr_err("failed to create sync_dev: %d\n", err);
goto err_unregister_sync_driver;
}
err_unregister_sync_dev_1:
platform_device_unregister(sync_dev_1);
sync_id++;
delta = ktime_sub(ktime_get(), calltime);
duration = (unsigned long long) ktime_to_ms(delta);
dev_info(&(*pdev)->dev,
"registration took %lld msecs\n", duration);
if (duration < TEST_PROBE_THRESHOLD) {
dev_err(&(*pdev)->dev,
"test failed: probe was too quick\n");
err = -ETIMEDOUT;
goto err_unregister_sync_driver;
}
/*
* The async events should have completed while we were taking care
* of the synchronous events. Treat any probes still outstanding at
* this point as a timeout; unregistering the async driver on the way
* out will flush the remaining asynchronous probe calls.
*
* Otherwise, if everything completed without errors or warnings,
* report success.
*/
if (atomic_read(&async_completed) != async_id) {
pr_err("async events still pending, forcing timeout\n");
atomic_inc(&timeout);
err = -ETIMEDOUT;
} else if (!atomic_read(&errors) && !atomic_read(&warnings)) {
pr_info("completed successfully\n");
return 0;
}
err_unregister_sync_driver:
platform_driver_unregister(&sync_driver);
err_unregister_async_dev_2:
platform_device_unregister(async_dev_2);
err_unregister_sync_devs:
while (sync_id--)
platform_device_unregister(sync_dev[sync_id]);
err_unregister_async_driver:
platform_driver_unregister(&async_driver);
err_unregister_async_devs:
while (async_id--)
platform_device_unregister(async_dev[async_id]);
err_unregister_async_dev_1:
platform_device_unregister(async_dev_1);
/*
* If err is already set, count it as an additional error for the
* test. Otherwise we only reach this point because the probe routine
* reported errors or warnings, so return -EINVAL without bumping the
* error count again.
*/
if (err)
atomic_inc(&errors);
else
err = -EINVAL;
return error;
pr_err("Test failed with %d errors and %d warnings\n",
atomic_read(&errors), atomic_read(&warnings));
return err;
}
module_init(test_async_probe_init);
static void __exit test_async_probe_exit(void)
{
int id = 2;
platform_driver_unregister(&async_driver);
platform_driver_unregister(&sync_driver);
platform_device_unregister(async_dev_1);
platform_device_unregister(async_dev_2);
platform_device_unregister(sync_dev_1);
while (id--)
platform_device_unregister(sync_dev[id]);
id = NR_CPUS * 2;
while (id--)
platform_device_unregister(async_dev[id]);
}
module_exit(test_async_probe_exit);
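For reference, the opt-in this test exercises lives on the driver side: a driver requests asynchronous probing through the probe_type field. A minimal sketch for a hypothetical platform driver (the demo_* names are illustrative, not part of the patch):

static int demo_probe(struct platform_device *pdev)
{
	/* potentially slow initialization runs off the registration path */
	return 0;
}

static struct platform_driver demo_driver = {
	.probe = demo_probe,
	.driver = {
		.name = "demo",
		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
};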


@ -428,14 +428,13 @@ static bool single_major = true;
module_param(single_major, bool, 0444);
MODULE_PARM_DESC(single_major, "Use a single major number for all rbd devices (default: true)");
static ssize_t rbd_add(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t rbd_remove(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t rbd_add_single_major(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t rbd_remove_single_major(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t add_store(struct bus_type *bus, const char *buf, size_t count);
static ssize_t remove_store(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t add_single_major_store(struct bus_type *bus, const char *buf,
size_t count);
static ssize_t remove_single_major_store(struct bus_type *bus, const char *buf,
size_t count);
static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth);
static int rbd_dev_id_to_minor(int dev_id)
@ -464,16 +463,16 @@ static bool rbd_is_lock_owner(struct rbd_device *rbd_dev)
return is_lock_owner;
}
static ssize_t rbd_supported_features_show(struct bus_type *bus, char *buf)
static ssize_t supported_features_show(struct bus_type *bus, char *buf)
{
return sprintf(buf, "0x%llx\n", RBD_FEATURES_SUPPORTED);
}
static BUS_ATTR(add, 0200, NULL, rbd_add);
static BUS_ATTR(remove, 0200, NULL, rbd_remove);
static BUS_ATTR(add_single_major, 0200, NULL, rbd_add_single_major);
static BUS_ATTR(remove_single_major, 0200, NULL, rbd_remove_single_major);
static BUS_ATTR(supported_features, 0444, rbd_supported_features_show, NULL);
static BUS_ATTR_WO(add);
static BUS_ATTR_WO(remove);
static BUS_ATTR_WO(add_single_major);
static BUS_ATTR_WO(remove_single_major);
static BUS_ATTR_RO(supported_features);
static struct attribute *rbd_bus_attrs[] = {
&bus_attr_add.attr,
@ -5934,9 +5933,7 @@ err_out_args:
goto out;
}
static ssize_t rbd_add(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t add_store(struct bus_type *bus, const char *buf, size_t count)
{
if (single_major)
return -EINVAL;
@ -5944,9 +5941,8 @@ static ssize_t rbd_add(struct bus_type *bus,
return do_rbd_add(bus, buf, count);
}
static ssize_t rbd_add_single_major(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t add_single_major_store(struct bus_type *bus, const char *buf,
size_t count)
{
return do_rbd_add(bus, buf, count);
}
@ -6049,9 +6045,7 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
return count;
}
static ssize_t rbd_remove(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t remove_store(struct bus_type *bus, const char *buf, size_t count)
{
if (single_major)
return -EINVAL;
@ -6059,9 +6053,8 @@ static ssize_t rbd_remove(struct bus_type *bus,
return do_rbd_remove(bus, buf, count);
}
static ssize_t rbd_remove_single_major(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t remove_single_major_store(struct bus_type *bus, const char *buf,
size_t count)
{
return do_rbd_remove(bus, buf, count);
}
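The renames above are dictated by the convention baked into the BUS_ATTR_RO()/BUS_ATTR_WO() helpers: the macros derive both the attribute variable and its callback from the attribute name. A sketch of the contract, with a hypothetical attribute called demo:

/* BUS_ATTR_WO(demo) defines bus_attr_demo and wires it to demo_store() */
static ssize_t demo_store(struct bus_type *bus, const char *buf, size_t count)
{
	return count;	/* hypothetical no-op store */
}
static BUS_ATTR_WO(demo);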


@ -218,7 +218,7 @@ config FW_CFG_SYSFS_CMDLINE
config INTEL_STRATIX10_SERVICE
tristate "Intel Stratix10 Service Layer"
depends on HAVE_ARM_SMCCC
depends on ARCH_STRATIX10 && HAVE_ARM_SMCCC
default n
help
Intel Stratix10 service layer runs at privileged exception level,


@ -1260,6 +1260,7 @@ static int exynos_iommu_add_device(struct device *dev)
* direct calls to pm_runtime_get/put in this driver.
*/
data->link = device_link_add(dev, data->sysmmu,
DL_FLAG_STATELESS |
DL_FLAG_PM_RUNTIME);
}
iommu_group_put(group);


@ -1071,7 +1071,8 @@ static int rk_iommu_add_device(struct device *dev)
iommu_group_put(group);
iommu_device_link(&iommu->iommu, dev);
data->link = device_link_add(dev, iommu->dev, DL_FLAG_PM_RUNTIME);
data->link = device_link_add(dev, iommu->dev,
DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
return 0;
}
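Both IOMMU fixes apply the same pattern: the consumer/supplier link must be stateless so the driver core does not tie the devices' driver bindings together, while still sharing runtime PM state. A hedged sketch, with consumer and supplier as placeholder struct device pointers:

struct device_link *link;

link = device_link_add(consumer, supplier,
		       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
if (!link)
	dev_err(consumer, "failed to link to %s\n", dev_name(supplier));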


@ -23,6 +23,7 @@
#include <linux/ndctl.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h>
@ -534,11 +535,15 @@ void __nd_device_register(struct device *dev)
set_dev_node(dev, to_nd_region(dev)->numa_node);
dev->bus = &nvdimm_bus_type;
if (dev->parent)
if (dev->parent) {
get_device(dev->parent);
if (dev_to_node(dev) == NUMA_NO_NODE)
set_dev_node(dev, dev_to_node(dev->parent));
}
get_device(dev);
async_schedule_domain(nd_async_device_register, dev,
&nd_async_domain);
async_schedule_dev_domain(nd_async_device_register, dev,
&nd_async_domain);
}
void nd_device_register(struct device *dev)


@ -412,8 +412,7 @@ static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(msi_bus);
static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
size_t count)
static ssize_t rescan_store(struct bus_type *bus, const char *buf, size_t count)
{
unsigned long val;
struct pci_bus *b = NULL;
@ -429,7 +428,7 @@ static ssize_t bus_rescan_store(struct bus_type *bus, const char *buf,
}
return count;
}
static BUS_ATTR(rescan, (S_IWUSR|S_IWGRP), NULL, bus_rescan_store);
static BUS_ATTR_WO(rescan);
static struct attribute *pci_bus_attrs[] = {
&bus_attr_rescan.attr,


@ -6034,19 +6034,18 @@ static ssize_t pci_get_resource_alignment_param(char *buf, size_t size)
return count;
}
static ssize_t pci_resource_alignment_show(struct bus_type *bus, char *buf)
static ssize_t resource_alignment_show(struct bus_type *bus, char *buf)
{
return pci_get_resource_alignment_param(buf, PAGE_SIZE);
}
static ssize_t pci_resource_alignment_store(struct bus_type *bus,
static ssize_t resource_alignment_store(struct bus_type *bus,
const char *buf, size_t count)
{
return pci_set_resource_alignment_param(buf, count);
}
static BUS_ATTR(resource_alignment, 0644, pci_resource_alignment_show,
pci_resource_alignment_store);
static BUS_ATTR_RW(resource_alignment);
static int __init pci_resource_alignment_sysfs_init(void)
{


@ -290,8 +290,7 @@ const struct attribute_group *rio_dev_groups[] = {
NULL,
};
static ssize_t bus_scan_store(struct bus_type *bus, const char *buf,
size_t count)
static ssize_t scan_store(struct bus_type *bus, const char *buf, size_t count)
{
long val;
int rc;
@ -314,7 +313,7 @@ exit:
return rc;
}
static BUS_ATTR(scan, (S_IWUSR|S_IWGRP), NULL, bus_scan_store);
static BUS_ATTR_WO(scan);
static struct attribute *rio_bus_attrs[] = {
&bus_attr_scan.attr,


@ -423,8 +423,8 @@ EXPORT_SYMBOL_GPL(debugfs_create_file);
* debugfs core.
*
* It is your responsibility to protect your struct file_operation
* methods against file removals by means of debugfs_use_file_start()
* and debugfs_use_file_finish(). ->open() is still protected by
* methods against file removals by means of debugfs_file_get()
* and debugfs_file_put(). ->open() is still protected by
* debugfs though.
*
* Any struct file_operations defined by means of


@ -506,30 +506,16 @@ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
kvfree(si);
}
int __init f2fs_create_root_stats(void)
void __init f2fs_create_root_stats(void)
{
struct dentry *file;
f2fs_debugfs_root = debugfs_create_dir("f2fs", NULL);
if (!f2fs_debugfs_root)
return -ENOMEM;
file = debugfs_create_file("status", S_IRUGO, f2fs_debugfs_root,
NULL, &stat_fops);
if (!file) {
debugfs_remove(f2fs_debugfs_root);
f2fs_debugfs_root = NULL;
return -ENOMEM;
}
return 0;
debugfs_create_file("status", S_IRUGO, f2fs_debugfs_root, NULL,
&stat_fops);
}
void f2fs_destroy_root_stats(void)
{
if (!f2fs_debugfs_root)
return;
debugfs_remove_recursive(f2fs_debugfs_root);
f2fs_debugfs_root = NULL;
}


@ -3328,7 +3328,7 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
int f2fs_build_stats(struct f2fs_sb_info *sbi);
void f2fs_destroy_stats(struct f2fs_sb_info *sbi);
int __init f2fs_create_root_stats(void);
void __init f2fs_create_root_stats(void);
void f2fs_destroy_root_stats(void);
#else
#define stat_inc_cp_count(si) do { } while (0)
@ -3366,7 +3366,7 @@ void f2fs_destroy_root_stats(void);
static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
static inline int __init f2fs_create_root_stats(void) { return 0; }
static inline void __init f2fs_create_root_stats(void) { }
static inline void f2fs_destroy_root_stats(void) { }
#endif


@ -3545,9 +3545,7 @@ static int __init init_f2fs_fs(void)
err = register_filesystem(&f2fs_fs_type);
if (err)
goto free_shrinker;
err = f2fs_create_root_stats();
if (err)
goto free_filesystem;
f2fs_create_root_stats();
err = f2fs_init_post_read_processing();
if (err)
goto free_root_stats;
@ -3555,7 +3553,6 @@ static int __init init_f2fs_fs(void)
free_root_stats:
f2fs_destroy_root_stats();
free_filesystem:
unregister_filesystem(&f2fs_fs_type);
free_shrinker:
unregister_shrinker(&f2fs_shrinker_info);


@ -536,8 +536,8 @@ void kernfs_put(struct kernfs_node *kn)
security_release_secctx(kn->iattr->ia_secdata,
kn->iattr->ia_secdata_len);
simple_xattrs_free(&kn->iattr->xattrs);
kmem_cache_free(kernfs_iattrs_cache, kn->iattr);
}
kfree(kn->iattr);
spin_lock(&kernfs_idr_lock);
idr_remove(&root->ino_idr, kn->id.ino);
spin_unlock(&kernfs_idr_lock);


@ -42,7 +42,7 @@ static struct kernfs_iattrs *kernfs_iattrs(struct kernfs_node *kn)
if (kn->iattr)
goto out_unlock;
kn->iattr = kzalloc(sizeof(struct kernfs_iattrs), GFP_KERNEL);
kn->iattr = kmem_cache_zalloc(kernfs_iattrs_cache, GFP_KERNEL);
if (!kn->iattr)
goto out_unlock;
iattrs = &kn->iattr->ia_iattr;


@ -78,7 +78,7 @@ static inline struct kernfs_node *kernfs_dentry_node(struct dentry *dentry)
}
extern const struct super_operations kernfs_sops;
extern struct kmem_cache *kernfs_node_cache;
extern struct kmem_cache *kernfs_node_cache, *kernfs_iattrs_cache;
/*
* inode.c


@ -20,7 +20,7 @@
#include "kernfs-internal.h"
struct kmem_cache *kernfs_node_cache;
struct kmem_cache *kernfs_node_cache, *kernfs_iattrs_cache;
static int kernfs_sop_remount_fs(struct super_block *sb, int *flags, char *data)
{
@ -421,4 +421,9 @@ void __init kernfs_init(void)
0,
SLAB_PANIC | SLAB_TYPESAFE_BY_RCU,
NULL);
/* Creates slab cache for kernfs inode attributes */
kernfs_iattrs_cache = kmem_cache_create("kernfs_iattrs_cache",
sizeof(struct kernfs_iattrs),
0, SLAB_PANIC, NULL);
}


@ -17,7 +17,6 @@
#include <linux/seq_file.h>
#include "sysfs.h"
#include "../kernfs/kernfs-internal.h"
/*
* Determine ktype->sysfs_ops for the given kernfs_node. This function
@ -497,6 +496,7 @@ bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr)
void sysfs_remove_files(struct kobject *kobj, const struct attribute * const *ptr)
{
int i;
for (i = 0; ptr[i]; i++)
sysfs_remove_file(kobj, ptr[i]);
}


@ -14,6 +14,8 @@
#include <linux/types.h>
#include <linux/list.h>
#include <linux/numa.h>
#include <linux/device.h>
typedef u64 async_cookie_t;
typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@ -37,9 +39,83 @@ struct async_domain {
struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
.registered = 0 }
extern async_cookie_t async_schedule(async_func_t func, void *data);
extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
struct async_domain *domain);
async_cookie_t async_schedule_node(async_func_t func, void *data,
int node);
async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
int node,
struct async_domain *domain);
/**
* async_schedule - schedule a function for asynchronous execution
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
*
* Returns an async_cookie_t that may be used for checkpointing later.
* Note: This function may be called from atomic or non-atomic contexts.
*/
static inline async_cookie_t async_schedule(async_func_t func, void *data)
{
return async_schedule_node(func, data, NUMA_NO_NODE);
}
/**
* async_schedule_domain - schedule a function for asynchronous execution within a certain domain
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
* @domain: the domain
*
* Returns an async_cookie_t that may be used for checkpointing later.
* @domain may be used in the async_synchronize_*_domain() functions to
* wait within a certain synchronization domain rather than globally.
* Note: This function may be called from atomic or non-atomic contexts.
*/
static inline async_cookie_t
async_schedule_domain(async_func_t func, void *data,
struct async_domain *domain)
{
return async_schedule_node_domain(func, data, NUMA_NO_NODE, domain);
}
/**
* async_schedule_dev - A device specific version of async_schedule
* @func: function to execute asynchronously
* @dev: device argument to be passed to function
*
* Returns an async_cookie_t that may be used for checkpointing later.
* @dev is used both as the argument for the function and to provide NUMA
* context for where to run the function, so the work runs on CPUs close
* to the device whenever possible.
* Note: This function may be called from atomic or non-atomic contexts.
*/
static inline async_cookie_t
async_schedule_dev(async_func_t func, struct device *dev)
{
return async_schedule_node(func, dev, dev_to_node(dev));
}
/**
* async_schedule_dev_domain - A device specific version of async_schedule_domain
* @func: function to execute asynchronously
* @dev: device argument to be passed to function
* @domain: the domain
*
* Returns an async_cookie_t that may be used for checkpointing later.
* @dev is used both as the argument for the function and to provide NUMA
* context for where to run the function, so the work runs on CPUs close
* to the device whenever possible.
* @domain may be used in the async_synchronize_*_domain() functions to
* wait within a certain synchronization domain rather than globally.
* Note: This function may be called from atomic or non-atomic contexts.
*/
static inline async_cookie_t
async_schedule_dev_domain(async_func_t func, struct device *dev,
struct async_domain *domain)
{
return async_schedule_node_domain(func, dev, dev_to_node(dev), domain);
}
void async_unregister_domain(struct async_domain *domain);
extern void async_synchronize_full(void);
extern void async_synchronize_full_domain(struct async_domain *domain);
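Putting the new wrappers together, a probe path can hand slow initialization to the NUMA-aware machinery with one call; a sketch assuming a hypothetical demo driver:

static void demo_async_init(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	/* slow initialization, scheduled on CPUs near dev's NUMA node */
	dev_info(dev, "async init complete\n");
}

static int demo_probe(struct device *dev)
{
	async_schedule_dev(demo_async_init, dev);
	return 0;
}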


@ -98,7 +98,7 @@ void component_match_add_typed(struct device *master,
int (*compare_typed)(struct device *, int, void *), void *compare_data);
/**
* component_match_add - add a compent match
* component_match_add - add a component match entry
* @master: device with the aggregate driver
* @matchptr: pointer to the list of component matches
* @compare: compare function to match against all components


@ -341,6 +341,7 @@ struct device *driver_find_device(struct device_driver *drv,
struct device *start, void *data,
int (*match)(struct device *dev, void *data));
void driver_deferred_probe_add(struct device *dev);
int driver_deferred_probe_check_state(struct device *dev);
/**
@ -827,12 +828,14 @@ enum device_link_state {
* PM_RUNTIME: If set, the runtime PM framework will use this link.
* RPM_ACTIVE: Run pm_runtime_get_sync() on the supplier during link creation.
* AUTOREMOVE_SUPPLIER: Remove the link automatically on supplier driver unbind.
* AUTOPROBE_CONSUMER: Probe consumer driver automatically after supplier binds.
*/
#define DL_FLAG_STATELESS BIT(0)
#define DL_FLAG_AUTOREMOVE_CONSUMER BIT(1)
#define DL_FLAG_PM_RUNTIME BIT(2)
#define DL_FLAG_RPM_ACTIVE BIT(3)
#define DL_FLAG_AUTOREMOVE_SUPPLIER BIT(4)
#define DL_FLAG_AUTOPROBE_CONSUMER BIT(5)
/**
* struct device_link - Device link representation.
@ -845,6 +848,7 @@ enum device_link_state {
* @rpm_active: Whether or not the consumer device is runtime-PM-active.
* @kref: Count repeated addition of the same link.
* @rcu_head: An RCU head to use for deferred execution of SRCU callbacks.
* @supplier_preactivated: Supplier has been made active before consumer probe.
*/
struct device_link {
struct device *supplier;
@ -853,11 +857,12 @@ struct device_link {
struct list_head c_node;
enum device_link_state status;
u32 flags;
bool rpm_active;
refcount_t rpm_active;
struct kref kref;
#ifdef CONFIG_SRCU
struct rcu_head rcu_head;
#endif
bool supplier_preactivated; /* Owned by consumer probe. */
};
/**
@ -985,7 +990,7 @@ struct device {
void *platform_data; /* Platform specific data, device
core doesn't touch it */
void *driver_data; /* Driver data, set and get with
dev_set/get_drvdata */
dev_set_drvdata/dev_get_drvdata */
struct dev_links_info links;
struct dev_pm_info power;
struct dev_pm_domain *pm_domain;
@ -1035,7 +1040,6 @@ struct device {
spinlock_t devres_lock;
struct list_head devres_head;
struct klist_node knode_class;
struct class *class;
const struct attribute_group **groups; /* optional groups */
@ -1392,28 +1396,28 @@ void device_link_remove(void *consumer, struct device *supplier);
#ifdef CONFIG_PRINTK
__printf(3, 0)
__printf(3, 0) __cold
int dev_vprintk_emit(int level, const struct device *dev,
const char *fmt, va_list args);
__printf(3, 4)
__printf(3, 4) __cold
int dev_printk_emit(int level, const struct device *dev, const char *fmt, ...);
__printf(3, 4)
__printf(3, 4) __cold
void dev_printk(const char *level, const struct device *dev,
const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_emerg(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_alert(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_crit(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_err(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_warn(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_notice(const struct device *dev, const char *fmt, ...);
__printf(2, 3)
__printf(2, 3) __cold
void _dev_info(const struct device *dev, const char *fmt, ...);
#else


@ -21,12 +21,24 @@ struct ihex_binrec {
uint8_t data[0];
} __attribute__((packed));
static inline uint16_t ihex_binrec_size(const struct ihex_binrec *p)
{
return be16_to_cpu(p->len) + sizeof(*p);
}
/* Find the next record, taking into account the 4-byte alignment */
static inline const struct ihex_binrec *
__ihex_next_binrec(const struct ihex_binrec *rec)
{
const void *p = rec;
return p + ALIGN(ihex_binrec_size(rec), 4);
}
static inline const struct ihex_binrec *
ihex_next_binrec(const struct ihex_binrec *rec)
{
int next = ((be16_to_cpu(rec->len) + 5) & ~3) - 2;
rec = (void *)&rec->data[next];
rec = __ihex_next_binrec(rec);
return be16_to_cpu(rec->len) ? rec : NULL;
}
@ -34,18 +46,15 @@ ihex_next_binrec(const struct ihex_binrec *rec)
/* Check that ihex_next_binrec() won't take us off the end of the image... */
static inline int ihex_validate_fw(const struct firmware *fw)
{
const struct ihex_binrec *rec;
size_t ofs = 0;
const struct ihex_binrec *end, *rec;
while (ofs <= fw->size - sizeof(*rec)) {
rec = (void *)&fw->data[ofs];
rec = (const void *)fw->data;
end = (const void *)&fw->data[fw->size - sizeof(*end)];
for (; rec <= end; rec = __ihex_next_binrec(rec)) {
/* Zero length marks end of records */
if (!be16_to_cpu(rec->len))
if (rec == end && !be16_to_cpu(rec->len))
return 0;
/* Point to next record... */
ofs += (sizeof(*rec) + be16_to_cpu(rec->len) + 3) & ~3;
}
return -EINVAL;
}
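Together the helpers give a simple iteration idiom over a validated image; a sketch where write_chunk() stands in for a hypothetical consumer of each record:

const struct ihex_binrec *rec;

if (ihex_validate_fw(fw))
	return -EINVAL;

for (rec = (const void *)fw->data; rec; rec = ihex_next_binrec(rec))
	write_chunk(be32_to_cpu(rec->addr), rec->data,
		    be16_to_cpu(rec->len));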


@ -63,6 +63,7 @@ extern int platform_add_devices(struct platform_device **, int);
struct platform_device_info {
struct device *parent;
struct fwnode_handle *fwnode;
bool of_node_reused;
const char *name;
int id;


@ -443,6 +443,8 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask);
extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work);
extern bool queue_work_node(int node, struct workqueue_struct *wq,
struct work_struct *work);
extern bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
struct delayed_work *work, unsigned long delay);
extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,


@ -149,7 +149,25 @@ static void async_run_entry_fn(struct work_struct *work)
wake_up(&async_done);
}
static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
/**
* async_schedule_node_domain - NUMA specific version of async_schedule_domain
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
* @node: NUMA node that we want to schedule this on or close to
* @domain: the domain
*
* Returns an async_cookie_t that may be used for checkpointing later.
* @domain may be used in the async_synchronize_*_domain() functions to
* wait within a certain synchronization domain rather than globally.
*
* Note: This function may be called from atomic or non-atomic contexts.
*
* The node requested will be honored on a best effort basis. If the node
* has no CPUs associated with it then the work is distributed among all
* available CPUs.
*/
async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
int node, struct async_domain *domain)
{
struct async_entry *entry;
unsigned long flags;
@ -195,43 +213,30 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
current->flags |= PF_USED_ASYNC;
/* schedule for execution */
queue_work(system_unbound_wq, &entry->work);
queue_work_node(node, system_unbound_wq, &entry->work);
return newcookie;
}
EXPORT_SYMBOL_GPL(async_schedule_node_domain);
/**
* async_schedule - schedule a function for asynchronous execution
* async_schedule_node - NUMA specific version of async_schedule
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
* @node: NUMA node that we want to schedule this on or close to
*
* Returns an async_cookie_t that may be used for checkpointing later.
* Note: This function may be called from atomic or non-atomic contexts.
*/
async_cookie_t async_schedule(async_func_t func, void *data)
{
return __async_schedule(func, data, &async_dfl_domain);
}
EXPORT_SYMBOL_GPL(async_schedule);
/**
* async_schedule_domain - schedule a function for asynchronous execution within a certain domain
* @func: function to execute asynchronously
* @data: data pointer to pass to the function
* @domain: the domain
*
* Returns an async_cookie_t that may be used for checkpointing later.
* @domain may be used in the async_synchronize_*_domain() functions to
* wait within a certain synchronization domain rather than globally. A
* synchronization domain is specified via @domain. Note: This function
* may be called from atomic or non-atomic contexts.
* The node requested will be honored on a best effort basis. If the node
* has no CPUs associated with it then the work is distributed among all
* available CPUs.
*/
async_cookie_t async_schedule_domain(async_func_t func, void *data,
struct async_domain *domain)
async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
{
return __async_schedule(func, data, domain);
return async_schedule_node_domain(func, data, node, &async_dfl_domain);
}
EXPORT_SYMBOL_GPL(async_schedule_domain);
EXPORT_SYMBOL_GPL(async_schedule_node);
/**
* async_synchronize_full - synchronize all asynchronous function calls


@ -1514,6 +1514,90 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
}
EXPORT_SYMBOL(queue_work_on);
/**
* workqueue_select_cpu_near - Select a CPU based on NUMA node
* @node: NUMA node ID that we want to select a CPU from
*
* This function will attempt to find a "random" cpu available on a given
* node. If there are no CPUs available on the given node it will return
* WORK_CPU_UNBOUND indicating that we should just schedule to any
* available CPU if we need to schedule this work.
*/
static int workqueue_select_cpu_near(int node)
{
int cpu;
/* No point in doing this if NUMA isn't enabled for workqueues */
if (!wq_numa_enabled)
return WORK_CPU_UNBOUND;
/* Delay binding to CPU if node is not valid or online */
if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
return WORK_CPU_UNBOUND;
/* Use local node/cpu if we are already there */
cpu = raw_smp_processor_id();
if (node == cpu_to_node(cpu))
return cpu;
/* Use "random" otherwise know as "first" online CPU of node */
cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
/* If CPU is valid return that, otherwise just defer */
return cpu < nr_cpu_ids ? cpu : WORK_CPU_UNBOUND;
}
/**
* queue_work_node - queue work on a "random" cpu for a given NUMA node
* @node: NUMA node that we are targeting the work for
* @wq: workqueue to use
* @work: work to queue
*
* We queue the work to a "random" CPU within a given NUMA node. The basic
* idea here is to provide a way to associate work with a given
* NUMA node.
*
* This function will only make a best effort attempt at getting this onto
* the right NUMA node. If no node is requested or the requested node is
* offline then we just fall back to standard queue_work behavior.
*
* Currently the "random" CPU ends up being the first available CPU in the
* intersection of cpu_online_mask and the cpumask of the node, unless we
* are running on the node. In that case we just use the current CPU.
*
* Return: %false if @work was already on a queue, %true otherwise.
*/
bool queue_work_node(int node, struct workqueue_struct *wq,
struct work_struct *work)
{
unsigned long flags;
bool ret = false;
/*
* This current implementation is specific to unbound workqueues.
* Specifically we only return the first available CPU for a given
* node instead of cycling through individual CPUs within the node.
*
* If this is used with a per-cpu workqueue then the logic in
* workqueue_select_cpu_near would need to be updated to allow for
* some round robin type logic.
*/
WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND));
local_irq_save(flags);
if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
int cpu = workqueue_select_cpu_near(node);
__queue_work(cpu, wq, work);
ret = true;
}
local_irq_restore(flags);
return ret;
}
EXPORT_SYMBOL_GPL(queue_work_node);
void delayed_work_timer_fn(struct timer_list *t)
{
struct delayed_work *dwork = from_timer(dwork, t, timer);
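A usage sketch for the new entry point, queuing work near a device's NUMA node (the demo names are hypothetical; note the queue must be WQ_UNBOUND, which system_unbound_wq is):

static void demo_work_fn(struct work_struct *work)
{
	/* runs on a CPU of the requested node when one is online */
}

static DECLARE_WORK(demo_work, demo_work_fn);

/* somewhere with a struct device *dev in scope: */
queue_work_node(dev_to_node(dev), system_unbound_wq, &demo_work);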


@ -134,7 +134,6 @@ EXPORT_SYMBOL(devm_iounmap);
void __iomem *devm_ioremap_resource(struct device *dev, struct resource *res)
{
resource_size_t size;
const char *name;
void __iomem *dest_ptr;
BUG_ON(!dev);
@ -145,9 +144,8 @@ void __iomem *devm_ioremap_resource(struct device *dev, struct resource *res)
}
size = resource_size(res);
name = res->name ?: dev_name(dev);
if (!devm_request_mem_region(dev, res->start, size, name)) {
if (!devm_request_mem_region(dev, res->start, size, dev_name(dev))) {
dev_err(dev, "can't request region for resource %pR\n", res);
return IOMEM_ERR_PTR(-EBUSY);
}


@ -887,7 +887,7 @@ static void kset_release(struct kobject *kobj)
kfree(kset);
}
void kset_get_ownership(struct kobject *kobj, kuid_t *uid, kgid_t *gid)
static void kset_get_ownership(struct kobject *kobj, kuid_t *uid, kgid_t *gid)
{
if (kobj->parent)
kobject_get_ownership(kobj->parent, uid, gid);


@ -200,7 +200,7 @@ int kobject_synth_uevent(struct kobject *kobj, const char *buf, size_t count)
r = kobject_action_type(buf, count, &action, &action_args);
if (r) {
msg = "unknown uevent action string\n";
msg = "unknown uevent action string";
goto out;
}
@ -212,7 +212,7 @@ int kobject_synth_uevent(struct kobject *kobj, const char *buf, size_t count)
r = kobject_action_args(action_args,
count - (action_args - buf), &env);
if (r == -EINVAL) {
msg = "incorrect uevent action arguments\n";
msg = "incorrect uevent action arguments";
goto out;
}
@ -224,7 +224,7 @@ int kobject_synth_uevent(struct kobject *kobj, const char *buf, size_t count)
out:
if (r) {
devpath = kobject_get_path(kobj, GFP_KERNEL);
printk(KERN_WARNING "synth uevent: %s: %s",
pr_warn("synth uevent: %s: %s\n",
devpath ?: "unknown device",
msg ?: "failed to send uevent");
kfree(devpath);
@ -765,8 +765,7 @@ static int uevent_net_init(struct net *net)
ue_sk->sk = netlink_kernel_create(net, NETLINK_KOBJECT_UEVENT, &cfg);
if (!ue_sk->sk) {
printk(KERN_ERR
"kobject_uevent: unable to create netlink socket!\n");
pr_err("kobject_uevent: unable to create netlink socket!\n");
kfree(ue_sk);
return -ENODEV;
}


@ -24,6 +24,10 @@
#include <getopt.h>
#define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define __ALIGN_KERNEL(x, a) __ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
#define ALIGN(x, a) __ALIGN_KERNEL((x), (a))
struct ihex_binrec {
struct ihex_binrec *next; /* not part of the real data structure */
uint32_t addr;
@ -131,6 +135,7 @@ int main(int argc, char **argv)
static int process_ihex(uint8_t *data, ssize_t size)
{
struct ihex_binrec *record;
size_t record_size;
uint32_t offset = 0;
uint32_t data32;
uint8_t type, crc = 0, crcbyte = 0;
@ -157,12 +162,13 @@ next_record:
len <<= 8;
len += hex(data + i, &crc); i += 2;
}
record = malloc((sizeof (*record) + len + 3) & ~3);
record_size = ALIGN(sizeof(*record) + len, 4);
record = malloc(record_size);
if (!record) {
fprintf(stderr, "out of memory for records\n");
return -ENOMEM;
}
memset(record, 0, (sizeof(*record) + len + 3) & ~3);
memset(record, 0, record_size);
record->len = len;
/* now check if we have enough data to read everything */
@ -259,13 +265,18 @@ static void file_record(struct ihex_binrec *record)
*p = record;
}
static uint16_t ihex_binrec_size(struct ihex_binrec *p)
{
return p->len + sizeof(p->addr) + sizeof(p->len);
}
static int output_records(int outfd)
{
unsigned char zeroes[6] = {0, 0, 0, 0, 0, 0};
struct ihex_binrec *p = records;
while (p) {
uint16_t writelen = (p->len + 9) & ~3;
uint16_t writelen = ALIGN(ihex_binrec_size(p), 4);
p->addr = htonl(p->addr);
p->len = htons(p->len);


@ -1,6 +1,5 @@
CONFIG_TEST_FIRMWARE=y
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y


@ -155,8 +155,11 @@ read_firmwares()
{
for i in $(seq 0 3); do
config_set_read_fw_idx $i
# Verify the contents match
if ! diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
# Verify the contents are what we expect.
# -Z is required for now: md5sum on $FW and $DIR/read_firmware
# produces the same digest and cmp agrees, yet plain diff still
# reports a difference, so something is off.
if ! diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
echo "request #$i: firmware was not loaded" >&2
exit 1
fi
@ -168,7 +171,7 @@ read_firmwares_expect_nofile()
for i in $(seq 0 3); do
config_set_read_fw_idx $i
# Ensures contents differ
if diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
if diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
echo "request $i: file was not expected to match" >&2
exit 1
fi


@ -91,7 +91,7 @@ verify_reqs()
if [ "$TEST_REQS_FW_SYSFS_FALLBACK" = "yes" ]; then
if [ ! "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
echo "usermode helper disabled so ignoring test"
exit $ksft_skip
exit 0
fi
fi
}