driver core changes for 5.17-rc1
Here is the set of changes for the driver core for 5.17-rc1. Lots of
little things here, including:

 - kobj_type cleanups
 - auxiliary_bus documentation updates
 - auxiliary_device conversions for some drivers (relevant subsystems
   all have provided acks for these)
 - kernfs lock contention reduction for some workloads
 - other tiny cleanups and changes.

All of these have been in linux-next for a while with no reported issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCYd7deA8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+ym8ngCgw0ANwrRPE5b1dthEmfU2f8Knk5kAn0pHQv6R
VRZJypgNfU/Pt0ykstZD
=CO9J
-----END PGP SIGNATURE-----

Merge tag 'driver-core-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
 "Here is the set of changes for the driver core for 5.17-rc1. Lots of
  little things here, including:

   - kobj_type cleanups
   - auxiliary_bus documentation updates
   - auxiliary_device conversions for some drivers (relevant subsystems
     all have provided acks for these)
   - kernfs lock contention reduction for some workloads
   - other tiny cleanups and changes.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'driver-core-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (43 commits)
  kobject documentation: remove default_attrs information
  drivers/firmware: Add missing platform_device_put() in sysfb_create_simplefb
  debugfs: lockdown: Allow reading debugfs files that are not world readable
  driver core: Make bus notifiers in right order in really_probe()
  driver core: Move driver_sysfs_remove() after driver_sysfs_add()
  firmware: edd: remove empty default_attrs array
  firmware: dmi-sysfs: use default_groups in kobj_type
  qemu_fw_cfg: use default_groups in kobj_type
  firmware: memmap: use default_groups in kobj_type
  sh: sq: use default_groups in kobj_type
  headers/uninline: Uninline single-use function: kobject_has_children()
  devtmpfs: mount with noexec and nosuid
  driver core: Simplify async probe test code by using ktime_ms_delta()
  nilfs2: use default_groups in kobj_type
  kobject: remove kset from struct kset_uevent_ops callbacks
  driver core: make kobj_type constant.
  driver core: platform: document registration-failure requirement
  vdpa/mlx5: Use auxiliary_device driver data helpers
  net/mlx5e: Use auxiliary_device driver data helpers
  soundwire: intel: Use auxiliary_device driver data helpers
  ...
commit 6dc69d3d0d
@@ -666,3 +666,18 @@ Description:	Preferred MTE tag checking mode
		================ ==============================================

		See also: Documentation/arm64/memory-tagging-extension.rst

What:		/sys/devices/system/cpu/nohz_full
Date:		Apr 2015
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
		(RO) the list of CPUs that are in nohz_full mode.
		These CPUs are set by boot parameter "nohz_full=".

What:		/sys/devices/system/cpu/isolated
Date:		Apr 2015
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:
		(RO) the list of CPUs that are isolated and don't
		participate in load balancing. These CPUs are set by
		boot parameter "isolcpus=".
@@ -8,11 +8,9 @@ to /proc/cpuinfo output of some architectures. They reside in
Documentation/ABI/stable/sysfs-devices-system-cpu.

Architecture-neutral, drivers/base/topology.c, exports these attributes.
However, the book and drawer related sysfs files will only be created if
CONFIG_SCHED_BOOK and CONFIG_SCHED_DRAWER are selected, respectively.

CONFIG_SCHED_BOOK and CONFIG_SCHED_DRAWER are currently only used on s390,
where they reflect the cpu and cache hierarchy.
However the die, cluster, book, and drawer hierarchy related sysfs files will
only be created if an architecture provides the related macros as described
below.

For an architecture to support this feature, it must define some of
these macros in include/asm-XXX/topology.h::

@@ -43,15 +41,14 @@ not defined by include/asm-XXX/topology.h:
2) topology_die_id: -1
3) topology_cluster_id: -1
4) topology_core_id: 0
5) topology_sibling_cpumask: just the given CPU
6) topology_core_cpumask: just the given CPU
7) topology_cluster_cpumask: just the given CPU
8) topology_die_cpumask: just the given CPU

For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
default definitions for topology_book_id() and topology_book_cpumask().
For architectures that don't support drawers (CONFIG_SCHED_DRAWER) there are
no default definitions for topology_drawer_id() and topology_drawer_cpumask().
5) topology_book_id: -1
6) topology_drawer_id: -1
7) topology_sibling_cpumask: just the given CPU
8) topology_core_cpumask: just the given CPU
9) topology_cluster_cpumask: just the given CPU
10) topology_die_cpumask: just the given CPU
11) topology_book_cpumask: just the given CPU
12) topology_drawer_cpumask: just the given CPU

Additionally, CPU topology information is provided under
/sys/devices/system/cpu and includes these files. The internal
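As an illustration of the kind of macros the text above refers to, an architecture's topology.h might map them onto its per-CPU topology data roughly like this; the cpu_topology fields shown follow the generic arch_topology layout, so treat the exact names as an assumption rather than a requirement::

    #define topology_physical_package_id(cpu)   (cpu_topology[cpu].package_id)
    #define topology_core_id(cpu)               (cpu_topology[cpu].core_id)
    #define topology_core_cpumask(cpu)          (&cpu_topology[cpu].core_sibling)
    #define topology_sibling_cpumask(cpu)       (&cpu_topology[cpu].thread_sibling)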
@@ -118,7 +118,7 @@ Initialization of kobjects
Code which creates a kobject must, of course, initialize that object. Some
of the internal fields are setup with a (mandatory) call to kobject_init()::

    void kobject_init(struct kobject *kobj, struct kobj_type *ktype);
    void kobject_init(struct kobject *kobj, const struct kobj_type *ktype);

The ktype is required for a kobject to be created properly, as every kobject
must have an associated kobj_type. After calling kobject_init(), to

@@ -156,7 +156,7 @@ kobject_name()::
There is a helper function to both initialize and add the kobject to the
kernel at the same time, called surprisingly enough kobject_init_and_add()::

    int kobject_init_and_add(struct kobject *kobj, struct kobj_type *ktype,
    int kobject_init_and_add(struct kobject *kobj, const struct kobj_type *ktype,
                             struct kobject *parent, const char *fmt, ...);

The arguments are the same as the individual kobject_init() and

@@ -299,7 +299,6 @@ kobj_type::
    struct kobj_type {
        void (*release)(struct kobject *kobj);
        const struct sysfs_ops *sysfs_ops;
        struct attribute **default_attrs;
        const struct attribute_group **default_groups;
        const struct kobj_ns_type_operations *(*child_ns_type)(struct kobject *kobj);
        const void *(*namespace)(struct kobject *kobj);

@@ -313,10 +312,10 @@ call kobject_init() or kobject_init_and_add().

The release field in struct kobj_type is, of course, a pointer to the
release() method for this type of kobject. The other two fields (sysfs_ops
and default_attrs) control how objects of this type are represented in
and default_groups) control how objects of this type are represented in
sysfs; they are beyond the scope of this document.

The default_attrs pointer is a list of default attributes that will be
The default_groups pointer is a list of default attributes that will be
automatically created for any kobject that is registered with this ktype.
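For reference, the default_attrs -> default_groups conversion described above usually just wraps the existing attribute array with the ATTRIBUTE_GROUPS() helper. A minimal sketch (the "foo" names are hypothetical, not taken from any particular driver)::

    static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr,
                              char *buf)
    {
            return sysfs_emit(buf, "%d\n", 42);
    }
    static struct kobj_attribute value_attr = __ATTR_RO(value);

    static struct attribute *foo_attrs[] = {
            &value_attr.attr,
            NULL,
    };
    ATTRIBUTE_GROUPS(foo);          /* generates foo_groups from foo_attrs */

    static void foo_release(struct kobject *kobj)
    {
            /* free the object embedding the kobject in real code */
    }

    static struct kobj_type foo_ktype = {
            .release        = foo_release,
            .sysfs_ops      = &kobj_sysfs_ops,
            .default_groups = foo_groups,   /* was: .default_attrs = foo_attrs */
    };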
@@ -373,10 +372,9 @@ If a kset wishes to control the uevent operations of the kobjects
associated with it, it can use the struct kset_uevent_ops to handle it::

    struct kset_uevent_ops {
        int (* const filter)(struct kset *kset, struct kobject *kobj);
        const char *(* const name)(struct kset *kset, struct kobject *kobj);
        int (* const uevent)(struct kset *kset, struct kobject *kobj,
                             struct kobj_uevent_env *env);
        int (* const filter)(struct kobject *kobj);
        const char *(* const name)(struct kobject *kobj);
        int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env);
    };
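With the kset parameter gone from these callbacks, an implementation now derives any context it needs from the kobject itself. A minimal sketch of a filter with the new prototype, reusing the hypothetical foo_ktype from the earlier sketch::

    static int foo_uevent_filter(struct kobject *kobj)
    {
            const struct kobj_type *ktype = get_ktype(kobj);

            /* only emit uevents for kobjects of the expected type */
            return ktype == &foo_ktype;
    }

    static const struct kset_uevent_ops foo_uevent_ops = {
            .filter = foo_uevent_filter,
    };
    /* passed to kset_create_and_add() when creating the kset */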
@@ -6,231 +6,45 @@
Auxiliary Bus
=============

In some subsystems, the functionality of the core device (PCI/ACPI/other) is
too complex for a single device to be managed by a monolithic driver
(e.g. Sound Open Firmware), multiple devices might implement a common
intersection of functionality (e.g. NICs + RDMA), or a driver may want to
export an interface for another subsystem to drive (e.g. SIOV Physical Function
export Virtual Function management). A split of the functionality into child-
devices representing sub-domains of functionality makes it possible to
compartmentalize, layer, and distribute domain-specific concerns via a Linux
device-driver model.

An example for this kind of requirement is the audio subsystem where a single
IP is handling multiple entities such as HDMI, Soundwire, local devices such as
mics/speakers etc. The split for the core's functionality can be arbitrary or
be defined by the DSP firmware topology and include hooks for test/debug. This
allows for the audio core device to be minimal and focused on hardware-specific
control and communication.

Each auxiliary_device represents a part of its parent functionality. The
generic behavior can be extended and specialized as needed by encapsulating an
auxiliary_device within other domain-specific structures and the use of .ops
callbacks. Devices on the auxiliary bus do not share any structures and the use
of a communication channel with the parent is domain-specific.

Note that ops are intended as a way to augment instance behavior within a class
of auxiliary devices, it is not the mechanism for exporting common
infrastructure from the parent. Consider EXPORT_SYMBOL_NS() to convey
infrastructure from the parent module to the auxiliary module(s).

.. kernel-doc:: drivers/base/auxiliary.c
   :doc: PURPOSE

When Should the Auxiliary Bus Be Used
=====================================

The auxiliary bus is to be used when a driver and one or more kernel modules,
who share a common header file with the driver, need a mechanism to connect and
provide access to a shared object allocated by the auxiliary_device's
registering driver. The registering driver for the auxiliary_device(s) and the
kernel module(s) registering auxiliary_drivers can be from the same subsystem,
or from multiple subsystems.

.. kernel-doc:: drivers/base/auxiliary.c
   :doc: USAGE

The emphasis here is on a common generic interface that keeps subsystem
customization out of the bus infrastructure.

One example is a PCI network device that is RDMA-capable and exports a child
device to be driven by an auxiliary_driver in the RDMA subsystem. The PCI
driver allocates and registers an auxiliary_device for each physical
function on the NIC. The RDMA driver registers an auxiliary_driver that claims
each of these auxiliary_devices. This conveys data/ops published by the parent
PCI device/driver to the RDMA auxiliary_driver.

Auxiliary Device Creation
=========================

Another use case is for the PCI device to be split out into multiple sub
functions. For each sub function an auxiliary_device is created. A PCI sub
function driver binds to such devices that creates its own one or more class
devices. A PCI sub function auxiliary device is likely to be contained in a
struct with additional attributes such as user defined sub function number and
optional attributes such as resources and a link to the parent device. These
attributes could be used by systemd/udev; and hence should be initialized
before a driver binds to an auxiliary_device.

.. kernel-doc:: include/linux/auxiliary_bus.h
   :identifiers: auxiliary_device

A key requirement for utilizing the auxiliary bus is that there is no
dependency on a physical bus, device, register accesses or regmap support.
These individual devices split from the core cannot live on the platform bus as
they are not physical devices that are controlled by DT/ACPI. The same
argument applies for not using MFD in this scenario as MFD relies on individual
function devices being physical devices.

Auxiliary Device
================

An auxiliary_device represents a part of its parent device's functionality. It
is given a name that, combined with the registering driver's KBUILD_MODNAME,
creates a match_name that is used for driver binding, and an id that, combined
with the match_name, provides a unique name to register with the bus subsystem.

Registering an auxiliary_device is a two-step process. First call
auxiliary_device_init(), which checks several aspects of the auxiliary_device
struct and performs a device_initialize(). After this step completes, any
error state must have a call to auxiliary_device_uninit() in its resolution path.
The second step in registering an auxiliary_device is to perform a call to
auxiliary_device_add(), which sets the name of the device and adds the device to
the bus.

Unregistering an auxiliary_device is also a two-step process to mirror the
register process. First call auxiliary_device_delete(), then call
auxiliary_device_uninit().

.. code-block:: c

    struct auxiliary_device {
        struct device dev;
        const char *name;
        u32 id;
    };

If two auxiliary_devices both with a match_name "mod.foo" are registered onto
the bus, they must have unique id values (e.g. "x" and "y") so that the
registered devices' names are "mod.foo.x" and "mod.foo.y". If match_name + id
are not unique, then the device_add fails and generates an error message.

The auxiliary_device.dev.type.release or auxiliary_device.dev.release must be
populated with a non-NULL pointer to successfully register the auxiliary_device.

The auxiliary_device.dev.parent must also be populated.

.. kernel-doc:: drivers/base/auxiliary.c
   :identifiers: auxiliary_device_init __auxiliary_device_add
                 auxiliary_find_device
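Put together, the two-step register/unregister sequence described above typically looks like the following sketch; struct parent_obj, the "foo" name and the error-handling path are illustrative assumptions rather than code from a specific driver::

    struct parent_obj {
            struct auxiliary_device auxdev;
            void *shared_data;              /* defined in the shared header */
    };

    static void parent_auxdev_release(struct device *dev)
    {
            struct auxiliary_device *auxdev = to_auxiliary_dev(dev);

            kfree(container_of(auxdev, struct parent_obj, auxdev));
    }

    static int parent_register_auxdev(struct parent_obj *p, struct device *parent, u32 id)
    {
            struct auxiliary_device *auxdev = &p->auxdev;
            int ret;

            auxdev->name = "foo";           /* match_name becomes "<modname>.foo" */
            auxdev->id = id;
            auxdev->dev.parent = parent;
            auxdev->dev.release = parent_auxdev_release;

            ret = auxiliary_device_init(auxdev);
            if (ret)
                    return ret;

            ret = auxiliary_device_add(auxdev);
            if (ret)
                    auxiliary_device_uninit(auxdev);        /* required after a failed add */
            return ret;
    }

    /* Teardown mirrors registration:
     *      auxiliary_device_delete(&p->auxdev);
     *      auxiliary_device_uninit(&p->auxdev);
     */

The release() callback is where the memory containing the auxiliary_device is finally freed, which matches the lifespan rules in the next section.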
Auxiliary Device Memory Model and Lifespan
------------------------------------------

The registering driver is the entity that allocates memory for the
auxiliary_device and registers it on the auxiliary bus. It is important to note
that, as opposed to the platform bus, the registering driver is wholly
responsible for the management of the memory used for the driver object.

.. kernel-doc:: include/linux/auxiliary_bus.h
   :doc: DEVICE_LIFESPAN

A parent object, defined in the shared header file, contains the
auxiliary_device. It also contains a pointer to the shared object(s), which
also is defined in the shared header. Both the parent object and the shared
object(s) are allocated by the registering driver. This layout allows the
auxiliary_driver's registering module to perform a container_of() call to go
from the pointer to the auxiliary_device, that is passed during the call to the
auxiliary_driver's probe function, up to the parent object, and then have
access to the shared object(s).

The memory for the auxiliary_device is freed only in its release() callback
flow as defined by its registering driver.

The memory for the shared object(s) must have a lifespan equal to, or greater
than, the lifespan of the memory for the auxiliary_device. The auxiliary_driver
should only consider that this shared object is valid as long as the
auxiliary_device is still registered on the auxiliary bus. It is up to the
registering driver to manage (e.g. free or keep available) the memory for the
shared object beyond the life of the auxiliary_device.

The registering driver must unregister all auxiliary devices before its own
driver.remove() is completed.
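A sketch of the container_of() pattern described above, as it might look in an auxiliary_driver's probe()/remove(); the names reuse the hypothetical parent_obj layout from the registration sketch::

    static int foo_aux_probe(struct auxiliary_device *auxdev,
                             const struct auxiliary_device_id *id)
    {
            struct parent_obj *p = container_of(auxdev, struct parent_obj, auxdev);

            /* p->shared_data (declared in the shared header) is now reachable */
            auxiliary_set_drvdata(auxdev, p);
            return 0;
    }

    static void foo_aux_remove(struct auxiliary_device *auxdev)
    {
            struct parent_obj *p = auxiliary_get_drvdata(auxdev);

            /* tear down anything built on the shared object here */
            dev_dbg(&auxdev->dev, "releasing shared object %p\n", p->shared_data);
    }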
Auxiliary Drivers
=================

Auxiliary drivers follow the standard driver model convention, where
discovery/enumeration is handled by the core, and drivers
provide probe() and remove() methods. They support power management
and shutdown notifications using the standard conventions.

.. kernel-doc:: include/linux/auxiliary_bus.h
   :identifiers: auxiliary_driver module_auxiliary_driver

.. code-block:: c

    struct auxiliary_driver {
        int (*probe)(struct auxiliary_device *,
                     const struct auxiliary_device_id *id);
        void (*remove)(struct auxiliary_device *);
        void (*shutdown)(struct auxiliary_device *);
        int (*suspend)(struct auxiliary_device *, pm_message_t);
        int (*resume)(struct auxiliary_device *);
        struct device_driver driver;
        const struct auxiliary_device_id *id_table;
    };

Auxiliary drivers register themselves with the bus by calling
auxiliary_driver_register(). The id_table contains the match_names of auxiliary
devices that a driver can bind with.

.. kernel-doc:: drivers/base/auxiliary.c
   :identifiers: __auxiliary_driver_register auxiliary_driver_unregister
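For the common case of a module whose only job is to register a single auxiliary_driver, module_auxiliary_driver() removes the init/exit boilerplate. A minimal sketch continuing the hypothetical names used above::

    static const struct auxiliary_device_id foo_aux_id_table[] = {
            { .name = "foo_mod.foo" },
            {},
    };
    MODULE_DEVICE_TABLE(auxiliary, foo_aux_id_table);

    static struct auxiliary_driver foo_aux_driver = {
            .probe    = foo_aux_probe,
            .remove   = foo_aux_remove,
            .id_table = foo_aux_id_table,
    };
    module_auxiliary_driver(foo_aux_driver);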
Example Usage
=============

Auxiliary devices are created and registered by a subsystem-level core device
that needs to break up its functionality into smaller fragments. One way to
extend the scope of an auxiliary_device is to encapsulate it within a
domain-specific structure defined by the parent device. This structure contains
the auxiliary_device and any associated shared data/callbacks needed to
establish the connection with the parent.

.. kernel-doc:: drivers/base/auxiliary.c
   :doc: EXAMPLE

An example is:

.. code-block:: c

    struct foo {
        struct auxiliary_device auxdev;
        void (*connect)(struct auxiliary_device *auxdev);
        void (*disconnect)(struct auxiliary_device *auxdev);
        void *data;
    };

The parent device then registers the auxiliary_device by calling
auxiliary_device_init(), and then auxiliary_device_add(), with the pointer to
the auxdev member of the above structure. The parent provides a name for the
auxiliary_device that, combined with the parent's KBUILD_MODNAME, creates a
match_name that is used for matching and binding with a driver.

Whenever an auxiliary_driver is registered, based on the match_name, the
auxiliary_driver's probe() is invoked for the matching devices. The
auxiliary_driver can also be encapsulated inside custom drivers that make the
core device's functionality extensible by adding additional domain-specific ops
as follows:

.. code-block:: c

    struct my_ops {
        void (*send)(struct auxiliary_device *auxdev);
        void (*receive)(struct auxiliary_device *auxdev);
    };


    struct my_driver {
        struct auxiliary_driver auxiliary_drv;
        const struct my_ops ops;
    };

An example of this type of usage is:

.. code-block:: c

    const struct auxiliary_device_id my_auxiliary_id_table[] = {
        { .name = "foo_mod.foo_dev" },
        { },
    };

    const struct my_ops my_custom_ops = {
        .send = my_tx,
        .receive = my_rx,
    };

    const struct my_driver my_drv = {
        .auxiliary_drv = {
            .name = "myauxiliarydrv",
            .id_table = my_auxiliary_id_table,
            .probe = my_probe,
            .remove = my_remove,
            .shutdown = my_shutdown,
        },
        .ops = my_custom_ops,
    };
@@ -258,7 +258,6 @@ it is good practice to call kobject_put() to avoid errors.
    struct kobj_type {
        void (*release)(struct kobject *kobj);
        const struct sysfs_ops *sysfs_ops;
        struct attribute **default_attrs;
        const struct attribute_group **default_groups;
        const struct kobj_ns_type_operations *(*child_ns_type)(struct kobject *kobj);
        const void *(*namespace)(struct kobject *kobj);

@@ -271,10 +270,10 @@ a pointer to this structure must be specified when calling kobject_init() or
kobject_init_and_add().

Of course, the release field in the kobj_type structure is a pointer to the release()
method for this type of kobject. The other two fields (sysfs_ops and default_attrs) control
method for this type of kobject. The other two fields (sysfs_ops and default_groups) control
how objects of this type are represented in sysfs; they are beyond the scope of this document.

The default_attrs pointer is a list of default attributes that will be created automatically
The default_groups pointer is a list of default attributes that will be created automatically
for any kobject registered with this ktype.

@@ -325,10 +324,9 @@ ksets
it can use the kset_uevent_ops structure to handle them::

    struct kset_uevent_ops {
        int (* const filter)(struct kset *kset, struct kobject *kobj);
        const char *(* const name)(struct kset *kset, struct kobject *kobj);
        int (* const uevent)(struct kset *kset, struct kobject *kobj,
                             struct kobj_uevent_env *env);
        int (* const filter)(struct kobject *kobj);
        const char *(* const name)(struct kobject *kobj);
        int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env);
    };
MAINTAINERS
@@ -9819,10 +9819,9 @@ S:	Maintained
F:	drivers/mfd/intel_soc_pmic*
F:	include/linux/mfd/intel_soc_pmic*

INTEL PMT DRIVER
M:	"David E. Box" <david.e.box@linux.intel.com>
S:	Maintained
F:	drivers/mfd/intel_pmt.c
INTEL PMT DRIVERS
M:	David E. Box <david.e.box@linux.intel.com>
S:	Supported
F:	drivers/platform/x86/intel/pmt/

INTEL PRO/WIRELESS 2100, 2200BG, 2915ABG NETWORK CONNECTION SUPPORT

@@ -9889,6 +9888,11 @@ L:	platform-driver-x86@vger.kernel.org
S:	Maintained
F:	drivers/platform/x86/intel/uncore-frequency.c

INTEL VENDOR SPECIFIC EXTENDED CAPABILITIES DRIVER
M:	David E. Box <david.e.box@linux.intel.com>
S:	Supported
F:	drivers/platform/x86/intel/vsec.*

INTEL VIRTUAL BUTTON DRIVER
M:	AceLan Kao <acelan.kao@canonical.com>
L:	platform-driver-x86@vger.kernel.org
|
@@ -324,6 +324,7 @@ static struct attribute *sq_sysfs_attrs[] = {
	&mapping_attr.attr,
	NULL,
};
ATTRIBUTE_GROUPS(sq_sysfs);

static const struct sysfs_ops sq_sysfs_ops = {
	.show = sq_sysfs_show,

@@ -332,7 +333,7 @@ static const struct sysfs_ops sq_sysfs_ops = {

static struct kobj_type ktype_percpu_entry = {
	.sysfs_ops	= &sq_sysfs_ops,
	.default_attrs	= sq_sysfs_attrs,
	.default_groups	= sq_sysfs_groups,
};

static int sq_dev_add(struct device *dev, struct subsys_interface *sif)
@@ -62,6 +62,17 @@ config DEVTMPFS_MOUNT
	  rescue mode with init=/bin/sh, even when the /dev directory
	  on the rootfs is completely empty.

config DEVTMPFS_SAFE
	bool "Use nosuid,noexec mount options on devtmpfs"
	depends on DEVTMPFS
	help
	  This instructs the kernel to include the MS_NOEXEC and MS_NOSUID mount
	  flags when mounting devtmpfs.

	  Notice: If enabled, things like /dev/mem cannot be mmapped
	  with the PROT_EXEC flag. This can break, for example, non-KMS
	  video drivers.

config STANDALONE
	bool "Select only drivers that don't need compile-time external firmware"
	default y
@ -17,6 +17,147 @@
|
|||
#include <linux/auxiliary_bus.h>
|
||||
#include "base.h"
|
||||
|
||||
/**
|
||||
* DOC: PURPOSE
|
||||
*
|
||||
* In some subsystems, the functionality of the core device (PCI/ACPI/other) is
|
||||
* too complex for a single device to be managed by a monolithic driver (e.g.
|
||||
* Sound Open Firmware), multiple devices might implement a common intersection
|
||||
* of functionality (e.g. NICs + RDMA), or a driver may want to export an
|
||||
* interface for another subsystem to drive (e.g. SIOV Physical Function export
|
||||
* Virtual Function management). A split of the functionality into child-
|
||||
* devices representing sub-domains of functionality makes it possible to
|
||||
* compartmentalize, layer, and distribute domain-specific concerns via a Linux
|
||||
* device-driver model.
|
||||
*
|
||||
* An example for this kind of requirement is the audio subsystem where a
|
||||
* single IP is handling multiple entities such as HDMI, Soundwire, local
|
||||
* devices such as mics/speakers etc. The split for the core's functionality
|
||||
* can be arbitrary or be defined by the DSP firmware topology and include
|
||||
* hooks for test/debug. This allows for the audio core device to be minimal
|
||||
* and focused on hardware-specific control and communication.
|
||||
*
|
||||
* Each auxiliary_device represents a part of its parent functionality. The
|
||||
* generic behavior can be extended and specialized as needed by encapsulating
|
||||
* an auxiliary_device within other domain-specific structures and the use of
|
||||
* .ops callbacks. Devices on the auxiliary bus do not share any structures and
|
||||
* the use of a communication channel with the parent is domain-specific.
|
||||
*
|
||||
* Note that ops are intended as a way to augment instance behavior within a
|
||||
* class of auxiliary devices, it is not the mechanism for exporting common
|
||||
* infrastructure from the parent. Consider EXPORT_SYMBOL_NS() to convey
|
||||
* infrastructure from the parent module to the auxiliary module(s).
|
||||
*/
|
||||
|
||||
/**
|
||||
* DOC: USAGE
|
||||
*
|
||||
* The auxiliary bus is to be used when a driver and one or more kernel
|
||||
* modules, who share a common header file with the driver, need a mechanism to
|
||||
* connect and provide access to a shared object allocated by the
|
||||
* auxiliary_device's registering driver. The registering driver for the
|
||||
* auxiliary_device(s) and the kernel module(s) registering auxiliary_drivers
|
||||
* can be from the same subsystem, or from multiple subsystems.
|
||||
*
|
||||
* The emphasis here is on a common generic interface that keeps subsystem
|
||||
* customization out of the bus infrastructure.
|
||||
*
|
||||
* One example is a PCI network device that is RDMA-capable and exports a child
|
||||
* device to be driven by an auxiliary_driver in the RDMA subsystem. The PCI
|
||||
* driver allocates and registers an auxiliary_device for each physical
|
||||
* function on the NIC. The RDMA driver registers an auxiliary_driver that
|
||||
* claims each of these auxiliary_devices. This conveys data/ops published by
|
||||
* the parent PCI device/driver to the RDMA auxiliary_driver.
|
||||
*
|
||||
* Another use case is for the PCI device to be split out into multiple sub
|
||||
* functions. For each sub function an auxiliary_device is created. A PCI sub
|
||||
* function driver binds to such devices that creates its own one or more class
|
||||
* devices. A PCI sub function auxiliary device is likely to be contained in a
|
||||
* struct with additional attributes such as user defined sub function number
|
||||
* and optional attributes such as resources and a link to the parent device.
|
||||
* These attributes could be used by systemd/udev; and hence should be
|
||||
* initialized before a driver binds to an auxiliary_device.
|
||||
*
|
||||
* A key requirement for utilizing the auxiliary bus is that there is no
|
||||
* dependency on a physical bus, device, register accesses or regmap support.
|
||||
* These individual devices split from the core cannot live on the platform bus
|
||||
* as they are not physical devices that are controlled by DT/ACPI. The same
|
||||
* argument applies for not using MFD in this scenario as MFD relies on
|
||||
* individual function devices being physical devices.
|
||||
*/
|
||||
|
||||
/**
|
||||
* DOC: EXAMPLE
|
||||
*
|
||||
* Auxiliary devices are created and registered by a subsystem-level core
|
||||
* device that needs to break up its functionality into smaller fragments. One
|
||||
* way to extend the scope of an auxiliary_device is to encapsulate it within a
|
||||
* domain-specific structure defined by the parent device. This structure
|
||||
* contains the auxiliary_device and any associated shared data/callbacks
|
||||
* needed to establish the connection with the parent.
|
||||
*
|
||||
* An example is:
|
||||
*
|
||||
* .. code-block:: c
|
||||
*
|
||||
* struct foo {
|
||||
* struct auxiliary_device auxdev;
|
||||
* void (*connect)(struct auxiliary_device *auxdev);
|
||||
* void (*disconnect)(struct auxiliary_device *auxdev);
|
||||
* void *data;
|
||||
* };
|
||||
*
|
||||
* The parent device then registers the auxiliary_device by calling
|
||||
* auxiliary_device_init(), and then auxiliary_device_add(), with the pointer
|
||||
* to the auxdev member of the above structure. The parent provides a name for
|
||||
* the auxiliary_device that, combined with the parent's KBUILD_MODNAME,
|
||||
* creates a match_name that is used for matching and binding with a driver.
|
||||
*
|
||||
* Whenever an auxiliary_driver is registered, based on the match_name, the
|
||||
* auxiliary_driver's probe() is invoked for the matching devices. The
|
||||
* auxiliary_driver can also be encapsulated inside custom drivers that make
|
||||
* the core device's functionality extensible by adding additional
|
||||
* domain-specific ops as follows:
|
||||
*
|
||||
* .. code-block:: c
|
||||
*
|
||||
* struct my_ops {
|
||||
* void (*send)(struct auxiliary_device *auxdev);
|
||||
* void (*receive)(struct auxiliary_device *auxdev);
|
||||
* };
|
||||
*
|
||||
*
|
||||
* struct my_driver {
|
||||
* struct auxiliary_driver auxiliary_drv;
|
||||
* const struct my_ops ops;
|
||||
* };
|
||||
*
|
||||
* An example of this type of usage is:
|
||||
*
|
||||
* .. code-block:: c
|
||||
*
|
||||
* const struct auxiliary_device_id my_auxiliary_id_table[] = {
|
||||
* { .name = "foo_mod.foo_dev" },
|
||||
* { },
|
||||
* };
|
||||
*
|
||||
* const struct my_ops my_custom_ops = {
|
||||
* .send = my_tx,
|
||||
* .receive = my_rx,
|
||||
* };
|
||||
*
|
||||
* const struct my_driver my_drv = {
|
||||
* .auxiliary_drv = {
|
||||
* .name = "myauxiliarydrv",
|
||||
* .id_table = my_auxiliary_id_table,
|
||||
* .probe = my_probe,
|
||||
* .remove = my_remove,
|
||||
* .shutdown = my_shutdown,
|
||||
* },
|
||||
* .ops = my_custom_ops,
|
||||
* };
|
||||
*/
|
||||
|
||||
static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
|
||||
const struct auxiliary_device *auxdev)
|
||||
{
|
||||
|
@@ -117,7 +258,7 @@ static struct bus_type auxiliary_bus_type = {
 * auxiliary_device_init - check auxiliary_device and initialize
 * @auxdev: auxiliary device struct
 *
 * This is the first step in the two-step process to register an
 * This is the second step in the three-step process to register an
 * auxiliary_device.
 *
 * When this function returns an error code, then the device_initialize will

@@ -155,7 +296,7 @@ EXPORT_SYMBOL_GPL(auxiliary_device_init);
 * @auxdev: auxiliary bus device to add to the bus
 * @modname: name of the parent device's driver module
 *
 * This is the second step in the two-step process to register an
 * This is the third step in the three-step process to register an
 * auxiliary_device.
 *
 * This function must be called after a successful call to

@@ -202,6 +343,8 @@ EXPORT_SYMBOL_GPL(__auxiliary_device_add);
 * This function returns a reference to a device that is 'found'
 * for later use, as determined by the @match callback.
 *
 * The reference returned should be released with put_device().
 *
 * The callback should return 0 if the device doesn't match and non-zero
 * if it does. If the callback returns non-zero, this function will
 * return to the caller and not iterate over any more devices.

@@ -225,6 +368,11 @@ EXPORT_SYMBOL_GPL(auxiliary_find_device);
 * @auxdrv: auxiliary_driver structure
 * @owner: owning module/driver
 * @modname: KBUILD_MODNAME for parent driver
 *
 * The expectation is that users will call the "auxiliary_driver_register"
 * macro so that the caller's KBUILD_MODNAME is automatically inserted for the
 * modname parameter. Only if a user requires a custom name would this version
 * be called directly.
 */
int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
				struct module *owner, const char *modname)
|
|
@ -163,9 +163,9 @@ static struct kobj_type bus_ktype = {
|
|||
.release = bus_release,
|
||||
};
|
||||
|
||||
static int bus_uevent_filter(struct kset *kset, struct kobject *kobj)
|
||||
static int bus_uevent_filter(struct kobject *kobj)
|
||||
{
|
||||
struct kobj_type *ktype = get_ktype(kobj);
|
||||
const struct kobj_type *ktype = get_ktype(kobj);
|
||||
|
||||
if (ktype == &bus_ktype)
|
||||
return 1;
|
||||
|
|
|
@ -2260,9 +2260,9 @@ static struct kobj_type device_ktype = {
|
|||
};
|
||||
|
||||
|
||||
static int dev_uevent_filter(struct kset *kset, struct kobject *kobj)
|
||||
static int dev_uevent_filter(struct kobject *kobj)
|
||||
{
|
||||
struct kobj_type *ktype = get_ktype(kobj);
|
||||
const struct kobj_type *ktype = get_ktype(kobj);
|
||||
|
||||
if (ktype == &device_ktype) {
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
|
@ -2274,7 +2274,7 @@ static int dev_uevent_filter(struct kset *kset, struct kobject *kobj)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static const char *dev_uevent_name(struct kset *kset, struct kobject *kobj)
|
||||
static const char *dev_uevent_name(struct kobject *kobj)
|
||||
{
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
|
||||
|
@ -2285,8 +2285,7 @@ static const char *dev_uevent_name(struct kset *kset, struct kobject *kobj)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static int dev_uevent(struct kset *kset, struct kobject *kobj,
|
||||
struct kobj_uevent_env *env)
|
||||
static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env)
|
||||
{
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
int retval = 0;
|
||||
|
@ -2381,7 +2380,7 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
|
|||
|
||||
/* respect filter */
|
||||
if (kset->uevent_ops && kset->uevent_ops->filter)
|
||||
if (!kset->uevent_ops->filter(kset, &dev->kobj))
|
||||
if (!kset->uevent_ops->filter(&dev->kobj))
|
||||
goto out;
|
||||
|
||||
env = kzalloc(sizeof(struct kobj_uevent_env), GFP_KERNEL);
|
||||
|
@ -2389,7 +2388,7 @@ static ssize_t uevent_show(struct device *dev, struct device_attribute *attr,
|
|||
return -ENOMEM;
|
||||
|
||||
/* let the kset specific function add its keys */
|
||||
retval = kset->uevent_ops->uevent(kset, &dev->kobj, env);
|
||||
retval = kset->uevent_ops->uevent(&dev->kobj, env);
|
||||
if (retval)
|
||||
goto out;
|
||||
|
||||
|
@ -3028,6 +3027,23 @@ static inline struct kobject *get_glue_dir(struct device *dev)
|
|||
return dev->kobj.parent;
|
||||
}
|
||||
|
||||
/**
|
||||
* kobject_has_children - Returns whether a kobject has children.
|
||||
* @kobj: the object to test
|
||||
*
|
||||
* This will return whether a kobject has other kobjects as children.
|
||||
*
|
||||
* It does NOT account for the presence of attribute files, only sub
|
||||
* directories. It also assumes there is no concurrent addition or
|
||||
* removal of such children, and thus relies on external locking.
|
||||
*/
|
||||
static inline bool kobject_has_children(struct kobject *kobj)
|
||||
{
|
||||
WARN_ON_ONCE(kref_read(&kobj->kref) == 0);
|
||||
|
||||
return kobj->sd && kobj->sd->dir.subdirs;
|
||||
}
|
||||
|
||||
/*
|
||||
* make sure cleaning up dir as the last step, we need to make
|
||||
* sure .release handler of kobject is run with holding the
|
||||
|
|
|
@@ -577,14 +577,14 @@ re_probe:
	if (dev->bus->dma_configure) {
		ret = dev->bus->dma_configure(dev);
		if (ret)
			goto probe_failed;
			goto pinctrl_bind_failed;
	}

	ret = driver_sysfs_add(dev);
	if (ret) {
		pr_err("%s: driver_sysfs_add(%s) failed\n",
		       __func__, dev_name(dev));
		goto probe_failed;
		goto sysfs_failed;
	}

	if (dev->pm_domain && dev->pm_domain->activate) {

@@ -657,6 +657,8 @@ dev_groups_failed:
	else if (drv->remove)
		drv->remove(dev);
probe_failed:
	driver_sysfs_remove(dev);
sysfs_failed:
	if (dev->bus)
		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);

@@ -666,7 +668,6 @@ pinctrl_bind_failed:
	arch_teardown_dma_ops(dev);
	kfree(dev->dma_range_map);
	dev->dma_range_map = NULL;
	driver_sysfs_remove(dev);
	dev->driver = NULL;
	dev_set_drvdata(dev, NULL);
	if (dev->pm_domain && dev->pm_domain->dismiss)
|
@@ -29,6 +29,12 @@
#include <uapi/linux/mount.h>
#include "base.h"

#ifdef CONFIG_DEVTMPFS_SAFE
#define DEVTMPFS_MFLAGS       (MS_SILENT | MS_NOEXEC | MS_NOSUID)
#else
#define DEVTMPFS_MFLAGS       (MS_SILENT)
#endif

static struct task_struct *thread;

static int __initdata mount_dev = IS_ENABLED(CONFIG_DEVTMPFS_MOUNT);

@@ -363,7 +369,7 @@ int __init devtmpfs_mount(void)
	if (!thread)
		return 0;

	err = init_mount("devtmpfs", "dev", "devtmpfs", MS_SILENT, NULL);
	err = init_mount("devtmpfs", "dev", "devtmpfs", DEVTMPFS_MFLAGS, NULL);
	if (err)
		printk(KERN_INFO "devtmpfs: error mounting %i\n", err);
	else

@@ -412,7 +418,7 @@ static noinline int __init devtmpfs_setup(void *p)
	err = ksys_unshare(CLONE_NEWNS);
	if (err)
		goto out;
	err = init_mount("devtmpfs", "/", "devtmpfs", MS_SILENT, NULL);
	err = init_mount("devtmpfs", "/", "devtmpfs", DEVTMPFS_MFLAGS, NULL);
	if (err)
		goto out;
	init_chdir("/.."); /* will traverse into overmounted root */
|
@@ -258,8 +258,9 @@ int platform_get_irq(struct platform_device *dev, unsigned int num)
	int ret;

	ret = platform_get_irq_optional(dev, num);
	if (ret < 0 && ret != -EPROBE_DEFER)
		dev_err(&dev->dev, "IRQ index %u not found\n", num);
	if (ret < 0)
		return dev_err_probe(&dev->dev, ret,
				     "IRQ index %u not found\n", num);

	return ret;
}

@@ -762,6 +763,10 @@ EXPORT_SYMBOL_GPL(platform_device_del);
/**
 * platform_device_register - add a platform-level device
 * @pdev: platform device we're adding
 *
 * NOTE: _Never_ directly free @pdev after calling this function, even if it
 * returned an error! Always use platform_device_put() to give up the
 * reference initialised in this function instead.
 */
int platform_device_register(struct platform_device *pdev)
{
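The same reference rule is what the sysfb_create_simplefb() fix later in this series applies after a failed platform_device_add_resources()/platform_device_add_data(). A condensed sketch of the pattern (device name is hypothetical):

	static int example_register(void)
	{
		struct platform_device *pdev;
		int ret;

		pdev = platform_device_alloc("example-dev", PLATFORM_DEVID_NONE);
		if (!pdev)
			return -ENOMEM;

		ret = platform_device_add(pdev);
		if (ret)
			platform_device_put(pdev);	/* drop the reference, never kfree(pdev) */
		return ret;
	}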
@ -478,8 +478,17 @@ int fwnode_property_get_reference_args(const struct fwnode_handle *fwnode,
|
|||
unsigned int nargs, unsigned int index,
|
||||
struct fwnode_reference_args *args)
|
||||
{
|
||||
return fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
|
||||
nargs, index, args);
|
||||
int ret;
|
||||
|
||||
ret = fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
|
||||
nargs, index, args);
|
||||
|
||||
if (ret < 0 && !IS_ERR_OR_NULL(fwnode) &&
|
||||
!IS_ERR_OR_NULL(fwnode->secondary))
|
||||
ret = fwnode_call_int_op(fwnode->secondary, get_reference_args,
|
||||
prop, nargs_prop, nargs, index, args);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args);
|
||||
|
||||
|
|
|
@ -104,7 +104,7 @@ static int __init test_async_probe_init(void)
|
|||
struct platform_device **pdev = NULL;
|
||||
int async_id = 0, sync_id = 0;
|
||||
unsigned long long duration;
|
||||
ktime_t calltime, delta;
|
||||
ktime_t calltime;
|
||||
int err, nid, cpu;
|
||||
|
||||
pr_info("registering first set of asynchronous devices...\n");
|
||||
|
@ -133,8 +133,7 @@ static int __init test_async_probe_init(void)
|
|||
goto err_unregister_async_devs;
|
||||
}
|
||||
|
||||
delta = ktime_sub(ktime_get(), calltime);
|
||||
duration = (unsigned long long) ktime_to_ms(delta);
|
||||
duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime);
|
||||
pr_info("registration took %lld msecs\n", duration);
|
||||
if (duration > TEST_PROBE_THRESHOLD) {
|
||||
pr_err("test failed: probe took too long\n");
|
||||
|
@ -161,8 +160,7 @@ static int __init test_async_probe_init(void)
|
|||
async_id++;
|
||||
}
|
||||
|
||||
delta = ktime_sub(ktime_get(), calltime);
|
||||
duration = (unsigned long long) ktime_to_ms(delta);
|
||||
duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime);
|
||||
dev_info(&(*pdev)->dev,
|
||||
"registration took %lld msecs\n", duration);
|
||||
if (duration > TEST_PROBE_THRESHOLD) {
|
||||
|
@ -197,8 +195,7 @@ static int __init test_async_probe_init(void)
|
|||
goto err_unregister_sync_devs;
|
||||
}
|
||||
|
||||
delta = ktime_sub(ktime_get(), calltime);
|
||||
duration = (unsigned long long) ktime_to_ms(delta);
|
||||
duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime);
|
||||
pr_info("registration took %lld msecs\n", duration);
|
||||
if (duration < TEST_PROBE_THRESHOLD) {
|
||||
dev_err(&(*pdev)->dev,
|
||||
|
@ -223,8 +220,7 @@ static int __init test_async_probe_init(void)
|
|||
|
||||
sync_id++;
|
||||
|
||||
delta = ktime_sub(ktime_get(), calltime);
|
||||
duration = (unsigned long long) ktime_to_ms(delta);
|
||||
duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime);
|
||||
dev_info(&(*pdev)->dev,
|
||||
"registration took %lld msecs\n", duration);
|
||||
if (duration < TEST_PROBE_THRESHOLD) {
|
||||
|
|
|
@ -45,11 +45,15 @@ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
|
|||
define_id_show_func(physical_package_id);
|
||||
static DEVICE_ATTR_RO(physical_package_id);
|
||||
|
||||
#ifdef TOPOLOGY_DIE_SYSFS
|
||||
define_id_show_func(die_id);
|
||||
static DEVICE_ATTR_RO(die_id);
|
||||
#endif
|
||||
|
||||
#ifdef TOPOLOGY_CLUSTER_SYSFS
|
||||
define_id_show_func(cluster_id);
|
||||
static DEVICE_ATTR_RO(cluster_id);
|
||||
#endif
|
||||
|
||||
define_id_show_func(core_id);
|
||||
static DEVICE_ATTR_RO(core_id);
|
||||
|
@ -66,19 +70,23 @@ define_siblings_read_func(core_siblings, core_cpumask);
|
|||
static BIN_ATTR_RO(core_siblings, 0);
|
||||
static BIN_ATTR_RO(core_siblings_list, 0);
|
||||
|
||||
#ifdef TOPOLOGY_CLUSTER_SYSFS
|
||||
define_siblings_read_func(cluster_cpus, cluster_cpumask);
|
||||
static BIN_ATTR_RO(cluster_cpus, 0);
|
||||
static BIN_ATTR_RO(cluster_cpus_list, 0);
|
||||
#endif
|
||||
|
||||
#ifdef TOPOLOGY_DIE_SYSFS
|
||||
define_siblings_read_func(die_cpus, die_cpumask);
|
||||
static BIN_ATTR_RO(die_cpus, 0);
|
||||
static BIN_ATTR_RO(die_cpus_list, 0);
|
||||
#endif
|
||||
|
||||
define_siblings_read_func(package_cpus, core_cpumask);
|
||||
static BIN_ATTR_RO(package_cpus, 0);
|
||||
static BIN_ATTR_RO(package_cpus_list, 0);
|
||||
|
||||
#ifdef CONFIG_SCHED_BOOK
|
||||
#ifdef TOPOLOGY_BOOK_SYSFS
|
||||
define_id_show_func(book_id);
|
||||
static DEVICE_ATTR_RO(book_id);
|
||||
define_siblings_read_func(book_siblings, book_cpumask);
|
||||
|
@ -86,7 +94,7 @@ static BIN_ATTR_RO(book_siblings, 0);
|
|||
static BIN_ATTR_RO(book_siblings_list, 0);
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_SCHED_DRAWER
|
||||
#ifdef TOPOLOGY_DRAWER_SYSFS
|
||||
define_id_show_func(drawer_id);
|
||||
static DEVICE_ATTR_RO(drawer_id);
|
||||
define_siblings_read_func(drawer_siblings, drawer_cpumask);
|
||||
|
@ -101,17 +109,21 @@ static struct bin_attribute *bin_attrs[] = {
|
|||
&bin_attr_thread_siblings_list,
|
||||
&bin_attr_core_siblings,
|
||||
&bin_attr_core_siblings_list,
|
||||
#ifdef TOPOLOGY_CLUSTER_SYSFS
|
||||
&bin_attr_cluster_cpus,
|
||||
&bin_attr_cluster_cpus_list,
|
||||
#endif
|
||||
#ifdef TOPOLOGY_DIE_SYSFS
|
||||
&bin_attr_die_cpus,
|
||||
&bin_attr_die_cpus_list,
|
||||
#endif
|
||||
&bin_attr_package_cpus,
|
||||
&bin_attr_package_cpus_list,
|
||||
#ifdef CONFIG_SCHED_BOOK
|
||||
#ifdef TOPOLOGY_BOOK_SYSFS
|
||||
&bin_attr_book_siblings,
|
||||
&bin_attr_book_siblings_list,
|
||||
#endif
|
||||
#ifdef CONFIG_SCHED_DRAWER
|
||||
#ifdef TOPOLOGY_DRAWER_SYSFS
|
||||
&bin_attr_drawer_siblings,
|
||||
&bin_attr_drawer_siblings_list,
|
||||
#endif
|
||||
|
@ -120,13 +132,17 @@ static struct bin_attribute *bin_attrs[] = {
|
|||
|
||||
static struct attribute *default_attrs[] = {
|
||||
&dev_attr_physical_package_id.attr,
|
||||
#ifdef TOPOLOGY_DIE_SYSFS
|
||||
&dev_attr_die_id.attr,
|
||||
#endif
|
||||
#ifdef TOPOLOGY_CLUSTER_SYSFS
|
||||
&dev_attr_cluster_id.attr,
|
||||
#endif
|
||||
&dev_attr_core_id.attr,
|
||||
#ifdef CONFIG_SCHED_BOOK
|
||||
#ifdef TOPOLOGY_BOOK_SYSFS
|
||||
&dev_attr_book_id.attr,
|
||||
#endif
|
||||
#ifdef CONFIG_SCHED_DRAWER
|
||||
#ifdef TOPOLOGY_DRAWER_SYSFS
|
||||
&dev_attr_drawer_id.attr,
|
||||
#endif
|
||||
NULL
|
||||
|
|
|
@ -132,7 +132,7 @@ void dma_buf_stats_teardown(struct dma_buf *dmabuf)
|
|||
|
||||
|
||||
/* Statistics files do not need to send uevents. */
|
||||
static int dmabuf_sysfs_uevent_filter(struct kset *kset, struct kobject *kobj)
|
||||
static int dmabuf_sysfs_uevent_filter(struct kobject *kobj)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -302,12 +302,12 @@ static struct attribute *dmi_sysfs_sel_attrs[] = {
|
|||
&dmi_sysfs_attr_sel_per_log_type_descriptor_length.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
ATTRIBUTE_GROUPS(dmi_sysfs_sel);
|
||||
|
||||
static struct kobj_type dmi_system_event_log_ktype = {
|
||||
.release = dmi_entry_free,
|
||||
.sysfs_ops = &dmi_sysfs_specialize_attr_ops,
|
||||
.default_attrs = dmi_sysfs_sel_attrs,
|
||||
.default_groups = dmi_sysfs_sel_groups,
|
||||
};
|
||||
|
||||
typedef u8 (*sel_io_reader)(const struct dmi_system_event_log *sel,
|
||||
|
@ -518,6 +518,7 @@ static struct attribute *dmi_sysfs_entry_attrs[] = {
|
|||
&dmi_sysfs_attr_entry_position.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(dmi_sysfs_entry);
|
||||
|
||||
static ssize_t dmi_entry_raw_read_helper(struct dmi_sysfs_entry *entry,
|
||||
const struct dmi_header *dh,
|
||||
|
@ -565,7 +566,7 @@ static void dmi_sysfs_entry_release(struct kobject *kobj)
|
|||
static struct kobj_type dmi_sysfs_entry_ktype = {
|
||||
.release = dmi_sysfs_entry_release,
|
||||
.sysfs_ops = &dmi_sysfs_attr_ops,
|
||||
.default_attrs = dmi_sysfs_entry_attrs,
|
||||
.default_groups = dmi_sysfs_entry_groups,
|
||||
};
|
||||
|
||||
static struct kset *dmi_kset;
|
||||
|
|
|
@ -574,14 +574,6 @@ static EDD_DEVICE_ATTR(interface, 0444, edd_show_interface, edd_has_edd30);
|
|||
static EDD_DEVICE_ATTR(host_bus, 0444, edd_show_host_bus, edd_has_edd30);
|
||||
static EDD_DEVICE_ATTR(mbr_signature, 0444, edd_show_mbr_signature, edd_has_mbr_signature);
|
||||
|
||||
|
||||
/* These are default attributes that are added for every edd
|
||||
* device discovered. There are none.
|
||||
*/
|
||||
static struct attribute * def_attrs[] = {
|
||||
NULL,
|
||||
};
|
||||
|
||||
/* These attributes are conditional and only added for some devices. */
|
||||
static struct edd_attribute * edd_attrs[] = {
|
||||
&edd_attr_raw_data,
|
||||
|
@ -619,7 +611,6 @@ static void edd_release(struct kobject * kobj)
|
|||
static struct kobj_type edd_ktype = {
|
||||
.release = edd_release,
|
||||
.sysfs_ops = &edd_attr_ops,
|
||||
.default_attrs = def_attrs,
|
||||
};
|
||||
|
||||
static struct kset *edd_kset;
|
||||
|
|
|
@ -69,6 +69,7 @@ static struct attribute *def_attrs[] = {
|
|||
&memmap_type_attr.attr,
|
||||
NULL
|
||||
};
|
||||
ATTRIBUTE_GROUPS(def);
|
||||
|
||||
static const struct sysfs_ops memmap_attr_ops = {
|
||||
.show = memmap_attr_show,
|
||||
|
@ -118,7 +119,7 @@ static void __meminit release_firmware_map_entry(struct kobject *kobj)
|
|||
static struct kobj_type __refdata memmap_ktype = {
|
||||
.release = release_firmware_map_entry,
|
||||
.sysfs_ops = &memmap_attr_ops,
|
||||
.default_attrs = def_attrs,
|
||||
.default_groups = def_groups,
|
||||
};
|
||||
|
||||
/*
|
||||
|
|
|
@ -395,7 +395,7 @@ static void fw_cfg_sysfs_cache_cleanup(void)
|
|||
}
|
||||
}
|
||||
|
||||
/* default_attrs: per-entry attributes and show methods */
|
||||
/* per-entry attributes and show methods */
|
||||
|
||||
#define FW_CFG_SYSFS_ATTR(_attr) \
|
||||
struct fw_cfg_sysfs_attribute fw_cfg_sysfs_attr_##_attr = { \
|
||||
|
@ -428,6 +428,7 @@ static struct attribute *fw_cfg_sysfs_entry_attrs[] = {
|
|||
&fw_cfg_sysfs_attr_name.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(fw_cfg_sysfs_entry);
|
||||
|
||||
/* sysfs_ops: find fw_cfg_[entry, attribute] and call appropriate show method */
|
||||
static ssize_t fw_cfg_sysfs_attr_show(struct kobject *kobj, struct attribute *a,
|
||||
|
@ -454,7 +455,7 @@ static void fw_cfg_sysfs_release_entry(struct kobject *kobj)
|
|||
|
||||
/* kobj_type: ties together all properties required to register an entry */
|
||||
static struct kobj_type fw_cfg_sysfs_entry_ktype = {
|
||||
.default_attrs = fw_cfg_sysfs_entry_attrs,
|
||||
.default_groups = fw_cfg_sysfs_entry_groups,
|
||||
.sysfs_ops = &fw_cfg_sysfs_attr_ops,
|
||||
.release = fw_cfg_sysfs_release_entry,
|
||||
};
|
||||
|
|
|
@@ -113,12 +113,16 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
	sysfb_apply_efi_quirks(pd);

	ret = platform_device_add_resources(pd, &res, 1);
	if (ret)
	if (ret) {
		platform_device_put(pd);
		return ret;
	}

	ret = platform_device_add_data(pd, mode, sizeof(*mode));
	if (ret)
	if (ret) {
		platform_device_put(pd);
		return ret;
	}

	return platform_device_add(pd);
}
|
@ -207,7 +207,7 @@ static void irdma_remove(struct auxiliary_device *aux_dev)
|
|||
struct iidc_auxiliary_dev,
|
||||
adev);
|
||||
struct ice_pf *pf = iidc_adev->pf;
|
||||
struct irdma_device *iwdev = dev_get_drvdata(&aux_dev->dev);
|
||||
struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev);
|
||||
|
||||
irdma_ib_unregister_device(iwdev);
|
||||
ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, false);
|
||||
|
@ -295,7 +295,7 @@ static int irdma_probe(struct auxiliary_device *aux_dev, const struct auxiliary_
|
|||
ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, true);
|
||||
|
||||
ibdev_dbg(&iwdev->ibdev, "INIT: Gen2 PF[%d] device probe success\n", PCI_FUNC(rf->pcidev->devfn));
|
||||
dev_set_drvdata(&aux_dev->dev, iwdev);
|
||||
auxiliary_set_drvdata(aux_dev, iwdev);
|
||||
|
||||
return 0;
|
||||
|
||||
|
|
|
@ -4422,7 +4422,7 @@ static int mlx5r_mp_probe(struct auxiliary_device *adev,
|
|||
}
|
||||
mutex_unlock(&mlx5_ib_multiport_mutex);
|
||||
|
||||
dev_set_drvdata(&adev->dev, mpi);
|
||||
auxiliary_set_drvdata(adev, mpi);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -4430,7 +4430,7 @@ static void mlx5r_mp_remove(struct auxiliary_device *adev)
|
|||
{
|
||||
struct mlx5_ib_multiport_info *mpi;
|
||||
|
||||
mpi = dev_get_drvdata(&adev->dev);
|
||||
mpi = auxiliary_get_drvdata(adev);
|
||||
mutex_lock(&mlx5_ib_multiport_mutex);
|
||||
if (mpi->ibdev)
|
||||
mlx5_ib_unbind_slave_port(mpi->ibdev, mpi);
|
||||
|
@ -4480,7 +4480,7 @@ static int mlx5r_probe(struct auxiliary_device *adev,
|
|||
return ret;
|
||||
}
|
||||
|
||||
dev_set_drvdata(&adev->dev, dev);
|
||||
auxiliary_set_drvdata(adev, dev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -4488,7 +4488,7 @@ static void mlx5r_remove(struct auxiliary_device *adev)
|
|||
{
|
||||
struct mlx5_ib_dev *dev;
|
||||
|
||||
dev = dev_get_drvdata(&adev->dev);
|
||||
dev = auxiliary_get_drvdata(adev);
|
||||
__mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX);
|
||||
}
|
||||
|
||||
|
|
|
@ -696,16 +696,6 @@ config MFD_INTEL_PMC_BXT
|
|||
Register and P-unit access. In addition this creates devices
|
||||
for iTCO watchdog and telemetry that are part of the PMC.
|
||||
|
||||
config MFD_INTEL_PMT
|
||||
tristate "Intel Platform Monitoring Technology (PMT) support"
|
||||
depends on X86 && PCI
|
||||
select MFD_CORE
|
||||
help
|
||||
The Intel Platform Monitoring Technology (PMT) is an interface that
|
||||
provides access to hardware monitor registers. This driver supports
|
||||
Telemetry, Watcher, and Crashlog PMT capabilities/devices for
|
||||
platforms starting from Tiger Lake.
|
||||
|
||||
config MFD_IPAQ_MICRO
|
||||
bool "Atmel Micro ASIC (iPAQ h3100/h3600/h3700) Support"
|
||||
depends on SA1100_H3100 || SA1100_H3600
|
||||
|
|
|
@ -211,7 +211,6 @@ obj-$(CONFIG_MFD_INTEL_LPSS) += intel-lpss.o
|
|||
obj-$(CONFIG_MFD_INTEL_LPSS_PCI) += intel-lpss-pci.o
|
||||
obj-$(CONFIG_MFD_INTEL_LPSS_ACPI) += intel-lpss-acpi.o
|
||||
obj-$(CONFIG_MFD_INTEL_PMC_BXT) += intel_pmc_bxt.o
|
||||
obj-$(CONFIG_MFD_INTEL_PMT) += intel_pmt.o
|
||||
obj-$(CONFIG_MFD_PALMAS) += palmas.o
|
||||
obj-$(CONFIG_MFD_VIPERBOARD) += viperboard.o
|
||||
obj-$(CONFIG_MFD_NTXEC) += ntxec.o
|
||||
|
|
|
@ -1,261 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Intel Platform Monitoring Technology PMT driver
|
||||
*
|
||||
* Copyright (c) 2020, Intel Corporation.
|
||||
* All Rights Reserved.
|
||||
*
|
||||
* Author: David E. Box <david.e.box@linux.intel.com>
|
||||
*/
|
||||
|
||||
#include <linux/bits.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/core.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/pm.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
/* Intel DVSEC capability vendor space offsets */
|
||||
#define INTEL_DVSEC_ENTRIES 0xA
|
||||
#define INTEL_DVSEC_SIZE 0xB
|
||||
#define INTEL_DVSEC_TABLE 0xC
|
||||
#define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0))
|
||||
#define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3))
|
||||
#define INTEL_DVSEC_ENTRY_SIZE 4
|
||||
|
||||
/* PMT capabilities */
|
||||
#define DVSEC_INTEL_ID_TELEMETRY 2
|
||||
#define DVSEC_INTEL_ID_WATCHER 3
|
||||
#define DVSEC_INTEL_ID_CRASHLOG 4
|
||||
|
||||
struct intel_dvsec_header {
|
||||
u16 length;
|
||||
u16 id;
|
||||
u8 num_entries;
|
||||
u8 entry_size;
|
||||
u8 tbir;
|
||||
u32 offset;
|
||||
};
|
||||
|
||||
enum pmt_quirks {
|
||||
/* Watcher capability not supported */
|
||||
PMT_QUIRK_NO_WATCHER = BIT(0),
|
||||
|
||||
/* Crashlog capability not supported */
|
||||
PMT_QUIRK_NO_CRASHLOG = BIT(1),
|
||||
|
||||
/* Use shift instead of mask to read discovery table offset */
|
||||
PMT_QUIRK_TABLE_SHIFT = BIT(2),
|
||||
|
||||
/* DVSEC not present (provided in driver data) */
|
||||
PMT_QUIRK_NO_DVSEC = BIT(3),
|
||||
};
|
||||
|
||||
struct pmt_platform_info {
|
||||
unsigned long quirks;
|
||||
struct intel_dvsec_header **capabilities;
|
||||
};
|
||||
|
||||
static const struct pmt_platform_info tgl_info = {
|
||||
.quirks = PMT_QUIRK_NO_WATCHER | PMT_QUIRK_NO_CRASHLOG |
|
||||
PMT_QUIRK_TABLE_SHIFT,
|
||||
};
|
||||
|
||||
/* DG1 Platform with DVSEC quirk*/
|
||||
static struct intel_dvsec_header dg1_telemetry = {
|
||||
.length = 0x10,
|
||||
.id = 2,
|
||||
.num_entries = 1,
|
||||
.entry_size = 3,
|
||||
.tbir = 0,
|
||||
.offset = 0x466000,
|
||||
};
|
||||
|
||||
static struct intel_dvsec_header *dg1_capabilities[] = {
|
||||
&dg1_telemetry,
|
||||
NULL
|
||||
};
|
||||
|
||||
static const struct pmt_platform_info dg1_info = {
|
||||
.quirks = PMT_QUIRK_NO_DVSEC,
|
||||
.capabilities = dg1_capabilities,
|
||||
};
|
||||
|
||||
static int pmt_add_dev(struct pci_dev *pdev, struct intel_dvsec_header *header,
|
||||
unsigned long quirks)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct resource *res, *tmp;
|
||||
struct mfd_cell *cell;
|
||||
const char *name;
|
||||
int count = header->num_entries;
|
||||
int size = header->entry_size;
|
||||
int id = header->id;
|
||||
int i;
|
||||
|
||||
switch (id) {
|
||||
case DVSEC_INTEL_ID_TELEMETRY:
|
||||
name = "pmt_telemetry";
|
||||
break;
|
||||
case DVSEC_INTEL_ID_WATCHER:
|
||||
if (quirks & PMT_QUIRK_NO_WATCHER) {
|
||||
dev_info(dev, "Watcher not supported\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
name = "pmt_watcher";
|
||||
break;
|
||||
case DVSEC_INTEL_ID_CRASHLOG:
|
||||
if (quirks & PMT_QUIRK_NO_CRASHLOG) {
|
||||
dev_info(dev, "Crashlog not supported\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
name = "pmt_crashlog";
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!header->num_entries || !header->entry_size) {
|
||||
dev_err(dev, "Invalid count or size for %s header\n", name);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
cell = devm_kzalloc(dev, sizeof(*cell), GFP_KERNEL);
|
||||
if (!cell)
|
||||
return -ENOMEM;
|
||||
|
||||
res = devm_kcalloc(dev, count, sizeof(*res), GFP_KERNEL);
|
||||
if (!res)
|
||||
return -ENOMEM;
|
||||
|
||||
if (quirks & PMT_QUIRK_TABLE_SHIFT)
|
||||
header->offset >>= 3;
|
||||
|
||||
/*
|
||||
* The PMT DVSEC contains the starting offset and count for a block of
|
||||
* discovery tables, each providing access to monitoring facilities for
|
||||
* a section of the device. Create a resource list of these tables to
|
||||
* provide to the driver.
|
||||
*/
|
||||
for (i = 0, tmp = res; i < count; i++, tmp++) {
|
||||
tmp->start = pdev->resource[header->tbir].start +
|
||||
header->offset + i * (size << 2);
|
||||
tmp->end = tmp->start + (size << 2) - 1;
|
||||
tmp->flags = IORESOURCE_MEM;
|
||||
}
|
||||
|
||||
cell->resources = res;
|
||||
cell->num_resources = count;
|
||||
cell->name = name;
|
||||
|
||||
return devm_mfd_add_devices(dev, PLATFORM_DEVID_AUTO, cell, 1, NULL, 0,
|
||||
NULL);
|
||||
}
|
||||
|
||||
static int pmt_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
||||
{
|
||||
struct pmt_platform_info *info;
|
||||
unsigned long quirks = 0;
|
||||
bool found_devices = false;
|
||||
int ret, pos = 0;
|
||||
|
||||
ret = pcim_enable_device(pdev);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
info = (struct pmt_platform_info *)id->driver_data;
|
||||
|
||||
if (info)
|
||||
quirks = info->quirks;
|
||||
|
||||
if (info && (info->quirks & PMT_QUIRK_NO_DVSEC)) {
|
||||
struct intel_dvsec_header **header;
|
||||
|
||||
header = info->capabilities;
|
||||
while (*header) {
|
||||
ret = pmt_add_dev(pdev, *header, quirks);
|
||||
if (ret)
|
||||
dev_warn(&pdev->dev,
|
||||
"Failed to add device for DVSEC id %d\n",
|
||||
(*header)->id);
|
||||
else
|
||||
found_devices = true;
|
||||
|
||||
++header;
|
||||
}
|
||||
} else {
|
||||
do {
|
||||
struct intel_dvsec_header header;
|
||||
u32 table;
|
||||
u16 vid;
|
||||
|
||||
pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DVSEC);
|
||||
if (!pos)
|
||||
break;
|
||||
|
||||
pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vid);
|
||||
if (vid != PCI_VENDOR_ID_INTEL)
|
||||
continue;
|
||||
|
||||
pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2,
|
||||
&header.id);
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES,
|
||||
&header.num_entries);
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE,
|
||||
&header.entry_size);
|
||||
pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE,
|
||||
&table);
|
||||
|
||||
header.tbir = INTEL_DVSEC_TABLE_BAR(table);
|
||||
header.offset = INTEL_DVSEC_TABLE_OFFSET(table);
|
||||
|
||||
ret = pmt_add_dev(pdev, &header, quirks);
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
found_devices = true;
|
||||
} while (true);
|
||||
}
|
||||
|
||||
if (!found_devices)
|
||||
return -ENODEV;
|
||||
|
||||
pm_runtime_put(&pdev->dev);
|
||||
pm_runtime_allow(&pdev->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pmt_pci_remove(struct pci_dev *pdev)
|
||||
{
|
||||
pm_runtime_forbid(&pdev->dev);
|
||||
pm_runtime_get_sync(&pdev->dev);
|
||||
}
|
||||
|
||||
#define PCI_DEVICE_ID_INTEL_PMT_ADL 0x467d
|
||||
#define PCI_DEVICE_ID_INTEL_PMT_DG1 0x490e
|
||||
#define PCI_DEVICE_ID_INTEL_PMT_OOBMSM 0x09a7
|
||||
#define PCI_DEVICE_ID_INTEL_PMT_TGL 0x9a0d
|
||||
static const struct pci_device_id pmt_pci_ids[] = {
|
||||
{ PCI_DEVICE_DATA(INTEL, PMT_ADL, &tgl_info) },
|
||||
{ PCI_DEVICE_DATA(INTEL, PMT_DG1, &dg1_info) },
|
||||
{ PCI_DEVICE_DATA(INTEL, PMT_OOBMSM, NULL) },
|
||||
{ PCI_DEVICE_DATA(INTEL, PMT_TGL, &tgl_info) },
|
||||
{ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pci, pmt_pci_ids);
|
||||
|
||||
static struct pci_driver pmt_pci_driver = {
|
||||
.name = "intel-pmt",
|
||||
.id_table = pmt_pci_ids,
|
||||
.probe = pmt_pci_probe,
|
||||
.remove = pmt_pci_remove,
|
||||
};
|
||||
module_pci_driver(pmt_pci_driver);
|
||||
|
||||
MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>");
|
||||
MODULE_DESCRIPTION("Intel Platform Monitoring Technology PMT driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -5534,7 +5534,7 @@ void mlx5e_destroy_netdev(struct mlx5e_priv *priv)
|
|||
static int mlx5e_resume(struct auxiliary_device *adev)
|
||||
{
|
||||
struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
|
||||
struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev);
|
||||
struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
|
||||
struct net_device *netdev = priv->netdev;
|
||||
struct mlx5_core_dev *mdev = edev->mdev;
|
||||
int err;
|
||||
|
@ -5557,7 +5557,7 @@ static int mlx5e_resume(struct auxiliary_device *adev)
|
|||
|
||||
static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state)
|
||||
{
|
||||
struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev);
|
||||
struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
|
||||
struct net_device *netdev = priv->netdev;
|
||||
struct mlx5_core_dev *mdev = priv->mdev;
|
||||
|
||||
|
@ -5589,7 +5589,7 @@ static int mlx5e_probe(struct auxiliary_device *adev,
|
|||
mlx5e_build_nic_netdev(netdev);
|
||||
|
||||
priv = netdev_priv(netdev);
|
||||
dev_set_drvdata(&adev->dev, priv);
|
||||
auxiliary_set_drvdata(adev, priv);
|
||||
|
||||
priv->profile = profile;
|
||||
priv->ppriv = NULL;
|
||||
|
@ -5637,7 +5637,7 @@ err_destroy_netdev:
|
|||
|
||||
static void mlx5e_remove(struct auxiliary_device *adev)
|
||||
{
|
||||
struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev);
|
||||
struct mlx5e_priv *priv = auxiliary_get_drvdata(adev);
|
||||
pm_message_t state = {};
|
||||
|
||||
mlx5e_dcbnl_delete_app(priv);
|
||||
|
|
|
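The mlx5e hunks above (and the soundwire and vdpa/mlx5 hunks further down) all lean on the new auxiliary_device driver-data helpers. As a rough sketch, assuming the 5.17 include/linux/auxiliary_bus.h definitions, they are thin wrappers around the generic dev_*_drvdata() calls:

static inline void auxiliary_set_drvdata(struct auxiliary_device *auxdev, void *data)
{
	dev_set_drvdata(&auxdev->dev, data);
}

static inline void *auxiliary_get_drvdata(struct auxiliary_device *auxdev)
{
	return dev_get_drvdata(&auxdev->dev);
}

So the conversions are behavior-neutral; they simply hide the &adev->dev indirection behind a bus-specific name.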
@ -170,3 +170,14 @@ config INTEL_UNCORE_FREQ_CONTROL
|
|||
|
||||
To compile this driver as a module, choose M here: the module
|
||||
will be called intel-uncore-frequency.
|
||||
|
||||
config INTEL_VSEC
|
||||
tristate "Intel Vendor Specific Extended Capabilities Driver"
|
||||
depends on PCI
|
||||
select AUXILIARY_BUS
|
||||
help
|
||||
Adds support for feature drivers exposed using Intel PCIe VSEC and
|
||||
DVSEC.
|
||||
|
||||
To compile this driver as a module, choose M here: the module will
|
||||
be called intel_vsec.
|
||||
|
|
|
@ -26,6 +26,8 @@ intel_int0002_vgpio-y := int0002_vgpio.o
|
|||
obj-$(CONFIG_INTEL_INT0002_VGPIO) += intel_int0002_vgpio.o
|
||||
intel_oaktrail-y := oaktrail.o
|
||||
obj-$(CONFIG_INTEL_OAKTRAIL) += intel_oaktrail.o
|
||||
intel_vsec-y := vsec.o
|
||||
obj-$(CONFIG_INTEL_VSEC) += intel_vsec.o
|
||||
|
||||
# Intel PMIC / PMC / P-Unit drivers
|
||||
intel_bxtwc_tmu-y := bxtwc_tmu.o
|
||||
|
|
|
@ -17,7 +17,7 @@ config INTEL_PMT_CLASS
|
|||
|
||||
config INTEL_PMT_TELEMETRY
|
||||
tristate "Intel Platform Monitoring Technology (PMT) Telemetry driver"
|
||||
depends on MFD_INTEL_PMT
|
||||
depends on INTEL_VSEC
|
||||
select INTEL_PMT_CLASS
|
||||
help
|
||||
The Intel Platform Monitoring Technology (PMT) Telemetry driver provides
|
||||
|
@ -29,7 +29,7 @@ config INTEL_PMT_TELEMETRY
|
|||
|
||||
config INTEL_PMT_CRASHLOG
|
||||
tristate "Intel Platform Monitoring Technology (PMT) Crashlog driver"
|
||||
depends on MFD_INTEL_PMT
|
||||
depends on INTEL_VSEC
|
||||
select INTEL_PMT_CLASS
|
||||
help
|
||||
The Intel Platform Monitoring Technology (PMT) crashlog driver provides
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
#include <linux/mm.h>
|
||||
#include <linux/pci.h>
|
||||
|
||||
#include "../vsec.h"
|
||||
#include "class.h"
|
||||
|
||||
#define PMT_XA_START 0
|
||||
|
@ -281,31 +282,29 @@ fail_dev_create:
|
|||
return ret;
|
||||
}
|
||||
|
||||
int intel_pmt_dev_create(struct intel_pmt_entry *entry,
|
||||
struct intel_pmt_namespace *ns,
|
||||
struct platform_device *pdev, int idx)
|
||||
int intel_pmt_dev_create(struct intel_pmt_entry *entry, struct intel_pmt_namespace *ns,
|
||||
struct intel_vsec_device *intel_vsec_dev, int idx)
|
||||
{
|
||||
struct device *dev = &intel_vsec_dev->auxdev.dev;
|
||||
struct intel_pmt_header header;
|
||||
struct resource *disc_res;
|
||||
int ret = -ENODEV;
|
||||
int ret;
|
||||
|
||||
disc_res = platform_get_resource(pdev, IORESOURCE_MEM, idx);
|
||||
if (!disc_res)
|
||||
return ret;
|
||||
disc_res = &intel_vsec_dev->resource[idx];
|
||||
|
||||
entry->disc_table = devm_platform_ioremap_resource(pdev, idx);
|
||||
entry->disc_table = devm_ioremap_resource(dev, disc_res);
|
||||
if (IS_ERR(entry->disc_table))
|
||||
return PTR_ERR(entry->disc_table);
|
||||
|
||||
ret = ns->pmt_header_decode(entry, &header, &pdev->dev);
|
||||
ret = ns->pmt_header_decode(entry, &header, dev);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = intel_pmt_populate_entry(entry, &header, &pdev->dev, disc_res);
|
||||
ret = intel_pmt_populate_entry(entry, &header, dev, disc_res);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return intel_pmt_dev_register(entry, ns, &pdev->dev);
|
||||
return intel_pmt_dev_register(entry, ns, dev);
|
||||
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(intel_pmt_dev_create);
|
||||
|
|
|
@ -2,13 +2,14 @@
|
|||
#ifndef _INTEL_PMT_CLASS_H
|
||||
#define _INTEL_PMT_CLASS_H
|
||||
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/xarray.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/bits.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/io.h>
|
||||
|
||||
#include "../vsec.h"
|
||||
|
||||
/* PMT access types */
|
||||
#define ACCESS_BARID 2
|
||||
#define ACCESS_LOCAL 3
|
||||
|
@ -47,7 +48,7 @@ struct intel_pmt_namespace {
|
|||
bool intel_pmt_is_early_client_hw(struct device *dev);
|
||||
int intel_pmt_dev_create(struct intel_pmt_entry *entry,
|
||||
struct intel_pmt_namespace *ns,
|
||||
struct platform_device *pdev, int idx);
|
||||
struct intel_vsec_device *dev, int idx);
|
||||
void intel_pmt_dev_destroy(struct intel_pmt_entry *entry,
|
||||
struct intel_pmt_namespace *ns);
|
||||
#endif
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
* Author: "Alexander Duyck" <alexander.h.duyck@linux.intel.com>
|
||||
*/
|
||||
|
||||
#include <linux/auxiliary_bus.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
|
@ -15,10 +16,9 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <linux/overflow.h>
|
||||
|
||||
#include "../vsec.h"
|
||||
#include "class.h"
|
||||
|
||||
#define DRV_NAME "pmt_crashlog"
|
||||
|
||||
/* Crashlog discovery header types */
|
||||
#define CRASH_TYPE_OOBMSM 1
|
||||
|
||||
|
@ -257,34 +257,34 @@ static struct intel_pmt_namespace pmt_crashlog_ns = {
|
|||
/*
|
||||
* initialization
|
||||
*/
|
||||
static int pmt_crashlog_remove(struct platform_device *pdev)
|
||||
static void pmt_crashlog_remove(struct auxiliary_device *auxdev)
|
||||
{
|
||||
struct pmt_crashlog_priv *priv = platform_get_drvdata(pdev);
|
||||
struct pmt_crashlog_priv *priv = auxiliary_get_drvdata(auxdev);
|
||||
int i;
|
||||
|
||||
for (i = 0; i < priv->num_entries; i++)
|
||||
intel_pmt_dev_destroy(&priv->entry[i].entry, &pmt_crashlog_ns);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int pmt_crashlog_probe(struct platform_device *pdev)
|
||||
static int pmt_crashlog_probe(struct auxiliary_device *auxdev,
|
||||
const struct auxiliary_device_id *id)
|
||||
{
|
||||
struct intel_vsec_device *intel_vsec_dev = auxdev_to_ivdev(auxdev);
|
||||
struct pmt_crashlog_priv *priv;
|
||||
size_t size;
|
||||
int i, ret;
|
||||
|
||||
size = struct_size(priv, entry, pdev->num_resources);
|
||||
priv = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
|
||||
size = struct_size(priv, entry, intel_vsec_dev->num_resources);
|
||||
priv = devm_kzalloc(&auxdev->dev, size, GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
platform_set_drvdata(pdev, priv);
|
||||
auxiliary_set_drvdata(auxdev, priv);
|
||||
|
||||
for (i = 0; i < pdev->num_resources; i++) {
|
||||
for (i = 0; i < intel_vsec_dev->num_resources; i++) {
|
||||
struct intel_pmt_entry *entry = &priv->entry[i].entry;
|
||||
|
||||
ret = intel_pmt_dev_create(entry, &pmt_crashlog_ns, pdev, i);
|
||||
ret = intel_pmt_dev_create(entry, &pmt_crashlog_ns, intel_vsec_dev, i);
|
||||
if (ret < 0)
|
||||
goto abort_probe;
|
||||
if (ret)
|
||||
|
@ -295,26 +295,30 @@ static int pmt_crashlog_probe(struct platform_device *pdev)
|
|||
|
||||
return 0;
|
||||
abort_probe:
|
||||
pmt_crashlog_remove(pdev);
|
||||
pmt_crashlog_remove(auxdev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct platform_driver pmt_crashlog_driver = {
|
||||
.driver = {
|
||||
.name = DRV_NAME,
|
||||
},
|
||||
.remove = pmt_crashlog_remove,
|
||||
.probe = pmt_crashlog_probe,
|
||||
static const struct auxiliary_device_id pmt_crashlog_id_table[] = {
|
||||
{ .name = "intel_vsec.crashlog" },
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(auxiliary, pmt_crashlog_id_table);
|
||||
|
||||
static struct auxiliary_driver pmt_crashlog_aux_driver = {
|
||||
.id_table = pmt_crashlog_id_table,
|
||||
.remove = pmt_crashlog_remove,
|
||||
.probe = pmt_crashlog_probe,
|
||||
};
|
||||
|
||||
static int __init pmt_crashlog_init(void)
|
||||
{
|
||||
return platform_driver_register(&pmt_crashlog_driver);
|
||||
return auxiliary_driver_register(&pmt_crashlog_aux_driver);
|
||||
}
|
||||
|
||||
static void __exit pmt_crashlog_exit(void)
|
||||
{
|
||||
platform_driver_unregister(&pmt_crashlog_driver);
|
||||
auxiliary_driver_unregister(&pmt_crashlog_aux_driver);
|
||||
xa_destroy(&crashlog_array);
|
||||
}
|
||||
|
||||
|
@ -323,5 +327,4 @@ module_exit(pmt_crashlog_exit);
|
|||
|
||||
MODULE_AUTHOR("Alexander Duyck <alexander.h.duyck@linux.intel.com>");
|
||||
MODULE_DESCRIPTION("Intel PMT Crashlog driver");
|
||||
MODULE_ALIAS("platform:" DRV_NAME);
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
* Author: "David E. Box" <david.e.box@linux.intel.com>
|
||||
*/
|
||||
|
||||
#include <linux/auxiliary_bus.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
|
@ -15,10 +16,9 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <linux/overflow.h>
|
||||
|
||||
#include "../vsec.h"
|
||||
#include "class.h"
|
||||
|
||||
#define TELEM_DEV_NAME "pmt_telemetry"
|
||||
|
||||
#define TELEM_SIZE_OFFSET 0x0
|
||||
#define TELEM_GUID_OFFSET 0x4
|
||||
#define TELEM_BASE_OFFSET 0x8
|
||||
|
@ -79,34 +79,33 @@ static struct intel_pmt_namespace pmt_telem_ns = {
|
|||
.pmt_header_decode = pmt_telem_header_decode,
|
||||
};
|
||||
|
||||
static int pmt_telem_remove(struct platform_device *pdev)
|
||||
static void pmt_telem_remove(struct auxiliary_device *auxdev)
|
||||
{
|
||||
struct pmt_telem_priv *priv = platform_get_drvdata(pdev);
|
||||
struct pmt_telem_priv *priv = auxiliary_get_drvdata(auxdev);
|
||||
int i;
|
||||
|
||||
for (i = 0; i < priv->num_entries; i++)
|
||||
intel_pmt_dev_destroy(&priv->entry[i], &pmt_telem_ns);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int pmt_telem_probe(struct platform_device *pdev)
|
||||
static int pmt_telem_probe(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id)
|
||||
{
|
||||
struct intel_vsec_device *intel_vsec_dev = auxdev_to_ivdev(auxdev);
|
||||
struct pmt_telem_priv *priv;
|
||||
size_t size;
|
||||
int i, ret;
|
||||
|
||||
size = struct_size(priv, entry, pdev->num_resources);
|
||||
priv = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
|
||||
size = struct_size(priv, entry, intel_vsec_dev->num_resources);
|
||||
priv = devm_kzalloc(&auxdev->dev, size, GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
platform_set_drvdata(pdev, priv);
|
||||
auxiliary_set_drvdata(auxdev, priv);
|
||||
|
||||
for (i = 0; i < pdev->num_resources; i++) {
|
||||
for (i = 0; i < intel_vsec_dev->num_resources; i++) {
|
||||
struct intel_pmt_entry *entry = &priv->entry[i];
|
||||
|
||||
ret = intel_pmt_dev_create(entry, &pmt_telem_ns, pdev, i);
|
||||
ret = intel_pmt_dev_create(entry, &pmt_telem_ns, intel_vsec_dev, i);
|
||||
if (ret < 0)
|
||||
goto abort_probe;
|
||||
if (ret)
|
||||
|
@ -117,32 +116,35 @@ static int pmt_telem_probe(struct platform_device *pdev)
|
|||
|
||||
return 0;
|
||||
abort_probe:
|
||||
pmt_telem_remove(pdev);
|
||||
pmt_telem_remove(auxdev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct platform_driver pmt_telem_driver = {
|
||||
.driver = {
|
||||
.name = TELEM_DEV_NAME,
|
||||
},
|
||||
.remove = pmt_telem_remove,
|
||||
.probe = pmt_telem_probe,
|
||||
static const struct auxiliary_device_id pmt_telem_id_table[] = {
|
||||
{ .name = "intel_vsec.telemetry" },
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(auxiliary, pmt_telem_id_table);
|
||||
|
||||
static struct auxiliary_driver pmt_telem_aux_driver = {
|
||||
.id_table = pmt_telem_id_table,
|
||||
.remove = pmt_telem_remove,
|
||||
.probe = pmt_telem_probe,
|
||||
};
|
||||
|
||||
static int __init pmt_telem_init(void)
|
||||
{
|
||||
return platform_driver_register(&pmt_telem_driver);
|
||||
return auxiliary_driver_register(&pmt_telem_aux_driver);
|
||||
}
|
||||
module_init(pmt_telem_init);
|
||||
|
||||
static void __exit pmt_telem_exit(void)
|
||||
{
|
||||
platform_driver_unregister(&pmt_telem_driver);
|
||||
auxiliary_driver_unregister(&pmt_telem_aux_driver);
|
||||
xa_destroy(&telem_array);
|
||||
}
|
||||
module_exit(pmt_telem_exit);
|
||||
|
||||
MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>");
|
||||
MODULE_DESCRIPTION("Intel PMT Telemetry driver");
|
||||
MODULE_ALIAS("platform:" TELEM_DEV_NAME);
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@@ -0,0 +1,408 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Intel Vendor Specific Extended Capabilities auxiliary bus driver
|
||||
*
|
||||
* Copyright (c) 2021, Intel Corporation.
|
||||
* All Rights Reserved.
|
||||
*
|
||||
* Author: David E. Box <david.e.box@linux.intel.com>
|
||||
*
|
||||
* This driver discovers and creates auxiliary devices for Intel defined PCIe
|
||||
* "Vendor Specific" and "Designated Vendor Specific" Extended Capabilities,
|
||||
* VSEC and DVSEC respectively. The driver supports features on specific PCIe
|
||||
* endpoints that exist primarily to expose them.
|
||||
*/
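/*
 * For illustration only: feature drivers bind to the auxiliary devices this
 * driver creates by matching on the "intel_vsec.<feature>" name, e.g. the
 * PMT telemetry driver elsewhere in this series declares:
 *
 *	static const struct auxiliary_device_id pmt_telem_id_table[] = {
 *		{ .name = "intel_vsec.telemetry" },
 *		{}
 *	};
 */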
|
||||
|
||||
#include <linux/auxiliary_bus.h>
|
||||
#include <linux/bits.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
#include "vsec.h"
|
||||
|
||||
/* Intel DVSEC offsets */
|
||||
#define INTEL_DVSEC_ENTRIES 0xA
|
||||
#define INTEL_DVSEC_SIZE 0xB
|
||||
#define INTEL_DVSEC_TABLE 0xC
|
||||
#define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0))
|
||||
#define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3))
|
||||
#define TABLE_OFFSET_SHIFT 3
|
||||
|
||||
static DEFINE_IDA(intel_vsec_ida);
|
||||
|
||||
/**
|
||||
* struct intel_vsec_header - Common fields of Intel VSEC and DVSEC registers.
|
||||
* @rev: Revision ID of the VSEC/DVSEC register space
|
||||
* @length: Length of the VSEC/DVSEC register space
|
||||
* @id: ID of the feature
|
||||
* @num_entries: Number of instances of the feature
|
||||
* @entry_size: Size of the discovery table for each feature
|
||||
* @tbir: BAR containing the discovery tables
|
||||
* @offset: BAR offset of start of the first discovery table
|
||||
*/
|
||||
struct intel_vsec_header {
|
||||
u8 rev;
|
||||
u16 length;
|
||||
u16 id;
|
||||
u8 num_entries;
|
||||
u8 entry_size;
|
||||
u8 tbir;
|
||||
u32 offset;
|
||||
};
|
||||
|
||||
/* Platform specific data */
|
||||
struct intel_vsec_platform_info {
|
||||
struct intel_vsec_header **capabilities;
|
||||
unsigned long quirks;
|
||||
};
|
||||
|
||||
enum intel_vsec_id {
|
||||
VSEC_ID_TELEMETRY = 2,
|
||||
VSEC_ID_WATCHER = 3,
|
||||
VSEC_ID_CRASHLOG = 4,
|
||||
};
|
||||
|
||||
static enum intel_vsec_id intel_vsec_allow_list[] = {
|
||||
VSEC_ID_TELEMETRY,
|
||||
VSEC_ID_WATCHER,
|
||||
VSEC_ID_CRASHLOG,
|
||||
};
|
||||
|
||||
static const char *intel_vsec_name(enum intel_vsec_id id)
|
||||
{
|
||||
switch (id) {
|
||||
case VSEC_ID_TELEMETRY:
|
||||
return "telemetry";
|
||||
|
||||
case VSEC_ID_WATCHER:
|
||||
return "watcher";
|
||||
|
||||
case VSEC_ID_CRASHLOG:
|
||||
return "crashlog";
|
||||
|
||||
default:
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static bool intel_vsec_allowed(u16 id)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(intel_vsec_allow_list); i++)
|
||||
if (intel_vsec_allow_list[i] == id)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool intel_vsec_disabled(u16 id, unsigned long quirks)
|
||||
{
|
||||
switch (id) {
|
||||
case VSEC_ID_WATCHER:
|
||||
return !!(quirks & VSEC_QUIRK_NO_WATCHER);
|
||||
|
||||
case VSEC_ID_CRASHLOG:
|
||||
return !!(quirks & VSEC_QUIRK_NO_CRASHLOG);
|
||||
|
||||
default:
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
static void intel_vsec_remove_aux(void *data)
|
||||
{
|
||||
auxiliary_device_delete(data);
|
||||
auxiliary_device_uninit(data);
|
||||
}
|
||||
|
||||
static void intel_vsec_dev_release(struct device *dev)
|
||||
{
|
||||
struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(dev);
|
||||
|
||||
ida_free(intel_vsec_dev->ida, intel_vsec_dev->auxdev.id);
|
||||
kfree(intel_vsec_dev->resource);
|
||||
kfree(intel_vsec_dev);
|
||||
}
|
||||
|
||||
static int intel_vsec_add_aux(struct pci_dev *pdev, struct intel_vsec_device *intel_vsec_dev,
|
||||
const char *name)
|
||||
{
|
||||
struct auxiliary_device *auxdev = &intel_vsec_dev->auxdev;
|
||||
int ret;
|
||||
|
||||
ret = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL);
|
||||
if (ret < 0) {
|
||||
kfree(intel_vsec_dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
auxdev->id = ret;
|
||||
auxdev->name = name;
|
||||
auxdev->dev.parent = &pdev->dev;
|
||||
auxdev->dev.release = intel_vsec_dev_release;
|
||||
|
||||
ret = auxiliary_device_init(auxdev);
|
||||
if (ret < 0) {
|
||||
ida_free(intel_vsec_dev->ida, auxdev->id);
|
||||
kfree(intel_vsec_dev->resource);
|
||||
kfree(intel_vsec_dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = auxiliary_device_add(auxdev);
|
||||
if (ret < 0) {
|
||||
auxiliary_device_uninit(auxdev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return devm_add_action_or_reset(&pdev->dev, intel_vsec_remove_aux, auxdev);
|
||||
}
|
||||
|
||||
static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *header,
|
||||
unsigned long quirks)
|
||||
{
|
||||
struct intel_vsec_device *intel_vsec_dev;
|
||||
struct resource *res, *tmp;
|
||||
int i;
|
||||
|
||||
if (!intel_vsec_allowed(header->id) || intel_vsec_disabled(header->id, quirks))
|
||||
return -EINVAL;
|
||||
|
||||
if (!header->num_entries) {
|
||||
dev_dbg(&pdev->dev, "Invalid 0 entry count for header id %d\n", header->id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!header->entry_size) {
|
||||
dev_dbg(&pdev->dev, "Invalid 0 entry size for header id %d\n", header->id);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
intel_vsec_dev = kzalloc(sizeof(*intel_vsec_dev), GFP_KERNEL);
|
||||
if (!intel_vsec_dev)
|
||||
return -ENOMEM;
|
||||
|
||||
res = kcalloc(header->num_entries, sizeof(*res), GFP_KERNEL);
|
||||
if (!res) {
|
||||
kfree(intel_vsec_dev);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
if (quirks & VSEC_QUIRK_TABLE_SHIFT)
|
||||
header->offset >>= TABLE_OFFSET_SHIFT;
|
||||
|
||||
/*
|
||||
* The DVSEC/VSEC contains the starting offset and count for a block of
|
||||
* discovery tables. Create a resource array of these tables to provide
|
||||
* to the auxiliary device driver.
|
||||
*/
|
||||
for (i = 0, tmp = res; i < header->num_entries; i++, tmp++) {
|
||||
tmp->start = pdev->resource[header->tbir].start +
|
||||
header->offset + i * (header->entry_size * sizeof(u32));
|
||||
tmp->end = tmp->start + (header->entry_size * sizeof(u32)) - 1;
|
||||
tmp->flags = IORESOURCE_MEM;
|
||||
}
|
||||
|
||||
intel_vsec_dev->pcidev = pdev;
|
||||
intel_vsec_dev->resource = res;
|
||||
intel_vsec_dev->num_resources = header->num_entries;
|
||||
intel_vsec_dev->quirks = quirks;
|
||||
intel_vsec_dev->ida = &intel_vsec_ida;
|
||||
|
||||
return intel_vsec_add_aux(pdev, intel_vsec_dev, intel_vsec_name(header->id));
|
||||
}
|
||||
|
||||
static bool intel_vsec_walk_header(struct pci_dev *pdev, unsigned long quirks,
|
||||
struct intel_vsec_header **header)
|
||||
{
|
||||
bool have_devices = false;
|
||||
int ret;
|
||||
|
||||
for ( ; *header; header++) {
|
||||
ret = intel_vsec_add_dev(pdev, *header, quirks);
|
||||
if (ret)
|
||||
dev_info(&pdev->dev, "Could not add device for DVSEC id %d\n",
|
||||
(*header)->id);
|
||||
else
|
||||
have_devices = true;
|
||||
}
|
||||
|
||||
return have_devices;
|
||||
}
|
||||
|
||||
static bool intel_vsec_walk_dvsec(struct pci_dev *pdev, unsigned long quirks)
|
||||
{
|
||||
bool have_devices = false;
|
||||
int pos = 0;
|
||||
|
||||
do {
|
||||
struct intel_vsec_header header;
|
||||
u32 table, hdr;
|
||||
u16 vid;
|
||||
int ret;
|
||||
|
||||
pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DVSEC);
|
||||
if (!pos)
|
||||
break;
|
||||
|
||||
pci_read_config_dword(pdev, pos + PCI_DVSEC_HEADER1, &hdr);
|
||||
vid = PCI_DVSEC_HEADER1_VID(hdr);
|
||||
if (vid != PCI_VENDOR_ID_INTEL)
|
||||
continue;
|
||||
|
||||
/* Support only revision 1 */
|
||||
header.rev = PCI_DVSEC_HEADER1_REV(hdr);
|
||||
if (header.rev != 1) {
|
||||
dev_info(&pdev->dev, "Unsupported DVSEC revision %d\n", header.rev);
|
||||
continue;
|
||||
}
|
||||
|
||||
header.length = PCI_DVSEC_HEADER1_LEN(hdr);
|
||||
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES, &header.num_entries);
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE, &header.entry_size);
|
||||
pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE, &table);
|
||||
|
||||
header.tbir = INTEL_DVSEC_TABLE_BAR(table);
|
||||
header.offset = INTEL_DVSEC_TABLE_OFFSET(table);
|
||||
|
||||
pci_read_config_dword(pdev, pos + PCI_DVSEC_HEADER2, &hdr);
|
||||
header.id = PCI_DVSEC_HEADER2_ID(hdr);
|
||||
|
||||
ret = intel_vsec_add_dev(pdev, &header, quirks);
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
have_devices = true;
|
||||
} while (true);
|
||||
|
||||
return have_devices;
|
||||
}
|
||||
|
||||
static bool intel_vsec_walk_vsec(struct pci_dev *pdev, unsigned long quirks)
|
||||
{
|
||||
bool have_devices = false;
|
||||
int pos = 0;
|
||||
|
||||
do {
|
||||
struct intel_vsec_header header;
|
||||
u32 table, hdr;
|
||||
int ret;
|
||||
|
||||
pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_VNDR);
|
||||
if (!pos)
|
||||
break;
|
||||
|
||||
pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr);
|
||||
|
||||
/* Support only revision 1 */
|
||||
header.rev = PCI_VNDR_HEADER_REV(hdr);
|
||||
if (header.rev != 1) {
|
||||
dev_info(&pdev->dev, "Unsupported VSEC revision %d\n", header.rev);
|
||||
continue;
|
||||
}
|
||||
|
||||
header.id = PCI_VNDR_HEADER_ID(hdr);
|
||||
header.length = PCI_VNDR_HEADER_LEN(hdr);
|
||||
|
||||
/* entry, size, and table offset are the same as DVSEC */
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES, &header.num_entries);
|
||||
pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE, &header.entry_size);
|
||||
pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE, &table);
|
||||
|
||||
header.tbir = INTEL_DVSEC_TABLE_BAR(table);
|
||||
header.offset = INTEL_DVSEC_TABLE_OFFSET(table);
|
||||
|
||||
ret = intel_vsec_add_dev(pdev, &header, quirks);
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
have_devices = true;
|
||||
} while (true);
|
||||
|
||||
return have_devices;
|
||||
}
|
||||
|
||||
static int intel_vsec_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
||||
{
|
||||
struct intel_vsec_platform_info *info;
|
||||
bool have_devices = false;
|
||||
unsigned long quirks = 0;
|
||||
int ret;
|
||||
|
||||
ret = pcim_enable_device(pdev);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
info = (struct intel_vsec_platform_info *)id->driver_data;
|
||||
if (info)
|
||||
quirks = info->quirks;
|
||||
|
||||
if (intel_vsec_walk_dvsec(pdev, quirks))
|
||||
have_devices = true;
|
||||
|
||||
if (intel_vsec_walk_vsec(pdev, quirks))
|
||||
have_devices = true;
|
||||
|
||||
if (info && (info->quirks & VSEC_QUIRK_NO_DVSEC) &&
|
||||
intel_vsec_walk_header(pdev, quirks, info->capabilities))
|
||||
have_devices = true;
|
||||
|
||||
if (!have_devices)
|
||||
return -ENODEV;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* TGL info */
|
||||
static const struct intel_vsec_platform_info tgl_info = {
|
||||
.quirks = VSEC_QUIRK_NO_WATCHER | VSEC_QUIRK_NO_CRASHLOG | VSEC_QUIRK_TABLE_SHIFT,
|
||||
};
|
||||
|
||||
/* DG1 info */
|
||||
static struct intel_vsec_header dg1_telemetry = {
|
||||
.length = 0x10,
|
||||
.id = 2,
|
||||
.num_entries = 1,
|
||||
.entry_size = 3,
|
||||
.tbir = 0,
|
||||
.offset = 0x466000,
|
||||
};
|
||||
|
||||
static struct intel_vsec_header *dg1_capabilities[] = {
|
||||
&dg1_telemetry,
|
||||
NULL
|
||||
};
|
||||
|
||||
static const struct intel_vsec_platform_info dg1_info = {
|
||||
.capabilities = dg1_capabilities,
|
||||
.quirks = VSEC_QUIRK_NO_DVSEC,
|
||||
};
|
||||
|
||||
#define PCI_DEVICE_ID_INTEL_VSEC_ADL 0x467d
|
||||
#define PCI_DEVICE_ID_INTEL_VSEC_DG1 0x490e
|
||||
#define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7
|
||||
#define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d
|
||||
static const struct pci_device_id intel_vsec_pci_ids[] = {
|
||||
{ PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) },
|
||||
{ PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) },
|
||||
{ PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, NULL) },
|
||||
{ PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
|
||||
{ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);
|
||||
|
||||
static struct pci_driver intel_vsec_pci_driver = {
|
||||
.name = "intel_vsec",
|
||||
.id_table = intel_vsec_pci_ids,
|
||||
.probe = intel_vsec_pci_probe,
|
||||
};
|
||||
module_pci_driver(intel_vsec_pci_driver);
|
||||
|
||||
MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>");
|
||||
MODULE_DESCRIPTION("Intel Extended Capabilities auxiliary bus driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@@ -0,0 +1,43 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _VSEC_H
|
||||
#define _VSEC_H
|
||||
|
||||
#include <linux/auxiliary_bus.h>
|
||||
#include <linux/bits.h>
|
||||
|
||||
struct pci_dev;
|
||||
struct resource;
|
||||
|
||||
enum intel_vsec_quirks {
|
||||
/* Watcher feature not supported */
|
||||
VSEC_QUIRK_NO_WATCHER = BIT(0),
|
||||
|
||||
/* Crashlog feature not supported */
|
||||
VSEC_QUIRK_NO_CRASHLOG = BIT(1),
|
||||
|
||||
/* Use shift instead of mask to read discovery table offset */
|
||||
VSEC_QUIRK_TABLE_SHIFT = BIT(2),
|
||||
|
||||
/* DVSEC not present (provided in driver data) */
|
||||
VSEC_QUIRK_NO_DVSEC = BIT(3),
|
||||
};
|
||||
|
||||
struct intel_vsec_device {
|
||||
struct auxiliary_device auxdev;
|
||||
struct pci_dev *pcidev;
|
||||
struct resource *resource;
|
||||
struct ida *ida;
|
||||
unsigned long quirks;
|
||||
int num_resources;
|
||||
};
|
||||
|
||||
static inline struct intel_vsec_device *dev_to_ivdev(struct device *dev)
|
||||
{
|
||||
return container_of(dev, struct intel_vsec_device, auxdev.dev);
|
||||
}
|
||||
|
||||
static inline struct intel_vsec_device *auxdev_to_ivdev(struct auxiliary_device *auxdev)
|
||||
{
|
||||
return container_of(auxdev, struct intel_vsec_device, auxdev);
|
||||
}
|
||||
#endif
|
|
@ -1293,7 +1293,7 @@ static int intel_link_probe(struct auxiliary_device *auxdev,
|
|||
bus->ops = &sdw_intel_ops;
|
||||
|
||||
/* set driver data, accessed by snd_soc_dai_get_drvdata() */
|
||||
dev_set_drvdata(dev, cdns);
|
||||
auxiliary_set_drvdata(auxdev, cdns);
|
||||
|
||||
/* use generic bandwidth allocation algorithm */
|
||||
sdw->cdns.bus.compute_params = sdw_compute_params;
|
||||
|
@ -1321,7 +1321,7 @@ int intel_link_startup(struct auxiliary_device *auxdev)
|
|||
{
|
||||
struct sdw_cdns_stream_config config;
|
||||
struct device *dev = &auxdev->dev;
|
||||
struct sdw_cdns *cdns = dev_get_drvdata(dev);
|
||||
struct sdw_cdns *cdns = auxiliary_get_drvdata(auxdev);
|
||||
struct sdw_intel *sdw = cdns_to_intel(cdns);
|
||||
struct sdw_bus *bus = &cdns->bus;
|
||||
int link_flags;
|
||||
|
@ -1463,7 +1463,7 @@ err_init:
|
|||
static void intel_link_remove(struct auxiliary_device *auxdev)
|
||||
{
|
||||
struct device *dev = &auxdev->dev;
|
||||
struct sdw_cdns *cdns = dev_get_drvdata(dev);
|
||||
struct sdw_cdns *cdns = auxiliary_get_drvdata(auxdev);
|
||||
struct sdw_intel *sdw = cdns_to_intel(cdns);
|
||||
struct sdw_bus *bus = &cdns->bus;
|
||||
|
||||
|
@ -1488,7 +1488,7 @@ int intel_link_process_wakeen_event(struct auxiliary_device *auxdev)
|
|||
void __iomem *shim;
|
||||
u16 wake_sts;
|
||||
|
||||
sdw = dev_get_drvdata(dev);
|
||||
sdw = auxiliary_get_drvdata(auxdev);
|
||||
bus = &sdw->cdns.bus;
|
||||
|
||||
if (bus->prop.hw_disabled || !sdw->startup_done) {
|
||||
|
|
|
@ -244,7 +244,7 @@ static struct sdw_intel_ctx
|
|||
goto err;
|
||||
|
||||
link = &ldev->link_res;
|
||||
link->cdns = dev_get_drvdata(&ldev->auxdev.dev);
|
||||
link->cdns = auxiliary_get_drvdata(&ldev->auxdev);
|
||||
|
||||
if (!link->cdns) {
|
||||
dev_err(&adev->dev, "failed to get link->cdns\n");
|
||||
|
|
|
@ -2683,7 +2683,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
|
|||
if (err)
|
||||
goto reg_err;
|
||||
|
||||
dev_set_drvdata(&adev->dev, mgtdev);
|
||||
auxiliary_set_drvdata(adev, mgtdev);
|
||||
|
||||
return 0;
|
||||
|
||||
|
@ -2696,7 +2696,7 @@ static void mlx5v_remove(struct auxiliary_device *adev)
|
|||
{
|
||||
struct mlx5_vdpa_mgmtdev *mgtdev;
|
||||
|
||||
mgtdev = dev_get_drvdata(&adev->dev);
|
||||
mgtdev = auxiliary_get_drvdata(adev);
|
||||
vdpa_mgmtdev_unregister(&mgtdev->mgtdev);
|
||||
kfree(mgtdev);
|
||||
}
|
||||
|
|
|
@ -147,7 +147,7 @@ static int debugfs_locked_down(struct inode *inode,
|
|||
struct file *filp,
|
||||
const struct file_operations *real_fops)
|
||||
{
|
||||
if ((inode->i_mode & 07777) == 0444 &&
|
||||
if ((inode->i_mode & 07777 & ~0444) == 0 &&
|
||||
!(filp->f_mode & FMODE_WRITE) &&
|
||||
!real_fops->unlocked_ioctl &&
|
||||
!real_fops->compat_ioctl &&
|
||||
|
|
|
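A quick worked example of the debugfs lockdown change above, as a sketch (the mode value is illustrative, not taken from the patch):

/*
 * A debugfs file created with mode 0400 (owner read only):
 *
 *   old test: (0400 & 07777) == 0444       -> false, reads stay blocked
 *   new test: (0400 & 07777 & ~0444) == 0  -> true, reads are allowed
 *
 * The new expression accepts any file whose permission bits are a subset of
 * the read bits 0444, instead of requiring the mode to be exactly 0444
 * (world readable).
 */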
@ -216,8 +216,7 @@ static int do_uevent(struct dlm_ls *ls, int in)
|
|||
return ls->ls_uevent_result;
|
||||
}
|
||||
|
||||
static int dlm_uevent(struct kset *kset, struct kobject *kobj,
|
||||
struct kobj_uevent_env *env)
|
||||
static int dlm_uevent(struct kobject *kobj, struct kobj_uevent_env *env)
|
||||
{
|
||||
struct dlm_ls *ls = container_of(kobj, struct dlm_ls, ls_kobj);
|
||||
|
||||
|
|
|
@ -767,8 +767,7 @@ void gfs2_sys_fs_del(struct gfs2_sbd *sdp)
|
|||
wait_for_completion(&sdp->sd_kobj_unregister);
|
||||
}
|
||||
|
||||
static int gfs2_uevent(struct kset *kset, struct kobject *kobj,
|
||||
struct kobj_uevent_env *env)
|
||||
static int gfs2_uevent(struct kobject *kobj, struct kobj_uevent_env *env)
|
||||
{
|
||||
struct gfs2_sbd *sdp = container_of(kobj, struct gfs2_sbd, sd_kobj);
|
||||
struct super_block *s = sdp->sd_vfs;
|
||||
|
|
fs/kernfs/dir.c
|
@ -17,7 +17,6 @@
|
|||
|
||||
#include "kernfs-internal.h"
|
||||
|
||||
DECLARE_RWSEM(kernfs_rwsem);
|
||||
static DEFINE_SPINLOCK(kernfs_rename_lock); /* kn->parent and ->name */
|
||||
static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by rename_lock */
|
||||
static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */
|
||||
|
@ -26,7 +25,7 @@ static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */
|
|||
|
||||
static bool kernfs_active(struct kernfs_node *kn)
|
||||
{
|
||||
lockdep_assert_held(&kernfs_rwsem);
|
||||
lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem);
|
||||
return atomic_read(&kn->active) >= 0;
|
||||
}
|
||||
|
||||
|
@ -457,14 +456,15 @@ void kernfs_put_active(struct kernfs_node *kn)
|
|||
* return after draining is complete.
|
||||
*/
|
||||
static void kernfs_drain(struct kernfs_node *kn)
|
||||
__releases(&kernfs_rwsem) __acquires(&kernfs_rwsem)
|
||||
__releases(&kernfs_root(kn)->kernfs_rwsem)
|
||||
__acquires(&kernfs_root(kn)->kernfs_rwsem)
|
||||
{
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
lockdep_assert_held_write(&kernfs_rwsem);
|
||||
lockdep_assert_held_write(&root->kernfs_rwsem);
|
||||
WARN_ON_ONCE(kernfs_active(kn));
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
|
||||
if (kernfs_lockdep(kn)) {
|
||||
rwsem_acquire(&kn->dep_map, 0, 0, _RET_IP_);
|
||||
|
@ -483,7 +483,7 @@ static void kernfs_drain(struct kernfs_node *kn)
|
|||
|
||||
kernfs_drain_open_files(kn);
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -718,11 +718,12 @@ err_unlock:
|
|||
int kernfs_add_one(struct kernfs_node *kn)
|
||||
{
|
||||
struct kernfs_node *parent = kn->parent;
|
||||
struct kernfs_root *root = kernfs_root(parent);
|
||||
struct kernfs_iattrs *ps_iattr;
|
||||
bool has_ns;
|
||||
int ret;
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
|
||||
ret = -EINVAL;
|
||||
has_ns = kernfs_ns_enabled(parent);
|
||||
|
@ -753,7 +754,7 @@ int kernfs_add_one(struct kernfs_node *kn)
|
|||
ps_iattr->ia_mtime = ps_iattr->ia_ctime;
|
||||
}
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
|
||||
/*
|
||||
* Activate the new node unless CREATE_DEACTIVATED is requested.
|
||||
|
@ -767,7 +768,7 @@ int kernfs_add_one(struct kernfs_node *kn)
|
|||
return 0;
|
||||
|
||||
out_unlock:
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -788,7 +789,7 @@ static struct kernfs_node *kernfs_find_ns(struct kernfs_node *parent,
|
|||
bool has_ns = kernfs_ns_enabled(parent);
|
||||
unsigned int hash;
|
||||
|
||||
lockdep_assert_held(&kernfs_rwsem);
|
||||
lockdep_assert_held(&kernfs_root(parent)->kernfs_rwsem);
|
||||
|
||||
if (has_ns != (bool)ns) {
|
||||
WARN(1, KERN_WARNING "kernfs: ns %s in '%s' for '%s'\n",
|
||||
|
@ -820,7 +821,7 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent,
|
|||
size_t len;
|
||||
char *p, *name;
|
||||
|
||||
lockdep_assert_held_read(&kernfs_rwsem);
|
||||
lockdep_assert_held_read(&kernfs_root(parent)->kernfs_rwsem);
|
||||
|
||||
/* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */
|
||||
spin_lock_irq(&kernfs_rename_lock);
|
||||
|
@ -859,11 +860,12 @@ struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent,
|
|||
const char *name, const void *ns)
|
||||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root = kernfs_root(parent);
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
kn = kernfs_find_ns(parent, name, ns);
|
||||
kernfs_get(kn);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
return kn;
|
||||
}
|
||||
|
@ -883,11 +885,12 @@ struct kernfs_node *kernfs_walk_and_get_ns(struct kernfs_node *parent,
|
|||
const char *path, const void *ns)
|
||||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root = kernfs_root(parent);
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
kn = kernfs_walk_ns(parent, path, ns);
|
||||
kernfs_get(kn);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
return kn;
|
||||
}
|
||||
|
@ -912,6 +915,7 @@ struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops,
|
|||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
idr_init(&root->ino_idr);
|
||||
init_rwsem(&root->kernfs_rwsem);
|
||||
INIT_LIST_HEAD(&root->supers);
|
||||
|
||||
/*
|
||||
|
@ -957,7 +961,13 @@ struct kernfs_root *kernfs_create_root(struct kernfs_syscall_ops *scops,
|
|||
*/
|
||||
void kernfs_destroy_root(struct kernfs_root *root)
|
||||
{
|
||||
kernfs_remove(root->kn); /* will also free @root */
|
||||
/*
|
||||
* kernfs_remove takes the kernfs_rwsem that lives in @root, so the root
|
||||
* must not be freed while the removal is in progress.
|
||||
*/
|
||||
kernfs_get(root->kn);
|
||||
kernfs_remove(root->kn);
|
||||
kernfs_put(root->kn); /* will also free @root */
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1035,6 +1045,7 @@ struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
|
|||
static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags)
|
||||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root;
|
||||
|
||||
if (flags & LOOKUP_RCU)
|
||||
return -ECHILD;
|
||||
|
@ -1046,18 +1057,19 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags)
|
|||
/* If the kernfs parent node has changed discard and
|
||||
* proceed to ->lookup.
|
||||
*/
|
||||
down_read(&kernfs_rwsem);
|
||||
spin_lock(&dentry->d_lock);
|
||||
parent = kernfs_dentry_node(dentry->d_parent);
|
||||
if (parent) {
|
||||
spin_unlock(&dentry->d_lock);
|
||||
root = kernfs_root(parent);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
if (kernfs_dir_changed(parent, dentry)) {
|
||||
spin_unlock(&dentry->d_lock);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
spin_unlock(&dentry->d_lock);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
} else
|
||||
spin_unlock(&dentry->d_lock);
|
||||
|
||||
/* The kernfs parent node hasn't changed, leave the
|
||||
* dentry negative and return success.
|
||||
|
@ -1066,7 +1078,8 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags)
|
|||
}
|
||||
|
||||
kn = kernfs_dentry_node(dentry);
|
||||
down_read(&kernfs_rwsem);
|
||||
root = kernfs_root(kn);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
|
||||
/* The kernfs node has been deactivated */
|
||||
if (!kernfs_active(kn))
|
||||
|
@ -1085,10 +1098,10 @@ static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags)
|
|||
kernfs_info(dentry->d_sb)->ns != kn->ns)
|
||||
goto out_bad;
|
||||
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
return 1;
|
||||
out_bad:
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1102,10 +1115,12 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir,
|
|||
{
|
||||
struct kernfs_node *parent = dir->i_private;
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root;
|
||||
struct inode *inode = NULL;
|
||||
const void *ns = NULL;
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
root = kernfs_root(parent);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
if (kernfs_ns_enabled(parent))
|
||||
ns = kernfs_info(dir->i_sb)->ns;
|
||||
|
||||
|
@ -1116,7 +1131,7 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir,
|
|||
* create a negative.
|
||||
*/
|
||||
if (!kernfs_active(kn)) {
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
return NULL;
|
||||
}
|
||||
inode = kernfs_get_inode(dir->i_sb, kn);
|
||||
|
@ -1131,7 +1146,7 @@ static struct dentry *kernfs_iop_lookup(struct inode *dir,
|
|||
*/
|
||||
if (!IS_ERR(inode))
|
||||
kernfs_set_rev(parent, dentry);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
/* instantiate and hash (possibly negative) dentry */
|
||||
return d_splice_alias(inode, dentry);
|
||||
|
@ -1254,7 +1269,7 @@ static struct kernfs_node *kernfs_next_descendant_post(struct kernfs_node *pos,
|
|||
{
|
||||
struct rb_node *rbn;
|
||||
|
||||
lockdep_assert_held_write(&kernfs_rwsem);
|
||||
lockdep_assert_held_write(&kernfs_root(root)->kernfs_rwsem);
|
||||
|
||||
/* if first iteration, visit leftmost descendant which may be root */
|
||||
if (!pos)
|
||||
|
@ -1289,8 +1304,9 @@ static struct kernfs_node *kernfs_next_descendant_post(struct kernfs_node *pos,
|
|||
void kernfs_activate(struct kernfs_node *kn)
|
||||
{
|
||||
struct kernfs_node *pos;
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
|
||||
pos = NULL;
|
||||
while ((pos = kernfs_next_descendant_post(pos, kn))) {
|
||||
|
@ -1304,14 +1320,14 @@ void kernfs_activate(struct kernfs_node *kn)
|
|||
pos->flags |= KERNFS_ACTIVATED;
|
||||
}
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
}
|
||||
|
||||
static void __kernfs_remove(struct kernfs_node *kn)
|
||||
{
|
||||
struct kernfs_node *pos;
|
||||
|
||||
lockdep_assert_held_write(&kernfs_rwsem);
|
||||
lockdep_assert_held_write(&kernfs_root(kn)->kernfs_rwsem);
|
||||
|
||||
/*
|
||||
* Short-circuit if non-root @kn has already finished removal.
|
||||
|
@ -1381,9 +1397,11 @@ static void __kernfs_remove(struct kernfs_node *kn)
|
|||
*/
|
||||
void kernfs_remove(struct kernfs_node *kn)
|
||||
{
|
||||
down_write(&kernfs_rwsem);
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
down_write(&root->kernfs_rwsem);
|
||||
__kernfs_remove(kn);
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1469,8 +1487,9 @@ void kernfs_unbreak_active_protection(struct kernfs_node *kn)
|
|||
bool kernfs_remove_self(struct kernfs_node *kn)
|
||||
{
|
||||
bool ret;
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
kernfs_break_active_protection(kn);
|
||||
|
||||
/*
|
||||
|
@ -1498,9 +1517,9 @@ bool kernfs_remove_self(struct kernfs_node *kn)
|
|||
atomic_read(&kn->active) == KN_DEACTIVATED_BIAS)
|
||||
break;
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
schedule();
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
}
|
||||
finish_wait(waitq, &wait);
|
||||
WARN_ON_ONCE(!RB_EMPTY_NODE(&kn->rb));
|
||||
|
@ -1513,7 +1532,7 @@ bool kernfs_remove_self(struct kernfs_node *kn)
|
|||
*/
|
||||
kernfs_unbreak_active_protection(kn);
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1530,6 +1549,7 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
|
|||
const void *ns)
|
||||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root;
|
||||
|
||||
if (!parent) {
|
||||
WARN(1, KERN_WARNING "kernfs: can not remove '%s', no directory\n",
|
||||
|
@ -1537,13 +1557,14 @@ int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
|
|||
return -ENOENT;
|
||||
}
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
root = kernfs_root(parent);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
|
||||
kn = kernfs_find_ns(parent, name, ns);
|
||||
if (kn)
|
||||
__kernfs_remove(kn);
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
|
||||
if (kn)
|
||||
return 0;
|
||||
|
@ -1562,6 +1583,7 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
|
|||
const char *new_name, const void *new_ns)
|
||||
{
|
||||
struct kernfs_node *old_parent;
|
||||
struct kernfs_root *root;
|
||||
const char *old_name = NULL;
|
||||
int error;
|
||||
|
||||
|
@ -1569,7 +1591,8 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
|
|||
if (!kn->parent)
|
||||
return -EINVAL;
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
root = kernfs_root(kn);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
|
||||
error = -ENOENT;
|
||||
if (!kernfs_active(kn) || !kernfs_active(new_parent) ||
|
||||
|
@ -1623,7 +1646,7 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
|
|||
|
||||
error = 0;
|
||||
out:
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
return error;
|
||||
}
|
||||
|
||||
|
@ -1694,11 +1717,14 @@ static int kernfs_fop_readdir(struct file *file, struct dir_context *ctx)
|
|||
struct dentry *dentry = file->f_path.dentry;
|
||||
struct kernfs_node *parent = kernfs_dentry_node(dentry);
|
||||
struct kernfs_node *pos = file->private_data;
|
||||
struct kernfs_root *root;
|
||||
const void *ns = NULL;
|
||||
|
||||
if (!dir_emit_dots(file, ctx))
|
||||
return 0;
|
||||
down_read(&kernfs_rwsem);
|
||||
|
||||
root = kernfs_root(parent);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
|
||||
if (kernfs_ns_enabled(parent))
|
||||
ns = kernfs_info(dentry->d_sb)->ns;
|
||||
|
@ -1715,12 +1741,12 @@ static int kernfs_fop_readdir(struct file *file, struct dir_context *ctx)
|
|||
file->private_data = pos;
|
||||
kernfs_get(pos);
|
||||
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
if (!dir_emit(ctx, name, len, ino, type))
|
||||
return 0;
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
}
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
file->private_data = NULL;
|
||||
ctx->pos = INT_MAX;
|
||||
return 0;
|
||||
|
|
|
@ -847,6 +847,7 @@ static void kernfs_notify_workfn(struct work_struct *work)
|
|||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_super_info *info;
|
||||
struct kernfs_root *root;
|
||||
repeat:
|
||||
/* pop one off the notify_list */
|
||||
spin_lock_irq(&kernfs_notify_lock);
|
||||
|
@ -859,8 +860,9 @@ repeat:
|
|||
kn->attr.notify_next = NULL;
|
||||
spin_unlock_irq(&kernfs_notify_lock);
|
||||
|
||||
root = kernfs_root(kn);
|
||||
/* kick fsnotify */
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
|
||||
list_for_each_entry(info, &kernfs_root(kn)->supers, node) {
|
||||
struct kernfs_node *parent;
|
||||
|
@ -898,7 +900,7 @@ repeat:
|
|||
iput(inode);
|
||||
}
|
||||
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
kernfs_put(kn);
|
||||
goto repeat;
|
||||
}
|
||||
|
|
|
@ -99,10 +99,11 @@ int __kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr)
|
|||
int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr)
|
||||
{
|
||||
int ret;
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
ret = __kernfs_setattr(kn, iattr);
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -111,12 +112,14 @@ int kernfs_iop_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
|
|||
{
|
||||
struct inode *inode = d_inode(dentry);
|
||||
struct kernfs_node *kn = inode->i_private;
|
||||
struct kernfs_root *root;
|
||||
int error;
|
||||
|
||||
if (!kn)
|
||||
return -EINVAL;
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
root = kernfs_root(kn);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
error = setattr_prepare(&init_user_ns, dentry, iattr);
|
||||
if (error)
|
||||
goto out;
|
||||
|
@ -129,7 +132,7 @@ int kernfs_iop_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
|
|||
setattr_copy(&init_user_ns, inode, iattr);
|
||||
|
||||
out:
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
return error;
|
||||
}
|
||||
|
||||
|
@ -184,13 +187,14 @@ int kernfs_iop_getattr(struct user_namespace *mnt_userns,
|
|||
{
|
||||
struct inode *inode = d_inode(path->dentry);
|
||||
struct kernfs_node *kn = inode->i_private;
|
||||
struct kernfs_root *root = kernfs_root(kn);
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
spin_lock(&inode->i_lock);
|
||||
kernfs_refresh_inode(kn, inode);
|
||||
generic_fillattr(&init_user_ns, inode, stat);
|
||||
spin_unlock(&inode->i_lock);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -274,19 +278,21 @@ int kernfs_iop_permission(struct user_namespace *mnt_userns,
|
|||
struct inode *inode, int mask)
|
||||
{
|
||||
struct kernfs_node *kn;
|
||||
struct kernfs_root *root;
|
||||
int ret;
|
||||
|
||||
if (mask & MAY_NOT_BLOCK)
|
||||
return -ECHILD;
|
||||
|
||||
kn = inode->i_private;
|
||||
root = kernfs_root(kn);
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
spin_lock(&inode->i_lock);
|
||||
kernfs_refresh_inode(kn, inode);
|
||||
ret = generic_permission(&init_user_ns, inode, mask);
|
||||
spin_unlock(&inode->i_lock);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -236,6 +236,7 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
|
|||
static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *kfc)
|
||||
{
|
||||
struct kernfs_super_info *info = kernfs_info(sb);
|
||||
struct kernfs_root *kf_root = kfc->root;
|
||||
struct inode *inode;
|
||||
struct dentry *root;
|
||||
|
||||
|
@ -255,9 +256,9 @@ static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *k
|
|||
sb->s_shrink.seeks = 0;
|
||||
|
||||
/* get root inode, initialize and unlock it */
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&kf_root->kernfs_rwsem);
|
||||
inode = kernfs_get_inode(sb, info->root->kn);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&kf_root->kernfs_rwsem);
|
||||
if (!inode) {
|
||||
pr_debug("kernfs: could not get root inode\n");
|
||||
return -ENOMEM;
|
||||
|
@ -334,6 +335,7 @@ int kernfs_get_tree(struct fs_context *fc)
|
|||
|
||||
if (!sb->s_root) {
|
||||
struct kernfs_super_info *info = kernfs_info(sb);
|
||||
struct kernfs_root *root = kfc->root;
|
||||
|
||||
kfc->new_sb_created = true;
|
||||
|
||||
|
@ -344,9 +346,9 @@ int kernfs_get_tree(struct fs_context *fc)
|
|||
}
|
||||
sb->s_flags |= SB_ACTIVE;
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
list_add(&info->node, &info->root->supers);
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
}
|
||||
|
||||
fc->root = dget(sb->s_root);
|
||||
|
@ -371,10 +373,11 @@ void kernfs_free_fs_context(struct fs_context *fc)
|
|||
void kernfs_kill_sb(struct super_block *sb)
|
||||
{
|
||||
struct kernfs_super_info *info = kernfs_info(sb);
|
||||
struct kernfs_root *root = info->root;
|
||||
|
||||
down_write(&kernfs_rwsem);
|
||||
down_write(&root->kernfs_rwsem);
|
||||
list_del(&info->node);
|
||||
up_write(&kernfs_rwsem);
|
||||
up_write(&root->kernfs_rwsem);
|
||||
|
||||
/*
|
||||
* Remove the superblock from fs_supers/s_instances
|
||||
|
|
|
@ -113,11 +113,12 @@ static int kernfs_getlink(struct inode *inode, char *path)
|
|||
struct kernfs_node *kn = inode->i_private;
|
||||
struct kernfs_node *parent = kn->parent;
|
||||
struct kernfs_node *target = kn->symlink.target_kn;
|
||||
struct kernfs_root *root = kernfs_root(parent);
|
||||
int error;
|
||||
|
||||
down_read(&kernfs_rwsem);
|
||||
down_read(&root->kernfs_rwsem);
|
||||
error = kernfs_get_target_path(parent, target, path);
|
||||
up_read(&kernfs_rwsem);
|
||||
up_read(&root->kernfs_rwsem);
|
||||
|
||||
return error;
|
||||
}
|
||||
|
|
|
@ -57,7 +57,7 @@ static void nilfs_##name##_attr_release(struct kobject *kobj) \
|
|||
complete(&subgroups->sg_##name##_kobj_unregister); \
|
||||
} \
|
||||
static struct kobj_type nilfs_##name##_ktype = { \
|
||||
.default_attrs = nilfs_##name##_attrs, \
|
||||
.default_groups = nilfs_##name##_groups, \
|
||||
.sysfs_ops = &nilfs_##name##_attr_ops, \
|
||||
.release = nilfs_##name##_attr_release, \
|
||||
}
|
||||
|
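The nilfs2 hunks in this file all follow the same default_attrs -> default_groups conversion used across the series. A minimal sketch of the pattern with illustrative names (example_attr, example_sysfs_ops and example_release are hypothetical; ATTRIBUTE_GROUPS() is the <linux/sysfs.h> macro that builds example_groups[] from example_attrs[]):

static struct attribute *example_attrs[] = {
	&example_attr.attr,
	NULL,
};
ATTRIBUTE_GROUPS(example);	/* defines example_group and example_groups[] */

static struct kobj_type example_ktype = {
	.default_groups	= example_groups,	/* was: .default_attrs = example_attrs */
	.sysfs_ops	= &example_sysfs_ops,
	.release	= example_release,
};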
@ -129,6 +129,7 @@ static struct attribute *nilfs_snapshot_attrs[] = {
|
|||
NILFS_SNAPSHOT_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_snapshot);
|
||||
|
||||
static ssize_t nilfs_snapshot_attr_show(struct kobject *kobj,
|
||||
struct attribute *attr, char *buf)
|
||||
|
@ -166,7 +167,7 @@ static const struct sysfs_ops nilfs_snapshot_attr_ops = {
|
|||
};
|
||||
|
||||
static struct kobj_type nilfs_snapshot_ktype = {
|
||||
.default_attrs = nilfs_snapshot_attrs,
|
||||
.default_groups = nilfs_snapshot_groups,
|
||||
.sysfs_ops = &nilfs_snapshot_attr_ops,
|
||||
.release = nilfs_snapshot_attr_release,
|
||||
};
|
||||
|
@ -226,6 +227,7 @@ static struct attribute *nilfs_mounted_snapshots_attrs[] = {
|
|||
NILFS_MOUNTED_SNAPSHOTS_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_mounted_snapshots);
|
||||
|
||||
NILFS_DEV_INT_GROUP_OPS(mounted_snapshots, dev);
|
||||
NILFS_DEV_INT_GROUP_TYPE(mounted_snapshots, dev);
|
||||
|
@ -339,6 +341,7 @@ static struct attribute *nilfs_checkpoints_attrs[] = {
|
|||
NILFS_CHECKPOINTS_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_checkpoints);
|
||||
|
||||
NILFS_DEV_INT_GROUP_OPS(checkpoints, dev);
|
||||
NILFS_DEV_INT_GROUP_TYPE(checkpoints, dev);
|
||||
|
@ -428,6 +431,7 @@ static struct attribute *nilfs_segments_attrs[] = {
|
|||
NILFS_SEGMENTS_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_segments);
|
||||
|
||||
NILFS_DEV_INT_GROUP_OPS(segments, dev);
|
||||
NILFS_DEV_INT_GROUP_TYPE(segments, dev);
|
||||
|
@ -689,6 +693,7 @@ static struct attribute *nilfs_segctor_attrs[] = {
|
|||
NILFS_SEGCTOR_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_segctor);
|
||||
|
||||
NILFS_DEV_INT_GROUP_OPS(segctor, dev);
|
||||
NILFS_DEV_INT_GROUP_TYPE(segctor, dev);
|
||||
|
@ -816,6 +821,7 @@ static struct attribute *nilfs_superblock_attrs[] = {
|
|||
NILFS_SUPERBLOCK_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_superblock);
|
||||
|
||||
NILFS_DEV_INT_GROUP_OPS(superblock, dev);
|
||||
NILFS_DEV_INT_GROUP_TYPE(superblock, dev);
|
||||
|
@ -924,6 +930,7 @@ static struct attribute *nilfs_dev_attrs[] = {
|
|||
NILFS_DEV_ATTR_LIST(README),
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(nilfs_dev);
|
||||
|
||||
static ssize_t nilfs_dev_attr_show(struct kobject *kobj,
|
||||
struct attribute *attr, char *buf)
|
||||
|
@ -961,7 +968,7 @@ static const struct sysfs_ops nilfs_dev_attr_ops = {
|
|||
};
|
||||
|
||||
static struct kobj_type nilfs_dev_ktype = {
|
||||
.default_attrs = nilfs_dev_attrs,
|
||||
.default_groups = nilfs_dev_groups,
|
||||
.sysfs_ops = &nilfs_dev_attr_ops,
|
||||
.release = nilfs_dev_attr_release,
|
||||
};
|
||||
|
|
|
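The nilfs2 hunks above all follow the same mechanical recipe, sketched below with hypothetical "example"/"foo" names rather than the real nilfs2 ones: ATTRIBUTE_GROUPS(example) generates an attribute_group wrapping example_attrs[] plus a NULL-terminated example_groups[] array, and the kobj_type then points its default_groups field at that array in place of the removed default_attrs field.

	static ssize_t foo_show(struct kobject *kobj, struct kobj_attribute *attr,
				char *buf)
	{
		return sysfs_emit(buf, "example\n");
	}
	static struct kobj_attribute foo_attr = __ATTR_RO(foo);

	static struct attribute *example_attrs[] = {
		&foo_attr.attr,
		NULL,
	};
	ATTRIBUTE_GROUPS(example);			/* emits example_groups[] */

	static struct kobj_type example_ktype = {
		.sysfs_ops	= &kobj_sysfs_ops,
		.default_groups	= example_groups,	/* was: .default_attrs */
	};
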
@@ -11,12 +11,172 @@
 #include <linux/device.h>
 #include <linux/mod_devicetable.h>
 
+/**
+ * DOC: DEVICE_LIFESPAN
+ *
+ * The registering driver is the entity that allocates memory for the
+ * auxiliary_device and registers it on the auxiliary bus. It is important to
+ * note that, as opposed to the platform bus, the registering driver is wholly
+ * responsible for the management of the memory used for the device object.
+ *
+ * To be clear the memory for the auxiliary_device is freed in the release()
+ * callback defined by the registering driver. The registering driver should
+ * only call auxiliary_device_delete() and then auxiliary_device_uninit() when
+ * it is done with the device. The release() function is then automatically
+ * called if and when other code releases their reference to the devices.
+ *
+ * A parent object, defined in the shared header file, contains the
+ * auxiliary_device. It also contains a pointer to the shared object(s), which
+ * also is defined in the shared header. Both the parent object and the shared
+ * object(s) are allocated by the registering driver. This layout allows the
+ * auxiliary_driver's registering module to perform a container_of() call to go
+ * from the pointer to the auxiliary_device, that is passed during the call to
+ * the auxiliary_driver's probe function, up to the parent object, and then
+ * have access to the shared object(s).
+ *
+ * The memory for the shared object(s) must have a lifespan equal to, or
+ * greater than, the lifespan of the memory for the auxiliary_device. The
+ * auxiliary_driver should only consider that the shared object is valid as
+ * long as the auxiliary_device is still registered on the auxiliary bus. It
+ * is up to the registering driver to manage (e.g. free or keep available) the
+ * memory for the shared object beyond the life of the auxiliary_device.
+ *
+ * The registering driver must unregister all auxiliary devices before its own
+ * driver.remove() is completed. An easy way to ensure this is to use the
+ * devm_add_action_or_reset() call to register a function against the parent
+ * device which unregisters the auxiliary device object(s).
+ *
+ * Finally, any operations which operate on the auxiliary devices must continue
+ * to function (if only to return an error) after the registering driver
+ * unregisters the auxiliary device.
+ */
+
+/**
+ * struct auxiliary_device - auxiliary device object.
+ * @dev: Device,
+ *       The release and parent fields of the device structure must be filled
+ *       in
+ * @name: Match name found by the auxiliary device driver,
+ * @id: unique identifier if multiple devices of the same name are exported,
+ *
+ * An auxiliary_device represents a part of its parent device's functionality.
+ * It is given a name that, combined with the registering drivers
+ * KBUILD_MODNAME, creates a match_name that is used for driver binding, and an
+ * id that combined with the match_name provide a unique name to register with
+ * the bus subsystem. For example, a driver registering an auxiliary device is
+ * named 'foo_mod.ko' and the subdevice is named 'foo_dev'. The match name is
+ * therefore 'foo_mod.foo_dev'.
+ *
+ * Registering an auxiliary_device is a three-step process.
+ *
+ * First, a 'struct auxiliary_device' needs to be defined or allocated for each
+ * sub-device desired. The name, id, dev.release, and dev.parent fields of
+ * this structure must be filled in as follows.
+ *
+ * The 'name' field is to be given a name that is recognized by the auxiliary
+ * driver. If two auxiliary_devices with the same match_name, eg
+ * "foo_mod.foo_dev", are registered onto the bus, they must have unique id
+ * values (e.g. "x" and "y") so that the registered devices names are
+ * "foo_mod.foo_dev.x" and "foo_mod.foo_dev.y". If match_name + id are not
+ * unique, then the device_add fails and generates an error message.
+ *
+ * The auxiliary_device.dev.type.release or auxiliary_device.dev.release must
+ * be populated with a non-NULL pointer to successfully register the
+ * auxiliary_device. This release call is where resources associated with the
+ * auxiliary device must be free'ed. Because once the device is placed on the
+ * bus the parent driver can not tell what other code may have a reference to
+ * this data.
+ *
+ * The auxiliary_device.dev.parent should be set. Typically to the registering
+ * drivers device.
+ *
+ * Second, call auxiliary_device_init(), which checks several aspects of the
+ * auxiliary_device struct and performs a device_initialize(). After this step
+ * completes, any error state must have a call to auxiliary_device_uninit() in
+ * its resolution path.
+ *
+ * The third and final step in registering an auxiliary_device is to perform a
+ * call to auxiliary_device_add(), which sets the name of the device and adds
+ * the device to the bus.
+ *
+ * .. code-block:: c
+ *
+ *      #define MY_DEVICE_NAME "foo_dev"
+ *
+ *      ...
+ *
+ *      struct auxiliary_device *my_aux_dev = my_aux_dev_alloc(xxx);
+ *
+ *      // Step 1:
+ *      my_aux_dev->name = MY_DEVICE_NAME;
+ *      my_aux_dev->id = my_unique_id_alloc(xxx);
+ *      my_aux_dev->dev.release = my_aux_dev_release;
+ *      my_aux_dev->dev.parent = my_dev;
+ *
+ *      // Step 2:
+ *      if (auxiliary_device_init(my_aux_dev))
+ *              goto fail;
+ *
+ *      // Step 3:
+ *      if (auxiliary_device_add(my_aux_dev)) {
+ *              auxiliary_device_uninit(my_aux_dev);
+ *              goto fail;
+ *      }
+ *
+ *      ...
+ *
+ *
+ * Unregistering an auxiliary_device is a two-step process to mirror the
+ * register process. First call auxiliary_device_delete(), then call
+ * auxiliary_device_uninit().
+ *
+ * .. code-block:: c
+ *
+ *      auxiliary_device_delete(my_dev->my_aux_dev);
+ *      auxiliary_device_uninit(my_dev->my_aux_dev);
+ */
 struct auxiliary_device {
 	struct device dev;
 	const char *name;
 	u32 id;
 };
 
+/**
+ * struct auxiliary_driver - Definition of an auxiliary bus driver
+ * @probe: Called when a matching device is added to the bus.
+ * @remove: Called when device is removed from the bus.
+ * @shutdown: Called at shut-down time to quiesce the device.
+ * @suspend: Called to put the device to sleep mode. Usually to a power state.
+ * @resume: Called to bring a device from sleep mode.
+ * @name: Driver name.
+ * @driver: Core driver structure.
+ * @id_table: Table of devices this driver should match on the bus.
+ *
+ * Auxiliary drivers follow the standard driver model convention, where
+ * discovery/enumeration is handled by the core, and drivers provide probe()
+ * and remove() methods. They support power management and shutdown
+ * notifications using the standard conventions.
+ *
+ * Auxiliary drivers register themselves with the bus by calling
+ * auxiliary_driver_register(). The id_table contains the match_names of
+ * auxiliary devices that a driver can bind with.
+ *
+ * .. code-block:: c
+ *
+ *      static const struct auxiliary_device_id my_auxiliary_id_table[] = {
+ *              { .name = "foo_mod.foo_dev" },
+ *              {},
+ *      };
+ *
+ *      MODULE_DEVICE_TABLE(auxiliary, my_auxiliary_id_table);
+ *
+ *      struct auxiliary_driver my_drv = {
+ *              .name = "myauxiliarydrv",
+ *              .id_table = my_auxiliary_id_table,
+ *              .probe = my_drv_probe,
+ *              .remove = my_drv_remove
+ *      };
+ */
 struct auxiliary_driver {
 	int (*probe)(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id);
 	void (*remove)(struct auxiliary_device *auxdev);

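The lifespan rules documented above are easiest to follow end to end. The following is a hedged sketch, not taken from the patch, that ties them together: the parent object embeds the auxiliary_device, release() is the only place its memory is freed, and a devm action guarantees delete/uninit before the registering driver's own remove() completes. All my_* names, the "foo_dev" string and the shared_state field are hypothetical.

	#include <linux/auxiliary_bus.h>
	#include <linux/device.h>
	#include <linux/slab.h>

	struct my_parent_obj {				/* hypothetical "parent object" */
		struct auxiliary_device adev;
		long shared_state;			/* stand-in for the shared object */
	};

	static void my_adev_release(struct device *dev)
	{
		struct my_parent_obj *po = container_of(dev, struct my_parent_obj, adev.dev);

		kfree(po);				/* the only place the memory is freed */
	}

	static void my_adev_unregister(void *data)
	{
		struct auxiliary_device *adev = data;

		auxiliary_device_delete(adev);
		auxiliary_device_uninit(adev);		/* drops the ref; release() may run now */
	}

	static int my_register_adev(struct device *parent)
	{
		struct my_parent_obj *po;
		int ret;

		po = kzalloc(sizeof(*po), GFP_KERNEL);
		if (!po)
			return -ENOMEM;

		/* Step 1: fill in name, id, dev.release and dev.parent. */
		po->adev.name = "foo_dev";
		po->adev.id = 0;
		po->adev.dev.parent = parent;
		po->adev.dev.release = my_adev_release;

		/* Step 2: initialize; init fails before taking any reference. */
		ret = auxiliary_device_init(&po->adev);
		if (ret) {
			kfree(po);
			return ret;
		}

		/* Step 3: add; any later error path must go through uninit(). */
		ret = auxiliary_device_add(&po->adev);
		if (ret) {
			auxiliary_device_uninit(&po->adev);
			return ret;
		}

		/* Guarantee delete/uninit before the registering driver's remove(). */
		return devm_add_action_or_reset(parent, my_adev_unregister, &po->adev);
	}
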
@@ -28,6 +188,16 @@ struct auxiliary_driver {
 	const struct auxiliary_device_id *id_table;
 };
 
+static inline void *auxiliary_get_drvdata(struct auxiliary_device *auxdev)
+{
+	return dev_get_drvdata(&auxdev->dev);
+}
+
+static inline void auxiliary_set_drvdata(struct auxiliary_device *auxdev, void *data)
+{
+	dev_set_drvdata(&auxdev->dev, data);
+}
+
 static inline struct auxiliary_device *to_auxiliary_dev(struct device *dev)
 {
 	return container_of(dev, struct auxiliary_device, dev);
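A short hedged sketch of how the two new helpers are meant to be used from an auxiliary driver's probe()/remove() pair; this is the shape of the "Use auxiliary_device driver data helpers" conversions elsewhere in this pull, but my_state and the callback names here are hypothetical.

	#include <linux/auxiliary_bus.h>
	#include <linux/slab.h>

	struct my_state {				/* hypothetical per-device state */
		int opened;
	};

	static int my_drv_probe(struct auxiliary_device *auxdev,
				const struct auxiliary_device_id *id)
	{
		struct my_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

		if (!state)
			return -ENOMEM;

		/* replaces open-coded dev_set_drvdata(&auxdev->dev, state) */
		auxiliary_set_drvdata(auxdev, state);
		return 0;
	}

	static void my_drv_remove(struct auxiliary_device *auxdev)
	{
		struct my_state *state = auxiliary_get_drvdata(auxdev);

		kfree(state);
	}
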
@@ -66,6 +236,10 @@ void auxiliary_driver_unregister(struct auxiliary_driver *auxdrv);
  * Helper macro for auxiliary drivers which do not do anything special in
  * module init/exit. This eliminates a lot of boilerplate. Each module may only
  * use this macro once, and calling it replaces module_init() and module_exit()
+ *
+ * .. code-block:: c
+ *
+ *      module_auxiliary_driver(my_drv);
  */
 #define module_auxiliary_driver(__auxiliary_driver) \
 	module_driver(__auxiliary_driver, auxiliary_driver_register, auxiliary_driver_unregister)

@@ -6,7 +6,6 @@
 #ifndef __LINUX_KERNFS_H
 #define __LINUX_KERNFS_H
 
-#include <linux/kernel.h>
 #include <linux/err.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
@@ -14,14 +13,18 @@
 #include <linux/lockdep.h>
 #include <linux/rbtree.h>
 #include <linux/atomic.h>
+#include <linux/bug.h>
+#include <linux/types.h>
 #include <linux/uidgid.h>
 #include <linux/wait.h>
+#include <linux/rwsem.h>
 
 struct file;
 struct dentry;
 struct iattr;
 struct seq_file;
 struct vm_area_struct;
+struct vm_operations_struct;
 struct super_block;
 struct file_system_type;
 struct poll_table_struct;
@@ -197,6 +200,7 @@ struct kernfs_root {
 	struct list_head supers;
 
 	wait_queue_head_t deactivate_waitq;
+	struct rw_semaphore kernfs_rwsem;
 };
 
 struct kernfs_open_file {

@@ -19,10 +19,10 @@
 #include <linux/list.h>
 #include <linux/sysfs.h>
 #include <linux/compiler.h>
+#include <linux/container_of.h>
 #include <linux/spinlock.h>
 #include <linux/kref.h>
 #include <linux/kobject_ns.h>
-#include <linux/kernel.h>
 #include <linux/wait.h>
 #include <linux/atomic.h>
 #include <linux/workqueue.h>
@@ -66,7 +66,7 @@ struct kobject {
 	struct list_head	entry;
 	struct kobject		*parent;
 	struct kset		*kset;
-	struct kobj_type	*ktype;
+	const struct kobj_type	*ktype;
 	struct kernfs_node	*sd; /* sysfs directory entry */
 	struct kref		kref;
 #ifdef CONFIG_DEBUG_KOBJECT_RELEASE
@@ -90,13 +90,13 @@ static inline const char *kobject_name(const struct kobject *kobj)
 	return kobj->name;
 }
 
-extern void kobject_init(struct kobject *kobj, struct kobj_type *ktype);
+extern void kobject_init(struct kobject *kobj, const struct kobj_type *ktype);
 extern __printf(3, 4) __must_check
 int kobject_add(struct kobject *kobj, struct kobject *parent,
 		const char *fmt, ...);
 extern __printf(4, 5) __must_check
 int kobject_init_and_add(struct kobject *kobj,
-			 struct kobj_type *ktype, struct kobject *parent,
+			 const struct kobj_type *ktype, struct kobject *parent,
 			 const char *fmt, ...);
 
 extern void kobject_del(struct kobject *kobj);
@@ -117,23 +117,6 @@ extern void kobject_get_ownership(struct kobject *kobj,
 				  kuid_t *uid, kgid_t *gid);
 extern char *kobject_get_path(struct kobject *kobj, gfp_t flag);
 
-/**
- * kobject_has_children - Returns whether a kobject has children.
- * @kobj: the object to test
- *
- * This will return whether a kobject has other kobjects as children.
- *
- * It does NOT account for the presence of attribute files, only sub
- * directories. It also assumes there is no concurrent addition or
- * removal of such children, and thus relies on external locking.
- */
-static inline bool kobject_has_children(struct kobject *kobj)
-{
-	WARN_ON_ONCE(kref_read(&kobj->kref) == 0);
-
-	return kobj->sd && kobj->sd->dir.subdirs;
-}
-
 struct kobj_type {
 	void (*release)(struct kobject *kobj);
 	const struct sysfs_ops *sysfs_ops;
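What the kobj_type constification above buys is easiest to see from a caller. A hedged sketch with hypothetical example_* names (not any in-tree user): because struct kobject now stores, and kobject_init_and_add() now accepts, a const struct kobj_type, the type can be declared const and live in rodata.

	#include <linux/kobject.h>
	#include <linux/slab.h>

	struct example_obj {
		struct kobject kobj;
		int value;
	};

	static void example_release(struct kobject *kobj)
	{
		kfree(container_of(kobj, struct example_obj, kobj));
	}

	/* can now be const and placed in rodata */
	static const struct kobj_type example_ktype = {
		.release	= example_release,
		.sysfs_ops	= &kobj_sysfs_ops,
	};

	static struct example_obj *example_create(struct kobject *parent)
	{
		struct example_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

		if (!obj)
			return NULL;

		if (kobject_init_and_add(&obj->kobj, &example_ktype, parent, "example")) {
			kobject_put(&obj->kobj);	/* release() frees obj */
			return NULL;
		}
		return obj;
	}
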
@@ -153,10 +136,9 @@ struct kobj_uevent_env {
 };
 
 struct kset_uevent_ops {
-	int (* const filter)(struct kset *kset, struct kobject *kobj);
-	const char *(* const name)(struct kset *kset, struct kobject *kobj);
-	int (* const uevent)(struct kset *kset, struct kobject *kobj,
-			     struct kobj_uevent_env *env);
+	int (* const filter)(struct kobject *kobj);
+	const char *(* const name)(struct kobject *kobj);
+	int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env);
 };
 
 struct kobj_attribute {
@@ -217,7 +199,7 @@ static inline void kset_put(struct kset *k)
 	kobject_put(&k->kobj);
 }
 
-static inline struct kobj_type *get_ktype(struct kobject *kobj)
+static inline const struct kobj_type *get_ktype(struct kobject *kobj)
 {
 	return kobj->ktype;
 }

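For code that implements kset_uevent_ops, the new callback shapes look as sketched below (example_* names are hypothetical): the kset argument is gone, and a callback that still needs the kset can reach it through kobj->kset, which is what the in-tree conversions in this series rely on.

	#include <linux/kobject.h>

	static int example_uevent_filter(struct kobject *kobj)
	{
		return 1;				/* emit events for every kobject */
	}

	static const char *example_uevent_name(struct kobject *kobj)
	{
		return kobject_name(&kobj->kset->kobj);	/* was passed in as 'kset' before */
	}

	static int example_uevent(struct kobject *kobj, struct kobj_uevent_env *env)
	{
		return add_uevent_var(env, "EXAMPLE=%s", kobject_name(kobj));
	}

	static const struct kset_uevent_ops example_uevent_ops = {
		.filter	= example_uevent_filter,
		.name	= example_uevent_name,
		.uevent	= example_uevent,
	};
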
@@ -180,6 +180,19 @@ static inline int cpu_to_mem(int cpu)
 
 #endif /* [!]CONFIG_HAVE_MEMORYLESS_NODES */
 
+#if defined(topology_die_id) && defined(topology_die_cpumask)
+#define TOPOLOGY_DIE_SYSFS
+#endif
+#if defined(topology_cluster_id) && defined(topology_cluster_cpumask)
+#define TOPOLOGY_CLUSTER_SYSFS
+#endif
+#if defined(topology_book_id) && defined(topology_book_cpumask)
+#define TOPOLOGY_BOOK_SYSFS
+#endif
+#if defined(topology_drawer_id) && defined(topology_drawer_cpumask)
+#define TOPOLOGY_DRAWER_SYSFS
+#endif
+
 #ifndef topology_physical_package_id
 #define topology_physical_package_id(cpu)	((void)(cpu), -1)
 #endif
@@ -192,6 +205,12 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
+#ifndef topology_book_id
+#define topology_book_id(cpu)			((void)(cpu), -1)
+#endif
+#ifndef topology_drawer_id
+#define topology_drawer_id(cpu)			((void)(cpu), -1)
+#endif
 #ifndef topology_sibling_cpumask
 #define topology_sibling_cpumask(cpu)		cpumask_of(cpu)
 #endif
@@ -204,6 +223,12 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_book_cpumask
+#define topology_book_cpumask(cpu)		cpumask_of(cpu)
+#endif
+#ifndef topology_drawer_cpumask
+#define topology_drawer_cpumask(cpu)		cpumask_of(cpu)
+#endif
 
 #if defined(CONFIG_SCHED_SMT) && !defined(cpu_smt_mask)
 static inline const struct cpumask *cpu_smt_mask(int cpu)
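The new TOPOLOGY_*_SYSFS guards exist so the sysfs consumer only builds die/cluster/book/drawer attributes when an architecture provides both the id and the cpumask accessor. A hedged sketch of the consuming pattern; the attribute below is illustrative only and is not the actual drivers/base/topology.c code.

	#include <linux/device.h>
	#include <linux/sysfs.h>
	#include <linux/topology.h>

	#ifdef TOPOLOGY_BOOK_SYSFS
	static ssize_t book_id_show(struct device *dev, struct device_attribute *attr,
				    char *buf)
	{
		/* for CPU devices, dev->id is the CPU number */
		return sysfs_emit(buf, "%d\n", topology_book_id(dev->id));
	}
	static DEVICE_ATTR_RO(book_id);
	#endif
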
@@ -1086,7 +1086,11 @@
 
 /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */
 #define PCI_DVSEC_HEADER1		0x4 /* Designated Vendor-Specific Header1 */
+#define PCI_DVSEC_HEADER1_VID(x)	((x) & 0xffff)
+#define PCI_DVSEC_HEADER1_REV(x)	(((x) >> 16) & 0xf)
+#define PCI_DVSEC_HEADER1_LEN(x)	(((x) >> 20) & 0xfff)
 #define PCI_DVSEC_HEADER2		0x8 /* Designated Vendor-Specific Header2 */
+#define PCI_DVSEC_HEADER2_ID(x)		((x) & 0xffff)
 
 /* Data Link Feature */
 #define PCI_DLF_CAP		0x04	/* Capabilities Register */
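The new macros decode the fields packed into the two DVSEC header dwords. A hedged sketch of their intended use; the function is hypothetical and assumes the caller has already located the DVSEC capability offset in config space.

	#include <linux/pci.h>

	static void example_decode_dvsec(struct pci_dev *pdev, int dvsec)
	{
		u32 hdr1, hdr2;

		pci_read_config_dword(pdev, dvsec + PCI_DVSEC_HEADER1, &hdr1);
		pci_read_config_dword(pdev, dvsec + PCI_DVSEC_HEADER2, &hdr2);

		pci_info(pdev, "DVSEC vid=%#x rev=%u len=%u id=%#x\n",
			 PCI_DVSEC_HEADER1_VID(hdr1),	/* bits 15:0  */
			 PCI_DVSEC_HEADER1_REV(hdr1),	/* bits 19:16 */
			 PCI_DVSEC_HEADER1_LEN(hdr1),	/* bits 31:20 */
			 PCI_DVSEC_HEADER2_ID(hdr2));	/* bits 15:0  */
	}
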
@@ -926,9 +926,9 @@ static const struct sysfs_ops module_sysfs_ops = {
 	.store = module_attr_store,
 };
 
-static int uevent_filter(struct kset *kset, struct kobject *kobj)
+static int uevent_filter(struct kobject *kobj)
 {
-	struct kobj_type *ktype = get_ktype(kobj);
+	const struct kobj_type *ktype = get_ktype(kobj);
 
 	if (ktype == &module_ktype)
 		return 1;

@@ -65,7 +65,7 @@ void kobject_get_ownership(struct kobject *kobj, kuid_t *uid, kgid_t *gid)
  */
 static int populate_dir(struct kobject *kobj)
 {
-	struct kobj_type *t = get_ktype(kobj);
+	const struct kobj_type *t = get_ktype(kobj);
 	struct attribute *attr;
 	int error = 0;
 	int i;
@@ -346,7 +346,7 @@ EXPORT_SYMBOL(kobject_set_name);
  * to kobject_put(), not by a call to kfree directly to ensure that all of
  * the memory is cleaned up properly.
  */
-void kobject_init(struct kobject *kobj, struct kobj_type *ktype)
+void kobject_init(struct kobject *kobj, const struct kobj_type *ktype)
 {
 	char *err_str;
 
@@ -461,7 +461,7 @@ EXPORT_SYMBOL(kobject_add);
  * same type of error handling after a call to kobject_add() and kobject
  * lifetime rules are the same here.
  */
-int kobject_init_and_add(struct kobject *kobj, struct kobj_type *ktype,
+int kobject_init_and_add(struct kobject *kobj, const struct kobj_type *ktype,
 			 struct kobject *parent, const char *fmt, ...)
 {
 	va_list args;
@@ -679,7 +679,7 @@ EXPORT_SYMBOL(kobject_get_unless_zero);
 static void kobject_cleanup(struct kobject *kobj)
 {
 	struct kobject *parent = kobj->parent;
-	struct kobj_type *t = get_ktype(kobj);
+	const struct kobj_type *t = get_ktype(kobj);
 	const char *name = kobj->name;
 
 	pr_debug("kobject: '%s' (%p): %s, parent %p\n",

@@ -501,7 +501,7 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
 	}
 	/* skip the event, if the filter returns zero. */
 	if (uevent_ops && uevent_ops->filter)
-		if (!uevent_ops->filter(kset, kobj)) {
+		if (!uevent_ops->filter(kobj)) {
 			pr_debug("kobject: '%s' (%p): %s: filter function "
 				 "caused the event to drop!\n",
 				 kobject_name(kobj), kobj, __func__);
@@ -510,7 +510,7 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
 
 	/* originating subsystem */
 	if (uevent_ops && uevent_ops->name)
-		subsystem = uevent_ops->name(kset, kobj);
+		subsystem = uevent_ops->name(kobj);
 	else
 		subsystem = kobject_name(&kset->kobj);
 	if (!subsystem) {
@@ -554,7 +554,7 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
 
 	/* let the kset specific function add its stuff */
 	if (uevent_ops && uevent_ops->uevent) {
-		retval = uevent_ops->uevent(kset, kobj, env);
+		retval = uevent_ops->uevent(kobj, env);
 		if (retval) {
 			pr_debug("kobject: '%s' (%p): %s: uevent() returned "
 				 "%d\n", kobject_name(kobj), kobj,