genirq/affinity: Fix node generation from cpumask
Commit 34c3d9819f ("genirq/affinity: Provide smarter irq spreading infrastructure") introduced a better IRQ spreading mechanism, taking account of the available NUMA nodes in the machine. The problem is that the algorithm retrieving the nodemask iterates "linearly" based on the number of online nodes, while some architectures, such as PowerPC, present a non-linear node distribution in the nodemask. In that case the algorithm arrives at a wrong node count and therefore at a bad/incomplete IRQ affinity distribution. For example, this problem was found on a machine with 128 CPUs and two nodes, namely nodes 0 and 8 (instead of 0 and 1, as under a linear distribution). This led to a wrong affinity distribution, which in turn led to a bad mq allocation for the nvme driver. Finally, take the opportunity to fix a comment regarding the affinity distribution when we have _more_ nodes than vectors.

Fixes: 34c3d9819f ("genirq/affinity: Provide smarter irq spreading infrastructure")
Reported-by: Gabriel Krisman Bertazi <gabriel@krisman.be>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Gabriel Krisman Bertazi <gabriel@krisman.be>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Cc: linux-pci@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: hch@lst.de
Link: http://lkml.kernel.org/r/1481738472-2671-1-git-send-email-gpiccoli@linux.vnet.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This commit is contained in:
parent f082f02c47
commit c0af524372
@@ -37,10 +37,10 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 
 static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 {
-	int n, nodes;
+	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
-	for (n = 0, nodes = 0; n < num_online_nodes(); n++) {
+	for_each_online_node(n) {
 		if (cpumask_intersects(mask, cpumask_of_node(n))) {
 			node_set(n, *nodemsk);
 			nodes++;
@@ -82,7 +82,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
 
 	/*
-	 * If the number of nodes in the mask is less than or equal the
+	 * If the number of nodes in the mask is greater than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
 	if (affv <= nodes) {