net: sched: shrink struct Qdisc

The struct Qdisc has a lot of holes, especially after commit
a53851e2c3 ("net: sched: explicit locking in gso_cpu fallback"),
which, as a side effect, moved the fields just after 'busylock'
onto a new cacheline.

Since neither 'padded' nor 'refcnt' is updated frequently, and
there is a hole before 'gso_skb', we can move both fields there,
saving a cacheline without any performance side effect.
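
For illustration only, a minimal userspace sketch of the same trick
(hypothetical fields, not the real Qdisc layout; alignas(64) stands in
for ____cacheline_aligned_in_smp, and a 64-byte cacheline is assumed):

#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

struct before {
	void *a;
	void *b;
	/* ~48-byte hole here, because 'hot' below is 64-byte aligned */
	alignas(64) unsigned char hot[64];	/* fills one block exactly */
	int moved1;				/* these 8 bytes spill past */
	int moved2;				/* the block boundary ...   */
	alignas(64) int lock;			/* ... pushed one whole block later */
};

struct after {
	void *a;
	void *b;
	int moved1;		/* tucked into the pre-existing hole */
	int moved2;
	alignas(64) unsigned char hot[64];
	alignas(64) int lock;	/* now starts right after 'hot' */
};

int main(void)
{
	/* on a 64-bit build: 256 vs 192 bytes, i.e. one block saved */
	printf("before: %zu bytes, lock at offset %zu\n",
	       sizeof(struct before), offsetof(struct before, lock));
	printf("after:  %zu bytes, lock at offset %zu\n",
	       sizeof(struct after), offsetof(struct after, lock));
	return 0;
}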

Before this commit:

pahole -C Qdisc net/sched/sch_generic.o
        # ...
        /* size: 384, cachelines: 6, members: 25 */
        /* sum members: 236, holes: 3, sum holes: 92 */
        /* padding: 56 */

After this commit:

pahole -C Qdisc net/sched/sch_generic.o
        # ...
        /* size: 320, cachelines: 5, members: 25 */
        /* sum members: 236, holes: 2, sum holes: 28 */
        /* padding: 56 */
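
The numbers above are consistent, assuming 64-byte cachelines: the hole
space shrinks by 92 - 28 = 64 bytes, i.e. exactly one cacheline. A quick
standalone check of the arithmetic (plain C, compiles at file scope):

_Static_assert(236 + 92 + 56 == 6 * 64, "before: members + holes + padding = 6 cachelines");
_Static_assert(236 + 28 + 56 == 5 * 64, "after: members + holes + padding = 5 cachelines");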

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -85,6 +85,8 @@ struct Qdisc {
 	struct net_rate_estimator __rcu *rate_est;
 	struct gnet_stats_basic_cpu __percpu *cpu_bstats;
 	struct gnet_stats_queue	__percpu *cpu_qstats;
+	int			padded;
+	refcount_t		refcnt;
 
 	/*
 	 * For performance sake on SMP, we put highly modified fields at the end
@@ -97,8 +99,6 @@ struct Qdisc {
 	unsigned long		state;
 	struct Qdisc		*next_sched;
 	struct sk_buff_head	skb_bad_txq;
-	int			padded;
-	refcount_t		refcnt;
 
 	spinlock_t		busylock ____cacheline_aligned_in_smp;
 	spinlock_t		seqlock;