In zero-loss benchmarks run at slow traffic rates, it has been
observed that when the IRQ timer coalescing parameter was set too
high, the Ethernet controller would start dropping packets: the job
ring interrupt bottom half would not execute before the Ethernet
controller filled its buffers, significantly reducing the zero-loss
performance figures.
Empirical testing has shown that the best zero-loss performance is
achieved when IRQ coalescing is set to minimum values or turned off
entirely, since the job ring driver already implements an adequately
performing, general-purpose IRQ mitigation strategy in software.
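
For reference, that software mitigation follows the usual
mask-and-defer pattern: the hard IRQ handler masks further job ring
interrupts and defers completion processing to a tasklet, which
re-enables the IRQ once the output ring is drained. A minimal sketch
(struct jr_priv, done_task, and process_completed_jobs are
hypothetical names for illustration; the actual driver code differs):

        #include <linux/interrupt.h>

        struct jr_priv {
                int irq;
                struct tasklet_struct done_task;
        };

        /* drains completed jobs from the output ring; provided elsewhere */
        static void process_completed_jobs(struct jr_priv *jrp);

        static irqreturn_t jr_irq(int irq, void *dev_id)
        {
                struct jr_priv *jrp = dev_id;

                /* mask the job ring IRQ until the bottom half has run */
                disable_irq_nosync(irq);
                tasklet_schedule(&jrp->done_task);

                return IRQ_HANDLED;
        }

        static void jr_done_task(unsigned long data)
        {
                struct jr_priv *jrp = (struct jr_priv *)data;

                process_completed_jobs(jrp);    /* drain the output ring */
                enable_irq(jrp->irq);           /* accept new interrupts */
        }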
While we could use minimal count (2-8) and timing (192-256) settings,
we prefer to turn hardware coalescing off altogether, both to
minimize setkey latency (due to split key generation) and for
consistent performance across SoCs (the SEC-to-core clock ratio
varies between parts).
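
Concretely, disabling hardware coalescing amounts to leaving the
interrupt coalescing enable bit in the job ring configuration
register clear at init time, so every completed job raises an IRQ
immediately. A sketch with hypothetical register offsets and bit
names (consult the SEC reference manual for the real layout):

        #include <linux/io.h>

        /* hypothetical register layout, for illustration only */
        #define JRCFG_LO        0x54    /* job ring config register (low word) */
        #define JRCFG_ICEN      0x02    /* interrupt coalescing enable bit */

        static void jr_disable_hw_coalescing(void __iomem *jr_base)
        {
                u32 cfg = ioread32(jr_base + JRCFG_LO);

                /* clear ICEN, leaving interrupt mitigation to the
                 * software strategy described above */
                iowrite32(cfg & ~JRCFG_ICEN, jr_base + JRCFG_LO);
        }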
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>