xen-netfront: request Tx response events more often

Trying to batch Tx response events results in poor performance because
this delays freeing the transmitted skbs.

Instead use the standard RING_FINAL_CHECK_FOR_RESPONSES() macro to be
notified once the next Tx response is placed on the ring.

Signed-off-by: Malcolm Crossley <malcolm.crossley@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Malcolm Crossley 2016-01-26 17:12:44 +00:00 committed by David S. Miller
parent 3b89624ab5
commit 7d0105b533
1 changed file with 3 additions and 12 deletions


--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -364,6 +364,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	RING_IDX cons, prod;
 	unsigned short id;
 	struct sk_buff *skb;
+	bool more_to_do;
 
 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
@@ -398,18 +399,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 
 		queue->tx.rsp_cons = prod;
 
-		/*
-		 * Set a new event, then check for race with update of tx_cons.
-		 * Note that it is essential to schedule a callback, no matter
-		 * how few buffers are pending. Even if there is space in the
-		 * transmit ring, higher layers may be blocked because too much
-		 * data is outstanding: in such cases notification from Xen is
-		 * likely to be the only kick that we'll get.
-		 */
-		queue->tx.sring->rsp_event =
-			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
-		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->tx, more_to_do);
+	} while (more_to_do);
 
 	xennet_maybe_wake_tx(queue);
 }