
Merge tag 'trace-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Steven writes:
  "Vaibhav Nagarnaik found that modifying the ring buffer size could cause
   a huge latency in the system because it does a while loop to free pages
   without releasing the CPU (on non preempt kernels). In a case where there
   are hundreds of thousands of pages to free it could actually cause a system
   stall. A properly place cond_resched() solves this issue."
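The fix follows the standard kernel idiom for long-running loops: on non-preempt kernels nothing else can run on that CPU until the loop finishes, so the loop must voluntarily offer a reschedule point. Below is a minimal, illustrative sketch of that idiom; free_page_list() and struct example_page are hypothetical names for illustration, not the actual ring-buffer code.

	#include <linux/list.h>
	#include <linux/sched.h>	/* cond_resched() */
	#include <linux/slab.h>

	/* Hypothetical list element, for illustration only. */
	struct example_page {
		struct list_head list;
		/* ... payload ... */
	};

	/*
	 * Free an arbitrarily long list of pages. Without the
	 * cond_resched(), a non-preempt kernel would hold the CPU for
	 * the entire loop; with it, each iteration gives the scheduler
	 * a chance to run another task if one is waiting.
	 */
	static void free_page_list(struct list_head *pages)
	{
		struct example_page *p, *tmp;

		list_for_each_entry_safe(p, tmp, pages, list) {
			cond_resched();
			list_del(&p->list);
			kfree(p);
		}
	}

cond_resched() is cheap when no reschedule is pending, so calling it once per iteration costs little while bounding the scheduling latency the loop can introduce.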
Committed by Greg Kroah-Hartman on 2018-09-19 07:41:46 +02:00
commit f21f7fa263
1 changed file with 2 additions and 0 deletions


@@ -1546,6 +1546,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 	tmp_iter_page = first_page;
 
 	do {
+		cond_resched();
+
 		to_remove_page = tmp_iter_page;
 		rb_inc_page(cpu_buffer, &tmp_iter_page);