optimize CrocodocDocument.update_process_states

there is an index on process_state, but postgres can't use it because
find_in_batches orders by id. so run the query on the slave instead, which
allows (nay, forces) using a cursor with no ordering, so postgres can use
its index. (the updates still happen on the master)
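
For illustration only (the table name crocodoc_documents, the exact SQL, and the
1000-row default batch size are assumptions, not taken from this change), the
reads on each side look roughly like:

# On the master, find_in_batches pages by primary key, so each batch query is
# shaped like:
#
#   SELECT "crocodoc_documents".* FROM "crocodoc_documents"
#   WHERE process_state IN ('QUEUED', 'PROCESSING') AND id > $last_seen_id
#   ORDER BY id LIMIT 1000
#
# and the ORDER BY id steers the planner toward the primary key rather than the
# index on process_state.
#
# On the slave, batching goes through a cursor instead, so there is no ordering
# and the process_state index can satisfy the WHERE clause directly:
#
#   DECLARE batch_cursor CURSOR FOR
#     SELECT "crocodoc_documents".* FROM "crocodoc_documents"
#     WHERE process_state IN ('QUEUED', 'PROCESSING');
#   FETCH 1000 FROM batch_cursor;
Shackles.activate(:slave) do
  CrocodocDocument.where(:process_state => %w(QUEUED PROCESSING)).find_in_batches do |docs|
    # reads come from the slave; the per-batch work switches back to the master
  end
end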

Change-Id: I6f7a18f7ffa7a1a1971e4eb5195af890e2038fbd
Reviewed-on: https://gerrit.instructure.com/23593
Tested-by: Jenkins <jenkins@instructure.com>
Reviewed-by: Simon Williams <simon@instructure.com>
Product-Review: Cody Cutrer <cody@instructure.com>
QA-Review: Cody Cutrer <cody@instructure.com>
Cody Cutrer 2013-08-22 09:56:00 -06:00
parent 18b660231a
commit 229abad7f2
1 changed file with 28 additions and 24 deletions

@@ -125,7 +125,9 @@ class CrocodocDocument < ActiveRecord::Base
   def self.update_process_states
     bs = Setting.get('crocodoc_status_check_batch_size', '45').to_i
+    Shackles.activate(:slave) do
       CrocodocDocument.where(:process_state => %w(QUEUED PROCESSING)).find_in_batches do |docs|
+        Shackles.activate(:master) do
           statuses = []
           docs.each_slice(bs) do |sub_docs|
             statuses.concat CrocodocDocument.crocodoc_api.status(sub_docs.map(&:uuid))
@@ -154,4 +156,6 @@ class CrocodocDocument < ActiveRecord::Base
           end
         end
       end
+    end
+  end
 end
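
Put together, a sketch of the method's shape after this change (for readability
only; the per-document status handling between the two hunks is not part of the
diff shown above and is elided here):

def self.update_process_states
  bs = Setting.get('crocodoc_status_check_batch_size', '45').to_i
  Shackles.activate(:slave) do
    # batches are read from the slave via a cursor, with no ORDER BY
    CrocodocDocument.where(:process_state => %w(QUEUED PROCESSING)).find_in_batches do |docs|
      Shackles.activate(:master) do
        # everything done with a batch, including the updates, runs on the master
        statuses = []
        docs.each_slice(bs) do |sub_docs|
          statuses.concat CrocodocDocument.crocodoc_api.status(sub_docs.map(&:uuid))
        end
        # ... per-document status handling and updates (not shown in the hunks above) ...
      end
    end
  end
end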