canvas-lms/lib/outcomes/result_analytics.rb


# frozen_string_literal: true
#
# Copyright (C) 2013 - present Instructure, Inc.
#
# This file is part of Canvas.
#
# Canvas is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the Free
# Software Foundation, version 3 of the License.
#
# Canvas is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
#
module Outcomes
  module ResultAnalytics
    include CanvasOutcomesHelper
    include OutcomeResultResolverHelper

    Rollup = Struct.new(:context, :scores)
    Result = Struct.new(:learning_outcome, :score, :count, :hide_points) # rubocop:disable Lint/StructNewOverride

    # Public: Queries learning_outcome_results for rollup.
    #
    # user - User requesting results.
    # opts - The options for the query. In a later version of ruby, these would
    #        be named parameters.
    #        :users    - The users to lookup results for (required)
    #        :context  - The context to lookup results for (required)
    #        :outcomes - The outcomes to lookup results for (required)
    #
    # Returns a relation of the results, suitably ordered.
    def find_outcome_results(user, opts)
      required_opts = %i[users context outcomes]
      required_opts.each { |p| raise "#{p} option is required" unless opts[p] }
      users, context, outcomes = opts.values_at(*required_opts)
      results = LearningOutcomeResult.active.with_active_link.where(
        context_code: context.asset_string,
        user_id: users.map(&:id),
        learning_outcome_id: outcomes.map(&:id)
      )
      # muted associations is applied to remove assignments that students
      # are not yet allowed to view:
      # Assignment Grades have not been posted yet (i.e. Submission.posted_at = nil)
      # PostPolicy.post_manually is false & the submission is not posted
      # Assignment grading_type is not_graded
      # see result_analytics_spec.rb for more details around what is excluded/included
      unless context.grants_any_right?(user, :manage_grades, :view_all_grades)
        results = results.exclude_muted_associations
      end
      # LOR hidden is populated for non-scoring rubrics only which is set
      # by checking `Don't post Outcomes results to Learning Mastery Gradebook`
      # when adding a rubric to an assignment
      # also see rubric_assessment.create_outcome_result
      unless opts[:include_hidden]
        results = results.where(hidden: false)
      end
      order_results_for_rollup results
    end
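
    # Illustrative usage (not part of the original file): how a gradebook or
    # outcome results endpoint might call this method. `teacher`, `students`,
    # `course`, and `outcomes` are assumed to be loaded by the caller.
    #
    #   relation = Outcomes::ResultAnalytics.find_outcome_results(
    #     teacher,
    #     users: students,
    #     context: course,
    #     outcomes: outcomes,
    #     include_hidden: false
    #   )
    #   # => a LearningOutcomeResult relation, ordered for rollup calculations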

    # Public: Queries the Outcomes Service for outcome results.
    #
    # opts - The options for the query. In a later version of ruby, these would
    #        be named parameters.
    #        :users       - The users to lookup results for (required)
    #        :context     - The context to lookup results for (required)
    #        :outcomes    - The outcomes to lookup results for (required)
    #        :assignments - The assignments to lookup results for (required)
    #
    # Returns a JSON object of results from the Outcomes Service.
    def find_outcomes_service_outcome_results(opts)
      required_opts = %i[users context outcomes assignments]
      required_opts.each { |p| raise "#{p} option is required" unless opts[p] }
      users, context, outcomes, assignments = opts.values_at(*required_opts)
      user_uuids = users.pluck(:uuid).join(",")
      assignment_ids = assignments.pluck(:id).join(",")
      outcome_ids = outcomes.pluck(:id).join(",")
      get_lmgb_results(context, assignment_ids, "canvas.assignment.quizzes", outcome_ids, user_uuids)
    end
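
    # Illustrative usage (not part of the original file): fetching Outcomes
    # Service results for new-quiz alignments. `students`, `course`, `outcomes`,
    # and `assignments` are assumed to be loaded by the caller.
    #
    #   os_results = Outcomes::ResultAnalytics.find_outcomes_service_outcome_results(
    #     users: students,
    #     context: course,
    #     outcomes: outcomes,
    #     assignments: assignments
    #   )
    #   # => JSON payload from get_lmgb_results (nil when the feature is off)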

    # Converts json results from OS API to LearningOutcomeResults and removes duplicate result data
    # Tech debt: decouple conversion and removing duplicates
    #
    # results - OS api results json (see get_lmgb_results)
    # context - results context (aka current course)
    #
    # Returns an array of LearningOutcomeResult objects
    def handle_outcomes_service_results(results, context, outcomes, users, assignments)
      # if results are nil - FF is turned off for the given context
      # if results are empty - no results were matched
      if results.blank?
        Rails.logger.warn("No Outcome Service outcome results found for context: #{context.uuid}")
        return nil
      end

      # return resolved results list of Rollup objects
      resolve_outcome_results(results, context, outcomes, users, assignments)
    end
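
    # Illustrative follow-up (not part of the original file): converting the OS
    # payload before rolling it up together with native Canvas results.
    #
    #   converted = Outcomes::ResultAnalytics.handle_outcomes_service_results(
    #     os_results, course, outcomes, students, assignments
    #   )
    #   # => nil when the payload is blank, otherwise resolved result objects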

    # Internal: Add an order clause to a relation so results are returned in an
    # order suitable for rollup calculations.
    #
    # relation - The relation to add an order clause to.
    #
    # Returns the resulting relation
    def order_results_for_rollup(relation)
      relation.joins(:user)
              .order(User.sortable_name_order_by_clause)
              .order("users.id ASC, learning_outcome_results.learning_outcome_id ASC, learning_outcome_results.id ASC")
    end
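
    # The ordering groups rows by student (sortable name, then id) and then by
    # outcome, so rollup_user_results can consume one contiguous slice per user.
    # Sketch (illustrative):
    #
    #   order_results_for_rollup(LearningOutcomeResult.active)
    #   # orders by users' sortable name, users.id, learning_outcome_id, then result id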

    # Public: Generates a rollup of each outcome result for each user.
    #
    # results - An Enumeration of properly sorted LearningOutcomeResult objects.
    #           The results should be sorted by user id and then by outcome id.
    #
    # users - (Optional) Ensure rollups are included for users in this list.
    #         A listed user with no results will have an empty score array.
    #
    # excludes - (Optional) Specify additional values to exclude. "missing_user_rollups" excludes
    #            rollups for users without results.
    #
    # context - (Optional) The current context making the function call which will be used in
    #           determining the current_method chosen for calculating rollups.
    #
    # Returns an Array of Rollup objects.
    def outcome_results_rollups(results:, users: [], excludes: [], context: nil)
      rollups = results.group_by(&:user_id).map do |_, user_results|
        Rollup.new(user_results.first.user, rollup_user_results(user_results, context))
      end
      if excludes.include? "missing_user_rollups"
        rollups
      else
        add_missing_user_rollups(rollups, users)
      end
    end
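
    # Illustrative usage (not part of the original file): per-student rollups for
    # the Learning Mastery Gradebook, assuming `relation` came from
    # find_outcome_results and `students` is the full roster.
    #
    #   rollups = Outcomes::ResultAnalytics.outcome_results_rollups(
    #     results: relation,
    #     users: students,
    #     context: course
    #   )
    #   rollups.first.context # => a User
    #   rollups.first.scores  # => Array of RollupScore (empty for users with no results)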

    # Public: Calculates an average rollup for the specified results
    #
    # results - An Enumeration of properly sorted LearningOutcomeResult objects.
    # context - The context to use for the resulting rollup.
    #
    # Returns a Rollup.
    def aggregate_outcome_results_rollup(results, context, stat = "mean")
      rollups = outcome_results_rollups(results:, context:)
      rollup_scores = rollups.map(&:scores).flatten
      outcome_results = rollup_scores.group_by(&:outcome).values
      aggregate_results = outcome_results.map do |scores|
        scores.map { |score| Result.new(score.outcome, score.score, score.count, score.hide_points) }
      end
      opts = { aggregate_score: true, aggregate_stat: stat, **mastery_scale_opts(context) }
      aggregate_rollups = aggregate_results.map do |result|
        RollupScore.new(outcome_results: result, opts:)
      end
      Rollup.new(context, aggregate_rollups)
    end
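
    # Illustrative usage (not part of the original file): a course-level
    # aggregate such as the one served by
    # /api/v1/courses/:id/outcome_rollups?aggregate=course&aggregate_stat=median
    #
    #   course_rollup = Outcomes::ResultAnalytics.aggregate_outcome_results_rollup(relation, course, "median")
    #   course_rollup.context # => the Course
    #   course_rollup.scores  # => one aggregate RollupScore per outcome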

    # Internal: Generates a rollup of the outcome results, assuming all the
    # results are for the same user.
    #
    # user_results - An Enumeration of LearningOutcomeResult objects for a user
    #                sorted by outcome id.
    #
    # Returns an Array of RollupScore objects
    def rollup_user_results(user_results, context = nil)
      filtered_results = user_results.reject { |r| r.score.nil? }
      opts = mastery_scale_opts(context)
      filtered_results.group_by(&:learning_outcome_id).map do |_, outcome_results|
        RollupScore.new(outcome_results:, opts:)
      end
    end
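
    # Sketch (illustrative): given the results for a single student, this yields
    # one RollupScore per aligned outcome, with nil-score results filtered out.
    #
    #   rollup_user_results(results_for_one_student, course)
    #   # => [RollupScore, RollupScore, ...]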

    def mastery_scale_opts(context)
      return {} unless context.is_a?(Course) && context.root_account.feature_enabled?(:account_level_mastery_scales)

      @mastery_scale_opts ||= {}
      @mastery_scale_opts[context.asset_string] ||= begin
        method = context.resolved_outcome_calculation_method
        mastery_scale = context.resolved_outcome_proficiency
        {
          calculation_method: method&.calculation_method,
          calculation_int: method&.calculation_int,
          points_possible: mastery_scale&.points_possible,
          mastery_points: mastery_scale&.mastery_points,
          ratings: mastery_scale&.ratings_hash
        }
      end
    end
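
    # Shape of the memoized options hash (illustrative values, assuming the
    # account_level_mastery_scales flag is enabled on the course's root account):
    #
    #   mastery_scale_opts(course)
    #   # => { calculation_method: "decaying_average", calculation_int: 65,
    #   #      points_possible: 5.0, mastery_points: 3.0, ratings: [...] }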

    # Internal: Adds rollup rows for users that did not have any results
    #
    # rollups - The list of rollup objects based on existing results.
    # users   - The list of User objects that should have results.
    #
    # Returns the modified rollups list. Users without rollups will have a
    # rollup row with an empty scores array.
    def add_missing_user_rollups(rollups, users)
      missing_users = users - rollups.map(&:context)
      rollups + missing_users.map { |u| Rollup.new(u, []) }
    end
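
    # Worked example (illustrative): if `students` holds three users and only one
    # has results, the returned list still has three rollups; the other two get
    # Rollup.new(user, []) with an empty scores array.
    #
    #   add_missing_user_rollups(rollups, students).size # => students.size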

    # Public: Gets rating percents for outcomes based on rollup
    #
    # Returns a hash of outcome id to array of rating percents
    def rating_percents(rollups, context: nil)
      counts = {}
      outcome_proficiency_ratings = if context&.root_account&.feature_enabled?(:account_level_mastery_scales)
                                      context.resolved_outcome_proficiency.ratings_hash
                                    end
      rollups.each do |rollup|
        rollup.scores.each do |score|
          next unless score.score
          outcome = score.outcome
          next unless outcome
          ratings = outcome_proficiency_ratings || outcome.rubric_criterion[:ratings]
          next unless ratings
          counts[outcome.id] = Array.new(ratings.length, 0) unless counts[outcome.id]
          idx = ratings.find_index { |rating| rating[:points] <= score.score }
          counts[outcome.id][idx] = counts[outcome.id][idx] + 1 if idx
        end
      end
      counts.each { |k, v| counts[k] = to_percents(v) }
      counts
    end
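
    # Worked example (illustrative): with ratings at 5, 3, and 0 points and one
    # outcome scored 5, 3, and 2 across three students, the per-rating counts are
    # [1, 1, 1], which to_percents converts to [33, 33, 33].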

    def to_percents(count_arr)
      total = count_arr.sum
      return count_arr if total.zero?

      count_arr.map { |v| (100.0 * v / total).round }
    end
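
    # Worked example: to_percents([1, 1, 2]) # => [25, 25, 50]
    # An all-zero count array is returned unchanged to avoid dividing by zero.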

    class << self
      include ResultAnalytics
    end
  end
end