Based on the question type it represents, the `question_statistics` document
may include extra metrics. You can find these metrics below.
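These documents are part of the payload returned by the quiz statistics endpoint (`GET /api/v1/courses/:course_id/quizzes/:quiz_id/statistics`). The snippet below is a minimal sketch of fetching them from a browser context; the response envelope (`quiz_statistics[0].question_statistics`) and the token header are assumptions and may differ across Canvas versions.
```javascript
const courseId = 1; // hypothetical IDs for illustration
const quizId = 2;

fetch(`/api/v1/courses/${courseId}/quizzes/${quizId}/statistics`, {
  headers: { "Authorization": "Bearer <access_token>" }
})
  .then((response) => response.json())
  .then((payload) => {
    // Assumed envelope: each statistics object carries a question_statistics
    // array with the type-specific metrics documented below.
    const questionStats = payload.quiz_statistics[0].question_statistics;
    questionStats.forEach((stats) => {
      console.log(stats.question_type, stats.responses);
    });
  });
```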
<a class="bookmark" id="multiple-choice-question-stats"></a>
#### Multiple Choice
```javascript
{
// Number of students who have picked any choice.
"responses": 4,
"answers": [
{
// Unique ID of this answer.
"id": "3866",
// The readable answer text.
"text": "Red",
// Number of students who picked this answer.
"responses": 3,
// Whether this answer is a correct one.
"correct": true
},
// An incorrect choice:
{
"id": "2040",
"text": "Green",
"correct": false,
"responses": 1
},
// The "No Answer" - students who didn't make any choice:
{
"id": "none",
"text": "No Answer",
"responses": 2,
"correct": false
}
],
// Number of students who have answered this question.
"answered_student_count": 0,
// Number of students who rank in the top bracket (the top 27%)
// among the submitters who have also answered this question.
"top_student_count": 0,
// Number of students who rank in the middle bracket (the middle 46%)
// among the submitters who have also answered this question.
"middle_student_count": 0,
// Number of students who rank in the bottom bracket (the bottom 27%)
// among the submitters who have also answered this question.
"bottom_student_count": 0,
// Number of students who have answered this question correctly.
"correct_student_count": 0,
// Number of students who have answered this question incorrectly.
"incorrect_student_count": 0,
// Ratio of students who have answered this question correctly.
"correct_student_ratio": 0,
// Ratio of students who have answered this question incorrectly.
"incorrect_student_ratio": 0,
// Number of students who rank in the top bracket (the top 27%) among
// the submitters who have also provided a correct answer to this question.
"correct_top_student_count": 0,
// Number of students who rank in the middle bracket (the middle 46%) among
// the submitters who have also provided a correct answer to this question.
"correct_middle_student_count": 0,
// Number of students who rank in the bottom bracket (the bottom 27%) among
// the submitters who have also provided a correct answer to this question.
"correct_bottom_student_count": 0,
// Variance of *all* the scores.
"variance": 0,
// Standard deviation of *all* the scores.
"stdev": 0,
// Denotes the ratio of students who have answered this question correctly,
// which should give an indication of how difficult the question is.
"difficulty_index": 0,
// The reliability, or internal consistency, coefficient of all the scores
// as measured by the Cronbach's alpha algorithm. Value ranges between 0 and
// 1.
//
// Note: This metric becomes available only in quizzes with more than fifteen
// submissions.
"alpha": null,
// A point biserial correlation coefficient for each of the question's
// answers. This metric helps measure the efficiency of an individual
// question: the calculation looks at the difference between high-scorers
// who chose this answer and low-scorers who also chose this answer.
//
// See the reference above for a description of each field.
"point_biserials": [
{
"answer_id": 3866,
"point_biserial": null,
"correct": true,
"distractor": false
},
{
"answer_id": 2040,
"point_biserial": null,
"correct": false,
"distractor": true
},
{
"answer_id": 7387,
"point_biserial": null,
"correct": false,
"distractor": true
},
{
"answer_id": 4082,
"point_biserial": null,
"correct": false,
"distractor": true
}
]
}
```
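To make the relationship between the raw counts and the derived ratio metrics concrete, here is a minimal client-side sketch based on the field descriptions above. It is illustrative only, not Canvas' server-side calculation.
```javascript
// Illustrative sketch: deriving the ratio metrics from the raw counts,
// following the field descriptions above (not the actual implementation).
function deriveRatios(stats) {
  const answered = stats.answered_student_count;
  if (answered === 0) {
    return { correct_student_ratio: 0, incorrect_student_ratio: 0, difficulty_index: 0 };
  }
  return {
    correct_student_ratio: stats.correct_student_count / answered,
    incorrect_student_ratio: stats.incorrect_student_count / answered,
    // Per the description above, difficulty_index is also the ratio of
    // students who answered this question correctly.
    difficulty_index: stats.correct_student_count / answered
  };
}
```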
<a class="bookmark" id="fimb-question-stats"></a>
#### Fill in Multiple Blanks
```javascript
{
// Number of students who have filled at least one blank.
"responses": 2,
// Number of students who have filled every blank.
"answered": 2,
// Number of students who filled all blanks correctly.
"correct": 1,
// Number of students who filled one or more blanks correctly.
"partially_correct": 0,
// Number of students who didn't fill any blank correctly.
"incorrect": 1,
// Each entry in the answer set represents a blank and responses to
// its pre-defined answers:
"answer_sets": [
{
"id": "70dda5dfb8053dc6d1c492574bce9bfd", // md5sum of the blank
"text": "color", // the blank_id
"answers": [
// Students who filled in this blank with this correct answer:
{
"id": "9711",
"text": "Red",
"responses": 3,
"correct": true
},
// Students who filled in this blank with this other correct answer:
{
"id": "2700",
"text": "Blue",
"responses": 0,
"correct": true
},
// Students who filled in this blank with something else:
{
"id": "other",
"text": "Other",
"responses": 1,
"correct": false
},
// Students who left this blank empty:
{
"id": "none",
"text": "No Answer",
"responses": 1,
"correct": false
}
]
}
]
}
```
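For reference, here is a hypothetical helper that walks the `answer_sets` structure to tally responses per blank; the helper name and return shape are illustrative and not part of the API.
```javascript
// Hypothetical helper: tally total and correct responses for each blank.
function tallyBlanks(questionStats) {
  return questionStats.answer_sets.map((set) => ({
    blank: set.text, // the blank_id, e.g. "color"
    total: set.answers.reduce((sum, answer) => sum + answer.responses, 0),
    correct: set.answers
      .filter((answer) => answer.correct)
      .reduce((sum, answer) => sum + answer.responses, 0)
  }));
}
```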
#### Multiple Answers
```javascript
{
// Number of students who have picked any choice.
"responses": 3,
// Number of students who have picked all the right choices.
"correct": 1,
// Number of students who have picked at least one of the right choices,
// but may have also picked a wrong one.
"partially_correct": 2,
"answers": [
{
// Unique ID of this answer choice.
"id": "5514",
// Displayable choice text.
"text": "A",
// Number of students who picked this choice.
"responses": 3,
// Whether this choice is part of the answer.
"correct": true
},
// Here's the second part of the correct answer:
{
"id": "4261",
"text": "B",
"responses": 1,
"correct": true
},
// And here's a distractor:
{
"id": "3322",
"text": "C",
"responses": 2,
"correct": false
},
// "Missing" answers:
//
// This is an auto-generated answer to account for all students who
// left this question unanswered.
{
"id": "none",
"text": "No Answer",
"responses": 0,
"correct": false
}
]
}
```
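As an illustration of how a single submission maps onto the `correct` / `partially_correct` buckets described above, here is a hypothetical client-side classifier. It assumes that picking any wrong choice disqualifies a submission from the `correct` bucket, which is an assumption rather than a statement of Canvas' grading rules.
```javascript
// Hypothetical sketch: classify one student's selected answer ids against
// the answer documents above. Assumes picking a wrong choice disqualifies
// the submission from the "correct" bucket.
function classifySelection(answers, selectedIds) {
  const correctIds = answers.filter((a) => a.correct).map((a) => a.id);
  const pickedCorrect = selectedIds.filter((id) => correctIds.includes(id));
  const pickedWrong = selectedIds.filter((id) => !correctIds.includes(id));

  if (selectedIds.length === 0) return "no_answer";
  if (pickedCorrect.length === correctIds.length && pickedWrong.length === 0) return "correct";
  if (pickedCorrect.length > 0) return "partially_correct";
  return "incorrect";
}
```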
#### Multiple Dropdowns
Multiple Dropdown question statistics look just like the statistics for [Fill In Multiple Blanks](#fimb-question-stats).
<a class="bookmark" id="essay-question-stats"></a>
#### Essay
```javascript
{
// The number of students whose responses were graded by the teacher so
// far.
"graded": 5,
// The number of students who got graded with a full score.
"full_credit": 4,
// Number of students who wrote any kind of answer.
"resposes": 5,
// An array of objects, each pairing a score with the number of students
// who received that score.
"point_distribution": [
{ "score": 0, "count": 1 },
{ "score": 1, "count": 1 },
{ "score": 3, "count": 3 }
]
}
```
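A minimal sketch of consuming `point_distribution`, e.g. to count graded responses and compute an average score. The helper is hypothetical, and the null-score handling assumes un-graded submissions can appear with a `null` score.
```javascript
// Hypothetical helper: summarize a point_distribution array.
// Entries with a null score are treated as un-graded and skipped (assumption).
function summarizeDistribution(stats) {
  const graded = stats.point_distribution.filter((entry) => entry.score !== null);
  const count = graded.reduce((sum, entry) => sum + entry.count, 0);
  const points = graded.reduce((sum, entry) => sum + entry.score * entry.count, 0);

  return {
    gradedCount: count,
    averageScore: count > 0 ? points / count : null
  };
}
```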
#### Matching
```javascript
{
// Number of students who have matched at least one answer.
"responses": 2,
// Number of students who have matched all answers.
"answered": 2,
// Number of students who have matched all answers correctly with their
// right-hand sides.
"correct": 1,
// Number of students who have matched one or more answers correctly
// with their right-hand sides.
"partially_correct": 0,
// Number of students who have not matched any answer with their correct
// right-hand side.
"incorrect": 1,
// Each entry in the answer set represents the left-hand side of the match
// along with all the possible matches on the right-hand side.
"answer_sets": [
{
// id of the answer
"id": "1",
// the left-hand side of the match
"text": "What does the color red look like?",
// the available matches
"answers": [
// Students who chose this match for this answer set:
{
// match_id
"id": "9711",
// right-hand side of the match
"text": "Red",
"responses": 3,
"correct": true
},
// Students who chose an incorrect match:
{
"id": "2700",
"text": "Blue",
"responses": 0,
"correct": false
},
// Students who did not make any match:
{
"id": "none",
"text": "No Answer",
"responses": 1,
"correct": false
}
]
}
]
}
```
#### File Upload
File Upload question statistics look just like the statistics for [Essays](#essay-question-stats).
#### Formula
Formula question statistics look just like the statistics for [Essays](#essay-question-stats).
#### Numerical
```javascript
{
// Number of students who have provided any kind of answer.
"responses": 2,
// Number of students who have provided a correct answer.
"correct": 1,
// Number of students who have provided a correct answer and received full
// credit or higher.
"full_credit": 2,
// Number of students who have provided an answer which was not correct.
"incorrect": 1,
"answers": [
{
// Unique ID of this answer.
"id": "9711",
// This metric contains a formatted version of the correct answer
// ready for display.
"text": "15.00",
// Number of students who provided this answer.
"responses": 3,
// Whether this answer is a correct one.
"correct": true,
// Lower and upper boundaries of the answer range. This is consistent
// regardless of the answer type (e.g., exact vs range).
//
// In the case of exact answers, the range will be the exact value
// plus or minus the defined margin.
"value": [ 13.5, 16.5 ],
// Margin of error tolerance. This is always zero for range answers.
"margin": 1.5
},
// "Other" answers:
//
// This is an auto-generated answer that will be present if any student
// provides a number that is incorrect (i.e., doesn't map to any of the
// pre-defined answers).
{
"id": "other",
"text": "Other",
"responses": 0,
"correct": false
},
// "Missing" answers:
//
// This is an auto-generated answer to account for all students who
// left this question unanswered.
{
"id": "none",
"text": "No Answer",
"responses": 0,
"correct": false
}
]
}
```
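Since the `value` bounds already account for the margin, checking whether a numeric response matches an answer reduces to an inclusive range comparison. The sketch below is illustrative only.
```javascript
// Illustrative sketch: does a numeric response fall within an answer's range?
// The [lower, upper] bounds in "value" already include the margin, per the
// description above.
function matchesAnswer(answer, response) {
  const [lower, upper] = answer.value;
  return response >= lower && response <= upper;
}

// e.g., matchesAnswer({ value: [13.5, 16.5], margin: 1.5 }, 15.2) // => true
```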
#### Short Answer (a.k.a. Fill in the Blank)
```javascript
{
// Number of students who have written anything.
"responses": 2,
// Number of students who have written a correct answer.
"correct": 2,
"answers": [
{
// Unique ID of this answer.
"id": "4684",
// The readable answer text.
"text": "Something",
// Number of students who picked this answer.
"responses": 3,
// Whether this answer is a correct one.
"correct": true
},
// Another correct answer:
{
"id": "1797",
"text": "Very cool.",
"responses": 0,
"correct": true
},
// "Other" answers:
//
// This is an auto-generated answer that will be present if any student
// writes an answer, but that answer is incorrect.
{
"id": "other",
"text": "Other",
"responses": 0,
"correct": false
},
// "Missing" answers:
//
// This is an auto-generated answer to account for all students who
// left this question unanswered.
{
"id": "none",
"text": "No Answer",
"responses": 0,
"correct": false
}
]
}
```
#### True/False
True/False question statistics look just like the statistics for [Multiple-Choice](#multiple-choice-question-stats).