It’s not unusual to see assessment reports where the findings are summarized as follows: “23% met expectations, 52% exceeded expectations, and 25% did not meet expectations.” That’s a standard approach at many institutions for satisfying assessment requirements and meeting accreditors’ demands.
What do those numbers actually mean, though? What do they tell you, other than that the majority of your students are exceptional? How do these results inform continuous improvement?
The less specific the results are, the less useful or meaningful they are.
For assessment findings to have any value, they should be precise and aligned with the goal being measured. Imagine you were assessing students’ information literacy. Rather than reporting “23% met expectations, 52% exceeded expectations, and 25% did not meet expectations,” the results might be presented as “23% of students were able to access information using simple search strategies, 52% were able to access information using a variety of search strategies, and 25% accessed information randomly, unable to distinguish relevance or quality.” That level of detail informs how you might plan instruction.