I am a big promoter of course-embedded assessments, what
Linda Suskie describes as “course assessments that do double duty, providing
information not only on what students have learned in the course but also on
their progress in achieving program or institutional goals” (27).
Research papers, capstone projects, presentations,
evaluations from clinical or internship supervisors—these are authentic
assessments that allow us to gather evidence of student performance in our
programs without necessarily adding to our workload. Suskie credits course-embedded assessments
with keeping assessment processes manageable, and, because they are developed
locally, “they match up well with learning goals” (27).
While course-embedded assessments are often course
assignments, the assignment itself is not
an assessment measure, and grades earned are not considered direct, valid
evidence of student performance.
That last sentence reads like assessment doublespeak, so
think of the difference this way. The
assignment might be to compose a research paper and then present the work
orally to the class. How the paper and presentation are
assessed is the assessment measure.
Typically, papers and presentations are scored by rubric, a scoring
guide that provides clear and detailed descriptive criteria for what constitutes
excellent work and what signifies an unacceptable performance. The rubric, therefore, is the assessment measure, assuming that what it
measures aligns with a learning goal.
Likewise, specific questions on an exam that measure a
learning goal may serve as an assessment measure, while the complete
examination, like the research paper, is the assignment.
When assignments are confused with assessment measures, the
results do not provide information about student learning specific enough
to guide continuous improvement.
A typical example of such assessment findings might read, “73% of
students earned grades of 80 or higher on the presentation. Target was
achieved.” Such findings report how
students performed on a given assignment in a given class on a given day, but
they do not indicate where student performance was especially
strong and where it fell short.
Were the presentations well organized?
Were the delivery techniques effective?
Did the students use technology well to convey their messages?
Assessment goes beyond grading and analyzes patterns of
student performance. Good assessment
methods help identify these patterns.
To learn about the advantages and disadvantages of specific
assessment methods, visit https://www.utica.edu/academic/Assessment/new/resources.cfm or contact the Office of
Academic Assessment in 127 White Hall.
Works Cited
Suskie, Linda. Assessing Student Learning: A
Common Sense Guide. San Francisco: Jossey-Bass, 2009.