There’s no question that assessment is often regarded as a fill-in-the-blank, paint-by-number bureaucratic enterprise. For too long, assessment specialists and accrediting agencies promoted a linear approach where faculty make specific changes in courses and curriculum based on assessment findings with the aim of improving student learning and then re-assess to document the effects of these changes.
Yet Natasha Jankowski, former executive director of the National Institute for Learning Outcomes Assessment, asks, “Can one ever actually know that the changes made enhanced student learning?”
So much influences a person’s growth and development
in the years between starting a degree and earning one. Oral communication
skills, for example, might be
developed a great deal in an undergraduate curriculum, but so, too, might these
abilities be improved by the experiences a student has in the co-curricular
environment or the world of work.
There are modifications we can make in an effort to
maximize student learning and ensure that all students have the opportunity to
develop the knowledge, skills, and competencies faculty consider critical in a
discipline. However, we simply cannot make emphatic claims to a causal
relationship between what we did and the extent to which it improved student
learning.
Jankowski advocates for a different approach to how we
might demonstrate educational effectiveness. She argues that the meaning we draw
from assessment findings, our understanding of the data, constitutes the
important narrative. This meaning shapes the story we derive from the evidence
we have gathered.
“In assessment there is so much doing that there is
limited time, if any, built into reflecting upon the data and deciding what it
all says, what argument might be made, and what story it tells about students
and their learning” (page 11).
Stories give evidence meaning. The 2020-2021
assessment report from the Department of Philosophy documents an important
story about teaching and learning in a pandemic, in which COVID fatigue led both to students failing to complete assignments and to an increase in cheating.
The quantitative findings suggest that student learning was on a downward
trend. But the numbers alone don’t tell the story. The meaning inferred from
the numbers by the faculty quoted in the report’s narrative does.
The report from the English Department illustrates how students achieve learning beyond what is articulated in a program goal. Students who
participate in the design and creation of Ampersand,
the College’s literary journal, “go beyond” the goal of making authorial
choices: “[T]hey learn to collaborate, they learn skills of layout and editing
as they produce a publication that appears in both print and online forms.”
Evidence-based stories—stories informed by the
quantitative and qualitative evidence we systematically gather—are how we best
illustrate the value and impact of our individual programs and of higher
education. These stories also tell us what we need to change or improve in our
teaching, course content, and curriculum.
“Some of our stories are tragedies,” Jankowski writes,
“and some are tales of heroics and adventures” (page 12). They provide us with
a richer, deeper, and more meaningful way to discuss assessment findings than
the linear, formulaic approach does. Whether our stories have happy endings or sad conclusions, they deserve to be told.
Jankowski, N. (2021, February). Evidence-based storytelling in assessment (Occasional Paper No. 50). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.