I remember reading an assessment report years ago in which the
department faculty wrote, “Most of the students met expectations for this goal,
and those that did not were just lazy and didn’t study.”
Few declarations make an assessment committee’s eyes roll
more than a statement like that. From
the very beginning, proponents of assessment made learning a shared
responsibility between instructor and student, and the ultimate purpose of
assessment was to inform continuous improvement. Blaming students for poor learning outcomes
is antithetical to the very principles advanced by the assessment movement.
And yet, the fact remains that there are students who do not
actively engage in their learning, who miss numerous classes or habitually
arrive late, who often neglect to complete assignments, who do not prepare for
exams with the intensity required, and who spend more class time scrolling
through their phones than attending to the lecture or discussion.
To interpret student learning assessment results without
giving some consideration to student engagement may be to miss, if not the whole
point, then at least an important part of the narrative. So argued prominent members of
the English faculty last fall.
So when the department assembled
in February to discuss assessment findings for both their majors and core
courses, they also conferred on how they might include evidence of student
participation and engagement in future assessments.
A small group of us met to review various checklists,
rubrics, and protocols that measure student participation and engagement, and
after a lively conversation, the faculty selected the criteria they thought best
reflected how they would determine whether a student is appropriately
engaged in a course. One member
observed, “We talk about student engagement all the time, but we haven’t
defined what it is and we haven’t systematically measured it”—even though many
of these faculty make participation part of the students’ final grade.
The plan is to assess each student’s
participation/engagement according to the selected criteria. This assessment aims to answer a number of
questions raised by the faculty. How
might measuring student engagement better help us understand student success in
our courses? Is a lack of engagement
more prevalent in core courses than in courses designed for the major? Are we inclined to overstate the lack of
student engagement in our courses? How might the results of this assessment
help us better understand the results gathered from course-embedded
assessments?
The scholarship of teaching and learning offers countless
examples and studies describing how active student participation and engagement
improves learning. This is obvious to
anyone who has ever stood before a group of students. The work being done by the English faculty
has the potential to document the extent to which this might be true.
Great column, but what is the answer? (He said, sounding like the typical undergraduate.) I have tried to use participation grades in the past, but felt the criteria were either mechanical (attendance, number of discussion posts, etc.) or very subjective. I have done a little better with using rubrics to grade discussion posts, but I am still less than thrilled with the results. And how do I measure participation in a more global, semester-long sense? What are some ways the English dept. is measuring engagement? Thanks.