Program assessment, when done well, enables us to tell a compelling story, one that showcases our department’s strengths. It gives us a story to share with prospective students, donors, accreditors, and grant-funding agencies (and, hopefully, to attract them).
These are the stories that will reinspire public confidence
in higher education.
Thanks to program assessment, the biology faculty can report
that their majors exceed nationwide and peer-school averages in all content
areas on the Major Field Test, a standardized exam given in the senior year. The
Education Department can say that 100% of superintendents, assistant
superintendents, and principals from area school districts indicated they would
definitely hire a Utica University graduate.
On the flip side, good program assessment also informs
continuous improvement. For instance, it illuminates where the curriculum might
be modified to enhance student success in the program, or how course content
might be changed either to improve student performance or to ensure that the
curriculum is current.
Course-level assessments in and of themselves don’t help
departments narrate their story. Instead, they tell us how a group of students
in a single course performed on an assignment on a given day. Such information is critically important to
the instructor teaching the course, but one cannot make reasonable or reliable
claims about how well students are performing in a program by simply reporting
course-level results. Course-level
assessment results do not highlight program strengths. Neither do they inform
continuous improvement at the program level.
This does not mean that course-embedded assessments can’t be
used for program assessment. Research papers, capstone projects, presentations,
evaluations from clinical or internship supervisors—these are authentic
assessments that allow us to gather evidence of student performance in our
programs without necessarily adding to our workload.
However, the results of these assessments should be analyzed
with respect to program goals. If a capstone project is used as the assessment
artifact, for instance, is it evaluated using a rubric that aligns with the
program goals? If the department is measuring student performance at various
transition points in the curriculum, are all faculty using a common instrument
and are the results analyzed in aggregate, showing how student performance
improves (or doesn’t improve) as one advances through the curriculum?
A few years ago, in a blog on this very topic, I wrote, “It
might be tempting to think that a handful of course-level assessments will add
up to program assessment, but they do not. Program assessment considers the
bigger picture—program-level goals—and is typically outcomes based.” The best
place to start is at the program’s conclusion in a capstone experience, using a
culminating assignment that measures all of the program learning goals.
As departments plan for assessment in 2024-2025, the
Academic Assessment Committee urges them to consider where and how they will do
program assessment. You’d be surprised at how much less cumbersome
assessment is when you aren’t reporting on multiple course-level assessments!