Student learning is typically measured using direct or indirect methods. Direct measures provide clear evidence of what students have and have not learned, evidence that assessment leader Linda Suskie says a skeptic would trust.
In contrast, indirect evidence reflects students’ perceptions of what they learned, or other signs of what they probably learned. In the world of assessment, indirect evidence is considered less compelling and less reliable than direct evidence.
I do not dismiss indirect evidence as quickly as some
of my assessment colleagues do. Grades are an example of indirect evidence that tells us what students probably learned (assuming the course had clear learning objectives). True, grades often measure more than just what was learned in a course (e.g., participation and attendance), so there is a limit to how we might
use them in our program assessments. But since so many crucial decisions are
made based on grades—class rank, scholarships, financial aid, acceptance into
graduate or professional school—we have to acknowledge that they are some
measure of student knowledge and ability.
Similarly, students’ perceptions of their learning
provide some insight into educational effectiveness. I’ve always been surprised
when I’ve heard people dismiss findings as “student opinion.” The opinions of our most important
stakeholders should be respectfully considered.
Utica University’s Master of Business Administration (M.B.A.) Program is a case study in how indirect assessments may be used to identify areas for program improvement. For the past few assessment cycles, M.B.A. students have completed an exit survey at the close of their program. The survey asks them to rate the importance of specific knowledge, skills, and competencies, each of which is a learning outcome of the program, and to indicate the extent to which the program helped them achieve these outcomes.
This exit survey, an indirect assessment, measures students’ perceptions of educational gains in both the core curriculum and the areas of specialization. It allows for the systematic collection of assessment results through a sustainable process that yields actionable findings.
In the 2020–2021 assessment cycle, students’ ratings for the Accounting and Finance specialization fell below the desired target. This prompted the M.B.A. faculty to review the curriculum to ensure
that accounting and finance concepts were being adequately reinforced in the
appropriate classes, including those taught by adjuncts.
Another area the faculty identified as an opportunity
for improvement was related to a diversity goal. The learning goal states that students will “Examine business solutions and problems from a global perspective and assess how cultural differences impact business.” When asked how much their M.B.A. education helped them develop this knowledge, 21.8% of students rated it only a “3” on a 5-point scale.
At present, cultural differences are discussed in the leadership class, and
significant time is spent on the topic in a global consumer course, where the
final project centers on various culture clusters of the world. Based on this finding, however, the faculty is investigating ways to weave cultural diversity more fully into the curriculum.
A direct measure of student learning in the M.B.A.
Program is the Peregrine Assessment, a standardized test that measures
graduates’ knowledge and provides benchmark data from peer institutions.
Biology offers another case study in how direct and indirect measures might be combined to tell a meaningful narrative about student learning. Students graduating from this degree program take the Major Field Test in Biology, a standardized examination that measures a learning goal addressing key principles of the biological sciences. They also respond to a senior
survey that asks them to rate how well they believe they achieved the program’s
learning goals.
Direct assessments will probably always be considered
more trustworthy than indirect ones. But indirect methods—survey findings,
acceptances into graduate/professional schools, graduate employment—help us
shape a more comprehensive narrative about student learning in our programs.
Surveys have the added benefit of giving students an opportunity to share their
feedback on a program or learning experience, thereby giving them agency in our
assessment and planning processes.
Works Cited

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. 2nd ed. San Francisco: Jossey-Bass, 2009.