Erik Gilbert’s recently published commentary in The Chronicle of Higher Education (January 12, 2018), “An Insider’s Take on Assessment: It May Be Worse Than You Thought,” voices a healthy skepticism about assessment that is not uncommon among college faculty and certainly not without merit. Gilbert argues that assessment results are not really useful and do not effect change in student learning. As the Academic Assessment
Coordinating Committee (AACC) learned when its members administered a survey
about assessment practices to UC full-time faculty in October 2017, many of our
colleagues agree with Gilbert’s claim.
The majority of faculty who responded to that survey indicated that the
assessments they do for the institution
have not been meaningful. (An executive
summary of this survey’s results may be accessed at www.utica.edu/academic/Assessment/new/.)
What the AACC takes issue with are Gilbert’s bold and
emphatic conclusions that “assessment does not work” and that assessors
themselves, as well as college administrators, know this and yet persist with
the charade. He offers no evidence to
substantiate the latter claim, but the former one he supports with a few ripe
lines cherry-picked from David Eubanks’ article, “A Guide for the Perplexed.” However, Eubanks never says that assessment
doesn’t work. He asserts that assessment
results have limited usefulness because
the methods used to collect and analyze them are often poor.
Trudy Banta and Charles Blaich, whose research Eubanks cites in his article and Gilbert later references
in his commentary, found little evidence that assessment resulted in “actual
change” and even less evidence of improvements being monitored for any duration
of time “to see if the desired outcomes are attained.” But they do not conclude that assessment doesn’t work. Instead, they offer a number of plausible explanations for why there is so little evidence of “closing the assessment loop.” One possibility with which AACC members concur is that institutions might be imposing unrealistic timelines for assessment. “Collecting and reviewing
reliable evidence from multiple sources can take several years,” they write, but
“state mandates or impatient campus leaders may exert pressure for immediate
action.” Banta and Blaich also note that
most assessment processes focus more on collecting data and less on sharing it
and engaging faculty or other stakeholders in conversations about it.
Gilbert is misleading when he argues that “academic administrators have been acquiescent about assessment for so long” in order to justify educational offerings that do not involve traditional faculty. He attributes the following developments to the assessment movement: the expanded use of adjunct faculty, the growth of dual enrollment, and an increase in online education. In reality, the outcomes assessment “movement” has been part of the higher education landscape for three decades. As with any paradigm
shift, it resulted from and was influenced by a variety of factors: employer perceptions that college graduates
were poorly prepared; American students’ mediocre performance compared to their
peers from other nations; the rising costs associated with higher education
that had various stakeholders questioning its value; the landmark Spellings Commission report in 2006 that demanded accountability from colleges and
universities and urged accreditors to evaluate institutions based on their success with respect to student
achievement; and two presidential administrations spanning 16 years that required regional accreditors to “tie the renewal of accreditation every five years to serious effort on the part of colleges and universities under review to create meaningful learning outcomes measures” (Neuman).
Erik Gilbert may think it is time to dismiss assessment, but
we do not believe that it will soon be eliminated from the higher education agenda
or that accrediting bodies will revise their standards and no longer require
programs or institutions to show evidence of student achievement and continuous
improvement. If Gilbert is correct in
his perception that “assessment has not caused (and probably will not cause)
positive changes in student learning,” then it is our responsibility to change
that. Eubanks proposes a “future where
assessment leaders work closely with institutional researchers and scholars to
create large sets of high-quality data” that he believes will meaningfully
measure student achievement. Banta and
Blaich recommend that “Assessment efforts must be upgraded to ensure that they
are far more likely than they are at present to lead to improvements in student
learning.”
We believe assessment should be faculty-driven, with the faculty in each department selecting the best metrics to measure how well students are achieving both course-level and program-level learning goals. We believe our efforts and our processes
should be informed by research and scholarship in assessment, teaching, and learning,
and not founded on opinion or anecdote.
To these ends, we invite our colleagues to join us at the March 23 Faculty
Forum to discuss how we might improve our processes so that they are meaningful and really do improve student outcomes.
Works Cited
Banta, Trudy W., and Charles Blaich. "Closing the Assessment Loop." Change: The Magazine of Higher Learning (2011): 22-27.
Eubanks, David. "A Guide for the Perplexed." Intersection (Autumn 2017): 4-13.
Gilbert, Erik. "An Insider's Take on Assessment:
It May Be Worse Than You Thought." The Chronicle of Higher Education
12 January 2018.
Neuman, W. Russell. "Charting the Future of US Higher Education: A Look at the Spellings Report Ten Years Later." AAC&U Liberal Education (Winter 2017): 6-13.