By Steven M. Specht, John Schwoebel
and Tyson C. Kreiger
Over the past two
decades, assessment of various teaching- and learning-related endeavors in
higher education has become ubiquitous -- and by many accounts, onerous. Most
local, state and federal entities affiliated with accreditation and monetary
decision-making have increasingly mandated submission of assessment outcomes from
all levels of higher education.
One of the annual
assessments our department has been doing since 2001 involves monitoring
whether our students are learning the content material in the various sub-disciplines within psychology – something that aligns with one of our
overarching program goals. In order to assess whether students are acquiring
and retaining content, we have been administering an objective content exam (i.e.,
multiple-choice) within the context of our History of Psychology course, which
is required of all majors and restricted to senior-level students.
From 2001 through 2012, the content exam consisted of 90 multiple-choice
questions covering nine general content areas in psychology (i.e., physiological, learning and motivation, sensation and perception, cognitive, social, developmental, intelligence/personality, abnormal/clinical, and statistics/research design) taken from a pool of questions from one of the premier general
psychology textbooks. During the 2011-2012 academic year, administrative
entities suggested that the department use an external content exam (i.e., an instrument not created by the department's faculty). Since fall 2012, the department has used the
“external” Major Field Test in Psychology developed by the Educational
Testing Service (Princeton, NJ).
For
three recent semesters, students’ percentile ranking on the ETS exam was
correlated with their cumulative psychology grade point average and their
overall cumulative grade point average. All of the correlation coefficients
were positive and statistically significant.
Another strategy that our
department has employed to assess students’ acquisition of content knowledge within individual courses involves
administration of a short test within the first week of the semester, followed by
administration of the same test during the last week of the semester (i.e., a
pre-test, post-test strategy). We have used this strategy for a variety of
courses, including Introductory Psychology (PSY101), Statistics for the
Behavioral Sciences (PSY211), and Research Methods (PSY312). One of our
simple analyses of these data consists of looking at the relationship between
the change (delta) in scores from pre-test to post-test and students' overall grade for the course. The results of these
analyses revealed positive and statistically significant correlations for each
of these courses.
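For readers who wish to run a similar analysis, the short Python sketch below illustrates the basic computation. The file and column names it uses (scores.csv, pre_score, post_score, final_grade) are hypothetical placeholders rather than our actual materials; the script is offered only as an illustration of correlating pre-to-post change scores with final course grades.

```python
# A minimal, illustrative sketch of the pre-test/post-test analysis described
# above. File and column names (scores.csv, pre_score, post_score, final_grade)
# are hypothetical placeholders, not the department's actual data.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("scores.csv")

# Each student's change (delta) from the pre-test to the post-test
df["delta"] = df["post_score"] - df["pre_score"]

# Pearson correlation between the delta scores and final course grades
r, p = pearsonr(df["delta"], df["final_grade"])
print(f"delta vs. course grade: r = {r:.2f}, p = {p:.4f}")
```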
In response to administrative
requests over the years for assessment measures of students’ content knowledge
other than course grades, our department has collected various auxiliary data. These
auxiliary data are virtually all positively correlated with course grades and
seem to demonstrate that course grades are adequate measures of students' content knowledge. This is consistent with Suskie's (2009) suggestion that well-crafted and well-administered objective exams can adequately assess content knowledge. Of course, this position depends upon the integrity of
individual faculty members with regard to grading practices. And the
less-than-perfect correlations probably reflect the fact that most faculty integrate factors other than content knowledge (e.g.,
attendance and participation, writing skills, extra credit opportunities) when
assigning grades. In light of our findings, it appears that additional
assessment instruments are mostly redundant with traditional, time-tested
(albeit not too “sexy”) course grades.