Gathering evidence of student learning or program effectiveness is only a means to an
end. Assessment results are meaningful when they are useful and relevant. What follows are some excellent examples of how departments and faculty
have used assessment to strengthen program offerings or make inquiries into
student success.
The Biology Department examined two years’
worth of results from graduating seniors’ performance on the Major Field
Achievement Test (MFAT) and discovered that students’ knowledge of ecology
(especially at the levels of community, ecosystem, and biome) was an area of
weakness. The faculty then examined the curriculum map for the biology
major and noted that students are exposed to material in ecology only in the
general biology course, unless they elect to take an ecology course. This
discovery had obvious implications for curricular changes and/or modifications
to the major requirements, including adding ecology as a required course for
the major.
The departmental faculty also discussed the learning outcomes for the capstone
courses offered in the major. Because students may choose which capstone course to complete, the faculty determined
that the learning goals should be comparable across them. Specifically, one capstone course was
redesigned as a writing-intensive course to ensure its consistency with the
other capstone experience.
The Physics Department recently submitted a
proposal to the Curriculum Committee for an optional 1-credit course designed to
give students practice applying concrete problem-solving strategies to a wide
range of physics problems. The faculty proposed this change after observing that a number of students
struggled with the math required in PHY 151 and were unable to complete the
homework needed to become proficient with the subject
matter.
The History Department made a significant curriculum change by converting a 1-credit gateway course into a 3-credit course, a modification informed by student performance in the capstone course. Not only
will this change allow for a “broader, deeper, and more comprehensive foundation
for the methods-and-research spine of the program,” it will also allow the
faculty to implement a more sustainable assessment plan that measures growth in
the major.
MBA faculty see assessment as integral to the scholarship
of teaching and learning (SoTL). Stephanie
Nesbitt, Matt Marmet, and Tracy Balduzzi will have the results of one of their
assessments published in the spring edition of The Business Research Consortium Academic Journal of Education. This article, titled “The Impact of Behavioral
Engagement on Outcomes in Graduate Business Blended Learning Environments,” is
part of a larger study on student engagement and student success. They will also present their work as
both a poster and a 60-minute presentation at the Teaching Professor
Conference in Atlanta, GA, this June.
David Fontaine implemented the Athlete Viewpoint survey to measure coaches’ performance, student-athletes’ satisfaction, and student
learning outcomes. He used the findings to transform the athletic program at UC. Michael Cross, Penn State University Assistant Director of Athletics and
co-founder of Athlete Viewpoint, commended Fontaine’s work in a blog post published
at ultimatesportsinsider.com. Cross
writes, “Having real data and analytics should be incorporated into all areas
of [an] athletic program to enhance performance, mentor, lead, and support
decision making.” He credits Dave
Fontaine with doing just that, and he urges other athletic directors to follow this
leadership example.
Thursday, February 8, 2018
How I Talk About Citizenship in the Classroom
By Austen Givens
I can think of few contemporary topics less fashionable to
discuss in the classroom than good citizenship.
But I am also hard pressed to identify any period of
American history since World War II in which discussions of good citizenship were
as important, or as vital, as they are now.
I teach courses on homeland security and cybersecurity at
Utica College. Most of my students aspire to positions of public trust. Alumni
from our suite of academic programs—consisting of criminal intelligence
analysis, criminal justice, cybersecurity, and financial crime
investigation—enter law enforcement agencies, the military, intelligence
agencies, banks, and insurance firms, for the most part. Virtually all of these
positions demand that our alumni exhibit the qualities of good citizens.
So, what are these qualities? And how do I teach them?
Five traits—or virtues—spring to mind: honesty, integrity,
justice, love of public service, and respect for the U.S. Constitution.
I use assignments and activities, and I lead by example, to
promote these qualities of good citizenship.
Honesty
The free exchange of ideas is vital to healthy democracies. So,
I encourage a free exchange of ideas in my classes. That free exchange extends
to views that some could find offensive or vile. Reasonable adults may disagree
forcefully. Yet they can do so in a civil, professional manner. If a student
wants to air a controversial viewpoint, I let her do it. And if other students
wish to attack that controversial idea, I gently—but firmly—encourage them to
remain focused on the idea itself, not the person who aired it, and to demolish
the idea with evidence and facts.
Integrity
I try to model integrity for my students. Over the years I
have made my share of mistakes in the classroom. I’ve written unclear quiz
questions, for instance. And it is almost always a student, not me, who
identifies these poorly written quiz questions. I try to convert these mistakes
into learning opportunities. I publicly thank the student for pointing out
the issue with these questions, even if the student has been gracious enough to
bring it to my attention privately. And I bias corrective action in favor of
students’ interests. Those ambiguous quiz questions? I usually give the entire
class credit for them, regardless of their actual responses. If I expect my
students to act with integrity, then I am convinced that I must show them what
integrity looks like in action.
Justice
To me, justice means consistently doing what is fair and
proportionate. In a graduate course that I teach, CYB-667 – Critical Incident
Command, Response, and All-Hazards, I have students participate in a peer
exchange activity to reinforce this. Students produce a written report, then
swap these reports with their classmates and score them using a rubric that I
provide. This is designed to do at least two things: first, it helps students
to understand the value of peer editing; and second, this activity permits
students to assess work in an objective way. Evaluating a situation, a person, or
a report with reference to objective standards is an exercise in justice,
besides being a highly effective way to work. Cultivating justice in this
way reinforces a good habit that can be used over a lifetime.
Love of Public
Service
I am persuaded that public servants must love public service
to be truly effective. In practice, this means that public servants must learn
to act in ways that are ultimately consistent with the public interest. That
requires critical thinking skills.
I’ve taught multiple classes at Utica College about
terrorism, such as CRJ-305 – Terrorism and CRJ-307 – Homeland Security and
Counterterrorism. In written assignments in these classes I try to re-direct
students’ attention to what may be unspoken or downplayed—the constitutional
rights of terrorism suspects, for example, or the costs associated with
prosecutors seeking the death penalty in terrorism cases rather than long
prison sentences.
I do this because, in the world of counterterrorism, acting in
the public interest often means restraining government action. This notion runs
contrary to the let’s-blow-’em-to-smithereens political rhetoric about
terrorism that we often hear in popular media.
But more importantly, it is about learning to act first with
the public interest in mind.
Respect for the U.S.
Constitution
I try to emphasize to students that basic principles like
the freedom of speech (1st Amendment), the right to bear arms (2nd
Amendment), and protections against unreasonable searches and seizures (4th
Amendment) are woven into the fabric of American life.
For example, much has been made of hate speech and “fake
news” on the Internet, and these topics are often fodder for written
discussions in my online classes.
Calls to strengthen or relax Constitutional provisions to
deal with these problems inevitably return to questions about the Constitution
itself. The Constitution may set forth impossible-to-achieve ideals. If we are
to move forward as a society, however, we must at least respect the
Constitution. For whatever flaws it contains, it also provides well-designed
guardrails for life in a republic.
***
These are some of the tools and techniques that I use to
teach good citizenship at Utica College. Do you do anything along these lines?
Do you have any new activities or ideas that you could recommend to make good
citizenship come alive in the classroom? Let me know!
Thursday, January 25, 2018
The AACC Responds to Erik Gilbert's Commentary on Assessment
Erik Gilbert’s recently published commentary in The Chronicle of Higher Education (January
12, 2018), “An Insider’s Take on Assessment:
It May Be Worse Than You Thought,” voices a healthy skepticism about
assessment that is not uncommon among college faculty and certainly not without
merit. Gilbert argues that assessment
results are not really useful and do not effect change in student
learning. As the Academic Assessment
Coordinating Committee (AACC) learned when its members administered a survey
about assessment practices to UC full-time faculty in October 2017, many of our
colleagues agree with Gilbert’s claim.
The majority of faculty who responded to that survey indicated that the
assessments they do for the institution
have not been meaningful. (An executive
summary of this survey’s results may be accessed at www.utica.edu/academic/Assessment/new/.)
What the AACC takes issue with are Gilbert’s bold and
emphatic conclusions that “assessment does not work” and that assessors
themselves, as well as college administrators, know this and yet persist with
the charade. He offers no evidence to
substantiate the latter claim, but the former one he supports with a few ripe
lines cherry-picked from David Eubanks’ article, “A Guide for the Perplexed.” However, Eubanks never says that assessment
doesn’t work. He asserts that assessment
results have limited usefulness because
the methods used to collect and analyze them are often poor.
Trudy Banta and Charles Blaich, whose research Eubanks cites in his article and Gilbert later references
in his commentary, found little evidence that assessment resulted in “actual
change” and even less evidence of improvements being monitored for any duration
of time “to see if the desired outcomes are attained.” But they don’t conclude assessment doesn’t
work. They offer a number of plausible
explanations why there is little evidence about “closing the assessment
loop.” One such possibility that AACC
members concur with is that institutions might be imposing unrealistic timelines for
assessment. “Collecting and reviewing
reliable evidence from multiple sources can take several years,” they write, but
“state mandates or impatient campus leaders may exert pressure for immediate
action.” Banta and Blaich also note that
most assessment processes focus more on collecting data and less on sharing it
and engaging faculty or other stakeholders in conversations about it.
Gilbert is misleading when he argues that “academic
administrators have been acquiescent about assessment for so long” in order to justify
educational offerings that do not involve traditional faculty. He attributes the following context to the
assessment movement: expanded use of
adjunct faculty, growth of dual enrollment, and an increase in online
education. In reality, the outcomes assessment
“movement” has been part of the higher education landscape
for three decades. As with any paradigm
shift, it resulted from and was influenced by a variety of factors: employer perceptions that college graduates
were poorly prepared; American students’ mediocre performance compared to their
peers from other nations; the rising costs associated with higher education
that had various stakeholders questioning its value; the landmark Spellings
Commission report in 2006 that demanded accountability from colleges and
universities and urged accreditors to evaluate institutions based on their success with respect to student
achievement; and two Presidential administrations spanning 16 years that
required regional accreditors to “tie the renewal of accreditation every five
years to serious effort on the part of colleges and universities under review
to create meaningful learning outcomes measures” (Neuman).
Erik Gilbert may think it is time to dismiss assessment, but
we do not believe that it will soon be eliminated from the higher education agenda
or that accrediting bodies will revise their standards and no longer require
programs or institutions to show evidence of student achievement and continuous
improvement. If Gilbert is correct in
his perception that “assessment has not caused (and probably will not cause)
positive changes in student learning,” then it is our responsibility to change
that. Eubanks proposes a “future where
assessment leaders work closely with institutional researchers and scholars to
create large sets of high-quality data” that he believes will meaningfully
measure student achievement. Banta and
Blaich recommend that “Assessment efforts must be upgraded to ensure that they
are far more likely than they are at present to lead to improvements in student
learning.”
We believe assessment should be faculty-driven, with the
faculty in each department selecting the best metrics to measure how well
students are achieving both course-level and program learning goals. We believe our efforts and our processes
should be informed by research and scholarship in assessment, teaching, and learning,
and not founded on opinion or anecdote.
To these ends, we invite our colleagues to join us at the March 23 Faculty
Forum to discuss how we might improve our processes so that they are meaningful and
really do improve student outcomes.
Works Cited
Banta, Trudy W., and Charles Blaich. “Closing the Assessment Loop.” Change: The Magazine of Higher Learning (2011): 22-27.
Eubanks, David. “A Guide for the Perplexed.” Intersection (Autumn 2017): 4-13.
Gilbert, Erik. “An Insider’s Take on Assessment: It May Be Worse Than You Thought.” The Chronicle of Higher Education, 12 January 2018.
Neuman, W. Russell. “Charting the Future of US Higher Education: A Look at the Spellings Report Ten Years Later.” AAC&U Liberal Education (Winter 2017): 6-13.
Wednesday, November 29, 2017
Involving Students in Assessment
By Ann Damiano
In her keynote address at the Assessment Network of New York
conference (April 2017), Natasha Jankowski, Director of the National Institute
for Learning Outcomes Assessment, challenged participants to develop assessment processes
that are student-centered. She
concluded that assessment is something we should do with students, not something that is done to students.
Multiple stakeholders should be involved in our assessment
efforts, particularly when it comes to communicating and interpreting results, as
well as generating plans based on these results. Students
are our most important stakeholders, and so their involvement in the process
is imperative.
One way to do this is to include students in the dissemination plan for
institutional survey results. Findings
from NSSE, the Noel-Levitz Student Satisfaction Inventory, and even the Student
Opinion on Teaching (SOOT) might be shared with student leaders. If warranted, students could collaborate with
personnel in Academic and Student Affairs to create plans or make
recommendations based on the survey results.
For example, if NSSE findings indicate that fewer than 60% of seniors
perceive that the College contributed to their understanding of people different
from them, students might propose ways the institution could improve its
curricular and co-curricular offerings so that we are more successful at
achieving this tenet of our mission.
When assessing student learning goals, we should not assume
students share the same operational definitions as their faculty. That they might not underscores the
importance of getting their input into what results mean, and likewise,
highlights the importance of using multiple methods to assess a single
goal.
Most recently (and at my previous institution), I assembled
two student groups to review results related to integrating knowledge, problem-solving,
quantitative reasoning, and intercultural competence. For each of these learning goals, the
findings from diverse sources either conflicted with one another or the results
indicated that no matter what “improvements” faculty made to the curriculum, we
were still not achieving the desired outcomes.
The students brought a different perspective to the discussion than that
articulated by the three faculty groups that reviewed the data. Important insights from the students included
the following:
- Students defined “integrating knowledge” as applying classroom learning to real-life situations, whereas faculty used it to refer to applying what was learned in one course to another;
- Problem-solving is best developed in the co-curricular experience, where students are often forced to derive solutions independently, as opposed to in the curricular experience, which is much more structured and faculty-directed;
- While the college may provide numerous offerings related to inclusion and diversity, a lack of diversity on the faculty combined with pedagogies that do not promote inclusion and the absence of global perspectives in courses throughout the curriculum potentially contributed to students not achieving the desired outcome related to intercultural competence.
The students’ interpretations of assessment findings dared the
faculty to make improvements that challenged them in ways their own conclusions
had not. Rethinking one’s pedagogy, for
instance, requires much greater effort and imagination than adjusting course
requirements or modifying an assessment instrument. Yet new pedagogical approaches may be necessary if we are going to help students achieve the desired outcomes.
Collaborating with students on assessment results expands
our understanding of what the results might mean. As one faculty member noted, including
students in our processes “sends a message that we do this for the students,
that they’re the major stakeholder, and they literally have a seat at the
table.”
Wednesday, November 1, 2017
The "Apprenticeship Model" for Enhanced Student Learning
By Steven M. Specht
Like most students who choose to major in psychology, I enjoyed my abnormal
psychology course as an undergraduate and, at the time, didn’t really
understand why I needed to take statistics. Also, like most students, I could
have completed my bachelor’s degree by simply taking the required courses without
being involved directly with conducting research or doing any kind of internship.
But because I took my schoolwork seriously and I expended the time and effort
to write and revise the papers I submitted for classes, my professors noticed.
After I earned an “A” in Research Methods, one of the most challenging courses
I took in college, Dr. James McCroskery asked if I would be interested in
doing research with him the following semester. I was thrilled and jumped at
the opportunity. Our research examined the relationship between the Type-A
behavior pattern and self-reports of minor body symptoms (e.g., headaches, skin
rashes, insomnia). Our work eventually led to a presentation at the annual
meetings of the Eastern Psychological Association in 1982.
My
experience as an undergraduate research assistant for one of my professors
became the springboard to my work as a research assistant at Colgate University
and my eventual admission into the doctoral program in psychobiology at
Binghamton University.
In
fact, the “apprentice model” is the tradition in scholarly training in the
empirical sciences. My doctoral advisor typically ran her lab with four or five
graduate student “apprentices” and a cadre of undergraduate research
assistants. In addition to publishing papers while in graduate school, we were
expected to present our research at the annual meetings of the Society for
Neuroscience and the Eastern Psychological Association (as my undergraduate
advisor had done). This all makes sense when you think about the fact that empirical
research is not simply learning a collection of facts, but rather is an
intensive scholarly enterprise which requires being actively involved with the
processes of science.
I
am proud to say that I have continued the tradition of the apprenticeship model
throughout my career both at Lebanon Valley and Utica College by inviting promising
students to get involved with the research that I have conducted over the
years. Their involvement affords them opportunities to learn the process
of research and to present their work at local, regional and national
conferences. And as they did for me, these learning experiences typically transform
students’ lives by making them more competitive graduate school candidates or
potential employees.
But
since this is an assessment blog, I suppose I should mention something about
assessment. Assessing programmatic outcomes from an apprenticeship model is fairly
straightforward and requires no rubric. Although the gold standard of
publication is often elusive, the silver standard of conference presentations
is easy to document and is generally recognized as an externally validated and
valued accomplishment. Virtually all departments are well aware of these
standards. A potential problem arises when administrators don’t realize how
useful these data are for the institution in terms of assessment
(and for marketing and advancement).
Assessing
the outcomes that students gain individually from being part of a research
“apprentice” program is perhaps more challenging. The face validity of such involvement seems
apparent. For anyone who has worked with students in this capacity, the
transformation of the students seems “obvious.” It might seem reasonable,
however, to compare graduate school acceptance rates or employment rates of
students who were apprentices during their undergraduate years with those of students who
were not involved. These data would be confounded by differences in levels of
“pre-apprentice” motivation or initiative. It is also typical that students who
work closely with a faculty member obtain more impressive and informative
letters of recommendation.
But
now perhaps we have gone too far down the assessment path. It is traditional in
the empirical sciences (and other disciplines) to provide opportunities for
students to become involved with active learning that transforms them as
scholars and citizens. Hmmm, “tradition, opportunity, transformation”… coincidentally,
that’s the Utica College slogan that preceded “Never Stand Still”.
Wednesday, October 25, 2017
The Provost Reflects on Assessment at UC
With respect to academic assessment, the last several years
at Utica College, especially 2016-2017, have been busy, somewhat painful, and
incredibly fruitful in terms of developing a meaningful, coherent, and useful
system. I say that with a touch of irony,
since I know that Utica College’s faculty members have always assessed
their students’ learning and their own success in teaching them, and the
College has been a pioneer in some practices, like systematic and regular
program reviews, that have become standards in our profession.
Nonetheless, we have realized for some time that we have not
kept pace sufficiently with our colleagues at other institutions in terms of
the emerging best professional practices for generating, reporting, and acting
on academic assessment data. So…the last
few years have been a heavy lift, getting back up to speed and recapturing our
former leadership position. It would be
idle not to admit that one of the spurs to action has been the prospect of our
reaccreditation review by Middle States.
But, of course, that is one of the purposes of reaccreditation – to spur
institutions on in a process of self-examination and recognition of areas to
improve.
More important, I think, has been the increasing recognition
by all of us that, while the national “assessment movement” has been associated
with more than its share of hyperbolic rhetoric, it has behind it some very
important, and highly academic, values.
As academics, we value evidence.
As academics we value action that is impelled by careful scrutiny of
evidence and rational planning based on it.
We value processes that are systematic rather than haphazard and
idiosyncratic. We value progress rather
than stagnation, and challenge rather than complacency. Most importantly, I have never seen a
faculty more engaged with and committed to its students than UC’s faculty. We value being able to bring out the best of
ourselves on their behalf.
My sense is that this is a faculty that may have taken its
time shaping a fully formed academic assessment agenda and culture, yet is now
rapidly developing a high level of skill in, and sense of the importance of, assessment
in achieving our long-standing educational goals. As assessment processes become more and more
embedded in our individual professional lives and in our institutional fabric,
we realize increasingly how imperative it is that we engage in effective
assessment in order to identify specific areas of weakness and strength. This positions us to address the current
challenges we face and to improve the student experience. We expect this increasingly of ourselves and
of each other, which will not change after the Middle States team leaves. Very quickly we have become dependent on
assessment, a dependency that is both healthy and empowering, and constitutes a
resource that we will use increasingly as we pursue our aspirations for our
students and ourselves.