Thursday, April 25, 2019

Paula Carey Reflects on What She Learned from Serving on the Assessment Committee


For six years, Paula Carey, Associate Professor of Occupational Therapy, served on the Academic Assessment Coordinating Committee.  She read program reviews, served as the lead reader for annual goal reports and self-studies, kept minutes of meetings, and, in a spirit of cooperation, shared her expertise with faculty colleagues and members of the committee.

Paula is retiring after thirty-plus years at Utica College, which means her tenure as a member of AACC is drawing to a close.  But before she sings the final verse of the Alma Mater, she shared her reflections with me on what she learned from serving on the assessment committee.  She also offered some advice regarding what the College needs to do to build and support a culture of assessment.

The best part about serving on AACC, according to Paula, is being exposed to the wonderful work being done in other departments.  Faculty tend to be insulated in their own departments, she said, operating in silos. Reading assessment reports and talking to faculty during the program review process deepened her appreciation for the work her colleagues in all disciplines were doing.  She described their work as “very student-centered” and her faculty peers as “dedicated to teaching and learning.” The assessment reports and program review self-studies showed her just how intentional faculty are about curriculum and pedagogy, and how much effort they put into making an impact in students’ lives. 

“We’re all dealing with similar problems,” Paula said, “but each department deals with them differently.”   As a faculty, we need to have more opportunities to share what we are doing with one another and to exchange ideas across departments and disciplines.  Her work on AACC gave her the chance to do just that and convinced her of how beneficial these conversations are.

Serving on AACC also gave Paula more of an institutional perspective.  “I learned where Utica College might strengthen some of its processes, and I learned where there is a shortage of resources.” 

Paula has some advice for the College’s administration. 

“The process works better when it is faculty driven and respectful of faculty perspective,” she stated.  She added that campus leaders must acknowledge the time, energy, and effort it takes to do assessment well. 

“We should look for champions of assessment who can assist with the process,” she recommended.  
   
Thank you, Paula, for your service to UC and the Academic Assessment Coordinating Committee.  Your insights will continue to guide us on the assessment path, just as they guided us on the committee. 

Wednesday, April 17, 2019

Wicked Problems and Assessment


I’ll admit it.  When John Camey, Interim Dean of the School of Business and Justice Studies, told me he published a paper titled “The Wicked Problem of Assessment,” I reacted defensively.  Imagine, I thought, if I referred to someone’s discipline or the work they had been doing for over two decades as a “wicked problem.” 

But let’s face it.  We learn more from diverse views than from those that reflect our own.  As Walter B. Greenwood, my undergraduate professor in contemporary lit, said, “Having old certainties destroyed by new considerations is one of the hazards of reading.”

I read the paper.

Far from disagreeing with what John Camey and his co-author, Carolyn E. Johnson, said, I thought they made a lot of practical sense.  

A wicked problem, I learned, is not one that is inherently evil but one for which there is no straightforward solution.  Camey and Johnson offer ten reasons why assessment meets the criteria of a wicked problem.  A few of these help explain why assessment frustrates faculty and, more importantly, how assessment professionals and accrediting bodies may have unwittingly caused certain vexations.   

The authors note that when accreditors first required institutions to assess student learning, or provide “assurance of learning,” they offered little assistance with how to achieve this. Faculty were supposed to “figure it out.” However, the methods they had been using for years—i.e., grades—were not considered valid measures.  So how were faculty supposed to navigate this new terrain?  As Camey and Johnson observe, “An entire industry of workshops, seminars, conferences and travelling consultants [grew] up to help.” Needless to say, each of these had a price tag, and some were quite hefty.

Wicked problems, Camey and Johnson explain, do not lead to right or wrong answers, but only good or bad solutions.  This might be where assessment is most frustrating, because “good” or “bad” are arbitrary judgments.  We see this in the accreditation process.  One visiting team may consider the assessment efforts at an institution to be acceptable, while a second visiting team rules them not good enough.  Even within an institution, what constitutes “good enough” for one group of members on an assessment committee may not be sufficient for the team that replaces them.  Small wonder that in 2018, 36% of faculty responding to a survey on assessment culture agreed that assessment is "based on the whims of the people in charge," and in 2017, UC faculty described assessment as a moving target where the expectations continuously changed. 

At most institutions, the assessment cycle is the academic year.  On an annual basis, faculty assess student learning and document the results in some type of report.  Besides reporting results, faculty are expected to reflect upon the findings and articulate how they will be used to improve teaching and learning.  In subsequent assessment cycles, they should provide evidence as to whether the changes they made resulted in better learning.  

This process, which many institutions like to represent graphically as a continuing sequence in a circular flow, may not facilitate effective, meaningful assessment.  Camey and Johnson assert that it might take “multiple semesters before sufficient data can be gathered to determine whether our solution is good, bad, or not quite good enough.”  This claim echoes Trudy Banta and Charles Blaich's conclusion that “Collecting and reviewing reliable evidence from multiple sources can take several years” but “state mandates or impatient campus leaders may exert pressure for immediate action.” 

Camey and Johnson offer a solution to the wicked problem of assessment:  an assessment committee that is chaired by a designated leader, an “assessment czar,” and composed of faculty who rotate off periodically.  At UC, we already have such a solution in place, but is it sufficient?

It strikes me that it is also important to have clearly defined criteria that communicate what “good” assessment is, and it is critical to view assessment as a process of inquiry and analysis, not a fill-in-the-blank, paint-by-number activity. 

But let’s hear from faculty on this topic.  How might we collaborate not to solve but to address this wicked problem of assessment?





Works Cited


Banta, T. W., Blaich, C. (2011). Closing the Assessment Loop. Change: The Magazine of Higher Learning, 22-27.
Camey, J. P., Johnson, C. E. (2011). The Wicked Problem of Assessment. Journal of Business and Behavioral Sciences, 23(2), 68-83.
Sam Houston State University. (2018). 2018 National Faculty Survey of Assessment Culture. https://www.shsu.edu/research/survey-of-assessment-ulture/documents/Nationwide%20Faculty.pdf

Wednesday, April 3, 2019

Measuring Student Engagement and Participation

By Ann Damiano with Elizabeth Threadgill


I remember reading an assessment report years ago in which the department faculty wrote, “Most of the students met expectations for this goal, and those that did not were just lazy and didn’t study.” 

Few declarations make an assessment committee’s eyes roll more than a statement like that.  From the very beginning, proponents of assessment made learning a shared responsibility between instructor and student, and the ultimate purpose of assessment was to inform continuous improvement.  Blaming students for poor learning outcomes is antithetical to the very principles advanced by the assessment movement.

And yet, the fact remains that there are students who do not actively engage in their learning, who miss numerous classes or habitually arrive late, who often neglect to complete assignments, who do not prepare for exams with the intensity required, and who spend more class time scrolling through their phones than attending to the lecture or discussion. 

To interpret student learning assessment results without giving some consideration to student engagement may be to miss if not the whole point, then at least an important part of the narrative. So argued prominent members of the English faculty last fall.   

So when the department assembled in February to discuss assessment findings for both their majors and core courses, they also conferred on how they might include evidence of student participation and engagement in future assessments.  
 
A small group of us met to review various checklists, rubrics, and protocols that measure student participation and engagement, and after a lively conversation, the faculty selected criteria they thought best reflected how they would determine whether or not a student is appropriately engaged in a course.  One member observed, “We talk about student engagement all the time, but we haven’t defined what it is and we haven’t systematically measured it”—even though many of these faculty make participation part of the students’ final grade.

The plan is to assess each student’s participation/engagement according to the selected criteria.  This assessment aims to answer a number of questions raised by the faculty.  How might measuring student engagement better help us understand student success in our courses?  Is a lack of engagement more prevalent in Core courses than in courses designed for the major?  Are we inclined to overstate a lack of student engagement in our courses? How might the results of this assessment help us better understand the results gathered from course-embedded assessments? 

The scholarship of teaching and learning offers countless examples and studies describing how active student participation and engagement improve learning.  This is obvious to anyone who has ever stood before a group of students.  The work being done by the English faculty has the potential to document the extent to which this might be true. 


Reflection as a Means of Measuring the Transformative Potential of Higher Education

Several years ago (and at another institution), I attended a meeting where a faculty member was presenting a revised general education curri...