Wednesday, September 18, 2019

But We've Always Done Assessment: Course-Level Versus Program-Level Assessment


When faculty argue that they have always done assessment and made changes to improve learning, they are absolutely correct.  The 2018 self-study report completed by the Department of Physics calls this “assessment by rumination,” asserting that it “happens all the time among faculty” and was happening “long before ‘rubrics’ and ‘operational goals’ . . .  barged their way into the lexicon of assessment.”

What faculty are referring to when they make these claims is course-level assessment. Course-level assessment has been happening for centuries.  However, the type of assessment that will inspire public confidence in higher education is not course-level, but program-level or institution-level. These are the assessments that provide the evidence we need to tell our stories to external stakeholders, including prospective students, parents, donors, accreditors, and grant-funding agencies.

Imagine you are promoting your academic program to a group of parents and prospective students.  You want to attract the best and the brightest students in the audience, and, while you aren’t crass enough to say it, you want each family to spend over $100,000 and probably amass significant debt. 

You can tell them what faculty colleagues at every other college they visit will tell them: we have a great faculty dedicated to excellence in teaching; our curriculum is current, relevant, and exciting; we offer students opportunities for research, internships, and community engagement; in our program, you are a person, not a number. 

But how might you tell a story that distinguishes your program and offers evidence supporting your claims?   

Findings from course-level assessments won’t help you here.  No parent or prospective student will care that last spring semester, 83% of the students in XXX-course met or exceeded expectations on a quiz.  They might, however, be interested to learn that in UC’s physics program, students in introductory and intermediate-level courses often exceed the national average on a standardized pre/post assessment.  Likewise, they would probably be interested to know that internship supervisors evaluate how well criminal justice students apply what they learned in their program to a real-world setting.  

Program-level assessment is not more important than course-level. It’s just different—and it serves a different purpose. It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not. Program assessment considers the bigger picture—program-level goals—and is typically outcomes-based.

Assessment guru Linda Suskie recommends, “Program-level outcomes are often best assessed with a significant assignment or project completed shortly before students graduate.”  

As your department plans for the 2019-2020 assessment cycle, consider how you might use your assessments to tell your program’s story. You might even find that your assessment efforts become simpler, more organic, and less burdensome than they have been. 

Thursday, April 25, 2019

Paula Carey Reflects on What She Learned from Serving on the Assessment Committee


For six years, Paula Carey, Associate Professor of Occupational Therapy, served on the Academic Assessment Coordinating Committee.  She read program reviews, served as the lead reader for annual goal reports and self-studies, kept minutes of meetings, and, in a spirit of cooperation, shared her expertise with faculty colleagues and members of the committee.

Paula is retiring after thirty-plus years at Utica College, which obviously means her tenure as a member of AACC is drawing to a close. But before she sings the final verse of the Alma Mater, she shared with me her reflections on what she learned from serving on the assessment committee. She also offered some advice regarding what the College needs to do to build and support a culture of assessment.

The best part about serving on AACC, according to Paula, is being exposed to the wonderful work being done in other departments.  Faculty tend to be insulated in their own departments, she said, operating in silos. Reading assessment reports and talking to faculty during the program review process deepened her appreciation for the work her colleagues in all disciplines were doing.  She described their work as “very student-centered” and her faculty peers as “dedicated to teaching and learning.” The assessment reports and program review self-studies showed her just how intentional faculty are about curriculum and pedagogy, and how much effort they put into making an impact in students’ lives. 

“We’re all dealing with similar problems,” Paula said, “but each department deals with them differently.”   As a faculty, we need to have more opportunities to share what we are doing with one another and to exchange ideas across departments and disciplines.  Her work on AACC gave her the chance to do just that and convinced her of how beneficial these conversations are.

Serving on AACC also gave Paula more of an institutional perspective.  “I learned where Utica College might strengthen some of its processes, and I learned where there is a shortage of resources.” 

Paula has some advice for the College’s administration. 

“The process works better when it is faculty driven and respectful of faculty perspective,” she stated. She further said that campus leaders must acknowledge the time, energy, and effort it takes to do assessment well. 

“We should look for champions of assessment who can assist with the process,” she recommends.  
   
Thank you, Paula, for your service to UC and the Academic Assessment Coordinating Committee.  Your insights will help steer us on the assessment path just as they served us on the committee. 

Wednesday, April 17, 2019

Wicked Problems and Assessment


I’ll admit it.  When John Camey, Interim Dean of the School of Business and Justice Studies, told me he published a paper titled “The Wicked Problem of Assessment,” I reacted defensively.  Imagine, I thought, if I referred to someone’s discipline or the work they had been doing for over two decades as a “wicked problem.” 

But let’s face it.  We learn more from diverse views than from those that reflect our own.  As Walter B. Greenwood, my undergraduate professor in contemporary lit, said, “Having old certainties destroyed by new considerations is one of the hazards of reading.”

I read the paper.

Far from disagreeing with what John Camey and his co-author, Carolyn E. Johnson, said, I thought they made a lot of practical sense.  

A wicked problem, I learned, is not one that is inherently evil but one for which there is no straightforward solution. Camey and Johnson offer ten reasons why assessment meets the criteria of a wicked problem. A few of these help explain why assessment frustrates faculty, and, more importantly, how assessment professionals and accrediting bodies may have unwittingly been the cause of certain vexations.   

The authors note that when accreditors first required institutions to assess student learning, or provide “assurance of learning,” they offered little assistance with how to achieve this. Faculty were supposed to “figure it out.” However, the methods they had been using for years—i.e. grades—were not considered valid measures.  So how were faculty supposed to navigate this new terrain?  As Camey and Johnson observe, “An entire industry of workshops, seminars, conferences and travelling consultants [grew] up to help.” Needless to say, each of these had a price tag, and some were quite hefty.

Wicked problems, Camey and Johnson explain, do not lead to right or wrong answers, but only good or bad solutions. This might be where assessment is most frustrating, because “good” and “bad” are arbitrary judgments. We see this in the accreditation process. One visiting team may consider the assessment efforts at an institution to be acceptable, while a second visiting team rules them not good enough. Even within an institution, what constitutes “good enough” for one group of members on an assessment committee may not be sufficient for the team that replaces them. Small wonder, then, that in 2018, 36% of faculty responding to a survey on assessment culture agreed that assessment is “based on the whims of the people in charge,” and in 2017, UC faculty described assessment as a moving target where the expectations continuously changed. 

At most institutions, the assessment cycle is the academic year.  On an annual basis, faculty assess student learning and document the results in some type of report.  Besides reporting results, faculty are expected to reflect upon the findings and articulate how they will be used to improve teaching and learning.  In subsequent assessment cycles, they should provide evidence as to whether the changes they made resulted in better learning.  

This process, which many institutions like to represent graphically as a continuous circular flow, may not facilitate effective, meaningful assessment. Camey and Johnson assert that it might take “multiple semesters before sufficient data can be gathered to determine whether our solution is good, bad, or not quite good enough.” This claim echoes Trudy Banta and Charles Blaich’s conclusion that “Collecting and reviewing reliable evidence from multiple sources can take several years” but “state mandates or impatient campus leaders may exert pressure for immediate action.” 

Camey and Johnson offer a solution to the wicked problem of assessment: an assessment committee chaired by a designated leader, an “assessment czar,” and composed of faculty who rotate off periodically. At UC, we already have such a solution in place, but is it sufficient?

It strikes me that it is also important to have clearly defined criteria that communicate what “good” assessment is, and it is critical to view assessment as a process of inquiry and analysis, not a fill-in-the-blank, paint-by-number activity. 

But let’s hear from faculty on this topic.  How might we collaborate not to solve but to address this wicked problem of assessment?





Works Cited


Banta, T. W., & Blaich, C. (2011). Closing the Assessment Loop. Change: The Magazine of Higher Learning, 22-27.
Camey, J. P., & Johnson, C. E. (2011). The Wicked Problem of Assessment. Journal of Business and Behavioral Sciences, 23(2), 68-83.
Sam Houston State University. (2018). 2018 National Faculty Survey of Assessment Culture. https://www.shsu.edu/research/survey-of-assessment-ulture/documents/Nationwide%20Faculty.pdf

Wednesday, April 3, 2019

Measuring Student Engagement and Participation

By Ann Damiano with Elizabeth Threadgill


I remember reading an assessment report years ago in which the department faculty wrote, “Most of the students met expectations for this goal, and those that did not were just lazy and didn’t study.” 

Few declarations make an assessment committee’s eyes roll more than a statement like that. From the very beginning, proponents of assessment framed learning as a shared responsibility between instructor and student and held that the ultimate purpose of assessment is to inform continuous improvement. Blaming students for poor learning outcomes is antithetical to the very principles advanced by the assessment movement.

And yet, the fact remains that there are students who do not actively engage in their learning, who miss numerous classes or habitually arrive late, who often neglect to complete assignments, who do not prepare for exams with the intensity required, and who spend more class time scrolling through their phones than attending to the lecture or discussion. 

To interpret student learning assessment results without giving some consideration to student engagement may be to miss, if not the whole point, then at least an important part of the narrative. So argued prominent members of the English faculty last fall.   

So when the department assembled in February to discuss assessment findings for both their majors and core courses, they also conferred on how they might include evidence of student participation and engagement in future assessments.  
 
A small group of us met to review various checklists, rubrics, and protocols that measure student participation and engagement, and after a lively conversation, the faculty selected criteria they thought best reflected how they would determine whether or not a student is appropriately engaged in a course.  One member observed, “We talk about student engagement all the time, but we haven’t defined what it is and we haven’t systematically measured it”—even though many of these faculty make participation part of the students’ final grade.

The plan is to assess each student’s participation/engagement according to the selected criteria.  This assessment aims to answer a number of questions raised by the faculty.  How might measuring student engagement better help us understand student success in our courses?  Is a lack of engagement more prevalent in Core courses than in courses designed for the major?  Are we inclined to overstate a lack of student engagement in our courses? How might the results of this assessment help us better understand the results gathered from course-embedded assessments? 

The scholarship of teaching and learning offers countless examples and studies describing how active student participation and engagement improve learning. This is obvious to anyone who has ever stood before a group of students. The work being done by the English faculty has the potential to document the extent to which this might be true. 


Tuesday, March 5, 2019

Assessment as Inquiry into Student Learning


When the assessment movement started in the mid-1980s, its advocates envisioned a paradigm shift in higher education, a move away from the “professor-as-disseminator-of-knowledge” model and towards one that promoted active, collaborative learning. For this approach to succeed, however, faculty needed to articulate clearly what they wanted students to learn and consider carefully how well they were learning it.    

If one approaches assessment as something to satisfy institutional requirements or accreditation standards, then assessment is little more than tedious bean-counting that yields meaningless quantitative information.  On the other hand, if one regards assessment as its original proponents intended it to be—deliberate and thoughtful inquiry into student learning—it can result in important discoveries that improve pedagogy and curriculum.     

Consider the assessment process as analogous to any other process of discovery, whether creative, scientific, or problem-solving.   It starts with defining the problem or asking the question:  How do we know students are learning this material or developing this ability?  The next “step” is to investigate or observe.  In the creative process, this might involve research or brainstorming; in the problem-solving process, like the assessment process, it means gathering data and any other evidence that may possibly answer the question.  Once ideas are generated or data have been collected, patterns begin to emerge.  These patterns provide insights and direction; they shape meaning.  When we are writing, this might be when we discover a lead or focus.  In assessment, these patterns provide insight into where students are performing well, and where they are less successful at meeting the learning goals.  This doesn’t necessarily happen after one assessment, any more than it happens in writing after reading one research article.  Patterns emerge only after sustained and systematic data collection.

Reflection is critical to any process of discovery, and assessment is no exception.  Reflecting on and interpreting what results might mean are important; this cannot be emphasized enough.  Likewise, formulating solutions to address specific findings, particularly those that are disappointing or below target, is an integral part of the discovery process.  Positive discoveries are made from assessment as well, just as they are through the scientific and creative processes, and assessment should be a way to celebrate these successes. 

As with any process of discovery, our inquiry into student learning will have its limitations. But the impossibility of perfection should not prevent us from doing the work well. Our commitment to an educational mission is a commitment to thoughtful analysis of student learning, to asking the right questions and exploring the possible implications inherent in what we discover. 

Thursday, February 14, 2019

Capturing A Rich Narrative: Experiential Learning Opportunities


If assessment provides a way of telling our story, then tracking experiential learning opportunities is probably one of the most exciting parts of the narrative.

By “experiential learning,” I am not referring to a good or even great experience, like taking students to an art museum or engaging them in a community service activity for one afternoon. I am talking about those hands-on experiences that occur over a period of time and foster deeper learning. As many of the departmental assessment reports document, these high-impact experiences are integral to a Utica College education.

In a number of academic departments, these types of experiences result in student presentations at regional or national conferences.

  • Last October, three students attended the Seaway Section meeting of the Mathematical Association of America (MAA) at the University of Toronto Mississauga. This spring, one student will present at the MAA Seaway Section meeting at St. John Fisher. 

  • From 2017 through 2018, five chemistry students presented their research at the American Chemical Society’s national conferences, and one presented at the CSTEP Statewide Conference. 

  • From 2017 to 2019, fifteen students were included as co-authors on presentations at regional and national psychology conferences. Two students were also co-authors, with a faculty member, on an article in a prestigious professional journal.

  • In the geoscience program, students engage in field trips during lab periods and on weekends. They also participate in internships and independent research, and may opt for a four- to six-week field camp experience to study the geologic features of a particular region. In 2017, two undergraduates presented posters at a professional conference, and one student’s research was published in Northeastern Geographer. 


Experiential learning isn’t realized solely in conducting research and giving presentations, however. Students are writing for the Tangerine. They are performing on stage in musicals and dramatic productions. They are studying abroad. They are completing internships. And sometimes experiential learning happens right in the classroom or during residencies, as in the case of the Financial Crime Management program. In this program, graduate students gain hands-on experience using computing software and financial analysis tools and applying them to real-world economic crime cases.

Experiential learning exposes students to new opportunities and often takes them outside their comfort zones.  In MGT/PRL 345, students spend spring break in New York City, where their instructor has arranged for them to visit with UC alumni and other top communications professionals at agencies such as G & S Business Communications, the Wall Street Journal, Glamour, NBC News, the New York Power Authority, and the 9/11 Memorial and Museum.  Student reflections indicate that this experience is a transformative one, especially for those who come from small, rural towns where opportunities are limited and who have never visited a large city.  One student wrote, “In college, it’s hard to figure out where you firmly belong or it’s difficult to see yourself in five years.  But when you visit an [organization] and you feel like you could belong there, it’s an empowering feeling.”

Now if these aren’t impressive outcomes, I don’t know what are.  



Monday, February 4, 2019

Formative Assessment

Formative assessment refers to the approaches instructors use in their classrooms to determine what students are understanding or not understanding.  It is and has always been integral to effective teaching. Christopher R. Gareis from William and Mary’s School of Education notes, “What we now call the ‘Socratic method’ essentially amounts to using questions to assess understanding, to guide learning, and ultimately, to foster critical thinking” (Gareis).  Socrates’ persistent questions represent one method of formative assessment.

Methods of formative assessment are diverse. They include having students summarize what they learned on a 3 x 5 index card before leaving class; asking them to build or create something that shows they can apply what they learned; having them respond to a question or provide feedback using a clicker or Twitter poll; and asking students to complete a self-assessment of their work using the same rubric or matrix the instructor does. The important point is that formative assessment guides instruction and provides feedback to students. Whether it is graded or not is the prerogative of the individual instructor. 

Utica College’s physics department offers an excellent example of how formative assessment engages students, stimulates curiosity, and promotes a sense of community. Since 2011, the department has offered a one-credit seminar where students and faculty read a book relevant to the discipline and engage in online discussions and face-to-face conversations, the latter facilitated by students. Faculty participate less as “experts” and more as members of a learning community, exploring themes and new ideas in collaboration with the enrolled undergraduates. These ongoing discussions and the attendant questions and responses represent formative assessment at its finest. In a recently published article, “A Multilevel Seminar for Physics Majors: A Good Deal for Everyone,” the physics faculty describe how this approach to student learning has enhanced student engagement in the learning experience and resulted in student growth. 

In the MBA program, new students are required to complete a one-page essay where they analyze their reasons for pursuing an MBA degree. Each essay is scored using the AAC&U VALUE Rubric for Critical Thinking. The scored rubric is intended to provide students with feedback on their critical thinking and writing skills. It also introduces them to the expectations of graduate-level work and familiarizes them with the criteria that will be used to assess their work. This formative assessment serves another important purpose as well: it is used to refer students to services and resources that can support them in their graduate coursework.

A faculty member in Wellness and Adventure Education brings experiential learning into his traditional classroom by using “real-world” projects and simulations.  After students engage in a group activity, they reflect on their performance, providing each other with feedback and insights.  Students reflect further in writing on the experience and what they learned both from the experience and their peers’ feedback. 

These kinds of formative assessments provide texture to the assessment narrative. So let’s hear from you.  What are you doing in your classroom or in your program to guide instruction and give students feedback on their learning?    


Works Cited

Gareis, Christopher R. "The Forgotten Art of Formative Assessment." William and Mary School of Education, February/March 2006. Accessed 23 January 2019.
Dake, L. S., J. Ribaudo, and L. H. Day. "A Multilevel Seminar for Physics Majors: A Good Deal for Everyone." The Physics Teacher, December 2018, pp. 630-632.

Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...