Monday, September 23, 2024

Distinguishing Program Assessment from Course-Level Assessment

Program assessment, when done well, enables us to tell a compelling story, one that showcases our department’s strengths. It gives us a story to share with (and hopefully attract) prospective students, donors, accreditors, and grant-funding agencies.

These are the stories that will reinspire public confidence in higher education.   

Thanks to program assessment, the biology faculty can report that their majors exceed nationwide and peer-school averages in all content areas on the Major Field Test, a standardized exam given in the senior year. The Education Department can say that 100% of superintendents, assistant superintendents, and principals from area school districts indicated they would definitely hire a Utica University graduate.

On the flip side, good program assessment also informs continuous improvement. For instance, it illuminates where the curriculum might be modified to enhance student success in the program, or how course content might be changed either to improve student performance or to ensure that the curriculum is current.

Course-level assessments in and of themselves don’t help departments narrate their story. Instead, they tell us how a group of students in a single course performed on an assignment on a given day. Such information is critically important to the instructor teaching the course, but one cannot make reasonable or reliable claims about how well students are performing in a program simply by reporting course-level results. Course-level assessment results do not highlight program strengths. Neither do they inform continuous improvement at the program level.

This does not mean that course-embedded assessments can’t be used for program assessment. Research papers, capstone projects, presentations, evaluations from clinical or internship supervisors—these are authentic assessments that allow us to gather evidence of student performance in our programs without necessarily adding to our workload.

However, the results of these assessments should be analyzed with respect to program goals. If using a capstone project as the assessment artifact, for instance, is it evaluated using a rubric that aligns with the program goals? If the department is measuring student performance at various transition points in the curriculum, are all faculty using a common instrument and are the results analyzed in aggregate, showing how student performance improves (or doesn’t improve) as one advances through the curriculum? 
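
To make the idea of aggregate analysis concrete, here is a minimal sketch, in Python, of how a department might tabulate rubric scores gathered at successive transition points. The transition points, the 4-point rubric scale, and the scores themselves are invented for illustration; they do not represent an actual instrument or actual Utica University data.

    # Aggregating hypothetical rubric scores collected at three curriculum
    # transition points to see whether performance on one program goal
    # improves as students advance. All data here are illustrative.
    from statistics import mean

    rubric_scores = {
        "end of first year":  [2.0, 2.5, 2.0, 3.0, 2.5],
        "end of second year": [2.5, 3.0, 2.5, 3.0, 3.0],
        "capstone":           [3.0, 3.5, 3.0, 4.0, 3.5],
    }

    for transition_point, scores in rubric_scores.items():
        print(f"{transition_point}: n={len(scores)}, mean={mean(scores):.2f}")

Run as-is, this prints a mean for each transition point, making the trajectory of student performance visible at a glance rather than leaving it scattered across individual course reports.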

A few years ago, in a blog on this very topic, I wrote, “It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not. Program assessment considers the bigger picture—program-level goals—and is typically outcomes based.” The best place to start is at the program’s conclusion in a capstone experience, using a culminating assignment that measures all of the program learning goals.

As departments plan for assessment in 2024-2025, the Academic Assessment Committee urges them to consider where and how they will do program assessment. You’d be surprised at how much less cumbersome assessment is when you aren’t reporting on multiple course-level assessments!

Tuesday, October 17, 2023

Reflection as a Means of Measuring the Transformative Potential of Higher Education

Several years ago (and at another institution), I attended a meeting where a faculty member was presenting a revised general education curriculum to the Board of Trustees. She described where in the curriculum students would have the opportunity to reflect on their learning and their educational goals. At one point, a Trustee impatiently said, “What’s with all this reflection? Where’s the academic rigor?!”

To equate reflection with a lack of intellectual rigor is somewhat akin to promoting the unexamined life. 

Reflection has long been recognized as critical to the learning process. Cynthia Roberts writes, “Critical reflection can be used as a way to integrate theory with practice, can facilitate insights, and stimulate self-discovery” (117).

From an assessment perspective, giving students opportunities to reflect is a valid way to assess what they are truly achieving in our courses. Their reflections also provide us with a way to observe and possibly measure the transformative power of higher education.

One such example comes from Sharon Kanfoush, Professor of Geology. At the conclusion of a general education geology class, she asks students to reflect on what they learned in the course that surprised them. One student wrote that when he enrolled in the class, he thought it would just be about “rocks and dirt.” To his surprise, he discovered that “Geology is actually relevant to our everyday lives” because it teaches us about the threats of global warming and enhances our understanding of catastrophic events, such as earthquakes and landslides.

Kaylee Seddio, Assistant Professor of Psychology, is an advocate of student research, which, she says, provides students with the chance to expand their knowledge of a topic while making connections to content in other courses. She encourages students to conduct research that has personal value to them. In doing so, the research itself becomes a type of reflection: not only do students increase their knowledge of a topic, but they also enhance their knowledge of self.

Reflection is an important component of experiential education. In PSY 470, Deborah Pollack, Assistant Professor of Psychology, requires a final paper where students reflect on their internship experiences and articulate what they learned. In many cases, what they learned changed them.  One student credits her internship with influencing her decision to pursue a graduate degree, something she says she was previously opposed to doing. Another described how his internship at the Kelberman Center—“by far the most valuable experience [he had] while studying at Utica”—helped him recognize how much diversity exists among autistic persons and changed his stereotypical thinking about autism. The experience further clarified his educational and career goals and allowed him to “not only grow as a student but also as a person.”

Amy Haver, Assistant Professor of Nursing, uses a quantitative measure of transformation in the RN to BSN degree program. A 16-item survey developed by Dr. Annette Becker, former program director of Nursing, asks graduating students to rate how their professional abilities changed from the time they started the program. An overwhelming majority agreed that their education improved their abilities to facilitate quality outcomes and promote patient health, to care for diverse patients in a variety of settings, to advocate for patients in a variety of situations, and to identify critical issues and trends affecting health care delivery.

Measuring transformation may be more difficult than measuring cognitive growth, but it’s worth the attempt to do so. As Becker notes, “Cognitive outcomes reflected in knowledge and skills are weaker indicators of sustainable learning than measures reflecting internal change in the student” (15).

If we really want to assess whether students are achieving the promise of higher education, we must consider how the experience transformed them. By leaving this out of our assessment narratives, we risk presenting higher education as solely transactional, its value measured only by job placement rates and post-graduation salaries.


Works Cited

Becker, Annette. “Personal Transformation in RNs Who Recently Graduated from an RN to BSN Program.” Journal of Transformative Education, Oct. 2017, pp. 1-19.

Roberts, Cynthia. “Developing Future Leaders: The Role of Reflection in the Classroom.” Journal of Leadership Education, Summer 2008, pp. 116-130.

 


Wednesday, September 6, 2023

A New Narrative and a Changed Landscape

Last year’s assessment reports from academic and co-curricular departments highlighted the effect of the COVID-19 pandemic on Utica University specifically and on higher education in general. Each report narrated a story not only about student learning, but also about the crippling realities that impacted student learning. They signaled a need for change.

In previous assessment cycles, reports focused on where and how students were successfully achieving learning outcomes. They reported findings that biology majors’ mean scores on a standardized assessment were higher than the national norm, and that 100% of Utica University students who applied to medical school were accepted. They showed how English majors’ writing skills and ability to interpret literature improved as they progressed through their program. They indicated that Utica University MBA students outperformed their peer group on a standardized outbound examination, and that undergraduate business majors improved in all subject areas from first year to senior year. They documented how Utica University interns not only benefited from their internship experiences, but also contributed to the agencies and organizations where they interned. They included the percentage of students passing licensure or certification examinations.

The reports from 2022-2023 included findings such as these, but the reflections on those findings, whether included in the reports themselves or shared in discussions with the Academic Assessment Committee, told a deeper story.

They narrated stories of students impacted by inflation to the point where they had to choose between buying food and buying gasoline for their cars, students who often came to class hungry and whose faculty had to feed them before any instruction could commence.

They told of students with weak motivation in the classroom because, for nearly two years, they could “succeed” by doing the minimum, and cheating was so easy they mistook it for efficiency.

They described the Sisyphean efforts students made to succeed in clinicals, in the classroom, and on the playing field while grappling with severe mental health issues.

They provided insight into an exhausted faculty that valiantly tried to carry students through the worst of the pandemic while also experiencing personal loss, personal challenges, and declining morale in the workplace.

They told of Student Affairs professionals attempting to provide quality programs and services with limited resources to do so.

These reports told a story that needs to be heard. More importantly, they indicated a need for real change, a paradigm shift in the way higher education serves its students. They suggested that institutions need to provide support for their students that is more holistic than it has traditionally been, support that extends beyond academic assistance. The Chronicle of Higher Education report Reimagining the Student Experience suggests that higher education will become an opportunity reserved for the most privileged members of our society if colleges and universities fail to provide much-needed support for students.

The evidence is clear, and it’s in our own stories. We cannot go back to business as usual. The landscape isn’t the same as it was before the pandemic, and it will be a long time before it is, if it ever is.

Thursday, November 3, 2022

Transformation: An Unmeasured, Undocumented, Undiscussed Outcome

There is no shortage of evidence showing that today’s students are pursuing higher education in order to secure employment. The American Freshman Survey, used by colleges and universities since 1966 to collect data on incoming college students, reports that in 2019, 83.5% of first-year students said getting a better job was a very important reason for attending college, followed by 78.6% who cited getting training for a specific job. In contrast, the majority of students entering college in 1975 cited “being an authority in my field” (73.0%) and “developing a meaningful philosophy of life” (67.3%) as very important reasons.

Regardless of why students are pursuing higher education and regardless of the discipline they study, a post-secondary education has the potential to transform learners in ways we cannot articulate in our learning goals or capture with our assessment measures. To paraphrase Hamlet (and take liberties with what he said), there are more things in heaven and earth than are dreamt of in our rubrics.

Stories of student transformation are important to tell; they have a place in our assessment narrative.

These stories are not to be confused with the student testimonials found on virtually every college website, quips and quotes from graduates crediting their academic program with giving them the skills and confidence to pursue their dreams. Neither are they nestled in the emails students write to their faculty at the close of a term, reporting how much they learned in a class and what an inspiration the faculty member was.

In fact, stories of transformation aren’t about us or the institution. They are about the individual student, the person.

They are the story of a college junior, a young woman who, after completing a seminar in Willa Cather one summer, embarked on a cross-country trip with her boyfriend and, rather than visit Texas, planned it so their itinerary included a trip to Red Cloud, Nebraska, just so she could see for herself the landscape Cather memorialized in her fiction.

They are the story of a college sophomore who was so profoundly affected by William Faulkner’s The Sound and the Fury, she wrote an original musical composition where each score represented the psyches of the principal characters, the notes ultimately colliding in a chaos of sound symbolic of the characters’ demise and discord.

These small but significant moments of student transformation likewise transform us. A visit to Cather’s prairie is suddenly more accessible. Faulkner’s characters are experienced in an entirely new way through a different sense.

Anthologizing these stories will help us identify recurring motifs, present us with a unique portrait of student success, and provide illustrative examples of how we achieve our educational mission. 

In her historical novel Death Comes for the Archbishop, Willa Cather uses parables, legends, and vignettes to narrate the story of the early Catholic Church in the Southwest.  Why can’t we try something like this in assessment? Not to replace the quantitative assessments we are currently doing (just as Cather’s novel didn’t replace actual historical accounts), but to capture a narrative that currently remains untold.

It’s worth a try, isn’t it?


Wednesday, October 19, 2022

Indirect Assessments: How Useful Are They?

Student learning is typically measured using direct or indirect methods. Direct measures provide clear evidence of what students have and have not learned, evidence that assessment leader Linda Suskie says a skeptic would trust.   

In contrast, indirect evidence may reflect students’ perceptions of what they learned or what they probably learned. In the world of assessment, indirect evidence is considered less compelling and less reliable than direct evidence.

I do not dismiss indirect evidence as quickly as some of my assessment colleagues do. Grades are an example of indirect evidence that tells us what students probably learned (assuming the course had clear learning objectives). True, grades often measure more than just what was learned in a course (e.g., participation, attendance), so there is a limit to how we might use them in our program assessments. But since so many crucial decisions are based on grades—class rank, scholarships, financial aid, acceptance into graduate or professional school—we have to acknowledge that they are some measure of student knowledge and ability.

Similarly, students’ perceptions of their learning provide some insight into educational effectiveness. I’ve always been surprised when I’ve heard people dismiss findings as “student opinion.”  The opinions of our most important stakeholders should be respectfully considered.

Utica University’s Master of Business Administration (M.B.A.) Program is a case study in how indirect assessments may be used to identify areas for program improvement. For the past few assessment cycles, M.B.A. students have completed an exit survey at the close of their program. This survey asks about the importance of specific knowledge, skills, and competencies, each of which is a learning outcome of the program, and further asks students to indicate the extent to which the program helped them achieve these learning outcomes.

This outbound survey, an indirect assessment, measures students’ perceptions of educational gains in both the core curriculum and the areas of specialization. It allows for a systematic collection of assessment results using a sustainable process that yields actionable findings.

In the 2020-2021 assessment cycle, students’ ratings for the Accounting and Finance specialization fell below the desired target. This prompted the M.B.A. faculty to review the curriculum to ensure that accounting and finance concepts were being adequately reinforced in the appropriate classes, including those taught by adjuncts.

Another area the faculty identified as an opportunity for improvement was related to a diversity goal. The learning goal states that students will “Examine business solutions and problems from a global perspective and assess how cultural differences impact business." When asked how much their M.B.A. education helped them develop this knowledge, 21.8% of respondents rated it a “3” on a 5-point scale. At present, cultural differences are discussed in the leadership class, and significant time is spent on the topic in a global consumer course, where the final project centers on various culture clusters of the world. Based on this finding, however, the faculty are investigating ways to weave cultural diversity more fully into the curriculum.
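
For readers curious about the mechanics, here is a minimal sketch, again in Python, of the kind of tabulation that sits behind a finding like “21.8% rated this a 3.” The ratings and the 20% review threshold below are invented for illustration; they are not the actual M.B.A. survey data or the program’s actual target.

    # Tabulating hypothetical exit-survey ratings for one learning outcome
    # on a 5-point scale, then flagging the outcome for faculty review.
    # All numbers here are illustrative, not actual survey results.
    from collections import Counter

    ratings = [5, 4, 3, 4, 5, 3, 2, 4, 5, 3, 4, 4, 5, 3, 5, 4, 3, 5, 4, 2, 4, 3]

    counts = Counter(ratings)
    total = len(ratings)
    for value in range(1, 6):
        pct = 100 * counts.get(value, 0) / total
        print(f"Rated {value}: {pct:.1f}%")

    # Flag the outcome if too many respondents rate it at or below
    # the midpoint of the scale (an illustrative 20% threshold).
    share_at_or_below_midpoint = sum(1 for r in ratings if r <= 3) / total
    if share_at_or_below_midpoint > 0.20:
        print("Below target: review how this outcome is reinforced in the curriculum.")

The point is not the arithmetic, which any spreadsheet can do, but that a simple, repeatable tabulation turns students’ perceptions into the kind of actionable finding described above.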

A direct measure of student learning in the M.B.A. Program is the Peregrine Assessment, a standardized test that measures graduates’ knowledge and provides benchmark data from peer institutions.

Biology offers another case study in how direct and indirect measures might be combined to tell a meaningful narrative about student learning. Students graduating from this degree program take the Major Field Test in Biology, a standardized examination that measures a learning goal addressing key principles of the biological fields. They also respond to a senior survey that asks them to rate how well they believe they achieved the program’s learning goals.

Direct assessments will probably always be considered more trustworthy than indirect ones. But indirect methods—survey findings, acceptances into graduate/professional schools, graduate employment—help us shape a more comprehensive narrative about student learning in our programs. Surveys have the added benefit of giving students an opportunity to share their feedback on a program or learning experience, thereby giving them agency in our assessment and planning processes.

 

Works Cited

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. 2nd ed., Jossey-Bass, 2009.

 

Monday, October 3, 2022

Underprepared or Underserved?

For three generations, I have been hearing complaints about how poorly prepared college students are.

It’s true that some students are not fully prepared for a post-secondary education. For a variety of complex reasons, some students may require extra services and support. They may even require additional coursework. What people fail to realize, however, is that the “underprepared” student is not a new phenomenon in American higher education.

In 1636, Harvard College opened in the American colonies to train clergy for the new commonwealth. Courses were taught in Latin; textbooks were written in Latin and Greek. Some of Harvard’s students benefitted from having been apprenticed to ministers prior to enrolling at the college. Through these apprenticeships, they learned Latin and Greek.   

Not everyone had this advantage, however, and many of those who didn’t were unschooled in Latin and Greek. In other words, they were underprepared for their course of study. The institution responded by providing tutorial services to help those young men learn the classical languages.

Fast-forward 200-plus years to when Harvard faculty grumbled about how poorly their students wrote. To address students’ lack of preparation in formal writing, Harvard faculty introduced a freshman composition course in 1874, a staple of undergraduate education ever since.

The prevalence of preparatory programs in colleges and universities during the 19th and 20th centuries serves as evidence that a portion of American students entered higher education lacking the skills needed to compete. In her landmark text Improving Student Learning Skills, Martha Maxwell writes, “By 1915, three hundred fifty colleges in the United States reported to the U.S. commissioner of education that they had college preparatory departments” designed to help students develop the skills and competencies they would need to persist toward a degree.

Students less adequately prepared for post-secondary education were enrolled in all types of institutions, from public land-grant universities to private, highly selective ones. Maxwell states that in 1907, more than half the students matriculating at Harvard, Yale, Princeton, and Columbia did not meet the admissions requirements, and that in the 1950s, experts reported that two-thirds of the students entering college lacked the reading and study skills necessary for success.

In the mid-1960s, larger numbers of traditional-aged college learners sought admission to post-secondary institutions than in previous years, and colleges and universities were opening their doors to a more diverse group of learners. These changes, too, resulted in the need for support services for those students who might have been less prepared than some of their peers for academic success.   

We know the pandemic has had a significant impact on student performance. A New York Times report (https://www.nytimes.com/2022/09/01/us/national-test-scores-math-reading-pandemic.html) cites findings from National Assessment of Educational Progress testing that showed a drop in the math and reading scores of the 9-year-olds who completed the assessment. Assessment reports from academic departments for the 2019-2020 and 2020-2021 assessment cycles provide evidence of declining student performance, motivation, and satisfaction.

That said, students referred to as “underprepared” have had a place in colleges and universities from the very start, because, as Maxwell writes, “American higher education has historically had an egalitarian thrust.” An equity-minded approach recognizes that underprepared doesn’t mean unqualified or incapable. An equity-minded approach recognizes that being underprepared is often a consequence of being underserved or, like those Harvard students in the mid-17th century, not having all the advantages enjoyed by other students. Institutions committed to the principles of democracy, diversity, equity, and inclusion are committed to serving these students without judgment.  

 

Works Cited

Maxwell, Martha. Improving Student Learning Skills. Clearwater, H & H Publishing Company, Inc., 1997.

Mervosh, Sarah. “The Pandemic Erased Two Decades of Progress in Math and Reading.” New York Times, 1 September 2022, https://www.nytimes.com/2022/09/01/us/national-test-scores-math-reading-pandemic.html. Accessed 27 September 2022.


Thursday, April 7, 2022

Using Multiple Assessment Methods to Measure Student Performance: An Equitable Practice

When reviewing annual assessment reports from academic departments, members of the Academic Assessment Committee look to see if a department uses multiple methods to assess its learning goals. Doing so constitutes an exemplary practice. The reason is simple: every assessment method has its limitations, and it can be misleading to make judgments or develop plans based on data gathered from a single instrument. If we want reliable results that can shape a compelling narrative and assist our planning efforts, we need more than one approach to tell us about students’ successes and failures.

This holds true for course-level assessment as well. Linda Suskie asserts, “The greater the variety of evidence, the more confidently you can infer that students [in your classes] have indeed learned what you want them to.”

It’s also a matter of equity. Different learners demonstrate their learning in different ways. To voice a commitment to the principles of diversity, equity, and inclusion but measure student learning using a single instrument that favors some learners over others is a contradictory practice. 

Using multiple assessment methods in the classroom aligns with the guidelines for Universal Design for Learning, which consider the diverse needs and abilities of all students enrolled in a classroom. These guidelines emphasize providing multiple means of engagement (the “why” of learning), multiple means of representation (the “what” of learning), and multiple means of action and expression (the “how” of learning).

Some educators advance the idea of giving students a choice in how their learning will be measured. Examples include letting students choose between answering fill-in-the-blank questions and answering multiple-choice questions, or between writing one long essay and composing three short answers.

I have no experience with this particular approach, so I cannot testify to its merits (and quite frankly, I see it as potentially problematic). A more plausible strategy would simply be to use a variety of assignments—papers, presentations, objective tests, surveys—to assess performance.  That is, do not rely solely on one method, such as the mid-term and final exam. 

Understandably, some programs need to prepare students to be successful on a single summative assessment: a certification or licensing examination, a standardized admissions test to graduate or professional school. It is unlikely we will see professional boards or state agencies change how they measure knowledge and skills. Developing students’ ability to be successful on a single test makes sense.

But more often than not, we have the opportunity to use multiple assessment methods, both summative and formative. Doing so is not only a better way to measure actual learning, it is also a more equitable practice, one that recognizes diversity as the norm in our classes.


Works Cited

CAST. Universal Design for Learning Guidelines, version 2.2, 2018, http://udlguidelines.cast.org.

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. 2nd ed., Jossey-Bass, 2009.
