Wednesday, February 2, 2022

Undergraduate Intern Assesses How Well Students Learned During the Pandemic

On a Student Voice survey administered to college students in May 2021, 52% of respondents said they learned less during the 2020 – 2021 academic year than they had in pre-COVID years, and close to a quarter of first-year students reported feeling very underprepared for college.

Students might have perceived that they learned less in the first year of the pandemic, but did they?

Senior psychology major Jacqueline Lewis posed this question in her internship experience at James Madison University (JMU) in summer 2021. Jacqueline was one of three undergraduate interns selected to work on an independent project supporting the mission of JMU’s Center for Assessment and Research Studies.

Her initial training included becoming familiar with assessment and learning how to use statistical software. Working with a mentor, a professor affiliated with the Center for Assessment and Research Studies, Jacqueline then designed a project to measure how well first-year students developed information literacy skills, one of JMU’s general education competencies. Her research also considered the role motivation plays in learning.

At James Madison University, an institution serving more than 19,000 undergraduates, students participate in two assessment days: one during their first-year orientation and a second after earning 45–70 credit hours. This allows the university to implement a pre- and post-test design that examines total score growth, objective-level growth, and item-level growth over time.

Jacqueline collected the pre-test data and the post-test results, the latter of which were generated after students completed an online curriculum in information literacy. She also gathered data on a Student Opinion Scale, an instrument that measures two aspects of motivation: effort and importance.

An analysis of both sets of results showed an increase in the mean total score from the pre-test to the post-test, and a statistically significant mean difference in effort and importance, leading her to conclude that yes, despite the adverse impact of the pandemic on the student experience, learning was happening!
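For readers curious what such a pre/post comparison might look like in practice, here is a minimal sketch in Python. The file name, column names, and use of paired t-tests are assumptions made for illustration only; the article does not specify which statistical software or tests Jacqueline used.

```python
# A hypothetical sketch of a pre/post analysis like the one described above.
# The file name, column names, and choice of paired t-tests are assumptions
# for illustration; the actual analysis and software may have differed.
import pandas as pd
from scipy import stats

df = pd.read_csv("assessment_day_scores.csv")  # one row per student, pre and post scores

# Information literacy growth: compare post-test totals to pre-test totals
gain = df["post_total"] - df["pre_total"]
growth = stats.ttest_rel(df["post_total"], df["pre_total"])
print(f"Mean gain: {gain.mean():.2f}, p = {growth.pvalue:.3f}")

# Motivation (Student Opinion Scale): effort and importance at each administration
for scale in ["effort", "importance"]:
    diff = df[f"post_{scale}"] - df[f"pre_{scale}"]
    result = stats.ttest_rel(df[f"post_{scale}"], df[f"pre_{scale}"])
    print(f"{scale}: mean difference = {diff.mean():.2f}, p = {result.pvalue:.3f}")
```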

This internship experience exposed Jacqueline to an area of study she was not familiar with: assessment and the scholarship of teaching and learning. In addition, it expanded her opportunities for graduate study and gave her access to a professional network.

In November 2021, Jacqueline Lewis presented her work at the Virginia Assessment Group Conference, thus contributing to the body of scholarship in the field of assessment. Her work also added to the growing narrative about the pandemic’s impact on college students’ learning, a topic that will attract researchers for at least the next five years. Jacqueline herself is continuing her research in this area by collaborating with her Utica College advisor and mentor, Dr. Kaylee Seddio, on measuring ADHD and anxiety in college students during the pandemic. 

"I am extremely grateful for my experience at JMU and for those who helped me get there,” Lewis said. “I value the opportunity and all that I have learned, but more so the feeling that I helped make an important contribution to understanding learning during this unprecedented time."

 

 

Wednesday, December 8, 2021

AAC's Response to the North Pole's Gift-Giving Process

The Academic Assessment Committee recently reviewed the gift-giving process practiced at the North Pole. What follows is the committee’s feedback and suggestions regarding these practices.

Santa has articulated an outcome ("Children should be nice"), but the wording is too ambiguous, and, therefore, hard to measure. Further, he has not specified an appropriate target. Should 100% of the children be "nice" 100% of the time? 90%? 75%? Without a clearly defined target, Santa risks being arbitrary and inconsistent in his assessment of children's behaviors. 

The methods Santa uses to assess children's behavior and distinguish between "naughty" and "nice" are not apparent. How often is he able to observe each child first-hand? Are there others who are involved in this assessment? Santa's elves or Mrs. Claus, for example? Given how high-stakes this assessment is, there should be multiple individuals engaged in observing and evaluating children's behavior, and there should be at least 90% inter-rater reliability. Further, each individual child's behavior should be observed on numerous occasions throughout the assessment cycle. This may not be a sustainable plan, however, particularly given the recent cuts made to the workshop staff as a result of the pandemic and the shortages caused by the supply chain problem.

It is also not clear what instrument is used to document children’s behaviors. Does Santa use a rubric that articulates clear criteria regarding the kinds of behaviors he regards as "nice" versus "naughty"? Do these criteria take into account cultural differences? In other words, are Santa's assessment practices equitable?

The results of Santa's assessments have never been published or analyzed. What percent of the world's population under the age of 10 gets what they asked for? What percent receives coal in their stockings? Are there specific trends that Santa has observed over a period of time, say the last 100 years? Has the percentage of naughty children increased? Decreased?

Since the purpose of assessment is to inform improvement, how are Santa's findings shared with and used by parents? Or do his "naughty" and "nice" lists remain in a drawer in his office at the North Pole, referenced only during the Christmas season as a prop? How might parents (and perhaps even teachers) use them to help develop children’s character?

The committee recommends that Santa reflect more deeply on his assessment processes and consult with other characters such as the Easter Bunny and the Tooth Fairy to get ideas for how he might design an assessment plan that is fair, useful, and sustainable.  

 

Wednesday, November 10, 2021

Overcoming the COVID Shift: The Impact of Resilience and Effort During a Pandemic

 By Matthew Marmet

The COVID-19 pandemic has certainly presented challenges and opportunities across all sectors. The fact that it is going to take eight weeks for me to get a replacement window for my basement is indicative of the supply chain struggles we are currently experiencing. The focus of this piece, though, will be on the world of education, where one of the biggest challenges we faced was what I have come to call the “COVID shift.”

I am not touting myself as the inventor of some groundbreaking, trademarkable term when I say COVID shift.  I use it simply because it perfectly captures what our institution experienced.  The COVID shift, for us, meant a shift from traditional on-ground delivery of educational materials (pre-COVID shift) to either completely virtual or hybrid learning environments (post-COVID shift).

Before moving forward, I’d like to take a walk back in time, to a simpler place where students came to class not worried about whether they remembered their masks.  A time when group activities and other pedagogical techniques could be easily implemented to supplement in-class lecture. For me, this was the time of the flipped classroom, a move away from what my favorite instructional designer likes to call “straight lecturing.”  Instead, class time is spent with students engaging in other activities and problem-solving tasks, which have been shown to have a positive impact on student attitudes and performance.

The COVID shift forced us to un-flip. In-class activities involving group work and close student interaction were not much of a possibility. With my love for the flipped classroom, the question I asked myself was: How can I avoid transitioning to a straight lecture style of teaching in these restricted environments?  My students had been thrust into a less-than-ideal situation, so I put the onus on myself to help them stay engaged.

What I ended up doing varied depending on the environment.  During the time classes were completely virtual, I tried to create fun, interesting (and sometimes embarrassing) demonstrations that I would conduct on camera for the students. Additionally, I would hold virtual brown bag lunches with my students, where an entire class session was dedicated to creating a relaxing environment for them.  Students were able to take the time to separate themselves from the minutiae of being stuck at home with their parents.  Interestingly, as a brief aside, I asked my students during class about the specific issues they were facing because of the pandemic.  Being “stuck at home” was near the top of this list. Once we were able to graduate to the hybrid environment, I shifted in-class activities from the group level to the individual level whenever it made sense.

Although most students seemed engaged at face value during these class sessions, I was curious to know whether the positive academic impacts of student engagement would shine through in the post-COVID shift environment. This helped to inform a research question for a study I just presented at the northeast regional ACBSP conference. It addressed whether there was a significant difference in academic success between pre-COVID shift students and post-COVID shift students. Naturally, my hope was that no difference would exist between these two groups. I’ll spare you the statistical jargon and present the results of my study in two words: It worked!
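For illustration only, here is a rough sketch in Python of the kind of two-group comparison described above. The data file, column names, and choice of Welch's t-test are assumptions; the study's actual measures of academic success and its statistical methods are not detailed here.

```python
# A hypothetical sketch of the cohort comparison described above.
# The data file, column names, and use of Welch's t-test are illustrative
# assumptions; the actual study's measures and methods may differ.
import pandas as pd
from scipy import stats

grades = pd.read_csv("course_grades.csv")  # columns: cohort, final_grade
pre = grades.loc[grades["cohort"] == "pre_shift", "final_grade"]
post = grades.loc[grades["cohort"] == "post_shift", "final_grade"]

# A non-significant difference here would support the hope that academic
# success held steady across the COVID shift.
t_stat, p_value = stats.ttest_ind(pre, post, equal_var=False)
print(f"pre mean = {pre.mean():.2f}, post mean = {post.mean():.2f}, p = {p_value:.3f}")
```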

With these findings, one question came to mind: Why did this happen? Our students went through major changes to how they were used to experiencing college, but still managed to achieve success. To me, there were factors on both the student side and the faculty side that came into play. And these two factors were student resilience and faculty effort, which ultimately led to student success.

In terms of student resilience, I’d like to provide an anecdotal quote from a student during one of our brown bag lunch sessions. They said, “I am afraid to go outside, not because of getting sick, but because I am worried about getting attacked because I am Asian.” During a time when the threat of such attacks was very real, imagine trying to succeed at anything when this is where you are mentally. But I will tell you that this particular student did succeed, along with many others, for several reasons. First, they were willing to put in the work. Rather than throw in the towel, they faced the difficulties of the course and the new learning environments head on. I think this also speaks to our students’ ability to adapt, which I thanked them for on numerous occasions across these semesters. And finally, “scrappy” is the perfect term to describe our students, many of whom are first-generation college students who had to fight and claw just to go to college in the first place. They were not about to let this hurdle get in the way of their continuing education.

From the faculty effort side, I dove into the qualitative data from students who filled out the SOOT.  I share this data not to brag or boast, but because it is one of my proudest moments thus far in my short tenure in academia.  In both the pre- and post-COVID shift environments, language like “genuinely cared about the welfare of the students” and “cared about us as human beings” was offered.

Please remember I said at the beginning of this entry that the COVID-19 pandemic has presented us with both challenges AND opportunities.  I think these words speak to the opportunities we can find in all the current disruption, maybe the most important of which is the chance to get in touch with the human side of whatever it is we’re doing.  Success matters, but how you get across that finish line, to me, matters more.  Empathy and compassion will always win, and we might even be surprised with how robust the results are, regardless of the environment we find ourselves in.

 


Wednesday, October 27, 2021

Involving Students in Program Assessments

 For too long, assessment has been something we do to students, not something we do for or with students. Nicholas Curtis and Robin Anderson note that “The current systems of program-level assessment in the United States does not incorporate (or mention) students other than as sources of information” (page 7).

 We need to change that.

 If an institution calls itself student-centered and claims it values inclusion, it follows that students should have agency in certain educational processes, such as assessment. Further, when we analyze and interpret assessment findings and survey results, we are limited by our own perspectives and biases. Involving students at some point in our assessment efforts will expand and improve our understanding of what the results mean.

Curtis and Anderson write, “Without the involvement of  . . . students, our thoughts about the intended educational experiences . . .  are not going to relate to the actual experiences of our students” (page 10). In addition, our knowledge and understanding may often be restricted to a single course or major and not embrace the totality—the Gestalt—of the student experience.

 Giving students agency in assessment is not relinquishing authority to them. Rather, it is collaborating with them in an effort to identify ways a program or the institution might improve the educational experience it offers.

How a department involves students in its assessment processes depends on what its members want to learn or understand. One strategy would be to present assessment results to a representative sample of students and ask them to share their understanding of the findings or describe an experience they had that is illustrative of the finding. An example of this was when UC students in March 2020 had the chance to respond to the results from the climate survey. When presented with the survey finding that students of color do not feel welcomed in classes taught by white faculty, students described instances where white faculty deliberately looked to white students to answer questions in class and times when white faculty remained silent about racial incidents that occurred on campus. Soliciting this kind of information from students better positions faculty to address a finding that initially left them feeling defensive and confused.

 A second way to involve students in assessment is to ask them how they understand the learning goals. How can we be certain, for instance, that students define goals such as problem-solving and teamwork the same way we do? I recall years ago asking students to explain why the college was receiving consistently low ratings from students when asked how well their coursework developed their problem-solving skills. Students explained that they didn’t consider problem-solving as being addressed in the curriculum. Instead, they saw it as something they developed more in their co-curricular experiences, where they might be given a project to complete and it was entirely up to them to manage all the steps needed to bring the project to fruition. In their classes, they explained, all the “problems” were solved by the instructor who organized the course.

 A third possible way to involve students is to include them in planning. Ask them what kinds of assessments might truly capture the student experience in the program. Collaborate with them on implementing action plans to address areas where students may be underperforming in the program.

 Including students in our assessment processes should be done thoughtfully and sparingly, but it should be done. Curtis and Anderson observe that creating student-faculty partnerships “spurred interesting and deep conversations about the benefits of thinking and assessing at the program-level rather than the classroom-level” (page 10). This is especially important since faculty generally focus on and care primarily about their individual courses, whereas students consider a single course as part of a larger experience.

 A former faculty colleague of mine articulated the benefit he saw in involving students in assessment: “It sends a message that we do this for the students, that they’re the major stakeholders, and that they literally have a seat at the table.” Another stated, “[They] help faculty and departments understand that we engage in assessment processes for the benefit of our students. Including them communicates back to the student body the importance of assessment and what [the college] does to ensure that they receive a quality education.”

 Work Cited

Curtis, N. A., & Anderson, R. D. (2021, May). A Framework for Developing Student-Faculty Partnerships in Program-Level Student Learning Outcomes Assessment (Occasional Paper No. 53). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.



Wednesday, September 22, 2021

Evidence-Based Storytelling: How Assessment Helps Us Tell a Compelling Narrative

There’s no question that assessment is often regarded as a fill-in-the-blank, paint-by-number bureaucratic enterprise. For too long, assessment specialists and accrediting agencies promoted a linear approach where faculty make specific changes in courses and curriculum based on assessment findings with the aim of improving student learning and then re-assess to document the effects of these changes.  

Yet Natasha Jankowski, former executive director of the National Institute for Learning Outcomes Assessment, asks, “Can one ever actually know that the changes made enhanced student learning?”

So much influences a person’s growth and development in the years between starting a degree and earning one. Oral communication skills, for example, might be developed a great deal in an undergraduate curriculum, but so, too, might these abilities be improved by the experiences a student has in the co-curricular environment or the world of work.

There are modifications we can make in an effort to maximize student learning and ensure that all students have the opportunity to develop the knowledge, skills, and competencies faculty consider critical in a discipline. However, we simply cannot make emphatic claims to a causal relationship between what we did and the extent to which it improved student learning.

Jankowski advocates for a different approach to how we might demonstrate educational effectiveness. She argues that the meaning we draw from assessment findings, our understanding of the data, constitutes the important narrative. This meaning shapes the story we derive from the evidence we have gathered. 

“In assessment there is so much doing that there is limited time, if any, built into reflecting upon the data and deciding what it all says, what argument might be made, and what story it tells about students and their learning” (page 11).

Stories give evidence meaning. The 2020-2021 assessment report from the Department of Philosophy documents an important story about teaching and learning in a pandemic where COVID fatigue resulted in students’ failing to complete assignments as well as an increase in cheating. The quantitative findings suggest that student learning was on a downward trend. But the numbers alone don’t tell the story. The meaning inferred from the numbers by the faculty quoted in the report’s narrative does.

The report from the English Department provides an illustration that shows how students achieve learning beyond that which is articulated in a program goal. Students who participate in the design and creation of Ampersand, the College’s literary journal, “go beyond” the goal of making authorial choices: “[T]hey learn to collaborate, they learn skills of layout and editing as they produce a publication that appears in both print and online forms.”

Evidence-based stories—stories informed by the quantitative and qualitative evidence we systematically gather—are how we best illustrate the value and impact of our individual programs and of higher education. These stories also tell us what we need to change or improve in our teaching, course content, and curriculum.

“Some of our stories are tragedies,” Jankowski writes, “and some are tales of heroics and adventures” (page 12). They provide us with a richer, deeper, and more meaningful way to discuss assessment findings than the linear, formulaic approach does. Whether our stories have happy endings or sad conclusions, they deserve to be told.


Jankowski, N. (2021, February). Evidence-Based Storytelling in Assessment (Occasional Paper No. 50). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.


Tuesday, September 7, 2021

Capturing the COVID Narrative

The full implications of how COVID-19 will impact higher education probably won’t be realized for another four or five years. Still, there’s much we already know about the pandemic’s effect: lower-than-expected enrollments, fiscal challenges, and increased disenfranchisement among students are just a few of the immediate consequences.

The 2020-2021 assessments in co-curricular and student support services provide authentic and compelling evidence of COVID’s effect on Utica College students and operations. In particular, the report from the Department of Athletics offers a narrative about a year when “all of the plans in place . . . had been compromised,” a year of “twists, turns, starts, and stops” for which there was no playbook.

One method used to assess the student-athlete experience is the “Athlete Viewpoint,” an instrument that measures student-athlete commitment and well-being, team culture, academic advising, institutional acceptance, and, in 2020, the impact of COVID.

A significant finding from the department’s assessment was that more than half of student-athletes who responded to the survey indicated challenges with mental health. The report notes that COVID-19 created a “rapidly changing landscape” and “student athletes did not know what roadmap to follow . . .  to successfully start and complete a season.”

In response to this finding, the department plans to provide student-athletes access to more internal and external mental health and general health and wellness resources.

While the 2020-2021 academic year witnessed countless challenges and disappointments for UC’s Department of Athletics, the report also shows that despite these hardships, our student-athletes persisted and, in some instances, thrived. Close to 90% of conference games were completed, 43.75% of eligible teams finished among the top 4 teams in their conference, 31.25% qualified for playoffs, and over a quarter of the teams advanced to the conference championship.

Assessment is more than quantitative evidence gathered for the purpose of compliance. It is a way to use evidence to tell a story. The report from the Department of Athletics is a story of student-athlete resilience and, I might venture to add, a story of historical significance.

Wednesday, March 24, 2021

How the Pandemic Influenced Teaching and Assessment in One Program

A group of international educators writing about online teaching during the COVID-19 crisis note that when life returns to normal, “the worst thing that could happen is not learning from the crisis we experienced” (Rapanta et al. 941).

Assistant Professor of Wellness and Adventure Education (WAE) Timothy Abraham probably agrees. At a recent meeting of the Academic Assessment Committee, Abraham stated that in his program, instructors have no intention of returning to their pre-pandemic approach to teaching and learning.

He said, “What we learned during this pandemic improved how we teach our students.”

Like most of us, Abraham misses being able to interact daily with students and colleagues, particularly when eating lunch in the cafeteria. And a great deal of the instruction in his discipline requires close contact with students, so the abrupt change to a virtual environment last spring posed a considerable challenge.

In the fall semester, however, he discovered that the hybrid approach resulted in his using face-to-face instructional time more productively.

While acknowledging that lectures have value, Abraham spends face-to-face time having students engage in hands-on, active learning. Likewise, his WAE colleague, assistant professor Megen Hemstrought, uses face-to-face time to incorporate 21st century skills like critical thinking, problem solving, collaboration, and technology literacy.

Hemstrought says that when students came to class two or three times a week, as in the past, many did not prepare ahead of time. She finds that having class less often (usually once weekly) motivates students to prepare better in advance so they can have more robust conversations, do more meaningful active learning, and delve deeper into the topics at hand.  She uses TED talks, textbook readings, and articles to get students prepared before coming to class. 

Abraham creates “discovery activities” to help students make connections to the material and pique their interest to learn more. He then uses asynchronous learning modalities in Engage, the College’s learning management system, to “fill in the holes.” Students may watch the instructional videos on their own time and at their own pace, giving this strategy the added benefit of supporting an individualized approach to teaching. The Knowmia tool used to create videos and make them accessible to all learners provides video analytics, so that instructors can see how much of each video is viewed and how much time each student spends attending to the lecture.

This, Abraham contends, gives faculty a more objective way to measure student participation in a course than they had in a traditional on-ground class.

Abraham and Hemstrought agree that not only have their pedagogical methods improved, so have their assessment strategies. Abraham reports, “I’m not always using ‘tests’ to assess learning, like I’ve done in the past.” Instead, he is opting for writing assignments, reflections, and practical application projects.

“It creates a little more work grading on my end, but I want to give them an assessment that makes them think. Plus, this prevents them from simply looking up answers at the same time they’re taking a test at home on Engage.”

Rapanta et al. say that how we respond to a crisis “may precipitate enhanced learning and teaching practices in the postdigital era” (924).

This has certainly been true in the Wellness and Adventure Education program and probably in other programs as well at Utica College. It’s an important narrative to document.


 Work Cited

Rapanta, Chrysi, Luca Botturi, Peter Goodyear, Lourdes Guardia, and Marguerite Koole. “Online University Teaching During and After the Covid-19 Crisis: Refocusing Teacher Presence and Learning Activity.” Postdigital Science and Education, vol. 2, 2020, pp. 923–945.

Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...