Tuesday, March 29, 2022

When the Results Are Good, Showcase Them!

Good assessment helps us determine where improvements are needed. We know this. It’s basic to good teaching, and it happens all the time in courses. It might mean rephrasing or eliminating exam questions, redesigning a rubric for greater clarity, or implementing active learning strategies to help students learn difficult material.

Continuous improvement is not the sole purpose of assessment, however. Assessment, by definition, is a way to identify weaknesses and strengths. It should, and often does, yield results that might be celebrated and used to tell others about our programs. This aspect of assessment is often overlooked, though. In part, that is because it has been drummed into our heads that assessment is a way to ferret out where our efforts are unsuccessful. Another reason is that humans are hardwired for negativity bias. We look at survey results, for example, and tend to see only where students are dissatisfied or unhappy.

Assessments at Utica University, particularly those done by academic departments, have produced celebration-worthy findings that underscore the value of our programs, and we need to share these results with prospective students, their parents, potential employers, advisory boards, and community partners. They are what distinguishes us.

In biology, for example, the results earned on the Major Field Tests in molecular biology and genetics, cell biology, organismal biology, and population biology, evolution, and ecology are benchmarked against national institutional means. Utica University’s mean scores were higher than the national mean in each subject area.

Likewise, physics students continue to surpass the national average on a survey tool measuring conceptual knowledge of mechanics.

English majors demonstrate marked improvement in their writing abilities at or near the end of their academic tenure at the University, evidence of the value this program adds.

On a standardized pre/post assessment, students in undergraduate business programs and the MBA program demonstrated significant growth in their knowledge and skills, and their mean post-assessment scores were higher than those earned by an Accreditation Council for Business Schools and Programs (ACBSP) peer group.

A student satisfaction survey administered in November 2021 showed high levels of satisfaction with support offerings, namely library resources and services, career services, tutoring, academic advising, and the availability of counseling.

What’s most encouraging about all these findings is that they came from a year when pandemic-weary students showed signs of disengagement and poor motivation.

Assessment leader Linda Suskie writes, “The higher education community has a long-standing culture of keeping its light under the proverbial bushel basket and not sharing the story of its successes with its public audiences.”

It’s time we changed that. It’s time we moved the narrative past clichés and used assessment results to showcase the strengths of our programs and our students, as well as to improve our educational effectiveness.








Work Cited

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. 2nd ed. San Francisco: Jossey-Bass, 2009.


Tuesday, March 1, 2022

Standardized Testing: A One-Size-Fits-All Assessment

 By Michelle Boucher

When we first started formal department-wide assessment, we identified the skills we want our students to develop as chemists. These are the very skills that the American Chemical Society (ACS) expects students in a certified program to demonstrate. It made sense for us to assess these skills by using ACS standardized exams as final examinations in those classes for which the ACS offered one.

This assessment strategy wasn’t perfect, but the exams had the potential to be a beautiful assessment tool. They are created by a committee of faculty from diverse institutions, they are checked by the Examinations Institute Board of Trustees, and data on student performance are collected and collated by the ACS and distributed to every school administering a given exam. Specific means and medians for each question, and for the exam overall, are available from a reasonably large pool of students who took the exam nationwide in a given year. The exams are refreshed every 5-10 years through committee, and fresh student outcome data are collected.
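To make that benchmarking concrete, here is a minimal sketch, in Python, of the kind of per-question comparison a department might run against the national means distributed with an exam. The item labels and values are invented for illustration; they are not actual ACS exam data.

```python
# Hypothetical comparison of a department's per-question results against the
# national per-item means distributed with an ACS standardized exam.
# All numbers are invented for illustration only.

local_means = {      # fraction of our students answering each item correctly
    "Q01": 0.72, "Q02": 0.55, "Q03": 0.81, "Q04": 0.40,
}
national_means = {   # national per-item means reported for the same exam form
    "Q01": 0.68, "Q02": 0.61, "Q03": 0.77, "Q04": 0.52,
}

for item in sorted(local_means):
    diff = local_means[item] - national_means[item]
    flag = "above" if diff >= 0 else "below"
    print(f"{item}: local {local_means[item]:.2f} vs national "
          f"{national_means[item]:.2f} ({diff:+.2f}, {flag} the national mean)")
```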

While this all sounds fantastic in theory, the reality of the exams proved less utopian. The committee of faculty from “diverse institutions” who write each exam does not typically include equal representation from schools that serve first-generation students, from historically Black institutions, or from community colleges. The faculty represented are typically a mix of faculty from Ph.D.-granting institutions and highly selective small liberal arts colleges, with perhaps only 1-2 faculty (out of 17-20) from all the “other types of schools” (community colleges, small comprehensive colleges like ours, etc.). All the exams are political in one form or another. The ACS is clearly dictating, through these exams, what it considers important in a specific course. For example, one iteration of the organic exam had two questions (out of 70) on green chemistry in a year when the committee included a faculty representative who was a green chemist. The most recent organic exam includes four carbonyl reactions that are specific to subfields and are also “named reactions” (named after the chemist or chemists who discovered or publicized them), in part as a direct answer to a recent push in chemical education to minimize the use of “named reactions” in the interest of promoting a more inclusive classroom experience.

Additionally, there are issues concerning equity and standardized exams. It has been shown, time and again, in the sociological and pedagogical literature that standardized exams carry inherent equity problems. There continues to be discussion and research nationwide about the root causes that lead underserved students (students of color, students from lower economic brackets, first-generation students) and students who identify as women to perform at lower levels on standardized exams. Regardless of the reason, the faculty in our department believe these disparities exist. We see it in our incoming students, who benefit greatly from the holistic application review that Utica University offers and who are often high-achieving students with poor standardized test scores, and we see it on our final standardized exams, where students who have performed exceptionally well all semester choke on a standardized exam.

This past year, we experienced another issue with these standardized exams: we could not vouch for their reliability. The standardized exam results, in fact all of our assessment data, showed little to no impact of the 2020-2021 COVID-19 experience on our students’ education. While that makes us go “Yay!”, we know that our students right now have fewer skills than they would typically have at this point in their education. We know that our students are faring better than some cohorts at other schools; we all talk with multiple people at other institutions and are active in ChemEd circles nationally, and we can see where our students place. We know our students have a much smaller “COVID lag” than cohorts at other institutions. We are proud of that. But we know there is a knowledge and experience gap, and our assessment methods do not show it.

We are moving away from the ACS standardized exams, or at least we plan to use them the way we want to use them and bend them to our own will. There is absolutely no good reason for our program to be dictated to, or for our learning goals to be determined and defined, by a committee of homogeneous professors protecting a status quo that we have dedicated our professional lives to overthrowing. In our department, the age of one-size-fits-all assessment and the lies it propagated is over.


Wednesday, February 2, 2022

Undergraduate Intern Assesses How Well Students Learned During the Pandemic

On a Student Voice survey administered to college students in May 2021, 52% of respondents said they learned less during the 2020 – 2021 academic year than they had in pre-COVID years, and close to a quarter of first-year students reported feeling very underprepared for college.

Students might have perceived that they learned less in the first year of the pandemic, but did they?

Senior psychology major Jacqueline Lewis posed this question during her summer 2021 internship at James Madison University (JMU). Jacqueline was one of three undergraduate interns selected to work on an independent project supporting the mission of JMU’s Center for Assessment and Research Studies.

Her initial training included becoming familiar with assessment and learning how to use statistical software. Working with a mentor, a professor affiliated with the Center for Assessment and Research Studies, Jacqueline then designed a project to measure how well first-year students developed information literacy skills, one of JMU’s general education competencies. Her research also considered the role motivation plays in learning.

At James Madison University, an institution serving more than 19,000 undergraduates, students participate in two assessment days: one during their first-year orientation and a second after earning 45-70 credit hours. This allows the university to implement a pre- and post-test design that examines total score growth, objective-level growth, and item-level growth over time.

Jacqueline collected the pre-test data and the post-test results, the latter of which were generated after students completed an online curriculum in information literacy. She also gathered data on a Student Opinion Scale, an instrument that measures two aspects of motivation: effort and importance.

An analysis of both sets of results showed an increase in the mean total score from the pre-test to the post-test, and a statistically significant mean difference in effort and importance, leading her to conclude that yes, despite the adverse impact of the pandemic on the student experience, learning was happening!
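For readers curious what that kind of analysis looks like in practice, here is a minimal sketch of a pre/post comparison using a paired t-test in Python. The scores are invented placeholders, not JMU data, and Jacqueline’s actual methods may well have differed.

```python
# Minimal sketch of a pre/post (paired) comparison of information literacy scores.
# Scores are invented placeholders, not actual assessment data.
from scipy import stats

pre_scores  = [52, 47, 60, 55, 49, 63, 58, 51, 45, 57]   # first-year assessment day
post_scores = [58, 50, 66, 61, 55, 70, 62, 57, 49, 64]   # after 45-70 credit hours

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean gain: {mean_gain:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```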

This internship experience exposed Jacqueline to an area of study she was not familiar with: assessment and the scholarship of teaching and learning. In addition, it expanded her opportunities for graduate study and gave her access to a professional network.

In November 2021, Jacqueline Lewis presented her work at the Virginia Assessment Group Conference, thus contributing to the body of scholarship in the field of assessment. Her work also added to the growing narrative about the pandemic’s impact on college students’ learning, a topic that will attract researchers for at least the next five years. Jacqueline herself is continuing her research in this area by collaborating with her Utica College advisor and mentor, Dr. Kaylee Seddio, on measuring ADHD and anxiety in college students during the pandemic. 

"I am extremely grateful for my experience at JMU and for those who helped me get there,” Lewis said. “I value the opportunity and all that I have learned, but more so the feeling that I helped make an important contribution to understanding learning during this unprecedented time."

 

 

Wednesday, December 8, 2021

AAC's Response to the North Pole's Gift-Giving Process

The Academic Assessment Committee recently reviewed the gift-giving process practiced at the North Pole. What follows is the committee’s feedback and suggestions regarding these practices.

Santa has articulated an outcome ("Children should be nice"), but the wording is too ambiguous, and, therefore, hard to measure. Further, he has not specified an appropriate target. Should 100% of the children be "nice" 100% of the time? 90%? 75%? Without a clearly defined target, Santa risks being arbitrary and inconsistent in his assessment of children's behaviors. 

The methods Santa uses to assess children's behavior and distinguish between "naughty" and "nice" are not apparent. How often is he able to observe each child first-hand? Are others involved in this assessment? Santa's elves or Mrs. Claus, for example? Given how high-stakes this assessment is, multiple individuals should be engaged in observing and evaluating children's behavior, and there should be at least 90% inter-rater reliability. Further, each individual child's behavior should be observed on numerous occasions throughout the assessment cycle. This may not be a sustainable plan, however, particularly given the recent cuts made to the workshop staff as a result of the pandemic and the shortages caused by the supply chain problem.
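To illustrate the threshold the committee has in mind, here is a minimal sketch of how simple percent agreement between two raters might be computed. The raters (Santa and a senior elf) and their ratings are entirely hypothetical; a fuller analysis could use a chance-corrected statistic such as Cohen's kappa.

```python
# Percent agreement between two hypothetical raters of children's behavior.
# Ratings are invented; Cohen's kappa would additionally correct for chance agreement.

santa = ["nice", "nice", "naughty", "nice", "naughty", "nice", "nice", "nice", "naughty", "nice"]
elf   = ["nice", "nice", "naughty", "nice", "nice",    "nice", "nice", "nice", "naughty", "nice"]

agreements = sum(a == b for a, b in zip(santa, elf))
percent_agreement = 100 * agreements / len(santa)
print(f"Percent agreement: {percent_agreement:.0f}%")  # 90% here, just meeting the threshold
```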

It is also not clear what instrument is used to document children's behaviors. Does Santa use a rubric that articulates clear criteria for the kinds of behaviors he regards as "nice" versus "naughty"? Do these criteria take into account cultural differences? In other words, are Santa's assessment practices equitable?

 The results of Santa's assessments have never been published or analyzed. What percent of the world's population under the age of 10 gets what they asked for? What percent receives coal in their stockings? Are there specific trends that Santa has observed over a period of time--say the last 100 years?  Has the percentage of naughty children increased? Decreased?

Since the purpose of assessment is to inform improvement, how are Santa's findings shared with and used by parents? Or do his "naughty" and "nice" lists remain in a drawer in his office at the North Pole, referenced only during the Christmas season as a prop? How might parents (and perhaps even teachers) use them to help develop children's character?

The committee recommends that Santa reflect more deeply on his assessment processes and consult with other characters such as the Easter Bunny and the Tooth Fairy to get ideas for how he might design an assessment plan that is fair, useful, and sustainable.  

 

Wednesday, November 10, 2021

Overcoming the COVID Shift: The Impact of Resilience and Effort During a Pandemic

 By Matthew Marmet

The COVID-19 pandemic has certainly presented challenges and opportunities across all sectors. The fact that it is going to take eight weeks for me to get a replacement window for my basement is indicative of the supply chain struggles we are currently experiencing. The focus of this piece, though, will be on the world of education, where one of the biggest challenges we have faced is what I have come to call the “COVID shift.”

I am not touting myself as the inventor of some groundbreaking, trademarkable term when I say COVID shift.  I use it simply because it perfectly captures what our institution experienced.  The COVID shift, for us, meant a shift from traditional on-ground delivery of educational materials (pre-COVID shift) to either completely virtual or hybrid learning environments (post-COVID shift).

Before moving forward, I’d like to take a walk back in time, to a simpler place where students came to class not worried about whether they remembered their masks.  A time when group activities and other pedagogical techniques could be easily implemented to supplement in-class lecture. For me, this was the time of the flipped classroom, a move away from what my favorite instructional designer likes to call “straight lecturing.”  Instead, class time is spent with students engaging in other activities and problem-solving tasks, which have been shown to have a positive impact on student attitudes and performance.

The COVID shift forced us to un-flip. In-class activities involving group work and close student interaction were not much of a possibility. With my love for the flipped classroom, the question I asked myself was: How can I avoid transitioning to a straight lecture style of teaching in these restricted environments?  My students had been thrust into a less-than-ideal situation, so I put the onus on myself to help them stay engaged.

What I ended up doing varied depending on the environment.  During the time classes were completely virtual, I tried to create fun, interesting (and sometimes embarrassing) demonstrations that I would conduct on camera for the students. Additionally, I would hold virtual brown bag lunches with my students, where an entire class session was dedicated to creating a relaxing environment for them.  Students were able to take the time to separate themselves from the minutiae of being stuck at home with their parents.  Interestingly, as a brief aside, I asked my students during class about the specific issues they were facing because of the pandemic.  Being “stuck at home” was near the top of this list. Once we were able to graduate to the hybrid environment, I shifted in-class activities from the group level to the individual level whenever it made sense.

Although most students seemed engaged at face value during these class sessions, I was curious to know if the positive academic impacts of student engagement would shine through in the post-COVID shift environment.  This helped to inform a research question for a study I just presented at the northeast regional ACBSP conference. It addressed whether there was a significant difference in academic success between pre-COVID shift students and post-COVID shift students. Naturally, my hope was that no difference would exist between these two groups.  I’ll save the statistical jargon and present the results of my study in two words: It worked!
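As an illustration only (not the author’s actual analysis), here is a minimal sketch of the kind of two-group comparison such a study might involve: an independent-samples t-test on invented course grades for pre- and post-COVID-shift cohorts. Finding no statistically significant difference would be consistent with the “it worked” result described above.

```python
# Sketch of comparing academic success between pre- and post-COVID-shift cohorts.
# Grades are invented placeholders; the actual study's data and methods may differ.
from scipy import stats

pre_shift_grades  = [88, 92, 75, 81, 90, 84, 79, 86, 93, 77]   # traditional on-ground cohort
post_shift_grades = [87, 90, 78, 80, 91, 83, 81, 85, 92, 76]   # virtual/hybrid cohort

t_stat, p_value = stats.ttest_ind(pre_shift_grades, post_shift_grades)
print(f"Independent-samples t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value above the usual .05 threshold would indicate no detectable difference
# in academic success between the two cohorts.
```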

With these findings, one question came to mind: Why did this happen? Our students went through major changes to how they were used to experiencing college, but still managed to achieve success. To me, there were factors on both the student side and the faculty side that came into play. These two factors were student resilience and faculty effort, which ultimately led to student success.

In terms of student resilience, I’d like to provide an anecdotal quote from a student during one of our brown bag lunch sessions. They said, “I am afraid to go outside, not because of getting sick, but because I am worried about getting attacked because I am Asian.” During a time when the threat of such attacks was very real, imagine trying to succeed at anything when this is where you are mentally. But I will tell you that this particular student did succeed, along with many others, for several reasons. First, they were willing to put in the work. Rather than throw in the towel, they faced the difficulties of the course and the new learning environments head on. I think this also speaks to our students’ ability to adapt, which I thanked them for on numerous occasions across these semesters. And finally, “scrappy” is the perfect term to describe our students, many of whom are first-generation college students who had to fight and claw just to go to college in the first place. They were not about to let this hurdle get in the way of their continuing education.

From the faculty effort side, I dove into the qualitative data from students who filled out the SOOT.  I share this data not to brag or boast, but because it is one of my proudest moments thus far in my short tenure in academia.  In both the pre- and post-COVID shift environments, language like “genuinely cared about the welfare of the students” and “cared about us as human beings” was offered.

Please remember I said at the beginning of this entry that the COVID-19 pandemic has presented us with both challenges AND opportunities.  I think these words speak to the opportunities we can find in all the current disruption, maybe the most important of which is the chance to get in touch with the human side of whatever it is we’re doing.  Success matters, but how you get across that finish line, to me, matters more.  Empathy and compassion will always win, and we might even be surprised with how robust the results are, regardless of the environment we find ourselves in.

 


Wednesday, October 27, 2021

Involving Students in Program Assessments

 For too long, assessment has been something we do to students, not something we do for or with students. Nicholas Curtis and Robin Anderson note that “The current systems of program-level assessment in the United States does not incorporate (or mention) students other than as sources of information” (page 7).

 We need to change that.

 If an institution calls itself student-centered and claims it values inclusion, it follows that students should have agency in certain educational processes, such as assessment. Further, when we analyze and interpret assessment findings and survey results, we are limited by our own perspectives and biases. Involving students at some point in our assessment efforts will expand and improve our understanding of what the results mean.

Curtis and Anderson write, “Without the involvement of  . . . students, our thoughts about the intended educational experiences . . .  are not going to relate to the actual experiences of our students” (page 10). In addition, our knowledge and understanding may often be restricted to a single course or major and not embrace the totality—the Gestalt—of the student experience.

 Giving students agency in assessment is not relinquishing authority to them. Rather, it is collaborating with them in an effort to identify ways a program or the institution might improve the educational experience it offers.

How a department involves students in its assessment processes depends on what its members want to learn or understand. One strategy is to present assessment results to a representative sample of students and ask them to share their understanding of the findings or to describe an experience they had that illustrates a finding. An example of this occurred in March 2020, when UC students had the chance to respond to the results of the climate survey. When presented with the survey finding that students of color do not feel welcomed in classes taught by white faculty, students described instances where white faculty deliberately looked to white students to answer questions in class and times when white faculty remained silent about racial incidents that occurred on campus. Soliciting this kind of information from students better positions faculty to address a finding that initially left them feeling defensive and confused.

A second way to involve students in assessment is to ask them how they understand the learning goals. How can we be certain, for instance, that students define goals such as problem-solving and teamwork the same way we do? I recall asking students years ago to explain why the college was receiving consistently low ratings when students were asked how well their coursework developed their problem-solving skills. Students explained that they didn’t consider problem-solving as being addressed in the curriculum. Instead, they saw it as something they developed more in their co-curricular experiences, where they might be given a project to complete and it was entirely up to them to manage all the steps needed to bring the project to fruition. In their classes, they explained, all the “problems” were solved by the instructor who organized the course.

 A third possible way to involve students is to include them in planning. Ask them what kinds of assessments might truly capture the student experience in the program. Collaborate with them on implementing action plans to address areas where students may be underperforming in the program.

 Including students in our assessment processes should be done thoughtfully and sparingly, but it should be done. Curtis and Anderson observe that creating student-faculty partnerships “spurred interesting and deep conversations about the benefits of thinking and assessing at the program-level rather than the classroom-level” (page 10). This is especially important since faculty generally focus on and care primarily about their individual courses, whereas students consider a single course as part of a larger experience.

 A former faculty colleague of mine articulated the benefit he saw in involving students in assessment: “It sends a message that we do this for the students, that they’re the major stakeholders, and that they literally have a seat at the table.” Another stated, “[They] help faculty and departments understand that we engage in assessment processes for the benefit of our students. Including them communicates back to the student body the importance of assessment and what [the college] does to ensure that they receive a quality education.”

 Work Cited

Curtis, N. A., & Anderson, R. D. (2021, May). A Framework for Developing Student-Faculty Partnerships in Program-Level Student Learning Outcomes Assessment (Occasional Paper No. 53). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.



Wednesday, September 22, 2021

Evidence-Based Storytelling: How Assessment Helps Us Tell a Compelling Narrative

There’s no question that assessment is often regarded as a fill-in-the-blank, paint-by-number bureaucratic enterprise. For too long, assessment specialists and accrediting agencies promoted a linear approach in which faculty make specific changes to courses and curricula based on assessment findings, with the aim of improving student learning, and then re-assess to document the effects of those changes.

Yet Natasha Jankowski, former executive director of the National Institute for Learning Outcomes Assessment, asks, “Can one ever actually know that the changes made enhanced student learning?”

So much influences a person’s growth and development in the years between starting a degree and earning one. Oral communication skills, for example, might be developed a great deal in an undergraduate curriculum, but so, too, might these abilities be improved by the experiences a student has in the co-curricular environment or the world of work.

There are modifications we can make in an effort to maximize student learning and ensure that all students have the opportunity to develop the knowledge, skills, and competencies faculty consider critical in a discipline. However, we simply cannot make emphatic claims to a causal relationship between what we did and the extent to which it improved student learning.

Jankowski advocates for a different approach to how we might demonstrate educational effectiveness. She argues that the meaning we draw from assessment findings, our understanding of the data, constitutes the important narrative. This meaning shapes the story we derive from the evidence we have gathered. 

“In assessment there is so much doing that there is limited time, if any, built into reflecting upon the data and deciding what it all says, what argument might be made, and what story it tells about students and their learning” (page 11).

Stories give evidence meaning. The 2020-2021 assessment report from the Department of Philosophy documents an important story about teaching and learning in a pandemic where COVID fatigue resulted in students’ failing to complete assignments as well as an increase in cheating. The quantitative findings suggest that student learning was on a downward trend. But the numbers alone don’t tell the story. The meaning inferred from the numbers by the faculty quoted in the report’s narrative does.

The report from the English Department provides an illustration that shows how students achieve learning beyond that which is articulated in a program goal. Students who participate in the design and creation of Ampersand, the College’s literary journal, “go beyond” the goal of making authorial choices: “[T]hey learn to collaborate, they learn skills of layout and editing as they produce a publication that appears in both print and online forms.”

Evidence-based stories—stories informed by the quantitative and qualitative evidence we systematically gather—are how we best illustrate the value and impact of our individual programs and of higher education. These stories also tell us what we need to change or improve in our teaching, course content, and curriculum.

“Some of our stories are tragedies,” Jankowski writes, “and some are tales of heroics and adventures” (page 12). They provide us with a richer, deeper, and more meaningful way to discuss assessment findings than the linear, formulaic approach does. Whether our stories have happy endings or sad conclusions, they deserve to be told.


Jankowski, N. (2021, February). Evidence-Based Storytelling in Assessment (Occasional Paper No. 50). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.


Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...