Wednesday, March 24, 2021

How the Pandemic Influenced Teaching and Assessment in One Program

A group of international educators writing about online teaching during the COVID-19 crisis note that when life returns to normal, “the worst thing that could happen is not learning from the crisis we experienced” (Rapanta et al. 941).

Timothy Abraham, Assistant Professor of Wellness and Adventure Education (WAE), probably agrees. At a recent meeting of the Academic Assessment Committee, Abraham stated that instructors in his program have no intention of returning to their pre-pandemic approach to teaching and learning.

He said, “What we learned during this pandemic improved how we teach our students.”

Like most of us, Abraham misses being able to interact daily with students and colleagues, particularly when eating lunch in the cafeteria. And a great deal of the instruction in his discipline requires close contact with students, so the abrupt change to a virtual environment last spring posed a considerable challenge.

In the fall semester, however, he discovered that the hybrid approach allowed him to use face-to-face instructional time more productively.

While acknowledging that lectures have value, Abraham spends face-to-face time having students engage in hands-on, active learning. Likewise, his WAE colleague, assistant professor Megen Hemstrought, uses face-to-face time to incorporate 21st century skills like critical thinking, problem solving, collaboration, and technology literacy.

Hemstrought says that when students came to class two or three times a week, as in the past, many did not prepare ahead of time. She finds that having class less often (usually once weekly) motivates students to prepare better in advance so they can have more robust conversations, do more meaningful active learning, and delve deeper into the topics at hand.  She uses TED talks, textbook readings, and articles to get students prepared before coming to class. 

Abraham creates “discovery activities” to help students make connections to the material and pique their interest in learning more. He then uses asynchronous learning modalities in Engage, the College’s learning management system, to “fill in the holes.” Students may watch the instructional videos on their own time and at their own pace, giving this strategy the added benefit of supporting an individualized approach to teaching. Knowmia, the tool used to create the videos and make them accessible to all learners, provides video analytics so that instructors can see how much of each video is viewed and how much time each student spends attending to the lecture.

This, Abraham contends, gives faculty a more objective measure of student participation in a course than they had in a traditional on-ground class.

Abraham and Hemstrought agree that not only have their pedagogical methods improved, so have their assessment strategies. Abraham reports, “I’m not always using ‘tests’ to assess learning, like I’ve done in the past.” Instead, he is opting for writing assignments, reflections, and practical application projects.

“It creates a little more work grading on my end, but I want to give them an assessment that makes them think. Plus, this prevents them from simply looking up answers at the same time they’re taking a test at home on Engage.”

Rapanta et al. say that how we respond to a crisis “may precipitate enhanced learning and teaching practices in the postdigital era” (924).

This has certainly been true in the Wellness and Adventure Education program, and probably in other programs at Utica College as well. It’s an important narrative to document.


Work Cited

Rapanta, Chrysi, Luca Botturi, Peter Goodyear, Lourdes Guardia, and Marguerite Koole. “Online University Teaching During and After the COVID-19 Crisis: Refocusing Teacher Presence and Learning Activity.” Postdigital Science and Education, vol. 2, 2020, pp. 923-945.

Tuesday, March 9, 2021

Educating for Professional Success

I’m not surprised when I hear that employers are dissatisfied with the skills recent college graduates bring to the workplace. I’ve been hearing that since the 1980s. However, employers’ complaints historically focused on what are called “soft skills,” abilities that are typically associated with the liberal arts or the co-curricular experience.

More recently, employers have criticized college graduates for lacking “hard skills” as well, and large corporations report investing $30,000 to $50,000 a year to fill the gaps between what employers want and what college graduates have learned (Wilkie).

Ronny L. Bull, Associate Professor of Computer Science at Utica College, has been addressing this gap for years.

It all starts with building relationships. Bull has ties with several area employers, and he uses these connections to learn how prepared UC computer science graduates are when they enter the profession. Through informal conversations, he learns where graduates’ knowledge and skills fall short, and he uses this information to modify the curriculum and refine pedagogy. For example, when he learned that graduates needed more knowledge of and experience with coding, he revised all of his 100- through 400-level courses to teach coding skills tied back to the course objectives. Well-crafted coding projects engaged students and reinforced their understanding of the content. As students progressed through the degree program, they could expect increasingly challenging coding problems in higher-level courses. A secondary effect was that they were better prepared for the real world.

“It’s not just about knowledge, though,” he said. “Interpersonal skills, teamwork, and problem-solving are other competencies we address through our curricular offerings, since these ‘soft skills’ are critical to a graduate’s success as well.”

By networking with employers, Dr. Bull is able to “create a pipeline of people who know how to do a job.” His success is indicated by the fact that area employers ask for Utica College graduates, mostly because “they don’t have to train them,” Bull stated.

A carefully designed curriculum that has hands-on experiences built into its courses and that provides  students with real-world opportunities during their undergraduate years allows Dr. Bull to develop students’ knowledge and skills so that they can be successful in the professional workplace. More importantly, however, it gives him insight into individual students’ interests, abilities, and aptitudes. In turn, this allows him to match student interns and graduates with employers.

Helping students see what kinds of opportunities exist in the computing industry is part of the education Dr. Bull provides his students. Prior to the pandemic, he regularly brought students to tour local facilities and meet employers. On many of these visits, the employers they were introduced to were Utica College computer science alumni! It isn’t just career-readiness that Bull emphasizes, however. The curriculum and its emphasis on real-world problems and projects also prepares students for graduate education, and over the years, Bull has brought dozens of students to conferences sponsored by ACM, the world’s largest educational and scientific computing society.

Professor Bull also relies on the relationships he has established with computer science graduates when it comes to assessing the curriculum. Several years ago, he set up a Discord server where students and faculty could collaborate, share, and chat. This created a community within the department, and when students graduated, they remained part of this group. Alumni were able to share feedback about the curriculum with their former faculty—feedback that was then used to make changes.

The work that Ronny Bull does soliciting feedback from employers and alumni and using this information to develop curriculum and create experiential learning opportunities is an example of assessment at its finest, seamlessly woven into the fabric of what he is doing to develop students into successful professionals and lifelong learners.

Work Cited

Wilkie, Dana. “Employers Say College Grads Lack Hard Skills, Too.” Society for Human Resource Management, 21 October 2019, www.shrm.org/resourcestools. Accessed 3 March 2021.

Wednesday, February 24, 2021

The Limits and Usefulness of Anecdotal Evidence in Assessment

 We’ve all been at those meetings where someone steers the conversation with claims such as, “Students don’t like having early morning classes, and when they do, they leave the institution.” There’s no source for this statement; there’s no evidence in any survey, focus group, or analysis of enrollment data to substantiate the claim.

People typically respond to statements such as that one by saying, “That’s anecdotal evidence.” Not so. Unsubstantiated claims don’t rise to the level of anecdotal evidence. They remain exactly what they are: unverified statements, usually intended to promote an agenda or enhance a speaker’s credibility.

Anecdotal evidence refers to stories about people and their experiences.  By itself, an anecdote is not reliable evidence. However, when anecdotes are used judiciously in conjunction with quantitative data, they provide insight into what the data might mean. Further, anecdotes have emotional appeal. They remind us why we should care about the data in the first place.

Student responses to the climate survey data at UC (March 2019) provide excellent examples of how anecdotes may be paired with data to advance understanding.  On the survey, more than half of students of color reported feeling that they did not matter in classes taught by white faculty, and compared to white students, fewer students of color reported feeling affirmed by white faculty.

When students were asked to respond to these data points by narrating their experiences, they described occasions where white faculty directed their questions solely to white students and where white faculty ignored racist comments made by other students in the class. They gave examples of classes where the majority of students performed poorly on an exam and the professors indicated that this was because the course material was rigorous and the students were unprepared for the demands of the subject. They mentioned classes where the instructor never learned their names and seldom acknowledged them outside of class.

On the plus side, they spoke of positive experiences that made them feel affirmed, supported, and part of a community.

These stories—anecdotes—give texture and meaning to data that might perplex, dismay, or be easily dismissed.

The limits of anecdotal evidence, which is usually based on individual experience, should be obvious. Its usefulness, though, cannot be overlooked. One benefit is that it invites inquiry and may have implications for research. Its major benefit is described by Michael Shermer, publisher of Skeptic: “Anecdotes . . . help in explaining data points that do not make sense. Hearing stories about data points that do not make intuitive sense can uncover the hidden variables that are really driving the results.”

Work Cited

Shermer, Michael. “How Anecdotal Evidence Can Undermine Scientific Results: Why Subjective Anecdotes Often Trump Objective Data.” Scientific American, 2008.

Wednesday, September 30, 2020

Standards-Based Grading: An Equitable Approach to Assessment

Equitable assessment practices measure students’ performance by using methods that are most appropriate to the individual learner. Standards-based grading is an excellent example of an equitable assessment practice, one that reflects a student-centered pedagogy.

At Utica College, Xiao Xiao, Professor of Mathematics, uses and advocates for standards-based grading. He notes that students learn material at different paces. Traditional grading practices measure whether students have learned the required material within a specified amount of time (e.g., by the end of every week, the end of every month, and then the end of the semester). In contrast, standards-based grading focuses on measuring whether students have learned the material by the end of the semester while giving frequent feedback along the way. It helps level the playing field for students who need more time to learn and encourages deeper learning for everyone.

Standards-based grading requires the instructor to articulate clearly the precise learning objectives addressed in the course.  Students are informed on a weekly basis what learning objectives will be covered and assessed in the course. If a student does not do well on a specific assessment, the instructor provides feedback on his/her/their performance, directs the student to additional resources related to the material, and then gives further assessment opportunities for students to achieve the related learning objective(s). By making each assessment low stakes, students are not punished if they need more time to learn the material.  
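
A minimal sketch of the general idea, assuming a hypothetical 0-4 scoring scale and invented objective names (this illustrates standards-based grading in the abstract, not Xiao’s actual gradebook): the final standing reflects the best evidence of mastery on each learning objective by semester’s end, so early mistakes are not averaged into the result.

```python
# Hypothetical illustration of standards-based grading: scores from repeated
# low-stakes assessments are tracked per learning objective, and the final
# standing uses the best attempt, so early mistakes are not averaged in.

attempts = {
    "solve_linear_systems":   [1, 2, 4],  # improved after feedback and reassessment
    "graph_functions":        [3, 4],
    "model_with_derivatives": [2, 3],
}

MASTERY_THRESHOLD = 3  # invented cutoff on a 0-4 scale


def final_standing(attempts_by_objective):
    """Return the best score per objective and whether mastery was reached."""
    return {
        objective: {"best": max(scores), "mastered": max(scores) >= MASTERY_THRESHOLD}
        for objective, scores in attempts_by_objective.items()
    }


for objective, result in final_standing(attempts).items():
    print(objective, result)
```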

Xiao notes that mistakes are valuable to learning. He states, “Mistakes should be expected when a person is learning something new. Traditional grading punishes students for making those mistakes that are part of the learning process, especially when students don’t learn fast enough.”

He further maintains that using standards-based grading in a course enhances student success. Since the precise learning objectives are clearly articulated, students know exactly what they are expected to learn. This approach works especially well in courses where student backgrounds and levels of academic preparation vary.

Xiao additionally notes that standards-based grading may have the added benefit of minimizing academic dishonesty in the virtual learning environment. He explains, “Students are given frequent low-stake quizzes and, if needed, future assessment opportunities to demonstrate their learning. They are provided with ongoing feedback to promote their learning. There are no monthly big exams that might invite cheating.”

Xiao believes that grading itself is less work with standards-based grading, since instructors no longer have to spend time weighing partial credit, but he says that preparing a course for standards-based grading takes a significant amount of time.

“If someone wants to use standards-based grading, I recommend prepping a month or two in advance.”

Xiao concludes that standards-based grading forces him to think hard about what he really wants students to learn in his classes and what he considers less important for them to know. “It helps instructors to clarify what kind of learning students should achieve,” he says.

And without question, that is good pedagogy and good assessment.

 

Tuesday, September 15, 2020

Equitable Assessment Practices

Erick Montenegro and Natasha A. Jankowski advocate for the importance of using various methods to assess students’ knowledge as opposed to a one-size-fits-all approach. They write, “There is an assumption at play in the field of assessment that while there are multiple ways for students to learn, students need to demonstrate learning in specific ways for it to count” (Equity & Assessment, page 6). Since the mid-1990s, researchers have argued that most assessment measures are not culturally responsive or designed with respect to nontraditional and underserved populations. We need to change that if we are truly committed to equity in our educational offerings.

So how might we begin?

Give students agency in our assessment processes. Assessment, whether it is course-level or institution-level, should be something we engage in with students, not something we do to students or to judge students. Some faculty have asked students to help them develop the learning objectives for a program or course. This helps students understand better what they are expected to achieve and gives them greater ownership of their learning.

Students might also be invited to develop the rubrics that will be used to measure their performance. By collaborating on rubric criteria, students are asked to think carefully and reflect deeply on what constitutes an exemplary performance as opposed to a mediocre one.

Students definitely have a rightful place at the table when faculty and/or administrators gather to interpret and analyze assessment results. When analyzing survey results, for example, it is helpful to know how students understand the questions or define specific terms.

Disaggregate results to see if we are achieving equity. When considering student performance in our programs or at the institution, if the N is large enough, it is worth disaggregating the findings by demographic group to see whether there are gaps in student performance. If certain groups of students are performing better than others, we are not achieving the goals of equity. (A brief sketch of what this disaggregation might look like follows these recommendations.)

Ask equity-minded questions. At its best, assessment is thoughtful inquiry into student learning. If we explain performance gaps by saying “Some students just aren’t prepared,” we are being deficit-minded, not equity-minded. If we continue to offer courses intended to “weed out” students, we are being deficit-minded, not equity-minded. Instead of saying that students are not prepared or can’t make it, ask whether we are providing the right kinds of opportunities to prepare them and whether we are assessing the efficacy of those opportunities.
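
For readers who want to see what the disaggregation step might look like in practice, here is a minimal sketch using pandas; the data, column names, and group labels are entirely hypothetical. The point is simply that reporting results by group can reveal gaps that an overall average hides.

```python
import pandas as pd

# Hypothetical assessment results: one row per student, with a rubric score
# and a self-reported demographic category. All values are invented.
results = pd.DataFrame({
    "student_id":   [101, 102, 103, 104, 105, 106, 107, 108],
    "group":        ["A", "A", "B", "B", "B", "C", "C", "A"],
    "rubric_score": [3.5, 2.0, 3.0, 3.5, 2.5, 1.5, 2.0, 4.0],
})

# Disaggregate: summarize performance for each group rather than reporting
# only the overall average.
by_group = results.groupby("group")["rubric_score"].agg(
    n="count", mean="mean", median="median"
)

print("Overall mean:", round(results["rubric_score"].mean(), 2))
print(by_group.round(2))
```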

Conversations about equity and our assessment processes are starting to take root. These discussions offer us new ways to think about student learning—how it is achieved and how it is demonstrated. As Montenegro and Jankowski observe, “If assessment is about demonstrating learning, then we need to allow students the space to show their knowledge . . . [H]ow we assess and the process of assessment itself . . . should align with the students we have, empowering them with narratives to share and document their learning journey” (Equity & Assessment, page 15).

References

Finley, Ashley, and Tia McNair. Assessing Underserved Students' Engagement in High-Impact Practices. Washington, DC: AAC&U, 2013.

Montenegro, Erick, and Natasha A. Jankowski. A New Decade for Assessment: Embedding Equity into Assessment Praxis. National Institute for Learning Outcomes Assessment, January 2020.

Montenegro, Erick, and Natasha A. Jankowski. Equity and Assessment: Moving Towards Culturally Responsive Assessment. National Institute for Learning Outcomes Assessment, 2017.

Wednesday, April 22, 2020

Surviving the Shutdown and Preserving One's Sanity

The Academic Assessment Coordinating Committee invites you to the table (metaphorically) to assess the work committee members have engaged in since the shutdown began six weeks ago.

Outcome: To create tantalizing, nutritious foods that will help the individual committee member best survive the shutdown and possibly preserve his/her/their sanity or family harmony.

Assessment Method:  Artifacts scored by rubric

Rubric:

Presentation
3: Artfully arranged to enhance the food’s aesthetic appeal.
2: Looks appetizing enough, but the presentation minimizes the value of aesthetic appeal.
1: Looks as if the food is ready for the compost heap.

Nutritional Value
3: Meets the nutritional requirements recommended by the US Department of Agriculture; a single serving does not exceed the daily caloric intake for overall health.
2: Some nutritional value, but may be too high in fats and sugars and insufficient in protein and fiber content.
1: Will ultimately contribute to heart disease and other debilitating conditions.

Emotional Value
3: Every morsel was satisfying and increased euphoria.
2: May cause momentary guilt that is easily dismissed because, after all, we are living in a state of high anxiety.
1: Allows you to feel smug and superior about your healthy diet.


Evidence:


Donna Dolansky, Easter Ham 

Donna Dolansky, Birthday Cake (chocolate)


Donna Dolansky, Portuguese Easter Bread

Donna Dolansky, Ham, Asparagus & Cheese Frittata


Dan Tagliarina, Apple Pie

Dan Tagliarina, Homemade Pizza


Rob Swenszkowski, Fish Tacos 

Rob Swenszkowski, Ice Cream Sundaes


Rob Swenszkowski, Grilled steak, Bombay Madras, and a Garden Salad


Katie Spires, Tri-color Pasta with Chicken & Marinara Sauce

Tim Abraham, Pierogis

Tim Abraham, North African Chickpea Stew over Quinoa

Ann Damiano, Linguine with clam sauce

Jason Denman, Thai Noodles with curried beef and black soy
Jason Denman, Homemade spring rolls
Jason Denman, The Next Morning





Tuesday, March 3, 2020

Qualitative Assessment: A Better Option for Small Departments


Most conversations about assessment methods tend to focus on whether they are direct or indirect, program-level or course-level, formative or summative.  Rarely does the discussion address whether a goal is best measured by quantitative or qualitative methods. 

Bresciani, Gardner, and Hickmott write that quantitative assessments have been the traditional favorites when it comes to measuring student outcomes.  Quantitative methods include test scores, rubric ratings, survey results, and performance indicators.  Linda Suskie asserts that quantitative methods are preferred over qualitative ones because accreditors and public audiences “find quantitative results more convincing” (page 32) and because people doing the assessments tend to be more comfortable or familiar with quantitative measures.  

The best assessment plan includes multiple, diverse methods, provided they are organic to the discipline and are reliable, authentic measures of student performance.   Michele Hanson advocates for a “combination of qualitative and quantitative assessment,” particularly if we want to gather evidence on the educational experience and learning opportunities we are providing.  Qualitative methods might be used to assess how well certain educational experiences promote student learning and success, while quantitative measures may be used to assess student accomplishment.  To examine evidence of student success without assessing the opportunities that promote or prohibit it is to overlook a critical element in the assessment narrative. 

That said, we do not live in the best of all possible worlds. As those who serve on the Academic Assessment Coordinating Committee have observed, quantitative methods may not be a good option for small programs with few majors.  It might take years, possibly even a decade, before small departments achieve a sample size large enough to determine trends or patterns.  Another problem is that using quantitative measures for small sample sizes does not produce reliable results.  One outstanding student will skew the results favorably; one poor student will have an adverse effect on results.
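
A quick, made-up illustration of this skew (the scores and cohort sizes below are invented): a single outstanding student noticeably moves the average of a four-person program but barely registers in a forty-person one.

```python
from statistics import mean

# Hypothetical capstone rubric scores on a 0-4 scale.
small_cohort = [2.5, 2.5, 3.0, 3.0]        # a program with four majors
large_cohort = [2.5, 2.5, 3.0, 3.0] * 10   # same score pattern, forty majors

outlier = 4.0  # one outstanding student

print("Small program:", round(mean(small_cohort), 2), "->",
      round(mean(small_cohort + [outlier]), 2))
print("Large program:", round(mean(large_cohort), 2), "->",
      round(mean(large_cohort + [outlier]), 2))
```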

Qualitative assessment methods—notes from interviews or observations, reflective writings, focus groups, online or classroom discussions—may be the better option for smaller departments. These types of methods are not lacking in merit. Suskie argues that qualitative methods are “underused and underappreciated in many assessment circles” (page 32), and she further contends that they “can give us fresh insight and help discover problems—and solutions—that can’t be found through quantitative assessments alone” (page 33).

If assessment is a way departments can tell their stories, then they need to use methods that make the most sense given the nature and scope of their discipline and the number of students enrolled in the major. The assessment challenges faced by smaller departments differ from those that larger programs experience, so it makes sense that the solutions will differ as well.



Works Cited

Bresciani, Marilee J., Megan Moore Gardner, and Jessica Hickmott. Demonstrating Student Success: A Practical Guide to Outcomes-Based Assessment of Learning and Development in Student Affairs. Sterling: Stylus Publishing, 2009.

Hanson, Michele J. “Using Assessment Trends in Planning, Decision-Making, and Improvement.” Trends in Assessment: Ideas, Opportunities, and Issues for Higher Education. Ed. Stephen P. Hundley and Susan Kahn. Sterling: Stylus, 2019. 175-193. Print.

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. San Francisco: Jossey-Bass, 2009.

Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...