Wednesday, October 30, 2019

Closing the Loop: A Strategy to Improve Students’ Writing Skills in an Accounting Class

By Donna Dolansky

I’ve been teaching ACC 401 every spring term since I started at Utica College three years ago. In this course, we assess students’ communication skills (oral and written) with the expectation that they will be performing at the proficiency level. Our target is that 100% of students will achieve 80% or higher on a final paper scored using a rubric.

The first year I taught the course (Spring 2017), 87% of students achieved the target and 13% did not.  At the same time, I was serving on a search committee for a biology faculty member, and Larry Aaronson, another committee member, mentioned to me that he assigns a novel each year in his biology class and asks students to blog about the book.  He added that he wrote a paper about how this improved their ability to communicate in writing.

I thought this was a great idea, so in the Spring 2018 semester, I assigned a novel related to auditing and asked students to write responses to a series of questions I raised about the reading.  I also asked them to write any general observations they had about the book.  This provided me with an opportunity to review students’ writing and offer feedback on a low stakes assignment.

The results? Students’ writing improved slightly, and, more importantly, students believed they benefitted from the experience. When asked whether they found the assignment useful, the majority responded affirmatively, saying the reading was beneficial and that their writing skills had improved. Further, students credited the assignment with developing their critical thinking abilities.

A number of students acknowledged the importance of reading and writing in “real life work,” and so they perceived this assignment as helping them develop essential skills for today’s professional workforce. 

In the words of one student, “When we were first told of the assignment I was worried I would dread having to read the book because I thought it was going to be boring. I enjoyed this book so much however, that I have recommended it to many of the people I work 

Friday, October 18, 2019

Assignment or Assessment: What's the Difference?

I am a big promoter of course-embedded assessments, what Linda Suskie describes as “course assessments that do double duty, providing information not only on what students have learned in the course but also on their progress in achieving program or institutional goals” (Suskie 27).

Research papers, capstone projects, presentations, evaluations from clinical or internship supervisors—these are authentic assessments that allow us to gather evidence of student performance in our programs without necessarily adding to our workload.  Suskie credits course-embedded assessments with keeping assessment processes manageable, and, because they are developed locally, “they match up well with learning goals” (27). 

While course-embedded assessments are often course assignments, the assignment itself is not an assessment measure, and grades earned are not considered direct, valid evidence of student performance. 

That last sentence reads like assessment doublespeak, so think of the difference this way. The assignment might be to compose a research paper and then present the work orally to the class. How the paper and presentation are assessed is the assessment measure. Typically, papers and presentations are scored by rubric, a scoring guide that provides clear and detailed descriptive criteria for what constitutes excellent work and what signifies an unacceptable performance. The rubric, therefore, is the assessment measure, assuming that what it measures aligns with a learning goal.

Likewise, specific questions on an exam that measure a learning goal may serve as an assessment measure, while the complete examination, like the research paper, is the assignment.

When assignments are confused with assessment measures, the results do not yield information about student learning that is specific enough to inform continuous improvement. A typical example of such assessment findings might read, “73% of students earned grades of 80 or higher on the presentation. Target was achieved.” Such findings report how students performed on a given assignment in a given class on a given day, but they do not indicate where student performance was especially strong and where it was less successful. Were the presentations well organized? Were the delivery techniques effective? Did the students use technology well to convey their messages?

Assessment goes beyond grading and analyzes patterns of student performance.  Good assessment methods help identify these patterns. 
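To make the distinction concrete, here is a minimal sketch in Python, using invented rubric criteria and scores rather than any actual course data, of how criterion-level results reveal patterns that a single pass rate cannot:

```python
# A minimal, hypothetical sketch: rubric criteria and 0-4 scores are invented
# for illustration only, not drawn from any actual course.
from statistics import mean

# Each student's presentation scored on three rubric criteria.
scores = [
    {"organization": 4, "delivery": 2, "technology": 3},
    {"organization": 3, "delivery": 2, "technology": 4},
    {"organization": 4, "delivery": 3, "technology": 4},
    {"organization": 3, "delivery": 1, "technology": 3},
]

# Assignment-level finding: the share of students at or above a cut score.
cut = 3  # "meets expectations" on every criterion
met = sum(all(v >= cut for v in s.values()) for s in scores)
print(f"{met / len(scores):.0%} of students met the overall target")

# Assessment-level finding: average performance per criterion, which shows
# where students were strong (organization) and where they struggled (delivery).
for criterion in scores[0]:
    avg = mean(s[criterion] for s in scores)
    print(f"{criterion}: mean score {avg:.1f} of 4")
```

The first figure mirrors the “target was achieved” style of reporting; the per-criterion averages are what give faculty something specific to act on.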

To learn about the advantages and disadvantages of specific assessment methods, visit  https://www.utica.edu/academic/Assessment/new/resources.cfm  or contact the Office of Academic Assessment in 127 White Hall.


Works Cited

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. San Francisco: Jossey-Bass, 2009.


Wednesday, October 9, 2019

Using Course-Embedded Assessments for Program-Level Purposes


In my previous blog, I wrote, “It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not.” 

If this is true, how might course-embedded assessments—e.g., exam questions, papers, projects, labs, performances—be used for program-level (or even institution-level) assessment?

The History Department offers an excellent example of just how course-embedded assessment translates to the program level. In this major, a capstone project was scored using a rubric in which each item aligned with a program learning goal. The results were analyzed with respect to which program learning goals students were successfully achieving at the proficiency level. Since the project was completed over the course of a full year, beginning in the spring semester of the junior year and ending in the spring semester of the senior year, findings might also be analyzed to determine growth, or value added, in the major. Finally, the recommendations and action plans were program-level ones, namely a curriculum revision that exposes students to historical methodology earlier in the program and requires more writing-intensive courses.

Similarly, the English faculty developed a departmental rubric to score student papers in courses required for the major or for Core.  Rubric items align with the program-level learning goals.  The rubric was applied by individual instructors, and results were analyzed to see if student performance improved from 100-level courses to 300/400-level ones.  Majors were compared to non-majors, giving the faculty another way to view the data. Such analyses yielded program-level insights into student learning: the department identified where its majors were performing well and where the faculty may need to be more intentional about developing students’ competencies.   
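For readers who like to see the mechanics, here is a short Python sketch of that kind of grouping; the field names and scores are hypothetical, not the English department’s actual data or rubric:

```python
# A hypothetical sketch of grouping rubric results by course level and by
# major status; records and field names are invented for illustration.
from collections import defaultdict
from statistics import mean

records = [
    {"course_level": 100, "major": False, "score": 2.4},
    {"course_level": 100, "major": True,  "score": 2.8},
    {"course_level": 300, "major": True,  "score": 3.3},
    {"course_level": 400, "major": True,  "score": 3.6},
    {"course_level": 300, "major": False, "score": 3.0},
]

def mean_by(records, key):
    """Average rubric score for each value of the given grouping key."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["score"])
    return {k: round(mean(v), 2) for k, v in sorted(groups.items())}

print("By course level:", mean_by(records, "course_level"))
print("Majors vs. non-majors:", mean_by(records, "major"))
```

The same grouping logic works for any comparison a department cares about, such as 100-level versus 300/400-level courses.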

Another example of how course-embedded assessments translate to the program level may be found in Philosophy. Philosophy faculty applied a departmental rubric to particular assignments in their classes that targeted relevant program learning goals and course objectives. They measured student performance in courses that introduce the learning goal and compared these findings with results gathered from courses that reinforce the learning. This assessment plan has allowed the faculty to demonstrate successful student learning in their program and has also indicated a need for improved adjunct faculty development.

Course-embedded assessments are a sustainable, organic approach to assessment, and, because the work is generated in a course, students are more likely to be motivated to do well. That said, if they are going to be used for program assessment, the measures must align with program goals, multiple measures should be used to assess learning at various transition points in the curriculum, and the data should be analyzed with respect to the program’s goals.

Wednesday, September 18, 2019

But We've Always Done Assessment: Course-Level Versus Program-Level Assessment


When faculty argue that they have always done assessment and made changes to improve learning, they are absolutely correct.  The 2018 self-study report completed by the Department of Physics calls this “assessment by rumination,” asserting that it “happens all the time among faculty” and was happening “long before ‘rubrics’ and ‘operational goals’ . . .  barged their way into the lexicon of assessment.”

What faculty are referring to when they make these claims is course-level assessment. Course-level assessment has been happening for centuries.  However, the type of assessment that will inspire public confidence in higher education is not course-level, but program-level or institution-level. These are the assessments that provide the evidence we need to tell our stories to external stakeholders, including prospective students, parents, donors, accreditors, and grant-funding agencies.

Imagine you are promoting your academic program to a group of parents and prospective students.  You want to attract the best and the brightest students in the audience, and, while you aren’t crass enough to say it, you want each family to spend over $100,000 and probably amass significant debt. 

You can tell them what faculty colleagues at every other college they visit will tell them: we have a great faculty dedicated to excellence in teaching; our curriculum is current, relevant, and exciting; we offer students opportunities for research, internships, and community engagement; in our program, you are a person, not a number.

But how might you tell a story that distinguishes your program and offers evidence supporting your claims?   

Findings from course-level assessments won’t help you here.  No parent or prospective student will care that last spring semester, 83% of the students in XXX-course met or exceeded expectations on a quiz.  They might, however, be interested to learn that in UC’s physics program, students in introductory and intermediate-level courses often exceed the national average on a standardized pre/post assessment.  Likewise, they would probably be interested to know that internship supervisors evaluate how well criminal justice students apply what they learned in their program to a real-world setting.  

Program-level assessment is not more important than course-level.  It’s just different—and it serves a different purpose.  It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not. Program assessment considers the bigger picture—program-level goals—and is typically outcomes based.

As assessment guru Linda Suskie recommends, “Program-level outcomes are often best assessed with a significant assignment or project completed shortly before students graduate.”

As departments plan for the 2019-2020 assessment cycle, consider how you might use your assessments to tell your program’s story. You might even find that your assessment efforts become simpler, more organic, and less burdensome than they have been.

Thursday, April 25, 2019

Paula Carey Reflects on What She Learned from Serving on the Assessment Committee


For six years, Paula Carey, Associate Professor of Occupational Therapy, served on the Academic Assessment Coordinating Committee.  She read program reviews, served as the lead reader for annual goal reports and self-studies, kept minutes of meetings, and, in a spirit of cooperation, shared her expertise with faculty colleagues and members of the committee.

Paula is retiring from thirty-plus years at Utica College, which obviously means her tenure as a member of AACC is drawing to a close.  But before she sings the final verse of the Alma Mater, she shared her reflections with me on what she learned from serving on the assessment committee.  She also offered some advice regarding what the College needs to do to build and support a culture of assessment.

The best part about serving on AACC, according to Paula, is being exposed to the wonderful work being done in other departments.  Faculty tend to be insulated in their own departments, she said, operating in silos. Reading assessment reports and talking to faculty during the program review process deepened her appreciation for the work her colleagues in all disciplines were doing.  She described their work as “very student-centered” and her faculty peers as “dedicated to teaching and learning.” The assessment reports and program review self-studies showed her just how intentional faculty are about curriculum and pedagogy, and how much effort they put into making an impact in students’ lives. 

“We’re all dealing with similar problems,” Paula said, “but each department deals with them differently.”   As a faculty, we need to have more opportunities to share what we are doing with one another and to exchange ideas across departments and disciplines.  Her work on AACC gave her the chance to do just that and convinced her of how beneficial these conversations are.

Serving on AACC also gave Paula more of an institutional perspective.  “I learned where Utica College might strengthen some of its processes, and I learned where there is a shortage of resources.” 

Paula has some advice for the College’s administration. 

“The process works better when it is faculty driven and respectful of faculty perspective,” she stated. She added that campus leaders must acknowledge the time, energy, and effort it takes to do assessment well.

“We should look for champions of assessment who can assist with the process,” she recommended.
   
Thank you, Paula, for your service to UC and the Academic Assessment Coordinating Committee.  Your insights will help steer us on the assessment path just as they served us on the committee. 

Wednesday, April 17, 2019

Wicked Problems and Assessment


I’ll admit it.  When John Camey, Interim Dean of the School of Business and Justice Studies, told me he published a paper titled “The Wicked Problem of Assessment,” I reacted defensively.  Imagine, I thought, if I referred to someone’s discipline or the work they had been doing for over two decades as a “wicked problem.” 

But let’s face it.  We learn more from diverse views than from those that reflect our own.  As Walter B. Greenwood, my undergraduate professor in contemporary lit, said, “Having old certainties destroyed by new considerations is one of the hazards of reading.”

I read the paper.

Far from disagreeing with what John Camey and his co-author, Carolyn E. Johnson, said, I thought they made a lot of practical sense.  

A wicked problem, I learned, is not one that is inherently evil but one for which there is no straightforward solution. Camey and Johnson offer ten reasons why assessment meets the criteria of a wicked problem. A few of these help explain why assessment frustrates faculty and, more importantly, how assessment professionals and accrediting bodies may have unwittingly been the cause of certain vexations.

The authors note that when accreditors first required institutions to assess student learning, or provide “assurance of learning,” they offered little assistance with how to achieve this. Faculty were supposed to “figure it out.” However, the methods they had been using for years—i.e. grades—were not considered valid measures.  So how were faculty supposed to navigate this new terrain?  As Camey and Johnson observe, “An entire industry of workshops, seminars, conferences and travelling consultants [grew] up to help.” Needless to say, each of these had a price tag, and some were quite hefty.

Wicked problems, Camey and Johnson explain, do not lead to right or wrong answers, but only to good or bad solutions. This might be where assessment is most frustrating, because “good” and “bad” are arbitrary judgments. We see this in the accreditation process. One visiting team may consider the assessment efforts at an institution acceptable, while a second visiting team rules them not good enough. Even within an institution, what constitutes “good enough” for one group of members on an assessment committee may not be sufficient for the team that replaces them. Small wonder that in 2018, 36% of faculty responding to a survey on assessment culture agreed that assessment is "based on the whims of the people in charge," and that in 2017, UC faculty described assessment as a moving target whose expectations continuously changed.

At most institutions, the assessment cycle is the academic year.  On an annual basis, faculty assess student learning and document the results in some type of report.  Besides reporting results, faculty are expected to reflect upon the findings and articulate how they will be used to improve teaching and learning.  In subsequent assessment cycles, they should provide evidence as to whether the changes they made resulted in better learning.  

This process, which many institutions like to represent graphically as a continuing sequence in a circular flow, may not facilitate effective, meaningful assessment. Camey and Johnson assert that it might take “multiple semesters before sufficient data can be gathered to determine whether our solution is good, bad, or not quite good enough.” This claim echoes Trudy Banta and Charles Blaich’s conclusion that “Collecting and reviewing reliable evidence from multiple sources can take several years” but “state mandates or impatient campus leaders may exert pressure for immediate action.”

Camey and Johnson offer a solution to the wicked problem of assessment: an assessment committee that is chaired by a designated leader, an “assessment czar,” and composed of faculty who rotate off periodically. At UC, we already have such a solution in place, but is it sufficient?

It strikes me that it is also important to have clearly defined criteria that communicate what “good” assessment is, and it is critical to view assessment as a process of inquiry and analysis, not a fill-in-the-blank, paint-by-number activity. 

But let’s hear from faculty on this topic.  How might we collaborate not to solve but to address this wicked problem of assessment?





Works Cited


Banta, T. W., Blaich, C. (2011). Closing the Assessment Loop. Change: The Magazine of Higher Learning, 22-27.
Camey, J. P., Johnson, C. E. (2011). The Wicked Problem of Assessment. Journal of Business and Behavioral Sciences, 23(2), 68-83.
Sam Houston State University. (2018). 2018 National Faculty Survey of Assessment Culture. https://www.shsu.edu/research/survey-of-assessment-ulture/documents/Nationwide%20Faculty.pdf

Wednesday, April 3, 2019

Measuring Student Engagement and Participation

By Ann Damiano with Elizabeth Threadgill


I remember reading an assessment report years ago in which the department faculty wrote, “Most of the students met expectations for this goal, and those that did not were just lazy and didn’t study.” 

Few declarations make an assessment committee’s eyes roll more than a statement like that.  From the very beginning, proponents of assessment made learning a shared responsibility between instructor and student, and the ultimate purpose of assessment was to inform continuous improvement.  Blaming students for poor learning outcomes is antithetical to the very principles advanced by the assessment movement.

And yet, the fact remains that there are students who do not actively engage in their learning, who miss numerous classes or habitually arrive late, who often neglect to complete assignments, who do not prepare for exams with the intensity required, and who spend more class time scrolling through their phones than attending to the lecture or discussion. 

To interpret student learning assessment results without giving some consideration to student engagement may be to miss, if not the whole point, then at least an important part of the narrative. So argued prominent members of the English faculty last fall.

So when the department assembled in February to discuss assessment findings for both their majors and core courses, they also conferred on how they might include evidence of student participation and engagement in future assessments.  
 
A small group of us met to review various checklists, rubrics, and protocols that measure student participation and engagement, and after a lively conversation, the faculty selected criteria they thought best reflected how they would determine whether or not a student is appropriately engaged in a course.  One member observed, “We talk about student engagement all the time, but we haven’t defined what it is and we haven’t systematically measured it”—even though many of these faculty make participation part of the students’ final grade.

The plan is to assess each student’s participation/engagement according to the selected criteria.  This assessment aims to answer a number of questions raised by the faculty.  How might measuring student engagement better help us understand student success in our courses?  Is a lack of engagement more prevalent in Core courses than in courses designed for the major?  Are we inclined to overstate a lack of student engagement in our courses? How might the results of this assessment help us better understand the results gathered from course-embedded assessments? 

The scholarship of teaching and learning offers countless examples and studies describing how active student participation and engagement improve learning. This is obvious to anyone who has ever stood before a group of students. The work being done by the English faculty has the potential to document the extent to which this might be true.


Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...