Wednesday, January 29, 2020

FERPA and Student Learning Assessment


The Family Educational Rights and Privacy Act, known as FERPA, is a federal law protecting the privacy of education records.  Any personally identifiable information that is linked to a student and makes it possible to identify that student is protected under FERPA.

School officials may have access to students’ education records only when they have a legitimate educational interest—i.e., the education record is needed for the official to meet his/her/their professional responsibilities.  An obvious example would be an academic advisor who requires a student’s education record in order to recommend which courses the student must take or to counsel the student with respect to his/her/their academic performance.

School officials who serve on the Academic Assessment Coordinating Committee and who review assessment findings and supporting evidence do not have a legitimate educational interest in the education record of individual students. Representatives from accrediting agencies who might wish to review assessment reports and findings most definitely should not have access to student records.

For this reason, when student artifacts are submitted as part of the annual goal report or program review, all identifying information should be scrubbed from the document.  Likewise, if “raw” data are attached as supporting evidence for an assessment finding, all identifying information (students’ names, ID numbers) should be removed.  This is especially important because once the report is published and accessible electronically, it is no longer password protected.
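
For departments attaching spreadsheets of raw scores as evidence, even a small script can reduce the chance that an identifying field slips through. The sketch below is only illustrative; the file and column names (Name, StudentID, Email) are hypothetical placeholders to adjust to your own data.

    import pandas as pd

    # Columns that identify students; hypothetical names, adjust to your spreadsheet.
    IDENTIFYING_COLUMNS = ["Name", "StudentID", "Email"]

    def scrub(in_path, out_path):
        """Write a de-identified copy of a raw-score spreadsheet."""
        scores = pd.read_csv(in_path)
        present = [c for c in IDENTIFYING_COLUMNS if c in scores.columns]
        scores.drop(columns=present).to_csv(out_path, index=False)

    # Example: scrub("rubric_scores.csv", "rubric_scores_deidentified.csv")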

When assessments are course-embedded, departments should also avoid identifying the faculty associated with the scores or assessment findings.  This information may be useful to a department chair or whoever is coordinating the assessments, but it should not be for AACC’s consumption.

As we prepare to report on 2019 - 2020 assessment results, keep in mind these ways to protect student privacy and remain in compliance with an important federal regulation. 

For further information about FERPA, please visit the following site: https://studentprivacy.ed.gov/sites/default/files/resource_document/file/SRO_FAQs_2-5-19_0.pdf.

Wednesday, November 20, 2019

Using and Sharing Assessment Results

Assessment specialists and accreditors agree that doing assessment is simply not enough.  Sharing and using assessment results are probably the most important parts of the assessment process—and, as numerous assessment specialists testify, the most challenging.

A great example of how to use assessment results is from the Department of Business and Economics.  

This department completed a carefully planned assessment during New Student Orientation. Included in this plan was how to use the results.  Business faculty modified the learning objectives they were given by the Office of Student Success for the Faculty Session and created a lively presentation focused on belonging to an academic community and developing strategies for success within this community. At the start of the session, the faculty polled the 39 majors in attendance about what they thought was most important to their academic success.  At the presentation’s close, they asked participants to identify which of those areas they felt they needed the most help developing.

In a series of dynamic emails the following week, faculty discussed their interpretations of the poll findings.  The results and the analysis of results were then shared with the UCC 101 faculty to be used in lesson planning.

What makes this example meritorious is that the faculty planned the assessment by giving advance thought to how they would use the results.  Jillian Kinzie, Pat Hutchings, and Natasha Jankowski assert, “Institutions that effectively use assessment results focus sharply from the beginning of any assessment initiative on how results will be used” (Kuh 61, emphasis added).  

Sharing assessment results via evidence-based storytelling helps institutions communicate results that are meaningful to external audiences.  An added benefit is that audiences are spared from being overwhelmed by mind-numbing data and copious bullet points. It's a great assessment strategy for small departments that have few majors.

An excellent example of using assessment results to tell a story comes from the Department of Athletics.  The athletics staff used multiple methods, direct and indirect, to measure the impact of community engagement on student athletes.  The information was shared by the student athletes themselves in a video that highlights the value of community engagement as an educational goal for sports teams.  This video  (https://ucpioneers.com/sports/2019/9/6/pioneers-in-the-community.aspx) is posted on the Utica Pioneers webpage and accessible to external audiences, including prospective students.    

Simply reporting assessment findings sometimes amounts to little more than bean counting.  Assessment becomes a more meaningful enterprise when results are used to improve educational effectiveness and shared to tell our story.

Works Cited

Kuh, George, et al. Using Evidence of Student Learning to Improve Higher Education. San Francisco: Jossey-Bass, 2015. 51-72.


Wednesday, November 13, 2019

Crossing the Rubricon

By Kevin Pry, Associate Professor of English, Lebanon Valley College

When Julius Caesar took his legions across the Rubicon River into Italy and marched on Rome to change the old Republic forever, he knew there was no turning back--he was committed wholeheartedly to discarding an old set of assumptions and practices for new ones.  My experiences with assessment have put me into a situation that would have felt familiar to one of Caesar's veteran legionaries, for in the struggle to improve our assessment, I have had to push beyond my traditional understanding of how to use rubrics.  I have had to develop a methodology that has given new scope and effectiveness to the way I devise assignments, evaluate student work, and assess the results.  I jokingly call this change, "Crossing the Rubricon."

In the past, I used rubrics to grade major written or oral assignments, using them like checklists to determine whether or not students demonstrated their skill so that I could give specific feedback to them for the future. I was an old grading centurion following the old Roman regulations, more for discipline's sake than as an innovative tactician in the war on ignorance.  But I noticed that the use of conventional rubrics often seemed to penalize students in assignments where I was trying to promote risk-taking and creativity.  For example, in acting classes, there are some techniques and concepts that can only be learned by trying to employ them and failing at one's initial attempts to do them.

This led to Epiphany #1:  One can devise a rubric that puts a positive grade value on how useful a student's unsuccessful attempts at employing a technique were in promoting class discussion and student learning.

Of course, I had always reviewed the results of student learning, analyzing how they met/failed to meet criteria.  Before, I responded to their failures by trying new ways of teaching or discussing bewildering or confusing material.  I hadn't shifted the structure of my tried-and-true assignments because they worked for most students.  When I made the decision to cross the Rubricon and devise detailed rubrics for both large and small assignments, I discovered that the act of thinking in detail about how to use rubrics to generate evidence for course and program assessment led me to zero in on the instructions and prompts for each task, fine-tuning these to line them up with desired outcomes in a far more coherent and obvious manner.  This, naturally, is a major step in improving outcomes.

Thus, Epiphany #2:  Rubric writing is an artistically satisfying task, requiring you to analyze what you really want students to accomplish in an assignment.  Aligning prompts and instructions, criteria for evaluation, and desired outcomes produces important insights into where you should be focusing your energy and technique as a teacher.

With the push to "close the loop," I feared that the mechanics of having to assess multiple courses for multiple objectives might consume too much time and effort.  But the insight that one detailed rubric can be made to assess multiple objectives in one cleverly designed assignment led to Epiphany #3:  That's what they meant by "work smarter, not harder."

Wednesday, October 30, 2019

Closing the Loop: A Strategy to Improve Students’ Writing Skills in an Accounting Class

By Donna Dolansky

I’ve been teaching ACC 401 every spring term since I started at Utica College three years ago.  In this course, we assess students’ communication skills (oral and written) with the expectation that they will be performing at the proficiency level.  Our target is that 100% of students will achieve 80% or higher on a final paper scored using a rubric.

The first year I taught the course (Spring 2017), 87% of students achieved the target and 13% did not.  At the same time, I was serving on a search committee for a biology faculty member, and Larry Aaronson, another committee member, mentioned to me that he assigns a novel each year in his biology class and asks students to blog about the book.  He added that he wrote a paper about how this improved their ability to communicate in writing.

I thought this was a great idea, so in the Spring 2018 semester, I assigned a novel related to auditing and asked students to write responses to a series of questions I raised about the reading.  I also asked them to write any general observations they had about the book.  This provided me with an opportunity to review students’ writing and offer feedback on a low-stakes assignment.

The results?  Students’ writing slightly improved, and, more importantly, students believed that they benefitted from the experience. When asked whether or not they found the assignment useful, the majority of students responded affirmatively, saying they found the reading beneficial and their writing skills improved.  Further, students credited the assignment with developing their critical thinking abilities. 

A number of students acknowledged the importance of reading and writing in “real life work,” and so they perceived this assignment as helping them develop essential skills for today’s professional workforce. 

In the words of one student, “When we were first told of the assignment I was worried I would dread having to read the book because I thought it was going to be boring. I enjoyed this book so much however, that I have recommended it to many of the people I work 

Friday, October 18, 2019

Assignment or Assessment: What's the Difference?

I am a big promoter of course-embedded assessments, what Linda Suskie describes as “course assessments that do double duty, providing information not only on what students have learned in the course but also on their progress in achieving program or institutional goals” (Suskie 27).

Research papers, capstone projects, presentations, evaluations from clinical or internship supervisors—these are authentic assessments that allow us to gather evidence of student performance in our programs without necessarily adding to our workload.  Suskie credits course-embedded assessments with keeping assessment processes manageable, and, because they are developed locally, “they match up well with learning goals” (27). 

While course-embedded assessments are often course assignments, the assignment itself is not an assessment measure, and grades earned are not considered direct, valid evidence of student performance. 

That last sentence reads like assessment doublespeak, so think of the difference this way.  The assignment might be to compose a research paper and then present the work orally to the class.  How the paper and presentation are assessed is the assessment measure.  Typically, papers and presentations are scored by rubric, a scoring guide that provides clear and detailed descriptive criteria for what constitutes excellent work and what signifies an unacceptable performance. The rubric, therefore, is the assessment measure, assuming that what it is measuring aligns to a learning goal.   

Likewise, specific questions on an exam that measure a learning goal may serve as an assessment measure, while the complete examination, like the research paper, is the assignment.

When assignments are confused with assessment measures, the results do not yield information about student learning specific enough to guide continuous improvement.  A typical example of such assessment findings might read, “73% of students earned grades of 80 or higher on the presentation. Target was achieved.”  Such findings report how students performed on a given assignment in a given class on a given day, but they do not indicate where student performance was especially strong and where it was less successful.  Were the presentations well organized?  Were the delivery techniques effective?  Did the students use technology well to convey their messages?

Assessment goes beyond grading and analyzes patterns of student performance.  Good assessment methods help identify these patterns. 
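
To make the contrast concrete, here is a minimal sketch of how criterion-level rubric scores surface those patterns. The criterion names and scores are hypothetical and used only for illustration.

    from statistics import mean

    # Hypothetical rubric scores for one assignment (1 = unacceptable, 4 = excellent).
    scores = [
        {"organization": 4, "delivery": 2, "technology use": 3},
        {"organization": 3, "delivery": 2, "technology use": 4},
        {"organization": 4, "delivery": 3, "technology use": 4},
    ]

    for criterion in scores[0]:
        average = mean(student[criterion] for student in scores)
        print(f"{criterion}: average {average:.1f} of 4")

    # Unlike a single grade, the criterion averages show that delivery lags
    # organization and technology use, suggesting where to focus instruction.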

To learn about the advantages and disadvantages of specific assessment methods, visit  https://www.utica.edu/academic/Assessment/new/resources.cfm  or contact the Office of Academic Assessment in 127 White Hall.


Works Cited

Suskie, Linda. Assessing Student Learning: A Common Sense Guide. San Francisco: Jossey-Bass, 2009.


Wednesday, October 9, 2019

Using Course-Embedded Assessments for Program-Level Purposes


In my previous blog, I wrote, “It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not.” 

If this is true, how might course-embedded assessments—i.e., exam questions, papers, projects, labs, performances—be used for program-level (or even institution-level) assessment?

The History Department offers an excellent example of just how course-embedded assessment translates to the program level. In this major, a capstone project was scored using a rubric in which each item aligned to a program learning goal.  The results were analyzed with respect to which program learning goals students were successfully achieving at the proficiency level.  Since the project was completed over the course of a full year, beginning in the spring semester of the junior year and ending in the spring semester of the senior year, findings might also be analyzed to determine growth, or value-added, in the major.  Finally, the recommendations and action plans made were program-level ones, namely a curriculum revision that exposes students to historical methodology earlier in the program and requires more writing-intensive courses.

Similarly, the English faculty developed a departmental rubric to score student papers in courses required for the major or for Core.  Rubric items align with the program-level learning goals.  The rubric was applied by individual instructors, and results were analyzed to see if student performance improved from 100-level courses to 300/400-level ones.  Majors were compared to non-majors, giving the faculty another way to view the data. Such analyses yielded program-level insights into student learning: the department identified where its majors were performing well and where the faculty may need to be more intentional about developing students’ competencies.   
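
As a rough illustration of the kind of comparison described above, the sketch below averages hypothetical rubric scores by course level and by major status. Every column name and value is invented for the example, not the English department's actual data.

    import pandas as pd

    # Hypothetical rubric results scored on a 4-point departmental rubric.
    results = pd.DataFrame({
        "course_level": [100, 100, 100, 300, 300, 400, 400],
        "major":        [True, False, False, True, True, True, False],
        "thesis":       [2, 2, 1, 3, 3, 4, 3],
        "evidence":     [2, 1, 2, 3, 4, 3, 2],
    })

    # Average rubric scores from 100-level to 300/400-level courses.
    print(results.groupby("course_level")[["thesis", "evidence"]].mean())

    # Compare majors with non-majors on the same criteria.
    print(results.groupby("major")[["thesis", "evidence"]].mean())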

Another example of how course-embedded assessments translate to the program level may be found in Philosophy.  Philosophy faculty applied a departmental rubric to particular assignments in their classes that targeted relevant program learning goals and course objectives.  They measured student performance in courses that introduce the learning goal and compared these findings with results gathered from courses that reinforce the learning.  This assessment plan has allowed the faculty to demonstrate successful student learning in their program and has also indicated a need for improved adjunct faculty development.

Course-embedded assessments are a sustainable, organic approach to assessment and, because the work is generated in a course, students are likely motivated to do well.  That said, if they are going to be used for program assessment, the measures must align with program goals, multiple measures should be used to assess learning at various transition points in the curriculum, and the data should be analyzed with respect to the program’s goals.

Wednesday, September 18, 2019

But We've Always Done Assessment: Course-Level Versus Program-Level Assessment


When faculty argue that they have always done assessment and made changes to improve learning, they are absolutely correct.  The 2018 self-study report completed by the Department of Physics calls this “assessment by rumination,” asserting that it “happens all the time among faculty” and was happening “long before ‘rubrics’ and ‘operational goals’ . . .  barged their way into the lexicon of assessment.”

What faculty are referring to when they make these claims is course-level assessment. Course-level assessment has been happening for centuries.  However, the type of assessment that will inspire public confidence in higher education is not course-level, but program-level or institution-level. These are the assessments that provide the evidence we need to tell our stories to external stakeholders, including prospective students, parents, donors, accreditors, and grant-funding agencies.

Imagine you are promoting your academic program to a group of parents and prospective students.  You want to attract the best and the brightest students in the audience, and, while you aren’t crass enough to say it, you want each family to spend over $100,000 and probably amass significant debt. 

You can tell them what faculty colleagues at every other college they visit will tell them:  we have a great faculty dedicated to excellence in teaching; our curriculum is current, relevant, and exciting; we offer students opportunities for research, internships, and community-engagement; in our program, you are a person, not a number. 

But how might you tell a story that distinguishes your program and offers evidence supporting your claims?   

Findings from course-level assessments won’t help you here.  No parent or prospective student will care that last spring semester, 83% of the students in XXX-course met or exceeded expectations on a quiz.  They might, however, be interested to learn that in UC’s physics program, students in introductory and intermediate-level courses often exceed the national average on a standardized pre/post assessment.  Likewise, they would probably be interested to know that internship supervisors evaluate how well criminal justice students apply what they learned in their program to a real-world setting.  

Program-level assessment is not more important than course-level.  It’s just different—and it serves a different purpose.  It might be tempting to think that a handful of course-level assessments will add up to program assessment, but they do not. Program assessment considers the bigger picture—program-level goals—and is typically outcomes based.

Assessment guru Linda Suskie recommends: “Program-level outcomes are often best assessed with a significant assignment or project completed shortly before students graduate.”

As departments plan for the 2019-2020 assessment cycle, consider how you might use your assessments to tell your program’s story.  You might even find that your assessment efforts become simpler, more organic, and less burdensome than they have been.

Reporting and Analyzing Assessment Findings

  It’s not unusual to see assessment reports where the findings are summarized as such:  “23% met expectations, 52% exceeded expectations, a...