Wednesday, November 29, 2017

Involving Students in Assessment

By Ann Damiano

In her keynote address at the Assessment Network of New York conference (April 2017), Natasha Jankowski, Director of the National Institute for Learning Outcomes Assessment, challenged participants to develop assessment processes that are student-centered. She concluded that assessment is something we should do with students, not something that is done to students.

Multiple stakeholders should be involved in our assessment efforts, particularly when it comes to communicating and interpreting results, as well as generating plans based on these results.  Students are our most important stakeholders, and so their involvement in the process is imperative.

One way to involve them is to include students in the dissemination plan for institutional survey results.  Findings from NSSE, the Noel-Levitz Student Satisfaction Inventory, and even the Student Opinion on Teaching (SOOT) might be shared with student leaders.  If warranted, students could collaborate with personnel in Academic and Student Affairs to create plans or make recommendations based on the survey results.  For example, if NSSE findings indicate that fewer than 60% of seniors perceive the College contributed to their understanding of people different from them, students might propose ways the institution could improve its curricular and co-curricular offerings so that we are more successful at achieving this tenet of our mission.

When assessing student learning goals, we should not assume students share the same operational definitions as their faculty.  The possibility that they do not underscores the importance of getting their input on what results mean, and likewise highlights the importance of using multiple methods to assess a single goal.

Most recently (and at my previous institution), I assembled two student groups to review results related to integrating knowledge, problem-solving, quantitative reasoning, and intercultural competence.  For each of these learning goals, the findings from diverse sources either conflicted with one another or indicated that no matter what “improvements” faculty made to the curriculum, we were still not achieving the desired outcomes.  The students brought a different perspective to the discussion than that articulated by the three faculty groups that reviewed the data.  Important insights from the students included the following:

  • Students defined “integrating knowledge” as applying classroom learning to real-life situations, whereas faculty used it to refer to applying what was learned in one course to another;
  • Problem-solving is best developed in the co-curricular experience, where students are often forced to derive solutions independently, as opposed to in the curricular experience, which is much more structured and faculty-directed;
  • While the college may provide numerous offerings related to inclusion and diversity, a lack of diversity on the faculty, combined with pedagogies that do not promote inclusion and the absence of global perspectives in courses throughout the curriculum, potentially contributed to students not achieving the desired outcome related to intercultural competence.

The students’ interpretations of assessment findings dared the faculty to make improvements that challenged them in ways their own conclusions had not.  Rethinking one’s pedagogy, for instance, requires much greater effort and imagination than adjusting course requirements or modifying an assessment instrument.  Yet new pedagogical approaches may be necessary if we are going to help students achieve these outcomes.


Collaborating with students on assessment results expands our understanding of what the results might mean.  As one faculty member noted, including students in our processes “sends a message that we do this for the students, that they’re the major stakeholder, and they literally have a seat at the table.”  

Wednesday, November 1, 2017

The "Apprenticeship Model" for Enhanced Student Learning

By Steven M. Specht

            Like most students who choose to major in psychology, I enjoyed my abnormal psychology course as an undergraduate and, at the time, didn’t really understand why I needed to take statistics. Also, like most students, I could have completed my bachelor’s degree by simply taking the required courses without being directly involved in conducting research or doing any kind of internship. But because I took my schoolwork seriously and expended the time and effort to write and revise the papers I submitted for classes, my professors noticed. After I earned an “A” in Research Methods -- one of the most challenging courses I took in college -- Dr. James McCroskery asked if I would be interested in doing research with him the following semester. I was thrilled and jumped at the opportunity. Our research examined the relationship between the Type-A behavior pattern and self-reports of minor body symptoms (e.g., headaches, skin rashes, insomnia). Our work eventually led to a presentation at the annual meeting of the Eastern Psychological Association in 1982.
            My experience as an undergraduate research assistant for one of my professors became the springboard to my work as a research assistant at Colgate University and my eventual admission into the doctoral program in psychobiology at Binghamton University.
            In fact, the “apprentice model” is the tradition of scholarly training in the empirical sciences. My doctoral advisor typically ran her lab with four or five graduate student “apprentices” and a cadre of undergraduate research assistants. In addition to publishing papers while in graduate school, we were expected to present our research at the annual meetings of the Society for Neuroscience and the Eastern Psychological Association (as my undergraduate advisor had done). This all makes sense when you consider that empirical research is not simply learning a collection of facts, but rather an intensive scholarly enterprise that requires active involvement in the processes of science.
            I am proud to say that I have continued the tradition of the apprenticeship model throughout my career at both Lebanon Valley College and Utica College by inviting promising students to get involved with the research I have conducted over the years. Their involvement affords them opportunities to learn the process of research and to present their work at local, regional, and national conferences. And, as they did for me, these learning experiences typically transform students’ lives by making them more competitive graduate school candidates or potential employees.
            But since this is an assessment blog, I suppose I should mention something about assessment. Assessing programmatic outcomes from an apprenticeship model is fairly straightforward and requires no rubric. Although the gold standard of publication is often elusive, the silver standard of conference presentations is easy to document and is generally recognized as an externally validated and valued accomplishment. Virtually all departments are well aware of these standards. A potential problem arises when administrators don’t realize how useful these data are for the institution in terms of assessment (and potential marketing and advancement).
            Assessing the outcomes that students gain individually from being part of a research “apprentice” program is perhaps more challenging.  The face validity of such involvement seems apparent. For anyone who has worked with students in this capacity, the transformation of the students seems “obvious.” It might seem reasonable, however, to compare the graduate school acceptance rates or employment rates of students who were apprentices during their undergraduate years with those of students who were not involved. Such data would be confounded, though, by differences in levels of “pre-apprentice” motivation or initiative. It is also typical that students who work closely with a faculty member obtain more impressive and informative letters of recommendation.

            But now perhaps we have gone too far down the assessment path. It is traditional in the empirical sciences (and other disciplines) to provide opportunities for students to become involved with active learning that transforms them as scholars and citizens. Hmmm, “tradition, opportunity, transformation”… coincidentally, that’s the Utica College slogan that preceded “Never Stand Still”.

Reflection as a Means of Measuring the Transformative Potential of Higher Education

Several years ago (and at another institution), I attended a meeting where a faculty member was presenting a revised general education curriculum...