Alternative Assessment Criteria, but How?

There has been a tremendous amount of discussion concerning the goals and practices of Digital Humanities within the setting of undergraduate instruction, and I welcome not only the increased attention but also the marked shift in how these discussions are framed. Interestingly, the question at hand is no longer whether to embrace DH pedagogies but how to best incorporate DH into the undergraduate curriculum.

Certainly, given SIF’s mission to consider course matter from new vantage points and to develop innovative pedagogical practices, the ‘how to’ premise is far more appealing to us than the ‘whether or not’ when it comes to a curriculum informed by the 21st century. Still, there is a lot of reservation, especially among faculty, about going digital with their undergraduate classes. One of the main reasons is that faculty members, while not opposed to creating course content and assignments that engage students with digital artifacts, find it difficult to develop suitable assessment criteria. After all, at the end of a semester, instructors need to assign grades based on students’ performance.

Thus, one area where we might apply the lever is the assignment rubric, the ubiquitous and guiding paradigm of student assessment. The rubric seems to be the place where the prospect of assigning creative and organic digital projects clashes with the idea of assessing student work against standardized metrics. While rubric-based assessment can certainly be seen as a major innovation when it emerged in the 1960s, its pervasiveness in higher education today creates a dilemma for faculty members (myself included) who would like to go digital with their classes but worry, and rightly so, about assessment.

Last week, I had the pleasure of attending this month’s Digital Pedagogy Meetup, which provides a setting for educators to share assignments, methods, theories, and resources that focus on student engagement and learning in the Digital Humanities. At the meeting, David Morgen, coordinator of the writing program at Emory, discussed alternative assessment strategies in the context of DH course content. He calls for ‘dynamic criteria mapping,’ a new, alternative assessment model meant to overcome the seeming gridlock (for lack of a better term) of the rubric as a set of decisions designed to standardize what is valuable and what is not.

‘Dynamic Criteria Mapping,’ or DCM for short, follows a bottom-up approach to assessment rather than the top-down structure prescribed by the rubric. That means assessment criteria for a project are only developed after students have had time to create a working draft, and the criteria for a given project are discussed with the students. While such an approach may very well help students think through what counts as a successful project and what does not, it requires considerable confidence from the instructor, who must allow students to work on projects without knowing in advance how the finished product will be assessed. It also requires the instructor to be relatively tech-savvy, so as to anticipate the kinds of projects students might propose. David Morgen suggests that instructors meet regularly and share insights to stay current. He argues that rubrics not only make it more difficult for students to find their own paths of learning, but also prevent instructors from assessing student development over time. DCM, by contrast, would provide a promising avenue for gathering information on real practice.

While I’m very interested in the idea of active learning, and while I don’t dismiss the inherent problems with rubric-based assessment, I’m still not sure how this might work in practice. For example, I often find myself lamenting the limited class time I have with students to discuss content, and I worry that involving students more in the development of assessment criteria might take up too much of it. Maybe this would require decreasing the overall number of assignments during the semester? Or maybe a balanced approach, with rubric-based assignments at the beginning of the semester and a more organic development of assessment criteria for final projects? I’m really not sure, but what is clear to me is that class logistics also play a role. Assessment has to remain manageable.

So, if you’ve read this far, I’d really like to use this blog space to start a conversation. Do you have experiences to share concerning the collaborative development of assessment criteria? How has it played out for you? How have your students responded? In more general terms, what’s your view on the rubric? What other approaches have you found helpful in promoting active learning?

I look forward to your comments!

Thomas
