Crowdsource Grading on Digital Assignments

This semester, I’m trying new assessment methods with my graduate students: contract grading and crowdsource grading (for major projects). For me, crowdsource grading is not just about students making decisions about the quality of their peers’ work; it’s also about working together as a class to decide what in fact constitutes quality work. (Or in our case, what constitutes an “A,” “B,” and “C” grade on major projects.)

I’ve been collaboratively creating assignment criteria with my digital writers for a couple of years now. My method is informed by the work of **Chanon Adsanatham, who leads students through a number of scaffolded activities that ultimately result in student-generated grading criteria, and ***Jody Shipka, who asks students to turn in a “Statement of Goals and Choices” (SOGC), a statement that documents their rhetorical decisions, along with their digital projects. My method is best described as Adsanatham+Shipka+genre analysis. I ask students to collaboratively conduct a simplified version of a genre analysis (what is successful/effective about this project? what is ineffective/unsuccessful about this project?) as a way to generate assessment criteria. Based on our collaborative work, I design a rubric, share it with the students, and make any adjustments they feel are necessary. My students also turn in a reflection essay (or what Shipka would call an SOGC) along with all digital projects.

For this digital project, I opted not to do the Tanya method in class. Rather, I asked students to make a list of assessment criteria, and from these lists we’ll generate the “final” version together on Thursday in class.

As I’ve been reviewing the articles I assigned this week, I keep coming back to my learning goals and outcomes for this class. One stated outcome is “Students will possess knowledge about how theory and concepts related to digital writing (as studied and taken up in the discipline of Composition/Rhetoric) informs the production of digital writing.” (I should have written analysis and circulation as well!) Over the past four weeks, we’ve read, discussed, and pulled apart various theories and concepts. If we thought about how all of these theories and concepts inform the production of digital writing all at once, our heads would explode. If we, as digital writers, had to constantly return to, and consider, everything we’ve learned about, say, the function of images while we’re composing, our heads would explode.

In both of my DW courses, I keep reminding students that writers make choices. Digital writers make choices. We think about how we might best achieve our communicative purpose with regard to our audience. We think about how we can achieve particular effects, how we can convey meaning, and how we can utilize modalities to make our work accessible to a wide range of people, for example.

This brings me back to assessment and the question I’d like us to ask ourselves: what theories, concepts, or themes might inform the choices we make as digital writers while composing this project? When we discuss assessment criteria for this project, I will offer up one way we can think about assessing it: focus our attention on one or two concepts, themes, or theories; allow them to guide us in composing the project; and use them to determine what constitutes quality work (in other words, assessment criteria). (I think it’s important for us to really think about the fact that we are creating moving images, not still images. Much of the scholarship we’ve read focuses on still images, and there’s a big difference between the two.)

Below are some possibilities (many of which overlap):

1. Audience involvement, identification, and role in the meaning-making process
2. Modality affordances (individually and collectively) and their meaning-making capabilities
3. Intertextuality (the ways images work in relation to each other: reference, difference, similarities)
4. The psychology of images (or in our case, moving images) and how such material functions as a persuasive mechanism
5. The use and function of persuasive appeals (ethos, pathos, logos)
6. The relationship between text and image
7. Design elements (Kress and van Leeuwen’s description of layout)
8. The use of rhetorical strategies often used in alphabetic text, such as metonymy, synecdoche, and amplification
9. The use of pictorial and non-pictorial icons, and their role in the meaning-making process

This assessment method calls for a reflection essay, one that will allow students to really engage with the chosen concept, theme, or theory in relation to their piece of digital writing. It’s also an opportunity for them to talk about how they executed, or were unable to execute, the vision they had for their project (due to constraints like working with fair use, Creative Commons, and/or public domain material). So I imagine the assessment criteria will focus on both the video and the reflection, or students may even just want the reflection to be graded.

I’m very interested to hear what other folks in the class came up with. I’m very much looking forward to Thursday (god willing it doesn’t snow for the umpteenth time!).

**Chanon Adsanatham. “Integrating Assessment and Instruction: Using Student-Generated Grading Criteria to Evaluate Multimodal Digital Projects.”
***Jody Shipka. “Negotiating Rhetorical, Material, Methodological and Technological Difference: Evaluating Multimodal Designs.” College Composition and Communication 61.1 (2009): 343-366.

4 thoughts on “Crowdsource Grading on Digital Assignments”

  1. M.P. Carver says:

    What follows is long-winded and overly opinionated, but I wanted to share what’s been spinning around my head whenever the topics of crowdsourcing grading and grading contracts come up:

    I’m not against crowdsource grading/grading contracts necessarily (as you say in your post, “it’s also about working together as a class to decide what in fact constitutes quality work”), but I find that rather than making grading less stressful, they end up making grading the center of the class (like the link above to an article, “No Grading, More Learning,” that’s /entirely/ about grading and assessment). In my opinion, learning is already too intricately linked with the idea of grading as the metaphorical carrot, and a more complicated group method of grading just emphasizes that by giving students more agency in the process. I think crowdsourcing /evaluation/ is absolutely fantastic, both as a learning tool for the evaluators and to generate feedback for the evaluatee, but grading is best left out of it. Crowdsource grading /may/ displace the stress of grades for some students, but I think in a lot of cases it can also encourage surface evaluation. I will say honestly that I would hold back from critiquing another person’s work in class if my comments had the power to affect them negatively in terms of grades, and I thought those grades might matter to them. If it were feedback I thought would be valuable to improve their work (esp. creative work), I might try to give it outside of class, or I might let it pass.

    Full disclosure: I was spoiled as an undergrad; I went to Brown University, where there are no core requirements, any class can be taken pass-fail (many courses even require pass-fail grading), and there are no GPAs. I’ve found that having to consider grading now as a graduate student has been a hassle, especially since I’m in the odd position of having no professional goals that will be affected by having or not having an MA. If I didn’t have to maintain a certain GPA to keep an assistantship, I’d do my best to not let grading concerns even cross my mind. All of that has had a big influence on my thoughts about the matter.

    • Dan says:

      M.P., I appreciate what you have to say here, and I must say, Brown just sounds awesome!

      From the perspective of a high school teacher, the amount of stress grades put on students is absolutely absurd. My best students claw and fight for every point in every circumstance. What is worse, so do their parents. I had an AP parent needle me about why their kid is not allowed extra credit in an AP class. I could not believe it. I actually laughed when I got the email.

      I agree that the two are intrinsically linked in some ways, but maybe it is time we try to break that link a bit. Being a student in today’s education system, unfortunately, is about numbers and statistics, quantifying knowledge. Students do not see the benefit in learning something unless it will be useful toward those digits. I have had students tell me they will not do something unless it “counts.” I’d like them to define “count.”

      I am starting to sound sour, and that is not my intention. I wish grades were not the be-all and end-all, and I am seeing crowdsource grading as a way to perhaps alleviate some of the stress students may feel.

  2. kateartz says:

    While generally I do like the idea of grading criteria that can be flexible and are based on student input, I think I have to agree with MP. It feels disruptive and halting to the composing/creating process to have to establish your own grading criteria before (or during) the composition process. I don’t see a great advantage to establishing specific criteria in advance of a project; it only seems to invite (or even demand) students to design their work exclusively for the purpose of meeting those criteria (i.e., writing for the teacher/grade). To ask someone to establish their own criteria also feels like a recipe for writer’s block to me. If I’ve chosen to be evaluated on a particular aspect of my assignment, I am now going to be about 10x more anxious about that very aspect. The expectations feel much higher, I think, when the student is responsible for setting their own bar. Again, I feel like the timeline is important here, because choosing an area to be evaluated on AFTER the fact seems like a really effective way to get constructive feedback without disrupting your ability to compose and create.
