What is Peer Review?
Peer review refers to the practice of giving and receiving critiques in order to improve something, such as a text, application, or service. Research, summarized in the appendix below, has found that peer review benefits both the quality of the work under review and the skills of those who review it.
Peer review not only minimizes bias and improves the quality of work but also signals the authority of the evaluated text. Because individual judgment is inherently subjective and susceptible to personal bias, the process incorporates multiple perspectives to counterbalance any single interpretation. Reviewers collectively challenge one another's biases, producing a more comprehensive and objective assessment. The resulting peer-reviewed work, having stood up to rigorous scrutiny, gains a measure of authority in its field.
The peer review process manifests differently across fields, each with its own considerations. In academic research, it is used to validate the quality of scholarly articles before publication: experts in the relevant field scrutinize research methodologies and findings to ensure they are robust and reliable. In the corporate world, peer review might involve colleagues or superiors evaluating a project or proposal, offering feedback to improve the final output and to ensure it aligns with company standards and with the needs and expectations of the client. In software development, it takes the form of code review, where fellow developers inspect code for errors or inefficiencies. In educational settings, students engage in peer review to critique and learn from each other's work. Despite these varying forms, the core of peer review remains consistent: using collective wisdom to counterbalance individual bias, improve quality, and convey authority.
Types of Peer Reviews
The type of peer review, the number of people on a review panel, and the goals of the writer seeking the review depend in part on the stakes of the decision at hand.
In instances where the outcome has significant implications, peer review tends to involve multiple anonymous reviewers. For example, when admissions offices at colleges and universities screen applications, they typically ask multiple individuals to review, rank, and vote on them. Similarly, when undergraduates submit an honors thesis, it is common to have three reviewers. At the graduate level, across academic disciplines, multiple reviewers are typically involved at critical milestones such as the thesis proposal or defense.
Collaborative Peer Review
Peers exchange feedback as equal collaborators, working together to improve a project.
Goal: Enhancing the quality and effectiveness of the project through collaboration and mutual learning.
Formative Peer Review
Provides feedback and guidance from one or more critics to support the writer’s growth and learning.
Goal: Facilitating skill development and improvement through constructive feedback.
Summative Peer Review
Evaluates the final product against predefined criteria or standards.
Goal: Making decisions regarding publication, grading, funding, or other significant outcomes based on the overall quality and alignment with standards.
Directed Peer Review
Reviewers follow specific guidelines or processes when providing feedback or assessing work.
Goal: Producing consistent, focused feedback by structuring the review process.
Examples of Directed Peer Reviews
- Guide to Structured Revision — this is an example of a structured approach to substantive revision. Teachers or managers could ask you to follow a structured approach like this one in order to take the emotion out of revision and editing.
- Edit for Clarity — this structured exercise focuses on checking texts for clarity and brevity.
Open Peer Review or Transparent Peer Review
Reviewers openly identify themselves when providing feedback or evaluating others’ work.
Goal: Promoting transparency, accountability, and direct engagement among peers in work settings.
Problems with Peer Review
Complaints from Students:
Bias and Inconsistency:
- Concerns about reviewers holding personal biases that can influence evaluations, potentially favoring or discriminating against certain methodologies, perspectives, or individuals.
- Frustration with the inconsistency in the quality and rigor of reviews, which hinders students’ ability to assess their work accurately.
Lack of Constructive Feedback:
- Complaints about the absence of actionable suggestions for improvement in peer reviews.
- Frustration when reviewers solely focus on pointing out flaws without providing clear guidance for enhancement.
- Perception that engaging with peer feedback might lead to going down a “rabbit hole” without significant improvement to their grades.
Desire for a Good Grade:
- Priority given to receiving a good grade on assignments.
- Concerns that engaging in peer review may consume time without directly contributing to improved grades.
- Questioning the value of peer review if it does not have a tangible impact on their overall academic performance.
Perception of Unfairness:
- Feeling that the peer review process may lack fairness, with higher expectations for certain institutions or individuals.
- Doubts about equal opportunities for all scholars due to potential biases in the evaluation process.
Feeling of Disparity for Better Writers:
- Better writers in the class may feel like they are contributing more than they are receiving in peer review.
- Concerns that their efforts are not equally matched by their peers’ contributions.
- A common response is to encourage these students to take a long-term view: engaging in peer review lets them practice their reviewing skills, enhance their ability to provide constructive feedback, and develop important interpersonal competencies.
Complaints from Teachers:
Time and Workload:
- Challenges in managing the time-consuming nature of the peer review process within limited timeframes.
- Potential delays in providing timely feedback due to lengthy review timelines.
- Impediments to collaboration opportunities and the timely dissemination of research findings.
- Difficulties in ensuring an equitable distribution of reviews among students.
- Balancing the workload, especially in larger classes or across multiple sections.
- Ensuring each student receives a fair number of reviews and feedback.
Training and Consistency:
- Concerns about varying levels of expertise among student reviewers.
- Inconsistencies in evaluations and insufficient feedback due to inadequate training or guidelines.
- Challenges in relying on peer reviews as a reliable assessment tool.
Complaints from Professional Writers:
Delays in Publication:
- Reliance on timely publication for reputation building and career advancement.
- Frustration with potential delays caused by lengthy peer review timelines.
- Negative impact on collaboration opportunities and the overall impact of the research.
Lack of Expertise:
- Doubts about the expertise of reviewers, particularly in specialized or niche fields.
- Concerns that evaluations may not accurately reflect the quality or significance of the work.
- Desire for knowledgeable reviewers who can provide insightful feedback.
Revision Fatigue:
- Mental and emotional exhaustion caused by repeated revisions requested by multiple reviewers.
- Overwhelm stemming from the iterative nature of the peer review process.
- Potential hindrance to progress on other projects due to extensive revisions.
Why Does Peer Review Matter?
Peer review matters because it serves several functions:
- Gatekeeper for knowledge claims:
- Experts critically review research, methods, results, recommendations, and conclusions.
- Ensures meticulous vetting of new knowledge claims.
- Upholds integrity and credibility of scholarly discourse.
- Maintains standards of the scientific community.
- Substantive role during composing: feedback from peers can inform rhetorical analysis and shape a text while it is still in draft.
- Popular practice in academic and workplace contexts:
- Students conduct peer reviews in high school and college, especially in composition, creative writing, professional writing, and technical writing courses.
- Researchers undergo peer review when submitting articles, books, or grant proposals:
- Ensures alignment with readers’ expectations
- Validates scholarly rigor
- Contributes to knowledge advancement.
What should I do if the paper has loads and loads of errors?
When working on an article undergoing peer review, prioritize the bigger-picture issues rather than focusing solely on editing for minor errors. Direct your attention to revision concerns such as the thesis, overall structure, voice, tone, and substance of the article. Consider the following steps:
- Evaluate the Structure:
- Examine the article’s organization and flow. Ensure that the sections and paragraphs are logically sequenced and connected. Make revisions to clarify the main points and enhance the overall coherence of the article.
- Refine Voice and Tone:
- Pay attention to the consistency and appropriateness of the article’s voice and tone. Aim for a professional and engaging tone that aligns with the intended audience and purpose of the article. Revise sections where the voice may be inconsistent or where the tone needs adjustment to better convey the intended message.
- Strengthen the Thesis or Argument:
- Review the central thesis or argument of the article. Make sure it is clearly stated and well-supported by evidence or research. Revise and refine the thesis or argument as needed to ensure its strength and relevance.
- Enhance Substance and Supporting Evidence:
- Evaluate the substance and depth of the article’s content. Ensure that the arguments are well-developed, supported by credible evidence, and relevant to the topic. Add or strengthen supporting evidence where necessary to bolster the article’s overall substance.
- Seek Feedback from Peers or Mentors:
- Share the article with trusted colleagues, mentors, or experts in the field. Request their insights on bigger-picture issues, such as clarity, coherence, and the strength of the arguments. Use their feedback to guide further revisions and improvements.
Once you have addressed these broader concerns and have a well-structured and substantively strong article, you can then shift your focus to editing for grammar, punctuation, and other minor errors. Prioritizing revision and addressing the bigger picture issues will ensure that your article is compelling, coherent, and effectively conveys your ideas to readers.
What Is a Peer-Reviewed Article? Book? Website?
Are Peer-Reviewed Works More Authoritative Than Non-Peer-Reviewed Works?
Works that have been peer reviewed are generally more authoritative than works that are self-published. In academic and scientific journals, editors give reviewers guidelines to follow when critiquing submissions. For instance, reviewers may be asked to give feedback
- on whether the document is informed by existing scholarship, especially past articles published in the journal
- on the research methodology
- on the style of writing.
It’s not uncommon for authors to need to revise articles multiple times before they pass peer review and the article is published.
That said, as addressed above, there are different types of feedback, and not all feedback is created equal. You cannot necessarily assume that a peer-reviewed text is authoritative. There are instances, for example, when reviewers had a conflict of interest or some sort of bias that interfered with their interpretation and critique. Over time, many peer-reviewed articles have been debunked, so while peer review is a signal of authority, it is not a guarantee.
When During Composing Is Peer Review Advised?
Traditionally, writers, speakers, and knowledge workers wait to seek peer reviews until their texts are fairly well developed. When possible, however, you may find it strategic to seek feedback sooner rather than later. After all, peer review can inform rhetorical analysis (especially audience awareness) and rhetorical reasoning.
How Can I Get the Most from Peer Review as an Author?
- Embrace a Growth Mindset:
- Approach peer review with an open mind and a willingness to learn and improve. Recognize that constructive criticism and diverse perspectives can help refine your work and elevate its quality.
- Clearly Communicate Expectations:
- When submitting your manuscript, provide guidelines or a brief note to reviewers outlining specific aspects or areas you would like them to focus on. Clearly stating your expectations can guide reviewers and ensure their feedback aligns with your objectives.
- Be Open to Critique:
- Recognize that peer review is meant to provide constructive feedback aimed at strengthening your work. Approach reviewer comments with a receptive attitude, even if they challenge your initial assumptions. Consider each suggestion carefully and evaluate how it can enhance your manuscript.
- Respond Thoughtfully:
- When addressing reviewer comments, respond thoughtfully and respectfully. Provide detailed explanations for any revisions made and highlight how you have addressed their concerns. Clearly articulate the changes made to demonstrate your attentiveness to their feedback.
- Seek Clarification:
- If any reviewer comments are unclear or require further explanation, don’t hesitate to seek clarification. Engage in a respectful dialogue with reviewers to ensure you fully understand their suggestions and can implement them effectively.
- Maintain Professionalism:
- Maintain a professional demeanor throughout the peer review process. Avoid becoming defensive or dismissive of critical feedback. Remember that constructive criticism is intended to help you improve, and maintaining a professional approach will strengthen your reputation among reviewers and editors.
- Engage in Peer Review Activities:
- Actively participate in peer review activities within your academic or professional community. By engaging in reciprocal peer review, you contribute to the development of a supportive environment and gain insights into the reviewing process from the perspective of a reviewer.
- Learn from Other Reviews:
- Take the opportunity to learn from the reviews of others. Reading and analyzing peer reviews of similar manuscripts can provide valuable insights into common pitfalls, recurring feedback themes, and effective strategies for revision.
- Continuously Refine Your Work:
- Peer review is an iterative process. Use the feedback received to refine and enhance your work continually. Embrace the mindset of ongoing improvement, recognizing that each round of peer review offers an opportunity to strengthen your manuscript.
- Celebrate Success and Learn from Rejections:
- Whether your manuscript is accepted or rejected, view each outcome as a learning experience. Celebrate successes, but also reflect on rejection to identify areas for improvement. Use the feedback provided to refine your work for future submissions.
Appendix – Empirical Studies on Peer Review
Below are some informal notes from research on peer review. (My apologies that these are not up to date.)
Chang, C. Y-h. (2016). Two decades of research in L2 peer review. Journal of Writing Research, 8(1), 81-117.
One hundred and three (N=103) peer review studies contextualized in L2 composition classrooms and published between 1990 and 2015 were reviewed. To categorize the constructs in these studies, the researcher used Lai's (2010) three Ps dimensions (perceptions, process, and product): perceptions are beliefs and attitudes about peer review; process refers to the learning process or implementation procedures of peer review; product is the learning outcomes of peer review. A thematic analysis of the studies' constructs showed that perception studies examined learners' general perceptions/attitudes, Asian students' perceptions/attitudes (cultural influences), and learner perceptions of peer feedback in comparison to self and/or computerized feedback. Process studies discussed the effects of training, checklists/rubrics, writer-reviewer relationships, the nature of peer feedback, communicative language, timing of teacher feedback on peer feedback, grouping strategies, and communicative medium. Product research, on the other hand, investigated peer feedback adoption rates and the ratio of peer-influenced revisions, effects of peer review on writers' revision quality, effects of peer review on reviewers' gains, and effects of peer review on writers' self-revision. In light of this review, research gaps are identified and suggestions for future research are offered.
Effectiveness of Guided Peer Review of Student Essays in a Large Undergraduate Biology Course
International Journal of Teaching & Learning in Higher Education, 2015, Vol. 27, Issue 1, p. 56-68
Abstract: Instructors and researchers often consider peer review an integral part of the writing process, providing myriad benefits for both writers and reviewers. Few empirical studies, however, directly address the relationship between specific methodological changes and peer review effectiveness, especially outside the composition classroom. To supplement these studies, this paper compares the types of student commentary received between a control and a guided rubric in an introductory biology course in order to determine whether guided questions augment the amount of "feedforward" responses: questions and suggestions that consider the next draft and are reported to be more beneficial than feedback. Results indicate that guided rubrics significantly increase "feedforward" observations and reduce less useful categories of feedback, such as problem detection and meanness. Differences between rubrics, however, had limited influence on student attitudes post-peer review. Consequently, potential strategies for further improving student ratings and keeping mean commentary to a minimum are discussed.
The Effect of Peer Review on Student Learning Outcomes in a Research Methods Course
Teaching Sociology, v43, n3, p. 201-213, Jul 2015.
Abstract: In this study, we test the effect of in-class student peer review on student learning outcomes using a quasi-experimental design. We provide an assessment of peer review in a quantitative research methods course, which is a traditionally difficult and technical course. Data were collected from 170 students enrolled in four sections of a quantitative research methods course over two semesters: two in-class peer review sections and two sections that did not incorporate in-class peer review. For the two sections with peer review, content scheduled for the days during which peer review was used in class was delivered through the online course management system. We find that in-class peer review did not improve final grades or final performance on student learning outcomes, nor did it affect performance differences between drafts and final assignments that measured student learning objectives. Further, it took time away from in-class delivery of course content in course sections that used in-class peer review. If peer review is utilized, we recommend it be assigned as an out-of-class assignment so it does not interfere with in-class teaching.
Examining the role of feedback messages in undergraduate students' writing performance during an online peer assessment activity
The Internet and Higher Education, April 2015, 25:78-84
Abstract: Although the effectiveness of online peer assessment for writing performance has been documented, some students may not perceive or attain learning benefits due to receiving certain types of feedback messages. This study therefore aims to explore the role of feedback messages in students' writing performance. The feedback messages given by 47 undergraduate students in a three-round online peer assessment review were first examined by a series of content analyses regarding the affective, cognitive, and metacognitive aspects of the comments. The influences of the feedback messages on the students' performance progression during the three rounds of review were then explored. The results show that cognitive feedback (e.g., direct correction) was more helpful for the students' writing learning gains than was affective feedback (e.g., praising comments) or metacognitive feedback (e.g., reflecting comments). However, this effect on the students' performance progression decreased in the last stage of the activity.
How Quality Is Recognized by Peer Review Panels: The Case of the Humanities
Michèle Lamont, Joshua Guetzkow
Research Assessment in the Humanities
This paper summarizes key findings of our research on peer review, which challenge the separation between cognitive and non-cognitive aspects of evaluation. Here we highlight some of the key findings from this research and discuss its relevance for understanding academic evaluation in the humanities. We summarize the role of informal rules, the impact of evaluation settings on rules, definitions of originality, and comparisons between the humanities, the social sciences and history. Taken together, the findings summarized here suggest a research agenda for developing a better empirical understanding of the specific characteristics of peer review evaluation in the humanities as compared to other disciplinary clusters.
Assessment & Evaluation in Higher Education, Volume 41, Issue 2, 2016
Peer assessment in the digital age: a meta-analysis comparing peer and teacher ratings
Hongli Li, Yao Xiong, Xiaojiao Zang, Mindy L. Kornhaber, Youngsun Lyu, Kyung Sun Chung & Hoi K. Suen
Abstract: Given the wide use of peer assessment, especially in higher education, the relative accuracy of peer ratings compared to teacher ratings is a major concern for both educators and researchers. This concern has grown with the increase of peer assessment in digital platforms. In this meta-analysis, using a variance-known hierarchical linear modelling approach, we synthesise findings from studies on peer assessment since 1999, when computer-assisted peer assessment started to proliferate. The estimated average Pearson correlation between peer and teacher ratings is found to be .63, which is moderately strong. This correlation is significantly higher when: (a) the peer assessment is paper-based rather than computer-assisted; (b) the subject area is not medical/clinical; (c) the course is graduate level rather than undergraduate or K-12; (d) individual work instead of group work is assessed; (e) the assessors and assessees are matched at random; (f) the peer assessment is voluntary instead of compulsory; (g) the peer assessment is non-anonymous; (h) peer raters provide both scores and qualitative comments instead of only scores; and (i) peer raters are involved in developing the rating criteria. The findings are expected to inform practitioners regarding peer assessment practices that are more likely to exhibit better agreement with teacher assessment.
Sun, D. L., Harris, N., Walther, G., & Baiocchi, M. (2015). Peer Assessment Enhances Student Learning: The Results of a Matched Randomized Crossover Experiment in a College Statistics Class. PLoS ONE, 10(12): e0143177. doi:10.1371/journal.pone.0143177
Abstract: Feedback has a powerful influence on learning, but it is also expensive to provide. In large classes it may even be impossible for instructors to provide individualized feedback. Peer assessment is one way to provide personalized feedback that scales to large classes. Besides these obvious logistical benefits, it has been conjectured that students also learn from the practice of peer assessment. However, this has never been conclusively demonstrated. Using an online educational platform that we developed, we conducted an in-class matched-set, randomized crossover experiment with high power to detect small effects. We establish that peer assessment causes a small but significant gain in student achievement. Our study also demonstrates the potential of web-based platforms to facilitate the design of high-quality experiments to identify small effects that were previously not detectable.
Impact of Peer Review on Creative Self-efficacy and Learning Performance in Web 2.0 Learning Activities
Journal of Educational Technology & Society, Vol. 19, No. 2
Abstract: Many studies have pointed out the significant contrast between the creative nature of Web 2.0 learning activities and the structured learning in school. This study proposes an approach to leveraging Web 2.0 learning activities and classroom teaching to help students develop both specific knowledge and creativity, based on Csikszentmihalyi's systems model of creativity. The approach considers peer review the core component of the Web 2.0 learning activities, with the aim of engaging students in the creative learning paradigm. To gain a better understanding of the impact of such an approach on students' confidence and performance, this study gathered and analyzed the works developed by 53 sixth graders in a Web 2.0 storytelling activity, as well as details of their creative self-efficacy. The results show that the students who experienced peer review using a set of storytelling rubrics produced significantly more sophisticated stories than those who did not. Furthermore, the peer review did not exert a significant negative influence on the students' creative self-efficacy. It was also found that the experimental group's (students experiencing the peer review) creative self-efficacy consistently reflected their performance, while the control group's did not. Such results support the idea that the peer review process may help students build a sophisticated level of reflection upon their creative work in Web 2.0 learning activities.
Using a Peer Review Approach to Promote Online Learning in Higher Education
SITE 2016, Savannah, GA, United States, March 21-26, 2016
Abstract: The peer review approach has been studied for some time in traditional classroom settings and is recognized as an effective instructional technique for enhancing students' learning and critical thinking. Research studies show that the peer review approach is often utilized in writing-intensive courses, and a few studies have indicated that it can be effectively implemented within teacher preparation programs (Bentley & Brown, 2014; Price, Goldberg, Patterson, & Heft, 2013). This research study will utilize the peer review approach in three inclusive and multidisciplinary courses in two different states within the United States and investigate the approach on an online platform in online or distance education. The research team will first demonstrate evidence-based examples of using the peer review approach, followed by the research design and preliminary results. Students' perceptions of using this approach in distance education will be included and discussed. The peer review approach may be an innovative and effective way to foster student interaction and multidimensional engagement through meaningful discussion of others' work, and to promote active learning in online or distance education.
'I'm not here to learn how to mark someone else's stuff': an investigation of an online peer-to-peer review workshop tool
Assessment & Evaluation in Higher Education
Volume 40, Issue 1, 2015 (published online 2014)
Abstract: In this article, we explore the intersecting concepts of fairness, trust and temporality in relation to the implementation of an online peer-to-peer review Moodle Workshop tool at a Sydney metropolitan university. Drawing on qualitative interviews with unit convenors and online surveys of students using the Workshop tool, we seek to highlight a complex array of attitudes, both varied and contested, towards online peer assessment. In particular, we seek to untangle convenors' positive appraisal of the Workshop tool as a method of encouraging 'meta-cognitive' skills, and student perceptions relating to the redistribution of staff marking workload vis-à-vis the peer review tool as 'unfair', 'time-consuming' and 'unprofessional'. While the Workshop tool represents an innovative approach to the development of students' meta-cognitive attributes, the competitive atmosphere that circulates, and is quietly encouraged, within the tertiary education sector limits the true collaborative pedagogical potential and capacities built into peer-to-peer review initiatives like the Workshop tool.
Acknowledging Students' Collaborations through Peer Review: A Footnoting Practice
Volume 64, Issue 2, 2016
Abstract: Student-to-student peer review or peer feedback is commonly used in student-centered or active-learning classrooms. In this article, we describe a footnoting exercise that we implemented in two of our undergraduate courses as one way to encourage students to acknowledge collaborations and contributions made during peer-review processes. This exercise was developed in response to a teaching problem well documented in the literature and often experienced firsthand in our own courses: students do not always transfer to future iterations of their work what they have learned in response to previous feedback. Following a description of the footnoting exercise, we analyze the resulting footnotes that the students produced. Finally, we discuss possible improvements to the assignment and avenues for future research.
The Reliability and Validity of Peer Review of Writing in High School AP English Classes
Christian Schunn, Amanda Godley and Sara DeMartino
Abstract: One approach to writing instruction that has been shown to improve secondary students' academic writing without increasing demands on teachers' time is peer review. However, many teachers and students worry that students' feedback and assessment of their peers' writing is less accurate than teachers'. This study investigated whether Advanced Placement (AP) English students from diverse high school contexts can accurately assess their peers' writing if given a clear rubric. The authors first explain the construction of the rubric, a student-friendly version of the College Board's scoring guide. They then examine the reliability and validity of the students' assessments by comparing them with their teachers' and trained AP scorers' assessments. The study found that students' assessments were more valid than the ones provided by a single teacher and just as valid as the ones provided by expert AP scorers. Students' and teachers' perceptions of the peer review process are also discussed.
Teacher modeling on EFL reviewers' audience-aware feedback and affectivity in L2 peer review
Carrie Yea-huey Chang
This exploratory classroom research investigated how prolonged one-to-one teacher modeling (the teacher demonstrating desirable behaviors as a reviewer) in feedback to student reviewers' essays may enhance their audience-aware feedback and affectivity in peer review. Twenty-seven EFL Taiwanese college students from a writing class participated in asynchronous web-based peer reviews. Training was conducted prior to peer reviews, and the teacher modeled the desirable reviewer behaviors in her feedback to student reviewers' essays to prolong the training effects. Pre-modeling (narration) and post-modeling (process) reviews were analyzed for audience-aware feedback and affectivity. Reviewers' audience awareness was operationalized as their understanding of the reviewer–reviewee/peer–peer relationship and of reviewees' needs for revision-oriented feedback on global writing issues to improve draft quality. Paired t-tests revealed significantly higher percentages of global feedback and collaborative stance (revision-oriented suggestions), more socio-affective functions, a higher percentage of personal, non-evaluative reader feedback, and a lower percentage of non-personal evaluator feedback in the post-modeling reviews. Such a difference, however, was not found in review tone. Overall, the findings confirm that EFL student reviewers can learn peer review skills through observation of their teachers and use of complementary tools such as checklists.
|A new approach towards marking large-scale complex assessments: Developing a distributed marking system that uses an automatically scaffolding and rubric-targeted interface for guided peer-review
Alvin Vista, Esther Care, Patrick Griffin
|Currently, complex tasks incur significant costs to mark, becoming exorbitant for courses with large numbers of students (e.g., in MOOCs). Large scale assessments are currently dependent on automated scoring systems. However, these systems tend to work best in assessments where correct responses can be explicitly defined. There is a considerable scoring challenge when it comes to assessing tasks that require deeper analysis and richer responses. Structured peer-grading can be reliable, but the diversity inherent in very large classes can be a weakness for peer-grading systems because it raises objections that peer-reviewers may not have qualifications matching the level of the task being assessed. Distributed marking can offer a solution to handle both the volume and complexity of these assessments. We propose a solution wherein peer scoring is assisted by a guidance system to improve peer-review and increase the efficiency of large scale marking of complex tasks. The system involves developing an engine that automatically scaffolds the target paper based on predefined rubrics so that relevant content and indicators of higher level thinking skills are framed and drawn to the attention of the marker. Eventually, we aim to establish that the scores produced are comparable to scores produced by expert raters.
|A comparison of peer and tutor feedback
Assessment & Evaluation in Higher Education
Volume 40, Issue 1, 2015 (published online 2014)
|Abstract: We report on a study comparing peer feedback with feedback written by tutors on a large, undergraduate software engineering programming class. Feedback generated by peers is generally held to be of lower quality than feedback from experienced tutors, and this study sought to explore the extent and nature of this difference. We looked at how seriously peers undertook the reviewing task, differences in the level of detail in feedback comments and differences with respect to tone (whether comments were positive, negative or neutral, offered advice or addressed the author personally). Peer feedback was also compared by academic standing, and by gender. We found that, while tutors wrote longer comments than peers and gave more specific feedback, in other important respects (such as offering advice) the differences were not significant.
|Tucker, R. (2014). Sex does not matter: Gender bias and gender differences in peer assessments of contributions to group work. Assessment & Evaluation in Higher Education, 39(3), 293-309.
|This paper considers the possibility of gender bias in peer ratings for contributions to team assignments, as measured by an online self- and peer-assessment tool. The research was conducted to determine whether peer assessment led to reliable and fair marking outcomes. The methodology of Falchikov and Magin was followed in order to test their finding that gender has no discernible impact on peer ratings. Data from over 1500 participants at two universities enrolled in four different degree programmes were analysed. The research indicates an absence of gender bias in six case studies. The research also found that women received significantly higher ratings than men.
page 298: “In total, 1523 students participated in the study; making a total 18,814 assessments […] The sample size makes this study the largest published analysis we are aware of of gender differences in self-and-peer assessment.”
|Takeda, S., & Homberg, F. (2014). The effects of gender on group work process and achievement: An analysis through self- and peer-assessment. British Educational Research Journal, 40(2), 373-396.
|This study examines the effects of gender on group work process and performance using the self- and peer-assessment results of 1001 students in British higher education formed into 192 groups. The analysis aggregates all measures on the group level in order to examine the overall group performance. Further, a simple regression model is used to capture the effects of group gender compositions. Results suggest that students in gender balanced groups display enhanced collaboration in group work processes. The enhanced collaboration could be associated with less social loafing behaviours and more equitable contributions to the group work. However, the results imply that this cooperative learning environment does not lead to higher student performance. Students’ comments allow us to explore possible reasons for this finding. The results also indicate underperformance by all-male groups and reduced collaborative behaviours by solo males in male gender exception groups (i.e., groups consisting of one male student and other members being female).
|Liu, X., & Li, L. (2014). Assessment training effects on student assessment skills and task performance in a technology-facilitated peer assessment. Assessment & Evaluation In Higher Education, 39(3), 275-292. doi:10.1080/02602938.2013.823540
|This study examines the impact of an assessment training module on student assessment skills and task performance in a technology-facilitated peer assessment. The participants completed an assessment training exercise, prior to engaging in peer-assessment activities. During the training, students reviewed learning concepts, discussed marking criteria, graded example projects and compared their evaluations with the instructor’s evaluation. Results of data analysis indicate that the assessment training led to a significant decrease in the discrepancy between student ratings and instructor rating of example projects.
|Kearney, S. (2013). Improving engagement: the use of ‘Authentic self-and peer-assessment for learning’ to enhance the student learning experience. Assessment & Evaluation In Higher Education, 38(7), 875-891. doi:10.1080/02602938.2012.751963
|This article contends that students are significantly and detrimentally disengaged from the assessment process as a result of traditional assessments that do not address key issues of learning. Notable issues that arose from observations and questioning of students indicated that vast proportions of students were not proofreading their own work; were not collaborating on tasks; had not been involved in the development of assessment tasks; and had insufficient skills in relation to their ability to evaluate their own efforts.
|Esfandiari, R., & Myford, C. M. (2013). Severity differences among self-assessors, peer-assessors, and teacher assessors rating EFL essays. Assessing Writing, 18(2), 111-131.
|We compared three assessor types (self-assessors, peer-assessors, and teacher assessors) to determine whether they differed in the levels of severity they exercised when rating essays. We analyzed the ratings of 194 assessors who evaluated 188 essays that students enrolled in two state-run universities in Iran wrote. The assessors employed a 6-point analytic scale to provide ratings on 15 assessment criteria. The results of our analysis showed that of the three assessor types, teacher assessors were the most severe while self-assessors were the most lenient, although there was a great deal of variability in the levels of severity that assessors within each type exercised.
|Mulder, R.A., Baik, C., Naylor, R. & Pearce, J. (2013). How does student peer review influence perceptions, engagement and academic outcomes? A case study. Assessment & Evaluation in Higher Education.
|This is a report on research investigating how assessment outcomes related to student perceptions and the content of peer reviews. The case study relates to a third-year undergraduate subject.
|Khonbi, Z., & Sadeghi, K. (2013). The Effect of Assessment Type (Self Vs. Peer) on Iranian University EFL Students’ Course Achievement. Procedia – Social And Behavioral Sciences, 70(Akdeniz Language Studies Conference, May, 2012, Turkey), 1552-1564. doi:10.1016/j.sbspro.2013.01.223
|The effect of self- and peer-assessment was studied on Iranian university EFL students’ course achievement. The participants (19 and 21 students in self- and peer-assessment groups, respectively, and all from Urmia University) were pretested on their current Teaching Methods knowledge. After receiving relevant instruction and training, while the peer-assessment group was engaged in peer-assessment activities for four sessions, the self-assessment group was busy with self-assessment tasks. An achievement post-test (with a phi dependability index of .90) was used to gauge performance differences at the end of the course. The application of ANCOVA indicated that the peer-assessment group outperformed the self-assessment group significantly, F(1,37) = 7.13, p = .01. Further findings and implications are discussed in the paper.
|Liu, E., & Lee, C. (2013). Using Peer Feedback to Improve Learning via Online Peer Assessment. Turkish Online Journal Of Educational Technology – TOJET, 12(1), 187-199.
|This study investigates the influence of various forms of peer observation and feedback on student learning. We recruited twelve graduate students enrolled in a course entitled, Statistics in Education and Psychology, at a university in northern Taiwan. Researchers adopted the case study method, and the course lasted for ten weeks. Students were first required to learn the content and complete homework assignments through online peer assessment activities. Data were collected from interviews and student journals for content analysis.
|Boase-Jelinek, D., Parker, J., & Herrington, J. (2013). Student reflection and learning through peer reviews. Issues in Educational Research, 23(2), 119-131.
|This study looks at the results of using an online peer review system aiming to improve quality and consistency of feedback.
|Crossman, J. M., & Kite, S. L. (2012). Facilitating improved writing among students through directed peer review. Active Learning in Higher Education, 13(3), 219-229.
|This study contributes to scant empirical investigation of peer critique of writing among heterogeneously grouped native and nonnative speakers of English, now commonplace in higher education. This mixed-methods study investigated the use of directed peer review to improve writing among graduate students, the majority of whom were nonnative speakers of English. Following a modified version of the Optimal Model of peer critique of university coursework, statistically significant gains were realized between the initial draft of a business proposal and its final submission for each of the measured items: support, audience focus, writing conventions, and organization. In addition, during the qualitative phase, students were observed to identify how peer editors naturally engaged in probing and collaborative styles of feedback known as discovery mode interactions. Approximately 80% of the students engaged in interactions to clarify the text and align it with the author’s intentions, and approximately 37% sought to enhance and develop the text. Finally, the results suggest that the face-to-face peer review did improve the quality of a business communication assignment and implies a number of essential instructional practices toward improved writing and collaboration.
|Heyman, J. E., & Sailors, J. J. (2011). Peer assessment of class participation: applying peer nomination to overcome rating inflation. Assessment & Evaluation In Higher Education, 36(5), 605-618. doi:10.1080/02602931003632365
|In this paper, we describe the most common method used for these assessments and highlight some of its inherent challenges. We then propose an alternative method based on peer nominations. Two case studies illustrate the advantages of this method; we find that it is both easy for students to complete and provides instructors with valuable diagnostic information with which to provide feedback and assign grades.
|Li, L. (2011) How do students of diverse achievement levels benefit from peer assessment? International Journal for the Scholarship of Teaching and Learning, 5, 1-16.
|This study investigated how peer review benefits students at various achievement levels, and found that while all students showed an increase in work quality, the difference was more pronounced for students in earlier stages of learning development. High-achieving students appeared to benefit from writing reviews and deepening their understanding of subject content, and students across achievement levels generally responded positively to the exercise.
|Patchan, M., Schunn, C. & Clark, R. (2011) Writing in the natural sciences: understanding the effects of different types of reviewers on the writing process. Journal of Writing Research, 2, 365-393.
|This study compares how aspects of the writing process are affected when undergraduate students write papers to be evaluated by peers versus teaching staff. Results suggest that the quality of student drafts was higher when writing for peers, that student reviewer comments were more detailed, and that students made more revisions if they received feedback from peers.
|Li, L., Liu, X. & Steckelberg, A. (2010) Assessor or assessee: how student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41, 525-536.
|This study provides empirical data on how the roles of assessor and assessee impact student learning and results suggest that there is a significant relationship between the quality of feedback students provide and the quality of their own final product.
|Gielen, S., Tops, L., Dochy, F., Onghena, P. & Smeets, S. (2010) A comparative study of peer and teacher feedback and of various feedback forms in a secondary school writing curriculum. British Educational Research Journal
|In this study, Gielen et al. compare secondary student performance before and after experiencing different forms of peer feedback. Results of the study suggest that the most improvement comes when authors indicate their needs for feedback (i.e., what parts of the assignment they are seeking feedback on) rather than simply submitting the assignments for review.
|Hu, G. & Lam, S.T.E. (2010) Issues of cultural appropriateness and pedagogical efficacy: Exploring peer review in a second language writing class. Instructional Science, 38, 371-394.
|This paper reports on a study designed to investigate (a) whether peer review is an effective pedagogical activity with adult Chinese students in the teaching of second language academic writing, and (b) how factors such as perceptions of the influence of peer reviewers’ L2 proficiency, previous experience with peer review, etc. relate to the effectiveness of the pedagogical activity.
|Liang, J.-C. & Tsai, C.-C. (2010) Learning through science writing via online peer assessment in a college biology course. The Internet and Higher Education, 13, 242-247.
|In this paper, Liang and Tsai examine how undergraduate biology students’ writing improved with the use of peer assessment. Peer and expert reviewers evaluated and scored each assignment, and results indicate that peer assessment is of equal value to expert assessment. Student writing was significantly improved over the course of the peer assessment activity.
|Liang, M. Y. (2010). Using synchronous online peer response groups in EFL writing: Revision-related discourse. Language Learning & Technology, 14(1), 45-64.
|In recent years, synchronous online peer response groups have been increasingly used in English as a foreign language (EFL) writing. This article describes a study of synchronous online interaction among three small peer groups in a Taiwanese undergraduate EFL writing class. An environmental analysis of students’ online discourse in two writing tasks showed that meaning negotiation, error correction, and technical actions seldom occurred and that social talk, task management, and content discussion predominated the chat. Further analysis indicates that relationships among different types of online interaction and their connections with subsequent writing and revision are complex and depend on group makeup and dynamics. Findings suggest that such complex activity may not guarantee revision. Writing instructors may need to proactively model, scaffold and support revision-related online discourse if it is to be of benefit.
|Bouzidi, L., & Jaillet, A. (2009). Can Online Peer Assessment be Trusted? Journal of Educational Technology & Society, 12(4), 257-268.
|We carried out an experiment of online peer assessment in which 242 students, enrolled in 3 different courses, took part. The results show that peer assessment is equivalent to the assessment carried out by the professor in the case of exams requesting simple calculations, some mathematical reasoning, short algorithms, and short texts referring to the exact science field (computer science and electrical engineering).
|Lundstrom, K. & Baker, W. (2009) To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing, 18, 30-43.
|This study compared the benefits of giving and receiving peer feedback, and found that students who reviewed their peers’ writing improved their own writing more than students who only received feedback.
|Cho, K. & Schunn, C.D. (2007). Scaffolded writing and rewriting in the discipline: a web-based reciprocal peer review system. Computers and Education 48: 409-426.
|This study focuses on the effects of giving feedback and whether students’ writing improves through peer review.
|Kim, M. (2009). The impact of an elaborated assessee’s role in peer assessment. Assessment & Evaluation in Higher Education, 34(1), 105-114.
|The purpose of this study was to investigate the effects of an elaborated assessee’s role on metacognitive awareness, performance and attitude in peer assessment. Two intact groups (a total of 82 students) were randomly assigned to a treatment condition (having back-feedback activity) or a control condition (not having back-feedback activity). The results indicated that, regarding metacognitive awareness, when participants played the elaborated assessee role, they tended to report higher metacognitive awareness in their learning process. Regarding performance, when participants played the elaborated assessee role, they tended to receive better scores in making a concept map. Regarding attitude, when participants played the elaborated assessee role, they reported greater motivation towards the peer assessment. The findings of this study suggest instructional implications for those who want to employ peer assessment as a learning method by showing the effectiveness of a well-developed role design, specifically one that includes the back-feedback activity.
|Peer Review Re-Viewed: Investigating the Juxtaposition of Composition Students’ Eye Movements and Peer-Review Processes
|Abstract: While peer review is a common practice in college composition courses, there is little consistency in approach and effectiveness within the field, owing in part to the dearth of empirical research that investigates peer-review processes. This study is designed to shed light on what a peer reviewer actually reads and attends to while providing peer-review feedback. Fifteen participants peer reviewed a student’s essay that had both holistic and surface-level errors. Using eye-tracking technology, we collected detailed and informative data about which parts of the text the peer reviewer looked at, how long the peer reviewer looked there, and where the peer reviewer looked next. These data were analyzed according to eye-movement research methodologies and juxtaposed with each peer reviewer’s comments and suggestions about the essay being reviewed during a typical peer-review exercise. Findings include an unexpected mismatch between what peer reviewers focus on, spend time on, and examine multiple times when reading and peer reviewing an essay and what they choose to give feedback about during the peer-review session. Implications of this study include a rethinking of the composition field’s widespread use of a global-to-local progression during peer-review activities.
|Commenting on Writing: Typology and Perceived Helpfulness of Comments from Novice Peer Reviewers and Subject Matter Experts
Cho, Schunn, Charney
|Pulled Quote: However valid and reliable they may be, the value of peer comments for student writers is unknown. Anecdotal evidence suggests that students actually find the task of reading and commenting on peers’ papers to be more helpful for revising than attempting to address their peers’ suggestions. Students have reported concerns that peers don’t take the task seriously, aren’t as qualified as the instructor in the subject matter, have had too little training in writing or practice at making comments, and are simply not the readers assigning grades (Artemeva & Logie, 2002). (p. 261)
First, peer review systems provide students with opportunities to critique and reflect upon others’ work (Smyth, 2004). Second, peer review systems may produce a more natural writing context; rather than an individual writing to a nonsalient and even unrepresentative reader (i.e., the instructor), a student reads and writes within an active writing community (Applebee, 1981; Cohen & Riel, 1989; Mittan, 1989). (p. 280)
|van den Berg, I., Admiraal, W., & Pilot, A. (2006). Design principles and outcomes of peer assessment in higher education. Studies in Higher Education, 31(3), 341-356.
|This study was aimed at finding effective ways of organising peer assessment of written assignments in the context of teaching history at university level. To discover features yielding optimal results, several peer assessment designs were developed, implemented in courses and their learning outcomes evaluated. Outcomes were defined in terms of the revisions students made, the grades of the written products, and the perceived progress of products and writing skills. Most students processed peer feedback and perceived improvement in their writing as a result of peer assessment. Significant differences between grades of groups using or not using peer assessment were not found. Most teachers saw better‐structured interaction on the subject of writing problems in their classes. Important design features seemed to be the timing of peer assessment, so that it will not coincide with staff assessment, the assessment being reciprocal, and the assessment being performed in feedback groups of three or four students.
|Saito, H., & Fujita, T. (2004). Characteristics and user acceptance of peer rating in EFL writing classrooms. Language Teaching Research, 8(1), 31-54.
|This study addressed the following research questions: (1) How similar are peer, self- and teacher ratings of EFL writing?; (2) Do students favour peer ratings?; and (3) Does peer feedback influence students’ attitudes about peer rating?
|Paulus, T.M. (1999) The effect of peer and teacher feedback on student writing. Journal of Second Language Writing, 8, 265-289.
|This study examined the effect of feedback in an English as a Second Language classroom, and found that changes made as a result of teacher and peer feedback were more substantial than those students made on their own.
|Topping, K.J., Smith, E.F., Swanson, I. & Elliot, A. (2000) Formative peer assessment of academic writing between postgraduate students. Assessment & Evaluation in Higher Education, 25, 149-169.
|This study explored the reliability and validity of formative peer review, and found that staff and peer assessors showed a similar balance between positive and negative comments, and there was little conflict between their views.
|Topping, K. J. (2012). Peers as a source of formative and summative assessment. In J. H. McMillan (Ed.), Sage handbook of research on classroom assessment. (pp. 395-412). Thousand Oaks, CA & London: Sage.
|This chapter comprises a review of research on the role of peer assessment in elementary and secondary classrooms (K-12). The function of peer assessment can be formative, or summative, or both. A typology of peer assessment is given to help the reader identify the many different kinds. Theoretical perspectives on peer assessment are reviewed and described, and their implications for research teased out. Studies in elementary schools are then reviewed, describing how both student and teacher behavior changes when peer assessment is deployed, what the consequent effects on student achievement are, and what conditions are necessary for those effects to be maximized. The same is then done for studies in secondary schools. Both kinds of study are then critiqued in relation to future needs for the practical use of peer assessment in classrooms. A summary of the main findings in light of the critique leads to a statement of general directions and specific recommendations for future research.
|Bruffee, Kenneth A. Collaborative Learning and the “Conversation of Mankind.” p. 87
|Pulled Quote: Through peer tutoring, we reasoned, teachers could reach students by organizing them to teach each other. Peer tutoring was a type of collaborative learning. It did not seem to change what people learned but, rather, the social context in which they learned it. Peer tutoring made learning a two-way street, since students’ work tended to improve when they got help from peer tutors and tutors learned from the students they helped and from the activity of tutoring itself. Peer tutoring harnessed the powerful educative force of peer influence that had been–and largely still is–ignored and hence wasted by traditional forms of education.
Meta-analyses of peer review
|Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.
|48 quantitative peer assessment studies comparing peer and teacher marks were subjected to meta-analysis. Peer assessments were found to resemble more closely teacher assessments when global judgements based on well understood criteria are used rather than when marking involves assessing several individual dimensions. Similarly, peer assessments better resemble faculty assessments when academic products and processes, rather than professional practice, are being rated. Studies with high design quality appear to be associated with more valid peer assessments than those which have poor experimental design. Hypotheses concerning the greater validity of peer assessments in advanced rather than beginner courses and in science and engineering rather than in other discipline areas were not supported. In addition, multiple ratings were not found to be better than ratings by singletons. The study pointed to differences between self and peer assessments, which are explored briefly. Results are discussed and fruitful areas for further research in peer assessment are suggested.
|Wakimoto, D. K., & Lewis, R. E. (2014). Graduate student perceptions of eportfolios: Uses for reflection, development, and assessment. Internet and Higher Education, 21, 53-58.
|This study sought to explore graduate students’ perceptions of the value of creating eportfolios and ways of improving the eportfolio process. Overall, the students found the construction of their eportfolios to be useful in reflecting on their competencies and in gaining confidence in using technology. The students also valued the hands-on training sessions, peer review opportunities and model portfolios, and technological skills built by creating the eportfolios, which they stated may be useful in job searches.
|Wang, W. (2014). Students’ perceptions of rubric-referenced peer feedback on EFL writing: A longitudinal inquiry. Assessing Writing, 19, 80-96.
|The study seeks to investigate how students’ perceptions of peer feedback on their EFL writing change over time, the factors affecting their perceived usefulness of peer feedback for draft revision, and their opinions about the use of a rubric in the peer feedback practice. Fifty-three Chinese EFL learners, including six case study informants, participated in the study. The data collected consisted of questionnaires, interviews, and students’ reflective essays. The findings showed that the students’ perceived usefulness of peer feedback decreased over time, and that their perceived usefulness of peer feedback for draft revision was affected by five factors: (1) Students’ knowledge of assigned essay topics; (2) Students’ limited English proficiency; (3) Students’ attitudes towards the peer feedback practice; (4) Time constraints of the in-class peer feedback session; (5) Students’ concerns with interpersonal relationship. The students regarded the rubric as an explicit guide to evaluating their peers’ EFL writing, though negative perceptions were also reported. The paper ends with a discussion of the implementation of peer feedback in the Chinese EFL writing class and directions for future research.