Peer assessment


Introduction

“Falchikov (1995) defines peer assessment as the process through which groups of individuals rate their peers.” (Dochy et al., 1999).

“Peer assessment is an arrangement for learners to consider and specify the level, value, or quality of a product or performance of other equal-status learners. Products to be assessed can include writing, oral presentations, portfolios, test performance, or other skilled behaviors. Peer assessment can be summative or formative.” (Topping, 2009, Abstract)

According to Topping (2009:21), peer assessment activities can vary in a number of ways:

  • It can happen in different curriculum areas or subjects.
  • Different learner products or outputs can be peer assessed, including writing, portfolios, oral presentations, test performance, and other skilled behaviors.
  • Assessors and the assessed may be either pairs or groups.
  • Directionality can vary: Peer assessment can be one-way or reciprocal.
  • Objectives of peer assessment may vary: The teacher may target cognitive or metacognitive gains, time saving, or other goals.
  • It can use computerized tools or not.
  • It can occur in or out of class.

Peer assessment has been deployed at any school level, from elementary to graduate, and from school learning to vocational training. The main argument for using peer assessment is pedagogical, i.e. a learning gain along several dimensions.

Stefani (1994), Dochy et al. (1999) and many other authors claim that, overall, students engaged in a well-designed peer-assessment framework can make rational judgements. Student and teacher assessments correlate fairly well. Studies comparing peer-, self- and teacher assessment also conclude that students have a realistic perception of their own abilities. Topping (1998) reports that 25 out of 31 reviewed papers show high correlation between peer and teacher assessment.
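Such agreement checks are straightforward to reproduce. Below is a minimal sketch in Python that correlates mean peer marks with teacher marks for the same set of products; the grade lists are invented illustration data, not results from the cited studies.

  # Hypothetical illustration: agreement between peer and teacher marks.
  # One entry per assessed product; the numbers are invented.
  from statistics import correlation  # Python 3.10+

  teacher_marks = [72, 58, 85, 64, 90, 77, 69, 81]  # teacher's grade per product
  peer_marks = [70, 62, 88, 60, 86, 75, 73, 79]     # mean peer grade per product

  r = correlation(teacher_marks, peer_marks)  # Pearson's r
  print(f"peer-teacher correlation: r = {r:.2f}")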

Benefits

Topping (2009:22-24) lists five benefits of peer assessment: feedback, cognitive gains, improvements in writing, improvements in group work, and savings in teacher time.

Feedback

One important benefit concerns feedback, first in terms of quantity but also with respect to dimensions of quality.

  • “Perhaps the most significant quality of peer assessment is that it is plentiful. Because there are more students than teachers in most classrooms, feedback from peers can be more immediate and individualized than can teacher feedback” (Topping 2009:22). Setups like MOOCs cannot even function without it.
  • “Students react differently to feedback from adults and peers; the former is perceived as authoritative but ill-explained, yet the latter gives richer feedback that is open to negotiation (Cole, 1991)” (Topping, 2009:22).

Cognitive gains

Both assessors and assessees can gain from giving and receiving peer feedback. According to Topping (2009:23), gains can include more time on task and a greater sense of accountability. Peer assessment can also improve questioning and assessment of understanding through increased self-disclosure. It may lead to earlier detection of misconceptions, which in turn can lead to identification of knowledge gaps upon which learners can act. Finally, there can be a general increase in reflection (meta-cognition).

According to Tseng & Tsai (2007),

The implementation of peer assessment, an alternative way of assessment for teachers, receives much attention in recent years (Rada and Hu, 2002 and Woolhouse, 1999) due to its effectiveness for students’ learning (Topping, 1998). This new assessment and learning strategy, modeled on the journal publication process of an academic society and based on social constructivism (Falchikov and Goldfinch, 2000 and Lin et al., 2001a), has been used extensively in diverse fields (e.g., Falchikov, 1995 and Freeman and McKenzie, 2002). In addition to helping students plan their own learning, identify their own strengths and weaknesses, target areas for remedial action, develop metacognitive and professional transferable skills, and enhance their reflective thinking and problem solving abilities during the learning experience (Sluijsmans et al., 1999, Smith et al., 2002 and Topping, 1998), peer assessment is also found to increase students’ interpersonal relationships in the classroom (Sluijsmans, Brand-Gruwel, & van Merriënboer, 2002).

Improvement in writing

“Evidence of the effectiveness of peer assessment in writing is substantial, particularly in the context of peer editing (O’Donnell & Topping, 1998). Here, peer assessment seems to be at least as effective in formative terms as teacher assessment, and sometimes more effective.” (Topping, 2009:23).

Improvement in group work

Peer assessment seems to improve group behavior: when students had to rate group behavior, it changed (Salend et al., 1993). It can also improve help seeking and help giving (Ross, 1995).

Saving in teacher time

Peer assessment may save teacher time, but only in the long term, since quality peer assessment requires time for organization, training and monitoring.

Peer assessment methods and models

Assessment forms and characteristics

According to Dochy et al. (1999), Kane & Lawler (1978) distinguished three forms of assessment, illustrated in the sketch after the list:

  1. Peer ranking: each member ranks all the others from best to worst, on one or several factors.
  2. Peer nomination: each member nominates one other member as the highest on several dimensions of performance or characteristics.
  3. Peer rating: each member rates each other member on a given set of performances or characteristics, using rating scales.
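The three forms correspond to three different data shapes. The Python sketch below shows one way to represent each; the students, dimensions and scale are hypothetical, not taken from Kane & Lawler.

  # 1) Peer ranking: each member orders all the others from best to worst.
  ranking_by = {"ana": ["chloe", "ben", "dan"]}  # ana's ranking of her peers

  # 2) Peer nomination: each member names the peer they judge highest
  #    on each dimension of performance.
  nomination_by = {"ana": {"writing": "chloe", "teamwork": "ben"}}

  # 3) Peer rating: each member rates every other member on a rating scale.
  rating_by = {"ana": {"ben": 4, "chloe": 5, "dan": 3}}  # e.g. a 1-5 scale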

According to Strijbos & Sluijsmans (2010), “Topping (1998) derived 17 characteristics from a literature review, which were subsequently ordered in four clusters by Van den Berg, Admiraal, and Pilot (2006), and further expanded by Gielen (2007) and Strijbos, Ochoa, Sluijsmans, Segers, and Tillema (2009). The variety in characteristics of peer assessment is reflected in peer assessment reviews which reveal a high level of diversity and ambiguity in peer assessment practices, making it very difficult to understand how peer assessment contributes to learning.”

Sarah Gielen (2007) presented the following typology of peer assessment, adapted from Topping (1998:252):

Gielen (2007) typology of peer assessment, with clusters from Van den Berg, Admiraal, & Pilot (2006):

Cluster I: The function of PA as an assessment instrument
  1) Curriculum area/subject: All
  2) Objectives: Of staff and/or students? Time saving or cognitive/affective gains?
  3) Focus: Quantitative/summative or qualitative/formative or both?
  4) Product/Output: Tests/marks/grades or writing or oral presentations or other skilled behaviours?
  5) Relation to staff assessment: Substitutional or supplementary?
  6) Official weight: Contributing to the final official grade or not?

Cluster II: Interaction between peers
  7) Directionality: One-way, reciprocal, mutual?
  8) Privacy: Anonymous/confidential/public?
  9) Contact: Distance or face to face?

Cluster III: Composition of the feedback group
  10) Year: Same or cross year of study?
  11) Ability: Same or cross ability?
  12) Constellation assessors: Individuals or pairs or groups?
  13) Constellation assessed: Individuals or pairs or groups?
  14) Place: In/out of class?
  15) Time: Class time/free time/informally?

Cluster IV: Requirement & reward
  16) Requirement: Compulsory or voluntary for assessors/assessees?
  17) Reward: Course credit or other incentives or reinforcement for participation?

In addition, Gielen presents a list of implementation factors, also adapted from Topping (1998:265-267). These organizational arrangements, of which we present a variant (Topping, 2009) below, depend on and/or interact with combinations of the typology variables:

  • Clarifying expectations, objectives and acceptability
  • Matching participants and arranging contact
  • Developing and clarifying assessment criteria
  • Providing quality training
  • Specifying activities
  • Monitoring the process and coaching
  • Moderating reliability and validity
  • Evaluating and providing feedback.

Rounds in formative assessment

According to Tseng & Tsai (2007), “Lin et al. (2001a) found many students did not improve over two rounds in an on-line peer assessment study. Tsai et al. (2002) showed a similar finding. They used a three-round peer assessment model and pointed out that most students improved their work over three rounds, which was an optimal situation.”

Models

Lin et al. (2001b) present the following model for web-based peer reviewing. It was implemented with their own custom WPR System; a sketch of the assignment and grading steps follows the list.

  1. The teacher posts the homework assignment.
  2. Each student prepares the homework in HTML format and uploads it to the WPR System.
  3. The system randomly assigns six reviewers for each student.
  4. Each reviewer rates and comments on others’ assignments in the system.
  5. The system distributes the ranks and comments back to each student and informs the teacher on the general status of the class.
  6. The author must revise the original assignment in line with comments received.
  7. Steps 2–6 are then repeated three more times.
  8. The teacher gives 1) an assignment grade based on the six reviewers’ grades and 2) a review grade about each reviewer’s comment quality.
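Steps 3 and 8 are the algorithmic core of the model. The Python sketch below shows one plausible implementation: the random assignment of six reviewers is from the source, but the decision not to balance reviewing load and the use of a plain mean in step 8 are assumptions for illustration, since the summary above does not specify them.

  import random
  from statistics import mean

  def assign_reviewers(students, k=6, seed=None):
      """Step 3: randomly give each author k reviewers drawn from their peers.
      Simplified sketch: it does not balance how many reviews each student
      performs, which a real system presumably would."""
      rng = random.Random(seed)
      return {author: rng.sample([s for s in students if s != author], k)
              for author in students}  # excluding the author avoids self-review

  def assignment_grade(reviewer_grades):
      """Step 8: combine the six reviewers' grades into one assignment grade.
      The summary does not say how grades are aggregated; a plain mean is an
      assumption for illustration."""
      return mean(reviewer_grades)

  students = [f"s{i}" for i in range(1, 21)]
  print(assign_reviewers(students, seed=42)["s1"])   # s1's six reviewers
  print(assignment_grade([78, 82, 75, 90, 68, 80]))  # -> 78.83...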


General guidelines for implementation

Dochy et al. (1999) formulate the following general guidelines for both self- and peer assessment:

  1. Training in the skill to self-assess or peer assess must be provided.
  2. Self-assessment takes time and may require support.
  3. Self-assessment can easily be used for formative purposes; students should perceive it as a learning tool.
  4. A staff development programme will be needed, since habits of academics are hard to change.
  5. In peer assessment, criteria must be defined beforehand, preferably jointly with students.
  6. Peer assessment criteria should be presented in operational terms familiar to students.
  7. Peer assessment can be used for summative assessment, but only in combination with other instruments.

Topping (2009) presents some planning issues, drawing on Topping (2003) and Webb & Farivar (1994). Below we present a short summary:

  1. Work with colleagues instead of developing the initiative alone.
  2. Clarify purpose, rationale, expectations, and acceptability with all stakeholders, in particular the students. Define the nature of the products to be assessed.
  3. Involve participants in developing and clarifying assessment criteria.
  4. Match participants; aim for same-ability peer matching.
  5. Provide training, examples, and practice, e.g. through a role play. Give feedback and coaching where needed.
  6. Provide guidelines, checklists, or other tangible scaffolding (e.g. reminders on what to grade and how). However, keep it simple and not too long.
  7. Specify activities and timescale.
  8. Monitor and coach, but keep a low profile, giving feedback and coaching as necessary. Examine the quality of peer feedback, in particular in the early days. If you sample, choose a high, middle and low ability student.
  9. Moderate reliability and validity of feedback, e.g. have more than one student grade a product and add your own grade (see the sketch after this list).
  10. Evaluate and give students feedback on your observations of their performance as peer assessors and on the reliability check (above).
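For point 9, part of the moderation can be automated once several peer grades plus the teacher's own grade exist for each product. The Python sketch below flags products where peer assessors disagree strongly among themselves or drift far from the teacher's grade; the thresholds and grades are arbitrary illustrations, not recommended values.

  from statistics import mean, stdev

  def moderate(peer_grades, teacher_grade, max_spread=10, max_drift=8):
      """Flag a product whose peer grades look unreliable."""
      spread = stdev(peer_grades)                     # disagreement among peers
      drift = abs(mean(peer_grades) - teacher_grade)  # distance from teacher
      flags = []
      if spread > max_spread:
          flags.append(f"peers disagree (sd={spread:.1f})")
      if drift > max_drift:
          flags.append(f"far from teacher (diff={drift:.1f})")
      return flags or ["looks reliable"]

  print(moderate([70, 74, 68], teacher_grade=72))  # -> ['looks reliable']
  print(moderate([50, 90, 62], teacher_grade=85))  # both flags raised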

“Peer assessment has been shown to be effective in a variety of contexts and with students of a wide range of ages and abilities. The reliability and validity of peer assessments tend to be at least as high, and often higher, than teacher assessments (Topping, 1998). Peer assessment requires training and practice, arguably on neutral products or performances before full implementation, which should feature monitoring and moderation. Given careful attention, a developmental process may be started that leads toward more sophisticated peer assessment, and the delivery of plentiful feedback that can help learners identify their strengths and weaknesses, target areas for remedial action, and develop metacognitive and other personal and professional skills.” (Topping, 2009:26)

Software

Lin et al. (2001a), cited by Tseng & Tsai (2007), argue that online (web-based) assessment has the following advantages:

  1. When students evaluate peers’ work through the web and anonymously, willingness to critique is facilitated.
  2. Web-based peer assessment allows teachers to monitor students’ progress during any period of the assessment process.
  3. Web-based peer assessment can decrease the cost of photocopying students’ projects for their peer assessors.

The PEER project has a Desirable feature list for peer review software.

Free stand-alone server-side systems to install

WebPA
WebPA is the open source version of an online peer-moderated marking system for the assessment of group work, in use at Loughborough University since 1998. In principle it can be integrated with other LMSs, since it supports IMS Learning Tools Interoperability (LTI), according to Vickers, S. P., Booth, S. & Peacock, S. (2010), Creating Environments for Learning using Tightly Integrated Components (ceLTIc).
Project home page
WebPA Resource Pack
JISC Effective Assessment in a Digital Age - Loughborough and Hull (video)
WebPA - An Online Peer Assessment System (download on SourceForge; latest files from 8/2013, as of 1/2014).

LMS modules (free & commercial)

Moodle - Workshop
“Students upload a file for peer assessment, and complete peer assessments on other students according to criteria previously set by the teacher. The teacher also sets the amount of assessments to be completed and has the option to upload sample answers. Each criteria has a point value, which can be set to be awarded as either a boolean all or nothing or to allow partial credit. There is also an option of including feedback with each criteria.” (Honeychurch et al. 2012: 4)
The tool is configured in two steps (Moodle 2.6x). First, add the "workshop" activity and configure some general parameters (e.g. grade settings, where you can select "rubric", submission settings, assessment settings and feedback). Second, configure the "contents" (including the rubric) by clicking on the activity. The grades are not integrated with the grading scales, i.e. you can only use a 1-100 scale.
Blackboard (commercial)
Blackboard includes a self- and peer assessment building block (at least in some UK-based installations; Honeychurch et al., 2012)
Canvas
Allows for peer reviewing (but not grading) as of Jan 2014.

Other

This list includes systems named in the literature, as well as systems that no longer seem to be available.

Aropä
Aropä is a web-based peer assessment support tool developed at the University of Auckland, used extensively in a wide variety of settings over the past three years.
This PHP/MySQL software does not seem to be publicly available. See:
Aropä homepage (login required)
Collaborative learning technologies, Ascilite 2009 tutorial materials (PDF).
Hamer et al. 2007 (PDF)
CASPAR
CASPAR (Computer Assisted Self and Peer Assessment Ratings) is an Internet-based software tool developed to manage the assessment of group work more effectively. It was developed within and funded by the Centre for Excellence in Media Practice (CEMP) at Bournemouth University (Lugosi, 2009:86)
BESS
Bess Peer Assessment Software (files apparently not available; link not recommended)
WPR
Apparently not available (Lin et al., 2001b)
SWoRD
Apparently not available (Cho, Schunn and Wilson, 2006)
Conference management software
Easychair
OpenConf (either free community edition or professional/commercial)

Areas

Oral presentations

“Falchikov (2005: p.16) hypothesizes that “involving students in the assessment of presentations is extremely beneficial” for developing self regulating skills. Students are expected to analyze their own behaviour and develop a better understanding of the nature of quality criteria. Cheng and Warren (2005) cite several studies that reported improved presentation performance due to peer assessment. Others adopt in this context videotaped feedback for self-assessments, and also report the attainment of improved oral presentation skills (Bourhis and Allen, 1998).” (De Grez et al., 2012).

De Grez et al. (2012) report both correlations and discrepancies between self-, peer- and teacher assessment. Overall they argue that peer assessment is valuable, but can be improved. For example, they quote Falchikov (2005), who recommends developing evaluation criteria in close collaboration with students, but also Price and O'Donovan (2006), who warn that detailed, comprehensive indicators for assessment may become counter-productive: giving students more opportunities to practise with assessment criteria is more important.

Links

  • PEER (Peer Evaluation in Education Review), part of the Re-engineering Assessment Practices in Higher Education project at Strathclyde University, is a JISC-sponsored project. It contains links and papers.

Bibliography

  • Ashley, K. & Goldin, I. (2011). Toward AI-enhanced computer-supported peer review in legal education. 24th International Conference on Legal Knowledge and Information Systems (JURIX), volume 235.
  • Boud, D., Cohen, R. & Sampson, J. (2001) Peer learning and assessment, in: D. Boud, R. Cohen, & J. Sampson (Eds) Peer learning in higher education (London, Kogan Page), 67–81.
  • Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98, 891-901.
  • Conway, R., Kember, D., Sivan A., & Wu, M. (1993). Peer assessment of an individual's contribution to a group project, Assessment and Evaluation in Higher Education, 18, 45-56. http://dx.doi.org/10.1080/0260293930180104
  • Falchikov, N. (1995). Peer feedback marking: developing peer assessment. Innovations in Education and Training International, 32, 175–187.
  • Falchikov, N. (2005). Improving assessment through student involvement. Practical solutions for aiding learning in higher and further education. New York: RoutledgeFalmer.
  • Falchikov, N. & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70, 287–322.
  • Freeman, M. & McKenzie, J. (2002). SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology, 33, 551–569.
  • Gielen, Sarah (2007). Peer assessment as a tool for learning. Unpublished doctoral dissertation, Leuven University, Leuven, Belgium. Abstract/PDF.
  • Gielen, S., Dochy, F. & Onghena, P. (2010). An inventory of peer assessment diversity. Assessment & Evaluation in Higher Education, 1–19.
  • Hamer, J., Kell, C. & Spence, F. (2007). Peer assessment using Aropä. Department of Computer Science / Centre for Flexible and Distance Learning, University of Auckland. ACE '07 Proceedings of the ninth Australasian conference on Computing education, Volume 66, 43–54. PDF Preprint.
  • Honeychurch, Sarah; Niall Barr, Craig Brown & John Hamer (2012). Peer Assessment Assisted by Technology. CAA. PDF (via academia.edu)
  • Kulkarni, C., Pang-Wei, K., Le, H., Chia, D., Papadopoulos, K., Koller, D. & Klemmer, S. R. (2013). Scaling self and peer assessment to the global design classroom. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
  • Lin, S.S.J., Liu, E.Z.F. & Yuan, S.M. (2001a). Web-based peer assessment: feedback for students with various thinking-styles. Journal of Computer Assisted Learning, 17, 420–432.
  • Lin, S.S.J., Liu, E.Z. & Yuan, S.M. (2001b). Web peer review: the learner as both adapter and reviewer. IEEE Transactions on Education, 44, 246–251.
  • Liu, Ngar-Fun & Carless, D. (2006). Peer feedback: the learning element of peer assessment. Teaching in Higher Education, 11(3), 279–290.
  • Lugosi, Peter (2009), Computer assisted self and peer assessment: Applications, challenges and opportunities, Journal of Hospitality, Leisure, Sport and Tourism Education 9 (1). PDF preprint
  • Pearce, J., Mulder, R. and Baik, C. (2009) Involving students in peer review: case studies and practical strategies for university teaching, Centre for the Study of Higher Education, University of Melbourne.
  • Price, M., & O’Donovan, B. (2006) Improving performance through enhancing student understanding of criteria and feedback’, in C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education, pp.100-109. London: Routledge.
  • Piech, Chris; Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng & Daphne Koller (2013). Tuned Models of Peer Assessment in MOOCs. EDM 2013 Conference. PDF
  • Rada, R. & Hu, K. (2002). Patterns in student–student commenting. IEEE Transactions on Education, 45, 262–267.
  • Ross, J. A. (1995). Effects of feedback on student behavior in cooperative learning groups in a grade-7 math class. Elementary School Journal, 96, 125–143.
  • Russell, A. A. (2004). Calibrated Peer Review: a writing and critical-thinking instructional tool. Teaching Tips: Innovations in Undergraduate Science Instruction, page 54.
  • Sadler, P. M. and E. Good (2006). The impact of self-and peer-grading on student learning. Educational assessment, 11(1):1-31.
  • Sadler, D. R. (2010) Beyond feedback: Developing student capability in complex appraisal, Assessment and Evaluation in Higher Education, ??
  • Salend, S. J., Whittaker, C. R., & Reeder, E. (1993). Group evaluation—A collaborative, peer-mediated behavior management system. Exceptional Children, 59, 203–209.
  • Sluijsmans, D., Dochy, F. & Moerkerke, G. (1999). Creating a learning environment by using self-, peer- and co-assessment. Learning Environments Research, 1, 293–319.
  • Sluijsmans, D., Brand-Gruwel, S. & van Merriënboer, J.J.G. (2002). Peer assessment training in teacher education: effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27, 443–454.
  • Smith, H., Cooper, A. & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: a case study from psychology. Innovations in Education and Teaching International, 39, 71–81.
  • Strijbos, J.W.; T.A. Ochoa, D.M.A. Sluijsmans, M.S.R. Segers, H.H. Tillema, (2009). Fostering interactivity through formative peer assessment in (web-based) collaborative learning environments, in C. Mourlas, N. Tsianos, P. Germanakos (Eds.), Cognitive and emotional processes in web-based education: Integrating human factors and personalization, IGI Global, Hershey, PA, pp. 375–395.
  • Strijbos, Jan-Willem; Dominique Sluijsmans (2010). Unravelling peer assessment: Methodological, functional, and conceptual developments, Learning and Instruction, Volume 20, Issue 4, August 2010, Pages 265-269, ISSN 0959-4752, http://dx.doi.org/10.1016/j.learninstruc.2009.08.002.
  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276. Abstract/PDF (Access restricted)
  • Topping, K. (2003) Self- and peer assessment in school and university: reliability, validity and utility, in M. Segers, F. Dochy, & E. Cacallar (Eds). Optimising new modes of assessment: In search of qualities and standards, pp.55-87. Dordrecht, The Netherlands: Kluwer Academic Publishers.
  • Topping, K. (2009) Peer assessment, Theory Into Practice, 48:20-27.
  • Tseng, S. C., & Tsai, C. C. (2007). Online peer assessment and the role of the peer feedback: A study of high school computer course. Computers and Education, 49, 1161–1174.
  • Van den Berg, I.; W. Admiraal, A. Pilot (2006). Design principles and outcomes of peer assessment in higher education, Studies in Higher Education, 31, 341–356.
  • Van Gennip, N.A.E., Segers, M.S.R. & Tillema, H.H. (2009). Peer assessment for learning from a social perspective: the influence of interpersonal variables and structural features. Educational Research Review, 4, 41–54.
  • Vickerman, P (2009) Student perspectives on formative peer assessment: an attempt to deepen learning? Assessment and Evaluation in Higher Education, 34(2), 221-230.
  • Webb, N. M., & Farivar, S. (1994). Promoting helping behavior in cooperative small groups in middle school mathematics. American Educational Research Journal, 31, 369–395.
  • Woolhouse, M. (1999). Peer assessment: the participants’ perception of two activities on a further education teacher education course. Journal of Further and Higher Education, 23, 211–219.