Learner assessment
Latest revision as of 18:28, 16 May 2019
Definition
This article deals with the assessment of student performance, i.e. student evaluation. We will also focus on more sophisticated evaluation rubrics for activity-based instructional designs... I know that assessment is weak in this wiki, but that can't be helped for the moment; it's not in the center of my interests - Daniel K. Schneider 15:32, 15 August 2007 (MEST).
See also:

- Teacher productivity tools
- Grading form
- Peer assessment
- Learning analytics
First principles
There are three purposes of assessment:
- Formative: Assist learning
- Summative: Measure individual achievement
- Accountability: Evaluate programs (e.g. see course evaluation)
Assessment should always be based on clear pedagogical objectives (i.e. what exactly learners should learn), and these objectives should also be communicated to the learners. Pedagogical objectives can be very different: it is not the same to require someone to be able to recall "data or information", to be able to "pull together knowledge and apply it in a real-world project", or even to be able to acquire knowledge that is useful for solving a given task! (See learning type and learning level).
According to Pellegrino (2009), there are three major elements that should inform an assessment design and that must be coordinated:
- cognition (a model of how students represent knowledge and develop competence)
- observation (of learner tasks)
- interpretation (methods for making sense of the data)
Based on this model, evidence-centered design for assessment attempts to define:
- The claim space: Exactly what knowledge do we want students to have? This requires that a domain is unpacked, i.e. that a panel of various stakeholders builds an ontology of the domain that should be taught/learnt in a program.
- An evidence model: What is accepted as evidence, i.e. what features of student productions will provide evidence and how will we analyze it.
- A task model: What tasks will students perform to communicate their knowledge.
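The three models above can be sketched as plain data structures. The following is a minimal Python sketch of how a claim, an evidence rule, and a task could be linked; the class design and all example texts (the photosynthesis claim, observable, and prompt) are invented for illustration, not taken from Pellegrino:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """What we want to be able to say about a student's knowledge."""
    text: str
    sub_claims: list["Claim"] = field(default_factory=list)

@dataclass
class EvidenceRule:
    """What feature of student work counts as evidence for a claim."""
    claim: Claim
    observable: str      # feature to look for in the student production
    scoring_note: str    # how that feature is interpreted

@dataclass
class Task:
    """What students do to produce the observable evidence."""
    prompt: str
    evidence: list[EvidenceRule] = field(default_factory=list)

# A toy design linking the three models (contents are hypothetical):
claim = Claim("Can explain energy flow in photosynthesis")
rule = EvidenceRule(claim,
                    observable="diagram labels both light and chemical energy",
                    scoring_note="full credit only if both forms appear")
task = Task("Draw and label a diagram of photosynthesis", [rule])

# Walking back from the task to the claim makes the required
# coordination explicit: every task should trace to a claim.
print(task.evidence[0].claim.text)
```

The point of such a structure is that no task exists without an evidence rule, and no evidence rule without a claim, which is exactly the coordination the model asks for.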
The claim space for professional training, for example, can first be defined in terms of various foci. Pellegrino (2009) suggests the following table:
Foci for explanation | rationalist              | sociocultural
performance          | task/knowledge analysis  | communal practices
development          | trajectories of learning | trajectories of participation
knowledge            | mental representations   | forms of mediated activity
Assessment in a transformative perspective
According to Pellegrino (2009), assessment can be a Trojan horse for educational and training programs, in both a negative and a positive sense:

- negative, when it becomes the de facto driver of curriculum and instruction (in particular in large-scale evaluations)
- positive, when it breaks down barriers (e.g. regarding the nature of competence and expertise), similar in perspective to the role that educational technology can play.
The idea is that good assessment firstly requires the creation of an ontology of what students in a program have to learn. Creating this ontology requires stakeholders to sit together and agree on a few (4-7) major big ideas. Each of these must then be connected to the others and also broken down into sub-ideas. Together with subject-oriented ideas, associated reasoning skills and cognitive frameworks must also be elaborated.
Using this organized "claim space" as a starting point, instruction can then be designed to meet these requirements. See also backwards design.
Formative evaluation
According to Nicol and Milligan (2006) and Nicol and Macfarlane-Dick (2006), good formative evaluation:
- helps clarify what good performance is (goals, criteria, expected standards);
- facilitates the development of self-assessment (reflection) in learning;
- delivers high quality information to students about their learning;
- encourages teacher and peer dialogue around learning;
- encourages positive motivational beliefs and self-esteem;
- provides opportunities to close the gap between current and desired performance;
- provides information to teachers that can be used to help shape teaching. (Nicol & Macfarlane-Dick, 2006, p. 205)
Evaluation Tools
Links to assessment strategies
- The Assessment CyberGuide for Learning Goals and Outcomes in the Undergraduate Psychology Major, Task Force on Undergraduate Psychology Major Competencies, Board of Educational Affairs, American Psychological Association, retrieved March 2009. This is often cited as a good example of an overall assessment strategy. However, learning goals and outcomes also can be criticized as just defining a "list of topics", as opposed to a coherent, connected view of what psychology is.
Various rubrics and performance measures
See Grading form
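A grading rubric of the kind discussed here boils down to criteria, weights, and achievement levels. The following is a minimal scoring sketch; the criterion names, weights, and levels are all invented for illustration, not taken from any particular rubric:

```python
# A rubric as a list of (criterion, weight, achieved level, max level).
# All values here are hypothetical examples.
rubric = [
    ("Argument quality", 0.4, 3, 4),
    ("Use of sources",   0.3, 2, 4),
    ("Presentation",     0.3, 4, 4),
]

def rubric_score(rows):
    """Weighted percentage score across rubric criteria.

    Each criterion contributes weight * (achieved / max) to the total.
    Assumes the weights sum to 1.
    """
    return sum(w * achieved / top for _, w, achieved, top in rows) * 100

print(round(rubric_score(rubric), 1))  # 0.4*3/4 + 0.3*2/4 + 0.3*4/4 = 0.75 -> 75.0
```

Making the weights explicit in this way is also a form of communicating pedagogical objectives to learners, as argued in the first principles section.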
Gradebook technology
- See Gradebook
Quizzing
See:

- Quizzing tool
- Assessment management system
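At their core, quizzing tools auto-score closed questions against an answer key. The following is a minimal sketch of multiple-choice scoring; the question identifiers and answers are invented for illustration:

```python
# Answer key and one student's responses; contents are hypothetical.
key       = {"q1": "b", "q2": "d", "q3": "a"}
responses = {"q1": "b", "q2": "c", "q3": "a"}

def quiz_score(key, responses):
    """Fraction of questions answered correctly (unanswered counts as wrong)."""
    correct = sum(responses.get(q) == answer for q, answer in key.items())
    return correct / len(key)

print(quiz_score(key, responses))  # 2 of 3 correct
```

Real quizzing tools add item types, feedback, and reporting on top, but the scoring core is this simple, which is why quizzes are cheap to deploy and why they mostly measure recall-level objectives.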
Product-based evaluation
- PBL Checklists, age-appropriate, customizable project checklists for written reports, multimedia projects, oral presentations, and science projects. The use of these checklists keeps students on track and allows them to take responsibility for their own learning through peer- and self-evaluation.
See also: learning e-portfolios
Plagiarism
There is a risk that learners use the Internet to copy/paste content, in particular when teachers do not apply project-oriented, step-wise learning designs.
- See plagiarism
Links
Introductions
- e-Assessment, Effective Use of Virtual Learning Environments, JISC Infonet (last consulted 12:19, 8 October 2012 (CEST)). This is a good introduction based on the results of the ELICIT project.
- E-Assessment. TEL dictionary. As of October 2012 it includes 388 references.
Resource sites and pages
- Documenting learning bibliography. An in-progress bibliography from the MacArthur Documenting Learning Project.
- Assessment Reform Group (ARG). A UK project that ended in 2010 but produced a series of interesting results, including the influential book Gardner, John (Ed.) (2011), Assessment and Learning (2nd ed.), Sage.
- See publications (including free texts for download)
Web sites with rubrics
- rcampus.com has an online tool called iRubric and an associated gallery with over 100'000 evaluation rubrics
- Rubrics 101 (same as above)
- Rubistar
Journals
- Practical Assessment, Research & Evaluation, A peer-reviewed electronic journal. ISSN 1531-7714.
- The Journal of Technology, Learning and Assessment (JTLA) is a peer-reviewed, scholarly on-line journal.
Criticisms of standardized tests
- When an adult took standardized tests forced on kids, by Valerie Strauss, Washington Post, December 5, 2011.
References
- Arum, Richard and Josipa Roksa (2011). Academically Adrift, University of Chicago Press. (Quote: [...] in the first two years of college, "with a large sample of more than 2,300 students, we observe no statistically significant gains in critical thinking, complex reasoning and writing skills for at least 45 percent of the students in our study.")
- Baker, E. L., & O'Neil, H. F., Jr. (1995). Computer technology futures for the improvement of assessment. Journal of Science Education and Technology, 4(1), 37-45.
- Biggs, J. (1999, May). Assessment: An integral part of the teaching system. AAHE Bulletin, 59, 10-12.
- Nicol, D., & Milligan, C. (2006). Rethinking technology supported assessment practices in relation to the seven principles of good feedback practice. In C. Bryan & K. Clegg (Eds.), Innovative Assessment in Higher Education (pp. 64-77). London: Routledge. ISBN 0415356423
- Nicol, D. (2007). E‐assessment by design: using multiple‐choice tests to good effect. Journal of Further and Higher Education, 31(1), 53–64. https://doi.org/10.1080/03098770601167922
- Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
- Pellegrino, J., Chudowsky, N., & Glaser, R. (Eds.) (2001). Knowing What Students Know: The Science and Design of Educational Assessment. National Research Council. Washington, DC: National Academy Press. ISBN 0309072727. PDF version (commercial).
- Pellegrino, James W. "Assessment as a Trojan Horse for Educational and Training Programs", talk presented at 1st Interdisciplinary Congress on Research in Vocational Education and Training, March 25, 2009.
- Quellmalz, E. S., & Pellegrino, J. W. (2009). Technology and testing. Science, 323(5910), 75-79.
- Ross, Magnus and Mary Welsh (2007). Formative Feedback to Improve Learning on a Teacher Education Degree using a Personal Learning Environment. International Journal of Emerging Technologies in Learning (iJET), 2(3). Abstract/PDF
- Scriven, Michael (1999). The nature of evaluation Part I: relation to psychology. Practical Assessment, Research & Evaluation, 6(11). Retrieved March 7, 2006 from http://PAREonline.net/getvn.asp?v=6&n=11