Educational software evaluation
{{Stub}}

See also [[instructional design method]]
There are several ways to evaluate educational software:

== Feature evaluation ==

For example, Dalgarno (2004) proposes three broad categories of features to examine (a checklist sketch based on these lists is given below):
* Categories of cognitive task
* Categories of input technique
* Categories of system response

;Cognitive task
# Attending to static information
# Controlling media
# Navigating the system
# Answering questions
# Attending to question feedback
# Exploring a world
# Measuring in a world
# Manipulating a world
# Constructing in a world
# Attending to world changes
# Articulating
# Processing data
# Attending to processed data
# Formatting output

;Input technique
# Typing
# Valuators
# Key pressing
# Pull down menus
# Menu lists
# Buttons
# Icons
# Hot spots
# Hypertext
# Scroll bars
# Media controls
# Selecting
# Dragging
# Drawing

;System response
# Displaying
# Presenting media
# Presenting cues
# Branching
# Assessing answers
# Generating feedback
# Updating world
# Generating world
# Processing data
# Searching
# Saving and loading
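A feature evaluation along these lines can be recorded as a simple checklist. The sketch below (a minimal illustration in Python) encodes Dalgarno's three lists and reports which features a given product offers; the example <code>quiz_tool</code> profile and the coverage percentages are assumptions made for the illustration, not part of Dalgarno's scheme.

<source lang="python">
# Minimal feature-evaluation checklist built from Dalgarno's (2004) category
# lists. The example product profile and the coverage report are illustrative
# assumptions, not part of Dalgarno's classification scheme.

DALGARNO_FEATURES = {
    "Cognitive task": [
        "Attending to static information", "Controlling media",
        "Navigating the system", "Answering questions",
        "Attending to question feedback", "Exploring a world",
        "Measuring in a world", "Manipulating a world",
        "Constructing in a world", "Attending to world changes",
        "Articulating", "Processing data",
        "Attending to processed data", "Formatting output",
    ],
    "Input technique": [
        "Typing", "Valuators", "Key pressing", "Pull down menus",
        "Menu lists", "Buttons", "Icons", "Hot spots", "Hypertext",
        "Scroll bars", "Media controls", "Selecting", "Dragging", "Drawing",
    ],
    "System response": [
        "Displaying", "Presenting media", "Presenting cues", "Branching",
        "Assessing answers", "Generating feedback", "Updating world",
        "Generating world", "Processing data", "Searching",
        "Saving and loading",
    ],
}


def coverage_report(product_features):
    """For each category, list the features the product offers and a coverage ratio."""
    report = {}
    for category, features in DALGARNO_FEATURES.items():
        present = [f for f in features if f in product_features]
        report[category] = (present, len(present) / len(features))
    return report


# Hypothetical profile of a simple quiz tool, as recorded by a reviewer.
quiz_tool = {
    "Typing", "Buttons", "Answering questions", "Attending to question feedback",
    "Branching", "Assessing answers", "Generating feedback",
}

for category, (present, ratio) in coverage_report(quiz_tool).items():
    print(f"{category}: {len(present)}/{len(DALGARNO_FEATURES[category])} "
          f"({ratio:.0%}) - {', '.join(present)}")
</source>

A checklist of this kind only records which interaction features are present; it says nothing about how well they are used, which is what the conceptual evaluation below addresses.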
== Conceptual evaluation ==

Geissinger (1997) starts with the question "''Can this product actually teach what it is supposed to?''" and uses Barker & King's (1993:309) four categories:

<table border="1">
<tr><th>Category</th><th>Discussion</th></tr>
<tr><td>Quality of end-user interface design</td><td>Investigation shows that the designers of the most highly-rated products follow well-established rules & guidelines. This aspect of design affects users' perception of the product, what they can do with it and how completely it engages them.</td></tr>
<tr><td>Engagement</td><td>Appropriate use of audio & moving video segments can contribute greatly to users' motivation to work with the medium.</td></tr>
<tr><td>Interactivity</td><td>Users' involvement in participatory tasks helped make the product meaningful and provoke thought.</td></tr>
<tr><td>Tailorability</td><td>Products which allow users to configure them and change them to meet particular individual needs contribute well to the quality of the educational experience.</td></tr>
</table>
Belfer, Nesbit, and Leacock (2002) proposed a [[Learning Object Review Instrument]] (LORI).
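Instruments like Barker & King's category list or LORI are applied by rating a product on each criterion and then aggregating the ratings. The sketch below shows one way such ratings could be combined; the 1-5 scale, the equal weights and the example ratings are assumptions made for the illustration and are not prescribed by Barker & King or by LORI.

<source lang="python">
# Sketch of aggregating reviewer ratings over Barker & King's (1993) four
# categories. The 1-5 scale, the equal weights and the example ratings are
# illustrative assumptions, not values prescribed by Barker & King or LORI.

CRITERIA_WEIGHTS = {
    "Quality of end-user interface design": 1.0,
    "Engagement": 1.0,
    "Interactivity": 1.0,
    "Tailorability": 1.0,
}


def overall_score(ratings, weights=CRITERIA_WEIGHTS):
    """Weighted mean of 1-5 ratings, ignoring criteria marked None (not applicable)."""
    rated = {c: r for c, r in ratings.items() if r is not None}
    if not rated:
        raise ValueError("at least one criterion must be rated")
    total_weight = sum(weights[c] for c in rated)
    return sum(weights[c] * r for c, r in rated.items()) / total_weight


# Hypothetical ratings for one reviewed product (None = not applicable).
ratings = {
    "Quality of end-user interface design": 4,
    "Engagement": 3,
    "Interactivity": 5,
    "Tailorability": None,
}

print(f"Overall score: {overall_score(ratings):.2f} / 5")
</source>

Ignoring criteria marked as not applicable keeps the aggregate comparable between products that were not rated on every criterion.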
== References ==

* Albion, P. R. (1999). Heuristic evaluation of educational multimedia: from theory to practice. [http://www.usq.edu.au/users/albion/papers/ascilite99.html HTML]
* Barker (1995). Evaluating a model of learning design. In H. Maurer (Ed.), Proceedings of the World Conference on Educational Multimedia & Hypermedia. Graz, Austria: Association for the Advancement of Computing in Education.
* Barker, P. & King, T. (1993). Evaluating interactive multimedia courseware – a methodology. Computers in Education, 21(4), 307-319.
* Baumgartner, P. & Payr, S. (1996). Learning as action: A social science approach to the evaluation of interactive media. In Carlson, P. & Makedon, F. (Eds.), Proceedings of the World Conference on Educational Multimedia & Hypermedia. Boston: Association for the Advancement of Computing in Education.
* Belfer, K., Nesbit, J., & Leacock, T. (2002). Learning object review instrument (LORI), version 1.4.
* Dalgarno, B. (2004). A classification scheme for learner-computer interaction. In R. Atkinson, C. McBeath, D. Jones-Dwyer & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st annual conference of the Australasian Society for Computers in Learning in Tertiary Education, Perth, Australia. Available: [http://www.ascilite.org.au/conferences/perth04/procs/pdf/dalgarno.pdf PDF]. (This paper describes environments, but is useful for deciding on the criteria by which you will select a tool.)
* Geissinger, H. (1997). Educational software: Criteria for evaluation. ASCILITE '97. [http://www.ascilite.org.au/conferences/perth97/papers/Geissinger/Geissinger.html HTML]
* Reiser, R. A. & Kegelmann, H. W. (1994). Evaluating instructional software: A review and critique of current methods. Educational Technology Research & Development, 42(3), 63-69.
[[Category:Design_methodologies]]
[[Category:Evaluation methods and grids]]