Educational software evaluation

{{Stub}}

See also [[instructional design method]].
 
There are several ways to evaluate educational software:
 
== Feature evaluation ==
 
For example, Dalgarno (2004) proposes three broad categories (a sketch of how they might be encoded as an evaluation checklist follows the lists below):
* Categories of cognitive task
* Categories of input technique
* Categories of system response
 
;Cognitive task
# Attending to static information
# Controlling media
# Navigating the system
# Answering questions
# Attending to question feedback
# Exploring a world
# Measuring in a world
# Manipulating a world
# Constructing in a world
# Attending to world changes
# Articulating
# Processing data
# Attending to processed data
# Formatting output
 
;Input technique
# Typing
# Valuators
# Key pressing
# Pull down menus
# Menu lists
# Buttons
# Icons
# Hot spots
# Hypertext
# Scroll bars
# Media controls
# Selecting
# Dragging
# Drawing
 
;System response
# Displaying
# Presenting media
# Presenting cues
# Branching
# Assessing answers
# Generating feedback
# Updating world
# Generating world
# Processing data
# Searching
# Saving and loading
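
These lists lend themselves to a machine-readable checklist. As a minimal sketch (the data structures, names, and coverage measure below are illustrative assumptions, not part of Dalgarno's paper), the three category lists can be encoded as data and a product's observed interactions tallied against them:

<source lang="python">
from dataclasses import dataclass, field

# Dalgarno's (2004) three categories and their items, copied from the lists above.
SCHEME: dict[str, set[str]] = {
    "cognitive task": {
        "attending to static information", "controlling media",
        "navigating the system", "answering questions",
        "attending to question feedback", "exploring a world",
        "measuring in a world", "manipulating a world",
        "constructing in a world", "attending to world changes",
        "articulating", "processing data",
        "attending to processed data", "formatting output",
    },
    "input technique": {
        "typing", "valuators", "key pressing", "pull down menus",
        "menu lists", "buttons", "icons", "hot spots", "hypertext",
        "scroll bars", "media controls", "selecting", "dragging", "drawing",
    },
    "system response": {
        "displaying", "presenting media", "presenting cues", "branching",
        "assessing answers", "generating feedback", "updating world",
        "generating world", "processing data", "searching",
        "saving and loading",
    },
}

@dataclass
class FeatureChecklist:
    """Items of each category that a product was observed to support."""
    product: str
    observed: dict[str, set[str]] = field(default_factory=dict)

    def coverage(self, category: str) -> float:
        """Fraction of a category's items that the product covers."""
        items = SCHEME[category]
        return len(self.observed.get(category, set()) & items) / len(items)

# Usage: tally a hypothetical simulation product.
sim = FeatureChecklist("SimWorld", {
    "cognitive task": {"exploring a world", "manipulating a world"},
    "system response": {"updating world", "generating world"},
})
for category in SCHEME:
    print(f"{sim.product} / {category}: {sim.coverage(category):.0%}")
</source>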
 
== Conceptual evaluation ==
 
Geissinger (1997) starts with the question "''Can this product actually teach what it is supposed to?''" and uses Barker & King's (1993:309) four categories:
 
{| class="wikitable"
! Category !! Discussion
|-
| Quality of end-user interface design || Investigation shows that the designers of the most highly-rated products follow well-established rules & guidelines. This aspect of design affects users' perception of the product, what they can do with it and how completely it engages them.
|-
| Engagement || Appropriate use of audio & moving video segments can contribute greatly to users' motivation to work with the medium.
|-
| Interactivity || Users' involvement in participatory tasks helped make the product meaningful and provoke thought.
|-
| Tailorability || Products which allow users to configure them and change them to meet particular individual needs contribute well to the quality of the educational experience.
|}
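
Barker & King treat these as qualitative categories rather than a numeric scale, but for a quick comparative review an evaluator might rate each category and aggregate. Below is a minimal sketch under that assumption (the 1-5 scale and equal weighting are choices of this sketch, not part of Barker & King's methodology):

<source lang="python">
# Illustrative rubric over Barker & King's (1993) four categories.
# The 1-5 scale and equal weighting are assumptions of this sketch.
CATEGORIES = (
    "quality of end-user interface design",
    "engagement",
    "interactivity",
    "tailorability",
)

def rubric_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 rating given to each of the four categories."""
    for category in CATEGORIES:
        if not 1 <= ratings.get(category, 0) <= 5:
            raise ValueError(f"need a 1-5 rating for {category!r}")
    return sum(ratings[category] for category in CATEGORIES) / len(CATEGORIES)

# Usage: a hypothetical review of one product.
print(rubric_score({
    "quality of end-user interface design": 4,
    "engagement": 3,
    "interactivity": 5,
    "tailorability": 2,
}))  # -> 3.5
</source>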
 
 
Belfer, Nesbit and Leacock (2002) proposed the [[Learning Object Review Instrument]] (LORI).
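
In LORI, reviewers score a learning object on a set of quality dimensions using a five-point scale, and dimensions that do not apply are left out of the aggregate. A minimal sketch of that kind of aggregation (the dimension names below are only illustrative; the actual item set is defined by the instrument):

<source lang="python">
from statistics import mean
from typing import Optional

def lori_average(ratings: dict[str, Optional[int]]) -> float:
    """Mean of the applicable 1-5 ratings; None marks a dimension
    as not applicable and is excluded from the average."""
    applicable = [r for r in ratings.values() if r is not None]
    if not applicable:
        raise ValueError("no applicable ratings")
    return mean(applicable)

# Usage with illustrative dimension names.
print(lori_average({
    "content quality": 4,
    "learning goal alignment": 5,
    "feedback and adaptation": None,  # not applicable to this object
    "presentation design": 3,
}))  # prints the mean of the three applicable ratings (4)
</source>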


== References ==

* Albion, P. R. (1999). Heuristic evaluation of educational multimedia: from theory to practice. [http://www.usq.edu.au/users/albion/papers/ascilite99.html HTML]

* Barker (1995). Evaluating a model of learning design. In H. Maurer (Ed.) Proceedings, World Conference in Educational Multimedia & Hypermedia. Graz, Austria: Association for the Advancement of Computing in Education.
 
* Barker, P. & King, T. (1993). Evaluating interactive multimedia courseware: a methodology. Computers & Education, 21(4), 307-319.
 
* Baumgartner, P. & Payr, S. (1996). Learning as action: A social science approach to the evaluation of interactive media. In Carlson, P. & Makedon, F. (Eds.) Proceedings, World Conference in Educational Multimedia & Hypermedia. Boston: Association for the Advancement of Computing in Education.
 
* Belfer, K., Nesbit, J., & Leacock, T. (2002). Learning object review instrument (LORI), version 1.4.


* Dalgarno, B. (2004). A classification scheme for learner-computer interaction. In R. Atkinson, C. McBeath, D. Jones-Dwyer & R. Phillips (Eds.), Beyond the comfort zone: Proceedings of the 21st annual conference of the Australasian Society for Computers in Learning in Tertiary Education, Perth, Australia. Available: [http://www.ascilite.org.au/conferences/perth04/procs/pdf/dalgarno.pdf PDF]. (This paper describes learning environments, but it is useful for deciding on the criteria by which to select a tool.)
* Geissinger, H. (1997). Educational software: Criteria for evaluation. ASCILITE '97. [http://www.ascilite.org.au/conferences/perth97/papers/Geissinger/Geissinger.html HTML]
* Reiser, R.A. & Kegelmann, H.W. (1994). Evaluating instructional software: A review and critique of current methods. Educational Technology Research & Development, 42(3), 63-69.


[[Category:Design_methodologies]]
[[Category:Evaluation methods and grids]]
