Repertory grid technique
Definitions
The repertory grid technique (RGT) is a method for eliciting personal constructs. It is based on George Kelly's (1955) Personal Construct Theory and was initially developed within that context. As a methodology, it can be used in a variety of fundamental and applied research projects on human constructs.
Repertory grid analysis is also popular outside academia, e.g. in counseling and marketing. Today, various variants of the global concept seem to exist, some more complex than others. According to Slater (1976, cited by Dillon, 1994:76), its use as an analytic tool does not require acceptance of the model of man which Kelly proposed. Also, within "mainstream" RGT, several kinds of elicitation methods exist to extract constructs and to analyse them. A common way to describe the technique is as identifying a set of "elements" (a set of "observations" from a universe of discourse) which are rated according to certain criteria termed "constructs". “The elements and/or the constructs may be elicited from the subject or provided by the experimenter depending on the purpose of the investigation. Regardless of the method, the basic output is a grid in the form of n rows and m columns, which record a subject's ratings, usually on a 5- or 7-point scale, of m elements in terms of n constructs” (Dillon, 1994:76).
- Some other definitions of RGT (emphasized text by DKS).
“The RGT (Kelly, 1955) originally stems from the psychological study of personality (see Banister et al., 1994; Fransella & Bannister, 1977, for an overview). Kelly assumed that the meaning we attach to events or objects defines our subjective reality, and thereby the way we interact with our environment. The idiosyncratic views of individuals, that is, the different ways of seeing, and the differences to other individuals define unique personalities. It is stated that our view of the objects (persons, events) we interact with is made up of a collection of similarity–difference dimensions, referred to as personal constructs. For example, if we perceive two cars as being different, we may come up with the personal construct fancy–conservative to differentiate them. On one hand, this personal construct tells something about the person who uses it, namely his or her perceptions and concerns. On the other hand, it also reveals information about the cars, that is, their attributes.” (Hassenzahl & Wessler, 2000:444)
“[..]The “Repertory Grid” [...] is an amazingly ingenious and simple ideographic device to explore how people experience their world. It is a table in which, apart from the outer two columns, the other columns are headed by the names of objects or people (traditionally up to 21 of them). These names are also written on cards, which the tester shows to the subject in groups of three, always asking the same question: “How are two of these similar and the third one different?” [...] The answer constitutes a “construct”, one of the dimensions along which the subject divides up her or his world. There are conventions for keeping track of the constructs. When the grid is complete, there are several ways of rating or ranking all of the elements against all the constructs, so as to permit sophisticated analysis of core constructs and underlying factors (see Bannister and Mair, 1968) and of course there are programs which will do this for you.” (Personal Construct Psychology, retrieved 14:09, 26 January 2009 (UTC).)
“The Repertory Grid is an instrument designed to capture the dimensions and structure of personal meaning. Its aim is to describe the ways in which people give meaning to their experience in their own terms. It is not so much a test in the conventional sense of the word as a structured interview designed to make those constructs with which persons organise their world more explicit. The way in which we get to know and interpret our milieu, our understanding of ourselves and others, is guided by an implicit theory which is the result of conclusions drawn from our experiences. The repertory grid, in its many forms, is a method used to explore the structure and content of these implicit theories/personal meanings through which we perceive and act in our day-to-day existence.” (A manual for the repertory grid, retrieved 12:18, 26 January 2009 (UTC)).
“The term repertory derives, of course, from repertoire - the repertoire of constructs which the person had developed. Because constructs represent some form of judgment or evaluation, by definition they are scalar: that is, the concept good can only exist in contrast to the concept bad, the concept gentle can only exist as a contrast to the concept harsh. Any evaluation we make - when we describe a car as sporty, or a politician as right-wing, or a sore toe as painful - could reasonably be answered with the question 'Compared with what?' The process of taking three elements and asking for two of them to be paired in contrast with the third is the most efficient way in which the two poles of the construct can be elicited.”. (Enquire Within, Kelly's Theory Summarised), retrieved 12:18, 26 January 2009 (UTC).
“The repertory grid technique is used in many fields for eliciting and analysing knowledge and for self-help and counselling purposes.” (Repertory Grid Technique, retrieved 12:18, 26 January 2009 (UTC).)
Overview
Most repertory grid analyses use the following principle:
- The designer has to select a series of elements that are representative of a topic. E.g. to analyze perception of teaching styles, the elements would be teachers. To analyze learning materials, the elements could be learning objects. To analyze perception of laptop functionalities, the elements are various laptop models.
- The next step is knowledge elicitation of personal constructs. To understand how an individual perceives (understands/compares) these elements, scalar constructs about these elements have to be elicited. E.g. using the so-called triadic method, interviewed people have to compare learning object A with B and C and then state in what respects they differ. E.g. Pick the two teachers that are most similar and tell me why. Then tell me how the third one is different. The output will be contrasted attributes (e.g. motivating vs. boring or organized vs. a mess). This procedure should be repeated until no more new constructs (words) come up.
- These constructs are then reused to rate all the elements in a matrix (rating grid), usually on a simple five or seven point scale. A construct always has two poles, i.e. attribute pairs with two opposites.
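To make the resulting rating grid concrete, here is a minimal sketch in Python (assuming the pandas library is available) that stores a grid as a constructs-by-elements matrix. The ratings reuse the hypothetical teacher example shown further down this page; the code is only an illustration, not part of any cited tool.

```python
# Minimal sketch (not from any cited tool): a repertory grid stored as a
# constructs-by-elements matrix. Ratings reuse the hypothetical teacher example
# shown further down this page (1 = emergent pole, 5 = contrast pole).
import pandas as pd

elements = ["Prof. Apple", "Prof. Bean", "Prof. Carmel"]
constructs = [
    ("approachable", "intimidating"),
    ("laid-back", "task-master"),
    ("challenging", "unengaging"),
]
ratings = [
    [1, 1, 5],  # approachable ... intimidating
    [3, 3, 1],  # laid-back ... task-master
    [4, 2, 3],  # challenging ... unengaging
]

grid = pd.DataFrame(
    ratings,
    index=[f"{left} - {right}" for left, right in constructs],
    columns=elements,
)
print(grid)
```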
According to Feixas and Alvarez, the repertory grid is applied in four basic steps: (1) The design phase is where the parameters that define the area of application are set out. (2) In the administration phase, the type of structured interview for grid elicitation and the resulting numerical matrix is defined. (3) The repertory grid data can then be subjected to a variety of mathematical analyses. (4) The structural characteristics of the construct system can then be described.
“The elements selected for the grid depend on which aspects of the interviewee's construing are to be evaluated. Elements can be elicited by either asking for role relations (e.g., your mother, employer, best friend) or by focusing on a particular area of interest. A market research study might, for example, use products representative of that market as elements (e.g., cleaning products, models of cars, etc.).” (Design Phase, http://www.terapiacognitiva.net/record/pag/man2.htm)
“The type of rating method used (dichotomous, ordinal or interval) determines the type of mathematical analysis to be carried out as well as the length and duration of the test administration. As before, the criteria for selection depend on the researcher's objectives and on the capacities of the person to be assessed.” (Design Phase, http://www.terapiacognitiva.net/record/pag/man2.htm)
According to Nick Milton (Repertory Grid Technique), the technique includes four main stages.
- In stage 1, elements to analyze (e.g. concepts or observable items such as pedagogical designs or roles) are selected for the grid. A similar number of attributes that allow each element to be characterized are also defined. These attributes should either be generated with an elicitation method or can be taken from previously elicited knowledge.
- In stage 2, each element must be rated against each attribute.
- In stage 3, a cluster analysis is performed on both the elements and the attributes. This will show similarities between elements or attributes.
- In stage 4, the knowledge engineer walks the expert through the focus grid gaining feedback and prompting for knowledge concerning the groupings and correlations shown.
Elicitation methods
Elicitation methods can vary. The basic procedures are monadic, dyadic, triadic, full context form, or none (i.e. the researcher provides the constructs).
- In the monadic procedure, participants must describe an element with a single word or a short phrase. They are then asked for the opposite of this term.
- In the dyadic procedure, the participant is asked to look at pairs of elements and say whether they are similar or dissimilar in some way. If they are judged dissimilar, the participant has to explain how, again with a single word or a short phrase, and again also give the opposite of this term. If they are judged similar, the participant is asked to select a third, dissimilar element and then again explain similarities and dissimilarities with simple phrases.
- The triadic procedure has been defined above, i.e. participants are given three elements, must identify the two that are similar and the one that is different, and then explain. The elements in each triad are usually randomly selected and then replaced for the next iteration.
The knowledge elicitation procedure can be stopped when the participant stops coming up with new constructs.
- None: In some studies (in particular in applied areas such as marketing studies), the researcher may provide the constructs.
- In the full context form technique (Tan & Hunter, 2002), “the research participant is required to sort the whole pool of elements into any number of discrete piles based on whatever similarity criteria chosen by the research participant. After the sorting, the research participant will be asked to provide a descriptive title for each pile of elements. This approach is primarily used to elicit the similarity judgments.” (Siau, 2007: 5).
- Group construct elicitation is, according to Siau et al. (2007), similar to the triadic sort method: both element identification and construct elicitation with triadic sorting are done together through group discussion.
Phrases that emerge for similarities are called the similarity pole (also called the emergent pole). The opposing pole is called the contrast pole or implicit pole. The numerical scale should then be used consistently, e.g. emergent poles must always receive either the high or the low score. Certain software requires such a fixed direction.
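As an illustration of how triads might be generated for the triadic procedure, the following Python sketch (hypothetical element names) draws random triads while avoiding the repetition of element pairs across triads, a constraint used for instance in the McKnight (2000) study summarized below. It is just one possible approach, not a prescribed algorithm.

```python
# Minimal sketch (hypothetical element names) of drawing triads for triadic elicitation.
# It avoids re-using any pair of elements across triads; a simple greedy pass like this
# one may therefore return fewer triads than requested.
import random
from itertools import combinations

elements = ["Prof. Apple", "Prof. Bean", "Prof. Carmel", "Prof. Dim", "Prof. Enuf", "Prof. Fly"]

def draw_triads(elements, n_triads, seed=0):
    """Return up to n_triads random triads in which no pair of elements is repeated."""
    rng = random.Random(seed)
    candidates = list(combinations(elements, 3))
    rng.shuffle(candidates)
    used_pairs, triads = set(), []
    for triad in candidates:
        pairs = set(combinations(triad, 2))
        if pairs & used_pairs:
            continue  # this triad would repeat an already-presented pair
        used_pairs |= pairs
        triads.append(triad)
        if len(triads) == n_triads:
            break
    return triads

for triad in draw_triads(elements, n_triads=4):
    print(triad)
```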
Ranking/rating of elements in a matrix can also be done with various procedures. Examples:
- Rating: Participants must judge each element on a Likert-type scale, usually with five or seven points. E.g. Please rate yourself on the following scale or Please rate the comfort of this car model.
- Ranking: Participants can be asked to rank each element with respect to a given construct. E.g. rank 10 learning management systems in terms of "easy to use - difficult to use". A system like Dokeos would rank higher than a system like WebCT.
- Binary ranking: Yes/no with respect to the emergent (positive) pole
Feixas and Alvarez outline three methods for eliciting constructs as follows:
A) Elicitation of constructs using triads of elements. This is the original method used by Kelly. It involves the presentation of three elements followed by the question, "How are two of these elements similar, and thereby different from a third element?" and then "How is the third element different from the other two?" [...] B) Elicitation of constructs using dyads of elements. Epting, Schuman and Nickeson (1971) argue that more explicit contrast poles can be obtained using only two elements at a time. This procedure usually involves an initial question such as, "Do you see these people as more similar or different?" This prompt can then be followed by questions of similarity such as, "How are these two elements alike?" or "What characteristics do these two elements share?" Questions referring to differences such as "How are these two elements different?" are also appropriate. [...]
C) Elicitation of constructs using single elements. Also known as monadic elicitation, this way of obtaining constructs is the most similar to an informal conversation. It consists in asking subjects to describe in their own words the "personality" or way of being of each of the elements presented. The interviewer's task is limited to writing down the constructs as they appear and then asking for the opposite poles.

Construction of repertory grid tables
An example
The following example was taken from Sarah J. Stein, Campbell J. McRobbie and Ian Ginns' (2000) research on Preservice Primary Teachers' Thinking about Technology and Technology Education. We will only show parts of the tables (in order to avoid copyright problems).
“Following a process developed by Shapiro (1996), a Repertory Grid reflecting the views of the interviewed group about the technology design process was developed. The interview and survey responses were coded and categorised into a set of dipolar constructs (ten) consisting of terms and phrases commonly used by students about technology and the conduct of technology investigations (Table 1), and a set of elements (nine) of the technology process consisting of typical situations or experiences in the conduct of an investigation (Table 2). The Repertory Grid developed consisted of a seven point rating scale situated between pole positions on the individual constructs, one set for each element. A sample Repertory grid chart is shown in Table 3.”
- Table 1
- Repertory Grid - Constructs
Label | Descriptor - One pole | Descriptor - Opposite pole |
a. | Creating my own ideas | Just following directions |
b. | Challenging, problematic, troublesome | Easy, simple |
c. | Have some idea beforehand about the result | Have no idea what will result |
d. | ... | ... |
- Table 2
- Repertory Grid - Elements
Label | Descriptor |
1. | Selection of a problem for investigation by the participant |
2. | Identifying and exploring factors which may affect the outcome of the project |
3. | Decisions about materials and equipment may be needed |
4. | Drawing of plans may be involved |
5. | Building models and testing them may be required |
- Table 3
- Sample Repertory Grid Chart
The following statement is a brief description of a typical experience you, as a participant, might have while conducting a design and technology project. |
ELEMENT #1: Selection of a problem for investigation by the participant. |
Rate this experience on the scale of 1 to 7 below for the following constructs, or terms and phrases, you may use when describing the steps in conducting a design and technology project. CIRCLE YOUR RESPONSE. |
a. | Creating my own ideas | 1 2 3 4 5 6 7 | a. | Just following directions |
b. | Challenging, problematic, troublesome | 1 2 3 4 5 6 7 | b. | Easy, simple |
c. | Have some idea beforehand about the result | 1 2 3 4 5 6 7 | c. | Have no idea what will result |
d. | Using the imagination or spontaneous ideas | 1 2 3 4 5 6 7 | d. | Recipe-like prescriptive work |
All-in-one grids
Instead of presenting a new grid table for each element, one could also present participants with a single grid that includes all the elements. In this case, users have to insert numbers in the cells. This is difficult on paper, but a bit easier with a computer interface, we believe.
However, if there are few elements, a paper version can be produced easily. E.g., to analyse the perception of different teachers, Steinkuehler and Derry's Repertory Grid tutorial provides the following example about teacher rating.
Similarity or Emergent Pole (rating 1) | Elements | | | | | | Contrast Pole (rating 5)
 | Prof. Apple | Prof. Bean | Prof. Carmel | Prof. Dim | Prof. Enuf | Prof. Fly |
approachable | 1 | 1 | 5 | 4 | 5 | 1 | intimidating
laid-back | 3 | 3 | 1 | 1 | 1 | 1 | task-master
challenging | 4 | 2 | 3 | 1 | 2 | 5 | unengaging
spontaneous lecturer | | | | | | | scripted lecturer
etc. | | | | | | | etc.
Two poles – the similarity or emergent pole and the contrast pole – are listed in columns at either end. Elements (in the middle columns) are rated in terms of the extent to which they belong to either of the poles of a construct. The ratings are placed in a row of the cells between the corresponding poles. The red dots indicate the elements used in each triad.
Analysis techniques
Individual grids can be analyzed using various statistical data reduction techniques on rows, columns, or both, e.g. cluster analysis or principal component analysis on rows or columns, and methods like correspondence analysis for both at once.
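As a rough illustration of such an analysis, the sketch below (Python, assuming numpy and scipy are installed, with the hypothetical teacher ratings used earlier) clusters the constructs and the elements of a single grid hierarchically. It only mimics the general idea of a two-way cluster analysis; it is not the algorithm of FOCUS or of any other grid package.

```python
# Minimal sketch (hypothetical ratings, not the algorithm of any specific grid package):
# two-way hierarchical clustering of a single grid, i.e. constructs (rows) and
# elements (columns) are clustered separately. Assumes numpy and scipy are installed.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# ratings[i, j] = rating of element j on construct i (1-5 scale)
ratings = np.array([
    [1, 1, 5, 4, 5, 1],   # approachable ... intimidating
    [3, 3, 1, 1, 1, 1],   # laid-back ... task-master
    [4, 2, 3, 1, 2, 5],   # challenging ... unengaging
])
constructs = ["approachable-intimidating", "laid-back-task-master", "challenging-unengaging"]
elements = ["Apple", "Bean", "Carmel", "Dim", "Enuf", "Fly"]

row_links = linkage(pdist(ratings, metric="cityblock"), method="average")     # cluster constructs
col_links = linkage(pdist(ratings.T, metric="cityblock"), method="average")   # cluster elements

print(dendrogram(row_links, labels=constructs, no_plot=True)["ivl"])  # construct leaf order
print(dendrogram(col_links, labels=elements, no_plot=True)["ivl"])    # element leaf order
```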
A simple descriptive technique for looking at multiple grids that use the same constructs (e.g. as in some marketing research or knowledge engineering) is simply to chart the values for each participant as a graph between the poles (opposite attributes). With grids that differ between individuals, however, analysis gets more complicated.
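A minimal sketch of such a profile chart, assuming matplotlib is available and using invented ratings for three participants on a single element:

```python
# Minimal sketch (invented data) of the profile chart described above: several
# participants rated the same element on shared constructs, and each participant's
# ratings are drawn as a line between the two poles. Assumes matplotlib is installed.
import matplotlib.pyplot as plt

constructs = [("approachable", "intimidating"),
              ("laid-back", "task-master"),
              ("challenging", "unengaging")]
participants = {"P1": [1, 3, 4], "P2": [2, 2, 5], "P3": [1, 4, 3]}  # ratings per construct, 1-5

positions = range(len(constructs))
for name, scores in participants.items():
    plt.plot(scores, positions, marker="o", label=name)

plt.yticks(positions, [f"{left}  ...  {right}" for left, right in constructs])
plt.xlim(1, 5)
plt.xlabel("rating (1 = emergent pole, 5 = contrast pole)")
plt.legend()
plt.tight_layout()
plt.show()
```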
Software
Specialized software can do one or more of three things:
- Help to design repertory grids
- Help to administer repertory grids.
- Perform a series of analyses.
An alternative method is to do the first part "by hand", the second with a web-based survey manager tool and the last with a normal statistics package. Many statistics programs can do cluster analysis and component analysis; correspondence analysis is less commonly available. None of the specialized software below has been tested in depth (26 January 2009 - DKS).
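For instance, a simple component analysis of a grid can be run with a general-purpose numerics library; the sketch below (Python with numpy, reusing the hypothetical ratings from the clustering example above) is only meant to show that specialized grid software is not strictly required.

```python
# Minimal sketch (reusing the hypothetical ratings above) showing that component
# analysis of a grid can be done with a general-purpose numerics library: each
# construct is centred and the principal components of the element space are
# extracted via a singular value decomposition.
import numpy as np

ratings = np.array([
    [1, 1, 5, 4, 5, 1],
    [3, 3, 1, 1, 1, 1],
    [4, 2, 3, 1, 2, 5],
], dtype=float)
elements = ["Apple", "Bean", "Carmel", "Dim", "Enuf", "Fly"]

centred = ratings - ratings.mean(axis=1, keepdims=True)   # centre each construct (row)
u, s, vt = np.linalg.svd(centred, full_matrices=False)

explained = s**2 / (s**2).sum()
print("variance explained per component:", np.round(explained, 2))

scores = vt[:2].T * s[:2]   # element coordinates on the first two components
for element, (c1, c2) in zip(elements, scores):
    print(f"{element}: {c1:.2f}, {c2:.2f}")
```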
List of Software
- Commercial
- Gridcore: a correspondence analysis tool for grid data. Between 50 and 150 euros.
- GridLab (no link)
- Enquire Within (free evaluation copy)
- RepGrid (a free 15 elements / 15 constructs version is available)
- Point of Mind - .map software (German)
- Free
- The Idiogrid (Win) program by James W. Grice. Idiographic Analysis with Repertory Grids. Free since 2008, but users who get funding are expected to pay US$ 105.
- WinGrid (became a tool for artists).
- Omnigrid (older Mac/PC code)
Free online services
- WebGrid III and IV (free online software, runs on ports 1500 and 2000). Homepage (Mildred Shaw & Brian Gaines)
- sci:vesco.web (limited version is free and fully operational).
Repertory grid analysis in specific fields
Below we summarize a few projects that used repertory grids. These little summaries are not meant to be representative and were not done in the same way. We simply wrote down a few points that may be of interest to researchers in educational technology.
Analysis of job behaviors
Repertory grid techniques have been used in a variety of domains to gather a picture of a set of people's profiles in an organization, or to come up with a more general set of typical profiles a job description could have (e.g. managers, as discussed below in the study about training needs analysis). Above, we briefly presented Steinkuehler and Derry's method for teacher assessment.
Hunter (1997) conducted an analysis of information system analysts, i.e. “to identify the qualities of what constitutes an interpretation of an `excellent' systems analyst”. A set of participants working with systems analysts was asked to identify up to six systems analysts; a triadic elicitation method was then used to extract features (positive and negative construct poles) about ideal or incompetent analysts.
Laddering (Stewart & Stewart, 1981), i.e. a series of "why" and "how" questions to the participant, provided more detail, clarifying what he/she meant by the more general construct. E.g. an example (Hunter, 1997: 75) of a participant's definition of "good user rapport" included "good relationship on all subjects" (work, interests, family), "user feels more comfortable" and also behavioral observation, "how is this done?" (good listener, finds out user's interests, doesn't forget, takes time to answer user's questions, speaks in terms users can understand, etc.)
The constructs that emerged were:
- 1. Delegator -- does work himself
- 2. Informs everyone -- keeps to himself
- 3. Good user rapport -- no user rapport
- 4. Regular feedback -- appropriate feedback
- 5. Knows detail -- confused
- 6. Estimates based on staff -- estimates based on himself
- 7. User involvement -- lack of user involvement
The rating of all analysts (elements) was done on a nine-point scale, which allows (if desired) a different position to be chosen for each analyst. Participants had to order each of a set of six analysts plus a hypothetical "ideal" one and an "incompetent" one for each construct, i.e. a total of eight cards.
A similar study was conducted by Siau et al. (2007) on important characteristics of software development team members. They found, for example, that interpersonal/communication skills and teamwork orientation are important, but these are considered important characteristics of team members in any project. They also found “some constructs/categories that are unique to team members of IS development projects, namely learning ability, multidimensional knowledge and professional orientation.”
Training needs analysis
Peters (1994:23), in the context of management education, argued that “The real challenge underlying any training needs analysis (TNA) lies not with working out what training a group of individuals needs but with identifying what the good performers in that group actually do. It is only when you have a benchmark of good performance that you can look to see how everybody measures up”. Peters (1994:28) listed the following advantages of the repertory grid:
- It provides a means to capture subjective ideas and viewpoints and it helps people to focus their views and opinions.
- It can help to probe areas and viewpoints of which managers may be unaware, and as such it can be a way of generating new managerial insights.
- It helps individual managers to understand how they view good/poor performance.
- It provides a representation of the manager's own world as it really is – and this in turn can help provide a clearer picture of how an organization is actually performing
- The technique uses real people to identify real needs [..].
- It does not seek to fit people's training needs into existing [...] training plans. As a result, what can emerge is a definition of one or more areas of real weakness within a department or organization. [..]
Design and human-computer interaction
Repertory grid analysis in human-computer interaction at large seems to be quite popular; e.g. we found design studies (Hassenzahl and Wessler, 2000), search engine evaluation (Crudge & Johnson, 2004), models of text (Dillon and McKnight, 1990), and elicitation of knowledge for expert systems (Shaw and Gaines, 1989).
- Design of artifacts
The design problem described by Hassenzahl & Wessler was how to evaluate early prototypes made in parallel. “The user-based evaluation of artifacts in a parallel design situation requires an efficient but open method that produces data rich and concrete enough to guide design” (Hassenzahl & Wessler, 2000:453). Unstructured methods (e.g. interviews or observations) require a huge amount of work. The drawback of structured methods like questionnaires, on the other hand, is their "insensitivity to topics, thoughts, and feelings—in short, information—that do not fit into the predetermined structure" (idem, 442). “The most important advantages of the RGT are (a) its ability to gather design-relevant information, (b) its ability to illuminate important topics without the need to have a preconception of these, (c) its relative efficiency, and (d) the wide variety of types of analyses that can be applied to the gathered data” (Hassenzahl & Wessler, 2000:455).
- Models of Text.
Dillon and McKnight (1990: abstract) found that “individuals construe texts in terms of three broad attributes: why read them, what type of information they contain, and how they are read. When applied to a variety of texts these attributes facilitate a classificatory system incorporating both individual and task differences and provide guidance on how their electronic versions could be designed.”
- Knowledge elicitation for expert systems
Mildred Shaw and Brian Gaines led several studies on knowledge elicitation. One particularly interesting problem was that “experts may share only parts of their terminologies and conceptual systems. Experts may use the same term for different concepts, use different terms for the same concept, use the same term for the same concept, or use different terms and have different concepts. Moreover, clients who use an expert system have even less likelihood of sharing terms and concepts with the experts who produced it.” (Shaw & Gaines, 1989). The authors summarize the situation with the following figure.
Their methodology for eliciting and analyzing consensus, conflict, correspondence and contrast in a group of experts can be summarized as follows:
- The group of experts comes to an agreement over a set of entities which instantiate the relevant domain. E.g. the union of all entities that can be extracted from individual elicitations.
- Each expert individually elicits attributes and values for the agreed entities. We will then find either correspondence or contrast. All attributes of the individual grids are mapped: does one expert have an attribute that can be used to make the same distinctions between the entities as another expert does (correspondence), or does an attribute in one system have no matching attribute in the other (contrast)? (A small sketch of this matching idea follows the list below.)
- “In phase 3 each expert individually exchanges elicited conceptual systems with every other expert, and fills in the values for the agreed entities on the attributes used by the other experts. [...] The result is a map showing consensus when attributes with the same labels are used in the same way and conflict when they are not [..]”
- Depending on the purpose of the study, one can then, for instance, identify subgroups of experts who think and act in similar ways, or negotiate a common solution if there is a need for it.
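To illustrate the correspondence/contrast step of phase 2, here is a small Python sketch (invented ratings, attribute names and threshold) that flags an attribute of one expert as corresponding to an attribute of another when both order the agreed entities in nearly the same way. It is an illustration of the idea only, not Shaw and Gaines' actual procedure.

```python
# Minimal sketch (invented ratings, attribute names and threshold; not Shaw and Gaines'
# actual procedure) of the correspondence/contrast idea: an attribute of one expert
# "corresponds" to an attribute of another expert if both order the agreed entities in
# nearly the same way. Because constructs are bipolar, a strongly negative correlation
# also counts as a match, with the poles reversed. Assumes scipy is installed.
from scipy.stats import spearmanr

# The ratings below follow the order of these agreed entities.
entities = ["car A", "car B", "car C", "car D", "car E"]

expert1 = {"fancy-conservative": [1, 4, 5, 2, 3],
           "fast-slow":          [2, 5, 4, 1, 3]}
expert2 = {"sporty-sedate":      [1, 5, 4, 2, 3],
           "cheap-expensive":    [3, 1, 5, 2, 4]}

THRESHOLD = 0.8  # arbitrary cut-off for calling two attributes "corresponding"

for name1, ratings1 in expert1.items():
    matches = []
    for name2, ratings2 in expert2.items():
        rho, _ = spearmanr(ratings1, ratings2)
        if abs(rho) >= THRESHOLD:
            matches.append((name2, round(rho, 2)))
    if matches:
        print(f"{name1} corresponds to: {matches}")
    else:
        print(f"{name1} has no matching attribute (contrast)")
```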
Representation of information space
Cliff McKnight (2000) analysed the representation of information sources. Eleven sources were identified by the researcher, i.e. Library (books), E-mail, NewsGroup, Newspaper, Television, Radio, Journals (paper), Colleagues, Conferences, Journals (electronic), World Wide Web. “These elements were presented in triads in order to elicit constructs. The triads were chosen such that no pair of elements appeared in more than one triad. [...] 10 constructs were elicited, and each element was rated on each construct as it was elicited, using a 1-5 scale.” (McKnight, 2000).
The resulting repertory grid was then analysed with a two-way hierarchical cluster analysis using the FOCUS program (Shaw, 1980). This allowed an analysis of both construct clusters and element clusters, which we cannot reproduce here. The author used a 75% cutoff point to identify interesting clusters. E.g. a typical result regarding construct clusters was that “elements that are seen as "text" also have a strong tendency to be seen as "not much surfing opportunity", and as being "single focus". Similarly, elements that are seen as "quality controlled" also have a strong tendency to be seen as "historical" and "not entertaining".” Regarding element clusters, electronic journals were grouped with television and radio, while paper journals were grouped with the library; e-mail and newsgroups were grouped, as were colleagues and conferences. The only surprise was the association of electronic journals with radio: they score high on "entertaining" and relatively high on surfing opportunity.
- Web site Analysis
Hassenzahl and Trautmann (2001) analysed web sites with the repertory grid technique.
See also: Hawley (2007)
- Evaluation of search engines: see Crudge and Johnson (2004, 2007).
Links
Journals
- The Journal of Constructivist Psychology publishes articles on grid methods.
Associations and centres
- European Personal Construct Association (EPCA)
- The Centre for Personal Construct Psychology (UK)
- Personal Construct Psychology Association (PCPA)
- Center for Person-Computer Studies (Mildred Shaw & Brian Gaines, University of Calgary; both are emeritus but seem to continue their work - as of 11 February 2009). In particular, see:
- CPCS/KSI/KSS Reports.
- WebGrid online programs (open for use by researchers). The webgrid servers also link to publications.
- Enquire Within (Valerie Stewart & John Mayes, New Zealand)
- The PCP Information Centre (Joern Scheer, Germany )
Links of links
- Jeanette Hemmecke (Forschungsseite)
- The Psychology of Personal Constructs/The Repertory Grid Technique (Jörg Scheer).
- On-line papers on PCP (good large list)
- PPC und Repertory Grid Technik
- Repertory Grid Site
- The Reprid Gateway (Jankowicz)
Short introductions
- Repertory Grid Technique (RGT)
- Repertory Grid Technique
- Repertory Grid (Wikipedia)
- Enquire Within (Kelly's Theory Summarised).
- Repertory Grid (short technical introduction to teacher assessment by Constance A. Steinkuehler & Sharon J. Derry)
- How to use a repertory grid
- Repertory grid methods
- Atherton J S (2007) Learning And Teaching: Personal Construct Theory [On-line] UK: Available: http://www.learningandteaching.info/learning/personal.htm Accessed: 26 January 2009
- Repertory Grid Technique (Middlesex University)
Manuals
- Bell, Richard, The Analysis of Repertory Grid Data using SPSS, broken link/needs replacement
Bibliography
- Grice J.W. (2002). Idiogrid: Software for the management and analysis of repertory grids, Behavior Research Methods, Instruments, & Computers, Volume 34, Number 3, pp. 338-341. Abstract/PDF.
- Walter, Otto B.; Andreas Bacher, and Martin Fromm. (2004). A proposal for a common data exchange format for repertory grid data. Journal of Constructivist Psychology, 17(3):247–251, July 2004.
- Baldwin, Dennis A., Greene, Joel N., Plank, Richard E., Branch, George E. (1996). Compu-Grid: A Windows-Based Software Program for Repertory Grid Analysis, Educational and Psychological Measurement 56: 828-832
- Seelig, Harald (2000). Subjektive Theorien über Laborsituationen : Methodologie und Struktur subjektiver Konstruktionen von Sportstudierenden, PhD Theses, Institut für Sport und Sportwissenschaft, Universität Freiburg. Abstract/PDF
- Banister, P., Burman, E., Parker, I., Taylor, M.,&Tindall, C. (1994). Qualitative methods in psychology. Philadelphia: Open University Press.
- Bannister D. & Mair J.M.M. (1968) The Evaluation of Personal Constructs London: Academic Press
- Bannister D. & Fransella F. (1986) Inquiring Man: the psychology of personal constructs (3rd edition) London: Routledge .
- Bell, R. (1988): Theory-appropriate analysis of repertory grid data. International Journal of Personal Construct Psychology. 1:101-118
- Bell, R. (2000) 'Why do statistics with Repertory Grids?', The Person in Society.
- Bell, R. C. (1990). Analytic issues in the use of repertory grid technique. In G. J. Neimeyer & R. A. Neimeyer (Eds.), Advances in Personal Construct Psychology (Vol. 1, pp. 25-48). Greenwich, CT: JAI
- Boyle, T.A. (2005), "Improving team performance using repertory grids", Team Performance Management, Vol. 11 Nos. 5/6, pp. 179-187.
- Bringmann, M. (1992). Computer-based methods for the analysis and interpretation of personal construct systems. In G. J. Neimeyer & R. A. Neimeyer (Eds.), Advances in personal construct psychology (Vol. 2, pp. 57-90). Greenwich, CN: JAI.
- Burke, M. (2001). The use of repertory grids to develop a user-driven classification of a collection of digitized photographs. Proceedings of the 64th ASIST Annual Meeting (pp. 76-92). Medford. NJ: Information Today Inc.
- Caputi, P., & Reddy, P. (1999). A comparison of triadic and dyadic methods of personal construct elicitation. Journal of Constructivist Psychology, 12(3), 253-264.
- Crudge, S. E. and Johnson, F. C. 2004. Using the information seeker to elicit construct models for search engine evaluation. J. Am. Soc. Inf. Sci. Technol. 55, 9 (Jul. 2004), 794-806. DOI 10.1002/asi.20023.
- Crudge, S.E. and Johnson, F.C. (2007), "Using the repertory grid and laddering technique to determine the user's evaluative model of search engines", Journal of Documentation, Vol. 63 No. 2, pp. 259-280.
- Dillon, A. and McKnight, C. 1990. Towards a classification of text types: a repertory grid approach. Int. J. Man-Mach. Stud. 33, 6 (Oct. 1990), 623-636. DOI 10.1016/S0020-7373(05)80066-5
- Dillon, A. (1994). Designing Usable Electronic Text, CRC, ISBN 0748401121, ISBN 074840113X
- Dunn, W.N. (1986). The policy grid: A cognitive methodology for assessing policy, dynamics. In W.N. Dunn (Ed.), Policy analysis: Perspectives, concepts and methods (pp.355-375). Greenwich, CT: JAI Press.
- Easterby-Smith, M., Thorpe, R. and Holman, D. (1996), "Using repertory grids in management", Journal of European Industrial Training, Vol. 20 No. 2, pp. 3-30.
- Fransella, Fay; Richard Bell, Don Bannister (2003). A Manual for Repertory Grid Technique, 2nd Edition, Wiley, ISBN: 978-0-470-85489-1.
- Gaines, B. R., & Shaw, M. L. G. (1997). Knowledge acquisition, modelling and inference through the WorldWideWeb, International Journal of Human–Computer Studies, 46, 729–759.
- Gaines, B. R., & Shaw, M. L. G. (1993). Eliciting Knowledge and Transferring it Effectively to a Knowledge-Based System. IEEE Transactions on Knowledge and Data Engineering 5, 4-14. (A reprint is available from Knowledge Science Institute, University of Calgary, HTML)
- Gaines, Brian R. & Mildred L. G. Shaw (2007). WebGrid Evolution through Four Generations 1994-2007, HTML
- Feixas, Guillem and Jose Manuel Cornejo Alvarez (n.d.), A Manual for the Repertory Grid, Using the GRIDCOR programme, version 4.0. HTML
- Hassenzahl, Marc & Wessler, Rainer (2000). International Journal of Human-Computer Interaction, Vol. 12, Issue 3/4, pp. 441-459.
- Hassenzahl, M., Trautmann, T. (2001), "Analysis of web sites with the repertory grid technique", Conference on Human Factors in Computing Systems 2001, 167-168. ISBN 1-58113-340-5. PDF Reprint
- Hawley, Michael (2007). The Repertory Grid: Eliciting User Experience Comparisons in the Customer’s Voice, Web page, Uxmatters, HTML. (Good simple article that shows how to use one kind of grid technique to analyse web sites).
- Hemmecke, J.; Stary, C. (2007). The tacit dimension of user-tasks: Elicitation and contextual representation. Proceedings TAMODIA'06, 5th Int. Workshop on Task Models and Diagrams for User Interface Design. Springer Lecture Notes in Computer Science, LNCS 4385, pp. 308-323. Berlin, Heidelberg: Springer
- Hemmecke, Jeannette & Christian Stary, A Framework for the Externalization of Tacit Knowledge, Embedding Repertory Grids, Proceedings of the Fifth European Conference on Organizational Knowledge, Learning, and Capabilities 2-3 April 2004, Innsbruck. PDF Preprint.
- Honey, Peter (1979). The repertory grid in action: How to use it as a pre/post test to validate courses, Industrial and Commercial Training, 11 (9), 358-369. DOI: 10.1108/eb003742
- Hunter, M.G. (1997). The use of RepGrids to gather interview data about information systems analysts, Information Systems Journal, Volume 7, Issue 1, pp. 67-81.
- Jankowicz, D. (2001), "Why does subjectivity make us nervous?: Making the tacit explicit", Journal of Intellectual Capital, Vol. 2 No. 1, pp. 61-73.
- Jankowicz, D. (2004), The Easy Guide to Repertory Grids, John Wiley & Sons Ltd, Chichester, UK. an easy introduction to grid repertory technique
- Jankowicz, Devi & Penny Dick (2001). "A social constructionist account of police culture and its influence on the representation and progression of female officers: A repertory grid analysis in a UK police force", Policing: An International Journal of Police Strategies & Management, 24 (2) pp. 181-199.
- Kelly G (1955). The Psychology of Personal Constructs New York: W W Norton.
- Latta, G.F., & Swigger, K. (1992). Validation of the repertory grid for use in modeling knowledge. Journal of the American Society for Information Science, 43(2), 115-129.
- Marsden, D. and Littler, D. (2000), "Repertory grid technique – An interpretive research framework", European Journal of Marketing, Vol. 34 No. 7, pp. 816-834.
- McKnight, Cliff (2000). The personal construction of information space, Journal of the American Society for Information Science, 51(8), 730-733, May 2000. DOI 10.1002/(SICI)1097-4571(2000)51:8<730::AID-ASI50>3.0.CO;2-8
- Mitterer, J. & Adams-Webber, J. (1988). OMNIGRID: A general repertory grid design, administration and analysis program. Behavior Research Methods, Instruments & Computers, 20, 359-360.
- Mitterer, J.O. & Adams-Webber, J. (1988). OMNIGRID: A program for the construction, administration and analysis of repertory grids. In J. C. Mancuso & M. L. G. Shaw (Eds.), Cognition and personal structure: Computer access and analysis (pp. 89-103). New York: Praeger.
- Neimeyer, G. J. (1993). Constructivist assessment. Thousand Oaks: CA: Sage.
- Neimeyer, R. A. & Neimeyer, G. J. (Eds.) (2002). Advances in Personal Construct Psychology. New York: Praeger.
- Peters, W.L. (1994). Repertory Grid as a Tool for Training Needs Analysis, The Learning Organization, Vol. 1 No. 2, pp. 23-28.
- Sewell, K. W., Adams-Webber, J., Mitterer, J., Cromwell, R. L. (1992): Computerized repertory grids: Review of the literature. International Journal of Personal Construct Psychology. 5:1-23
- Sewell, K.W., Mitterer, J.O., Adams-Webber, J., & Cromwell, R.L. (1991). OMNIGRID-PC: A new development in computerized repertory grids. International Journal of Personal Construct Psychology , 4, 175-192.
- Shapiro, B. L. (1996). A case study of change in elementary student teacher thinking during an independent investigation in science: Learning about the "face of science that does not yet know." Science Education, 5, 535-560.
- Shaw, Mildred L G & Brian R Gaines (1992). Kelly's "Geometry of Psychological Space" and its Significance for Cognitive Modeling, The New Psychologist, 23-31, October (HTML Reprint)
- Shaw, Mildred L G & Brian R Gaines (1989). Comparing Conceptual Structures: Consensus, Conflict, Correspondence and Contrast, Knowledge Acquisition 1(4), 341-363. ( A reprint is available from Knowledge Science Institute, University of Calgary, HTML Interesting paper that discusses how to deal with different kinds of experts).
- Shaw, Mildred L G & Brian R Gaines (1995) Comparing Constructions through the Web, Proceedings of CSCL95: Computer Support for Collaborative Learning (Schnase, J. L., and Cunnius, E. L., eds.), pp. 300-307. Lawrence Erlbaum, Mahwah, New Jersey. A reprint is available from Knowledge Science Institute, University of Calgary, HTML (Presents a first version of the web grid system)
- Siau, Keng, Xin Tan & Hong Sheng (2007). Important characteristics of software development team members: an empirical investigation using Repertory Grid, Information Systems Journal DOI: 10.1111/j.1365-2575.2007.00254.x
- Shaw, M.L.G. (1980). On Becoming A Personal Scientist. London: Academic Press.
- Stewart, V. & Stewart, A. (1981) Business Applications of Repertory Grid. McGraw-Hill, London.
- Tan, F.B., Hunter, M.G. (2002), "The repertory grid technique: a method for the study of cognition in information systems", MIS Quarterly, Vol. 26 No.1, pp.39-57. (explains various elicitation techniques)
- Stein, Sarah J., Campbell J. McRobbie and Ian Ginns (1998). Insights into Preservice Primary Teachers' Thinking about Technology and Technology Education, Paper presented at the Annual Conference of the Australian Association for Research in Education, 29 November to 3 December 1998, HTML
- Weakley, A. J. and Edmonds E. A. 2005. Using Repertory Grid in an Assessment of Impression Formation. In Proceedings of Australasian Conference on Information Systems, Sydney 2005
- Zuber-Skerritt and Roche (2004). "A constructivist model for evaluating postgraduate supervision: a case study", Quality Assurance in Education, Vol. 12 No. 2, pp. 82-93 (access restricted)