Methodology tutorial - empirical research principles


Research Design for Educational Technologies - Empirical research principles

This is part of the methodology tutorial.

The logic of empirical research

Elements of a typical research cycle

  • Details may vary considerably within a given approach

Book-research-design-37.png

Key elements of empirical research

For a given research question, you usually do:

Book-research-design-38.png

  • Conceptualizations : make questions explicit, identify major concepts (variables), define terms and their dimensions, find analysis grids, define hypotheses, etc.
  • Artifacts : develop research materials (experiments, surveys), implement software, etc.
  • Measures : observe (measure) in the field or through experiments (use your artifacts)
  • Analyses & conclusions : analyze the measures (statistically or qualitatively) and link them to theoretical statements (e.g. operational research questions and hypotheses)

Objectives

Book-research-design-39.png

Research questions are the result of:

  • your initial objectives (which you may have to revise)
  • a (first) review of the literature

Everything you plan to do must be formulated as a research question!

  • See slides on "Finding a research subject"

Conceptualizations

  • elaborate and "massage" concepts so that they can be used to study observable phenomena

Book-research-design-40.png

The usefulness of analysis frameworks

E.g. activity theory

Activity-theory.gif

Quote: The Activity Triangle Model or activity system representationally outlines the various components of an activity system into a unified whole. Participants in an activity are portrayed as subjects interacting with objects to achieve desired outcomes. In the meanwhile, human interactions with each other and with objects of the environment are mediated through the use of tools, rules and division of labour. Mediators represent the nature of relationships that exist within and between participants of an activity in a given community of practices. This approach to modelling various aspects of human activity draws the researcher's attention to factors to consider when developing a learning system. However, activity theory does not include a theory of learning. (Daisy Mwanza & Yrjö Engeström)

  • Translation: it helps us think about a phenomenon we want to study.
  • A framework is not true or false, just useful or useless for a given intellectual task!

Models and hypotheses

  • These constructions link concepts and postulate causalities
  • Causalities between concepts (theoretical variables) do not "exist" per se; they can only be observed indirectly
  • Typical statements: "more X leads to more Y", "an increase in X leads to a decrease in Y"

Causality between teacher training and quality

Hypothesis (often heard): continuous teacher training (cause X) improves teaching (Y)

Book-research-design-42.png

The importance of difference (variance) for explanations

Without variance, no differences ... and no explanatory science.

We'd like to know why things exist, why we can observe "more" and "less" ...

Without co-variance, no correlations / causalities ... and no explanation.

Quantitative example:

Book-research-design-43.png

  • We observe different grade averages and different numbers of training days
    • therefore there is variance in both variables
  • According to these data: more training days lead to lower averages
    • (please consider this hypothetical example false!)
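A minimal sketch of this check in pure standard-library Python (the school names and numbers are invented, like those in the figure):

  from statistics import mean, pvariance

  # Hypothetical data for five schools (invented numbers):
  training_days = [2, 4, 6, 8, 10]           # continuous teacher training days
  grade_average = [5.2, 5.0, 4.8, 4.5, 4.1]  # class grade averages

  # Step 1: without variance there is nothing to explain.
  print(pvariance(training_days))  # > 0: the variable varies
  print(pvariance(grade_average))  # > 0: the variable varies

  # Step 2: co-variation, summarized as Pearson's r.
  def pearson_r(xs, ys):
      mx, my = mean(xs), mean(ys)
      cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      var_x = sum((x - mx) ** 2 for x in xs)
      var_y = sum((y - my) ** 2 for y in ys)
      return cov / (var_x * var_y) ** 0.5

  # Negative r: in this (false!) example, more training days go
  # together with lower grade averages.
  print(pearson_r(training_days, grade_average))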

Qualitative example

Imagine that we wish to know why certain private schools introduce technology faster than others. One hypothesis to test could be: "Reforms need external pressure".

Strategies of a school:

  • strategy 1: no reaction
  • strategy 2: a task force is created
  • strategy 3: internal training programs are created
  • strategy 4: resources are reallocated

Observed strategies by type of pressure (N = number of observations, p = probability of the strategy given the type of pressure):

  • Letters written by parents: strategy 1 (N=4, p=0.8), strategy 2 (N=1, p=0.2)
  • Letters written by supervisory boards: strategy 2 (N=2, p=0.4), strategy 3 (N=3, p=0.6)
  • Newspaper articles: strategy 4 (N=1, p=1.0)

  • Result (imaginary): increased pressure leads to increased action
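The p values are just conditional probabilities computed row by row. A small sketch of that computation (the cell layout of the reconstructed table above is an assumption):

  # Counts of observed (type of pressure -> school strategy) pairs.
  counts = {
      "parents' letters":   {"no reaction": 4, "task force": 1},
      "supervisory boards": {"task force": 2, "internal training": 3},
      "newspaper articles": {"resources reallocated": 1},
  }

  for pressure, strategies in counts.items():
      total = sum(strategies.values())  # N per type of pressure
      for strategy, n in strategies.items():
          # p(strategy | pressure) = N / row total
          print(f"p({strategy} | {pressure}) = {n}/{total} = {n / total:.1f}")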

How can we measure general concepts?

A scientific proposition contains concepts (theoretical variables)

  • Examples: “the learner”, “performance”, “efficiency”, “interactivity”

An academic research paper links concepts, and an empirical paper grounds these links with data.

  • ... empirical research requires that you work with data, find indicators, and build indices
  • observed correlations then allow us to make statements at the theory level

Collaborative learning improves pedagogical effect:

Book-research-design-46.png

  • We have a real problem here! How could we measure "pedagogical effect" or "collaborative learning"?

The bridge/gap between theoretical concept and measure:

  • There are two issues you must address:

(1) Going from "abstract" to "concrete" (theoretical concept - observables)

Examples:

  • measure of “student participation” with “number of forum messages posted”
  • measure of “pedagogical success” with “grade average of a class in exams”
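To make step (1) tangible, here is a small sketch that turns a raw data source into such an indicator; the log format and names are invented for illustration:

  from collections import Counter

  # Hypothetical forum log: (student, message_id) pairs.
  forum_log = [
      ("ana", 101), ("ana", 102), ("ben", 103),
      ("ana", 104), ("cleo", 105), ("ben", 106),
  ]

  # "Student participation" operationalized as the number of messages posted.
  participation = Counter(student for student, _ in forum_log)
  print(participation)  # Counter({'ana': 3, 'ben': 2, 'cleo': 1})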

(2) "Whole - part" (dimensions):

Examples from educational design, i.e. dimensions you might consider when you plan to measure the socio-constructiveness of some teaching:

  • Decomposition of "socio-constructivist design" into (1) active or constructive learning, (2) self-directed learning, (3) contextual learning, (4) collaborative learning, and (5) the teacher's interpersonal behavior (Dolmans et al.)
  • The Five Es socio-constructivist teaching model: Engagement, Exploration, Explanation, Elaboration and Evaluation (Boddy et al.)

Example from public policy analysis:

  • Decomposition of "economic development" into industrialization, urbanization, transport, communications and education.

Example from HCI:

  • Decomposition of usability into "cognitive usability" (what you can achieve with the software) and "simple usability" (can you navigate, find buttons, etc.?)

COLLES: Constructivist On-Line Learning Environment Survey (Taylor and Maor)

Dimensions (from survey studies of teacher education over the Internet):

    • Relevance: How relevant is on-line learning to students' professional practices?
    • Reflection: Does on-line learning stimulate students' critical reflective thinking?
    • Interactivity: To what extent do students engage on-line in rich educative dialogue?
    • Tutor Support: How well do tutors enable students to participate in on-line learning?
    • Peer Support: Is sensitive and encouraging support provided on-line by fellow students?
    • Interpretation: Do students and tutors make good sense of each other's on-line communications?

Each of these dimensions is then measured with a few survey questions (items), e.g.:

Each statement is rated on a five-point scale: Almost Never, Seldom, Sometimes, Often, Almost Always.

Items concerning relevance:

  • ... my learning focuses on issues that interest me.
  • ... what I learn is important for my professional practice as a trainer.
  • ... I learn how to improve my professional practice as a trainer.
  • ... what I learn connects well with my professional practice as a trainer.

Items concerning reflection:

  • ... I think critically about how I learn.
  • ... I think critically about my own ideas.
  • ... I think critically about other students' ideas.
  • ... I think critically about ideas in the readings.
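A dimension score is then typically computed as the mean of its items. A small sketch, assuming the five answer categories are coded 1 (Almost Never) to 5 (Almost Always); the answers are invented:

  from statistics import mean

  # Hypothetical answers of one student to the items listed above,
  # coded 1 = Almost Never ... 5 = Almost Always.
  answers = {
      "relevance":  [4, 5, 4, 4],
      "reflection": [3, 4, 2, 4],
  }

  # One score per dimension: the mean of its items.
  scores = {dim: mean(items) for dim, items in answers.items()}
  print(scores)  # {'relevance': 4.25, 'reflection': 3.25}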

Measure of economic development

  • usage of official statistics
  • (only part of the diagram is shown)

Book-research-design-47.png

Measure of the strategic efficiency of a private distance teaching agency

  • example taken from a French methodology textbook (Thiétart, 1999)

Book-research-design-48.png

Dangers and problems of concept operationalization

  1. Gap between data and theory
    • Example: measure communication within a community of practice (e.g. an e-learning group) by the quantity of exchanged forum messages
    • (students may use other channels to communicate!)
  2. You forgot a dimension
    • Example: measure classroom usage of technology only by looking at the technology the teacher uses, e.g. PowerPoint, demonstrations with simulation software or math software
    • (you don't take into account technology-enhanced student activities)
  3. Concept overloading
    • Example: include "education" in the definition of development (it could be done, but at the same time you lose an important explanatory variable for development; e.g. consider India's strategy of "overinvesting" in education with the goal of influencing development)
    • Therefore: never collapse explanatory and explained variables into one concept!
  4. Bad measures
    • (see later)

The measure

  • observe properties, attributes, behaviors, etc.
  • select the cases you study (sampling)

Book-research-design-49.png

Sampling

As a general rule:

  • Make sure that "operative" variables have good variance; otherwise you can't make any statements about causality or difference ...
  • operative variables = dependent (to explain) and independent (explaining) variables

Overview of sampling strategies

Type of selected cases and its usage:

  • maximal variation: gives better scope to your result (but needs more complex models; you have to control more intervening variables, etc.!)
  • homogeneous: provides better focus and conclusions; "safer", since it is easier to identify explaining variables and to test relations
  • critical: exemplifies a theory with a "natural" example
  • according to theory (i.e. your research questions): gives you better guarantees that you will be able to answer your questions
  • extremes and deviant cases: test the boundaries of your explanations, seek new adventures
  • intense: complete a quantitative study with an in-depth study

  • Sampling strategies depend a lot on your research design! Two of these strategies are sketched below.
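A tiny illustrative sketch (the school names and scores are invented): maximal variation spreads the picks over the whole range of a key variable, while a homogeneous sample stays in a narrow band.

  # Hypothetical pool of schools with a "technology use" score (1-10).
  schools = {"A": 2, "B": 9, "C": 5, "D": 1, "E": 8, "F": 5, "G": 6}

  ranked = sorted(schools, key=schools.get)  # school names, ordered by score

  # Maximal variation: both extremes plus the median case.
  max_variation = [ranked[0], ranked[len(ranked) // 2], ranked[-1]]
  print(max_variation)  # ['D', 'F', 'B']: low, middle and high scores

  # Homogeneous: adjacent cases from the middle of the distribution.
  homogeneous = ranked[len(ranked) // 2 - 1 : len(ranked) // 2 + 2]
  print(homogeneous)    # ['C', 'F', 'G']: three similar schools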

Measurement techniques

  • There are not only numbers, but also text, photos and videos!
  • Not treated here; see the modules "Quantitative data acquisition methods" (e.g. surveys and tests) and "Qualitative data acquisition methods" (e.g. interviews and observations)!

Principal forms of data collection

Articulation crossed with situation (non-verbal and verbal / verbal, oral / verbal, written):

  • informal: participatory observation (non-verbal and verbal); information interview (oral); text analysis, log file analysis, etc. (written)
  • formal and unstructured: systematic observation (non-verbal and verbal); open interviews, semi-structured interviews, thinking-aloud protocols, etc. (oral); open questionnaires, journals, vignettes (written)
  • formal and structured: experiment, simulation (non-verbal and verbal); standardized interview (oral); standardized questionnaire, log files of structured user interactions (written)

Reliability of measure

Reliability = degree of measurement consistency for the same object, measured:

  1. by different observers
  2. by the same observer at different moments
  3. by the same observer with (moderately) different tools

Example: measuring boiling water

    • One thermometer always shows 92°C: it is reliable (but not construct-valid)
    • Another shows between 99 and 101°C: it is not very reliable (but valid)

Sub-types of reliability (Kirk & Miller):

  1. circumstantial reliability: even if you always get the same result, it does not mean that the answers are reliable (e.g. people may lie)
  2. diachronic reliability: the same kinds of measures still work after time has passed
  3. synchronic reliability: we obtain similar results by using different techniques, e.g. survey questions with item matching and in-depth interviews
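A sketch of how consistency across observers can be quantified, here simply as the correlation between two observers' ratings (invented data; in practice one would rather report a dedicated coefficient such as Cohen's kappa or Cronbach's alpha):

  from statistics import correlation  # Python 3.10+

  # Hypothetical ratings of the same ten lessons by two observers (1-5).
  observer_a = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]
  observer_b = [3, 4, 3, 5, 4, 2, 1, 4, 4, 2]

  # Close to 1 = the measure is consistent across observers.
  print(f"inter-observer agreement r = {correlation(observer_a, observer_b):.2f}")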

Book-research-design-51.png

The “3 Cs” of an indicator

Are your data complete?

  • Sometimes you lack data ...
  • Try to find other indicators

Are your data correct?

  • The reliability of indicators can be bad.
  • Example: software ratings may not mean the same thing everywhere
    • depending on the culture (sub-cultures, organizations, countries), people are more or less outspoken.

Are your data comparable?

  • The meaning of certain data is not comparable.
  • Examples:
    (a) School budgets don't mean the same thing in different countries (different living costs)
    (b) The percentage of student activities in the classroom does not measure the "socio-constructivist" sensitivity of a teacher (since there are huge cultural differences between school systems)

Interpretation: validity (truth) and causality

  • Can you really trust your conclusions?
  • Did you misinterpret statistical evidence as proof of causality?

Book-research-design-55.png

The role of validity

  • Validity (as well as reliability) determines the formal quality of your research
  • More specifically, the validity of your work (e.g. your theory or model) is determined by the validity of its components.

In other words:

  • Can you justify your interpretations?
  • Are you sure that you are not a victim of your confirmation bias?
  • Can you really talk about causality (or should you be more careful)?

Note: Validity is not the only quality factor

Judgements of quality, by element:

  • Theories: usefulness (understanding, explanation, prediction)
  • Models ("frameworks"): usefulness & construction (relation between theory and data, plus coherence)
  • Hypotheses and models: validity & logical construction
  • Methodology ("approach"): usefulness (to theory and to the conduct of empirical research)
  • Methods: good relation with theory, hypotheses, methodology, etc.
  • Data: good relation with hypotheses and models, plus reliability

A good piece of work satisfies first of all an objective, but it must also be valid.

The same message with another picture:

  • The most important usefulness criterion is: "does it increase our knowledge?"
  • The most important formal criteria are validity and reliability.
  • Somewhere in between: "is your work coherent and well constructed?"

Book-research-design-60.png

Some reflections on causality

A correlation between 2 variables (measures) does not prove causality.

Co-occurrence of 2 events does not prove that one leads to the other.

  • The best protection against such errors is theoretical and practical reasoning!

Example:

  • "We introduced ICT in our school and student satisfaction is much higher"
  • (It is maybe not ICT, but just a reorganization effect that had an impact on various other variables, such as the teacher-student relationship, teacher investment, etc.)

If you observe correlations in your data and you are not sure, talk about association, not cause!

Even if you can provide sound theoretical evidence for your conclusion, you have a duty to look at rival explanations!

  • There are methods to test rival explanations (see the modules on data analysis); the simulation below shows how a hidden cause can fake a correlation.
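A simulation makes the danger visible. Here a hidden common cause Z (say, a school-wide reorganization) drives both X ("ICT introduced") and Y ("satisfaction"), so X and Y correlate although neither influences the other; the coefficients are invented:

  import random
  from statistics import correlation  # Python 3.10+

  random.seed(1)
  z = [random.gauss(0, 1) for _ in range(1000)]  # hidden common cause
  x = [zi + random.gauss(0, 0.5) for zi in z]    # X depends only on Z
  y = [zi + random.gauss(0, 0.5) for zi in z]    # Y depends only on Z

  # r is clearly positive even though there is no X -> Y link.
  print(f"r(x, y) = {correlation(x, y):.2f}")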

Some examples of bad inference

  • Simple hidden causalities

Book-research-design-64.png

  • Think!

Conclusion

Some advice

At every stage of research you have to think and refer to theory:

  • Good analytical frameworks (e.g. instructional design theory or activity theory) will provide structure to your investigation and will allow you to focus on the essential things.
  • You can't answer your research question without a serious operationalization effort.
    • Identify the major dimensions of the concepts involved, and use good analysis grids!

Watch out for validity problems

  • You can't prove a hypothesis (you can only test, reinforce, corroborate it, etc.).
    • Therefore, also look at anti-hypotheses!
  • Good informal knowledge of a domain will also help
    • Don't hesitate to discuss your conclusions with a domain expert
  • Purely inductive reasoning approaches are difficult and dangerous
    • ... unless you master an adapted (costly) methodology, e.g. "grounded theory"

You have a “confirmation bias” !

  • Humans tend to look for facts that confirm their reasoning and to ignore contradictory elements
  • It's your duty to test rival hypotheses (or at least to think about them)!

Attempt some (but not too much) generalization

  • Show others what they can learn from your piece of work; confront your work with others'!

Choice and complementarity of methods

Triangulation of methods

Different viewpoints (and measures) can consolidate or even refine results.

  • E.g. imagine that (a) you conducted a quantitative study of teachers' motivation to use ICT in school, or (b) you administered an evaluation survey to measure user satisfaction with a piece of software.
  • You can then run a cluster analysis on your data and identify major types of users
    • (e.g. 6 types of teachers or 4 types of users).
  • Then you can do in-depth interviews with 2 representatives of each type, "dig" into their attitudes, subjective models, abilities, behaviors, etc., and confront these results with your quantitative study. A sketch of the clustering step follows below.
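A sketch of the quantitative clustering step, assuming scikit-learn is available; the survey features and all numbers are invented:

  import random
  from sklearn.cluster import KMeans

  random.seed(0)
  # Hypothetical survey data: (motivation, ICT skill) per teacher, 1-5 scale.
  teachers = [[random.uniform(1, 5), random.uniform(1, 5)] for _ in range(60)]

  # Identify 4 "types" of teachers.
  labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(teachers)

  # Pick two representatives per type for the in-depth interviews.
  for cluster in range(4):
      members = [i for i, lab in enumerate(labels) if lab == cluster]
      print(f"type {cluster}: interview teachers {members[:2]}")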

Theory creation vs. theory testing

Qualitative methods are better suited to creating new theories

    • (exploration / comprehension)

Quantitative methods are better suited to testing / refining theories

    • (explication / prediction)

... but:

  • validity, causality, reliability issues ought to be addressed in any piece of research
  • it is possible to use several methodological approaches in one piece of work