Wednesday, February 18, 2009

Qualitative Descriptive Research

Notes taken by Alicia

Qualitative descriptive research (case studies) - The ultimate goal is to improve practice. This presupposes a cause/effect relationship between behavior and outcome; however, this method will ONLY let you hypothesize about variables and describe them. When you move to showing correlation among them, you’re doing quantitative work. And even then, remember: correlation does not mean causation.

With these studies, you can examine factors that *might* be influencing behaviors, environments, circumstances, etc. You cannot prove cause/effect for certain.
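
As a quick illustration of the point above, here is a minimal Python sketch (every variable name is invented for illustration): a lurking third variable makes two observed behaviors correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confounder: each subject's overall experience level.
experience = rng.normal(size=1000)

# Two behaviors that both depend on experience, but not on each other.
uses_shortcuts = experience + rng.normal(scale=0.5, size=1000)
completes_quickly = experience + rng.normal(scale=0.5, size=1000)

# The behaviors correlate strongly (~0.8)...
r = np.corrcoef(uses_shortcuts, completes_quickly)[0, 1]
print(f"correlation: {r:.2f}")

# ...yet neither causes the other. A descriptive study could only
# hypothesize that "experience" is the variable doing the work.
```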

Purpose - Case studies identify variables and provide evidence that those variables exist and have construct validity (i.e., people agree these are the parts). The qualitative-descriptive method is a necessary precursor to quantitative research: you always need to operationalize your variables, that is, define them in observable, measurable terms.

Subject selection criteria -

  • Begin with a theory that already has construct validity
    [in UXD, a text for this would be Universal Principles of Design, since it is ripe for application to projects]
  • Subjects need to be representative of the thing under study so that it becomes possible to generalize findings to a wider community.

Data collection techniques -

  • Content analysis: coding for patterns (i.e. pattern recognition) across subjects
  • Think-aloud/talk-aloud protocol
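
To make the content-analysis step concrete, here is a minimal sketch of coding for patterns (the codebook and segment data are invented for illustration): each code gets an operational definition, raters tag transcript segments with codes, and the tallies across subjects are what you inspect for patterns.

```python
from collections import Counter

# Hypothetical codebook: concrete operational definitions keep raters aligned.
CODEBOOK = {
    "PLAN": "subject states a goal before acting",
    "PAUSE": "silence of 3+ seconds mid-task",
    "REVISE": "subject undoes or reworks a prior step",
}

# One rater's codes for segments of three subjects' transcripts (invented data).
coded_segments = {
    "subject_1": ["PLAN", "PAUSE", "REVISE", "PAUSE"],
    "subject_2": ["PAUSE", "PAUSE", "REVISE"],
    "subject_3": ["PLAN", "REVISE", "REVISE"],
}

# Tally codes per subject; tallies that recur across subjects are the "patterns."
for subject, codes in coded_segments.items():
    print(subject, dict(Counter(codes)))
```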

The success of this methodology hinges on inter-rater reliability, the measure of agreement between coders. [The best example of how to do inter-rater reliability in composition is "The Pregnant Pause: An Inquiry Into the Nature of Planning" by Linda Flower and John R. Hayes.]
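
Agreement between coders is usually quantified; Cohen's kappa is one standard statistic for two raters, correcting raw agreement for chance. A minimal sketch (the codes and ratings below are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal code frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two raters independently coding the same ten segments (hypothetical data).
rater_1 = ["PLAN", "PAUSE", "PLAN", "REVISE", "PAUSE",
           "PLAN", "PAUSE", "REVISE", "PLAN", "PAUSE"]
rater_2 = ["PLAN", "PAUSE", "PAUSE", "REVISE", "PAUSE",
           "PLAN", "PAUSE", "REVISE", "PLAN", "PLAN"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # 0.69 here
```

As a rough convention, kappa above about 0.8 is usually read as strong agreement; when it comes out low, the diagnoses below are where to look.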

When there’s low inter-rater reliability, it could be because…

  1. All else being equal, the raters weren’t well trained
  2. Your categories were not operationally defined to a sufficient degree. Categories should be as concrete as possible.
  3. The raters themselves are flawed: they are not experts; they are ideologically opposed to the study or to its potential findings; they are fatigued. When critiquing a coding study, it’s fair to ask who the raters are.
