Quantitative Research Methods (II)
Dr Chen Wenli, Learning Sciences and Technologies AG, Learning Sciences Lab, National Institute of Education


  • Slide 1
  • Dr Chen Wenli Learning Sciences and Technologies AG Learning Sciences Lab National Institute of Education Quantitative Research Methods (II)
  • Slide 2
  • Outline: logic of quantitative research; constructing hypotheses; types of quantitative research methods (survey research, experimental research, single-subject research, causal-comparative research, quantitative content analysis); validity and reliability in quantitative research
  • Slide 3
  • Experimental Research: characteristics of experimental research; experimental research designs (experimental design, quasi-experimental design, factorial design); validity of experimental research; control of extraneous variables
  • Slide 4
  • Experimental Research: the researcher applies some treatment to subjects for an appropriate length of time and then observes the effect of the treatment on the subjects by measuring response variables. IV (experimental or treatment variable): a condition or set of conditions applied to the subjects. DV (response, criterion, or outcome variable): the results or outcomes observed in the subjects.
  • Slide 5
  • Examples: Quality of learning with an active versus passive motivational set (Benware & Deci, 1984); Comparison of computer-assisted cooperative, competitive, and individualistic learning (Johnson, Johnson, & Stanne, 1986); The effect of a computer simulation activity versus a hands-on activity on product creativity in technology education (Kurt, 2001); The effect of a language course taught with online supplementary material (Shimazu, 2005)
  • Slide 6
  • Characteristics: experimental research is the only type of research that directly attempts to influence a particular variable and, when used properly, the only type that can really test hypotheses about cause-and-effect relationships. It enables researchers to go beyond description and the identification of relationships to at least a partial determination of what causes them. Three characteristics of experimental research: manipulation of the IV, comparison of groups, and randomization (each discussed on the following slides).
  • Slide 7
  • Manipulation of the IV: the researcher manipulates the IV, deciding the nature of the treatment/intervention (what is going to happen to the subjects of the study), to whom it is to be applied, to what extent, and when, where, and how.
  • Slide 8
  • Comparison of Groups: at least two conditions are compared to assess the effect(s) of particular conditions or treatments (IV): an experimental group (receives a treatment of some sort) and a control group (receives no treatment) or comparison group (receives a different treatment). The IV may be established in several ways: presence vs. absence of a particular form; one form of the variable vs. another; varying degrees of the same form.
  • Slide 9
  • Randomization: random assignment of subjects to groups is an important ingredient in the best kinds of experiments; every individual participating in the experiment has an equal chance of being assigned to any of the experimental or control conditions being compared. It takes place before the experiment begins, allows the researcher to form groups that are equivalent, and eliminates the threat of extraneous, or additional, variables that might affect the outcome of the study (see the sketch below).
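
As a rough illustration, random assignment can be as simple as shuffling the participant list and splitting it; the participant labels and group sizes below are invented.

```python
# Minimal sketch of random assignment to two conditions (hypothetical participants).
import random

participants = [f"S{i:02d}" for i in range(1, 21)]

random.shuffle(participants)              # every participant has an equal chance
half = len(participants) // 2
treatment_group = participants[:half]     # will receive the treatment (X1)
control_group = participants[half:]       # will receive no or a different treatment (X2)

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```
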
  • Slide 10
  • Commonly Used Notation: X1 = treatment group; X2 = control/comparison group; O = observation (pretest, posttest, etc.); R = random assignment
  • Slide 11
  • Weak Experimental Designs: One-shot case study design: a single group is exposed to a treatment or event, and its effects are assessed (X O), e.g., a technology treatment followed by an attitude scale to measure interest. One-group pretest-posttest design: a single group is measured or observed both before and after exposure to a treatment (O X O: pretest, treatment, posttest).
  • Slide 12
  • True Experimental Designs: Randomized posttest-only control group design: involves two groups formed by random assignment and receiving different treatments (treatment group: R X1 O; control group: R X2 O). Randomized pretest-posttest control group design: differs from the randomized posttest-only control group design only in the use of a pretest (treatment group: R O X1 O; control group: R O X2 O).
  • Slide 13
  • True Experimental Designs: Randomized Solomon four-group design: involves random assignment of subjects to four groups, with two being pretested and two not (treatment group: R O X1 O; control group: R O X2 O; treatment group: R X1 O; control group: R X2 O). It better controls the threats to internal validity; the drawback is that it requires twice as many participants.
  • Slide 14
  • Quasi-Experimental Designs: used in place of experimental research when random assignment to groups is not feasible. Posttest-only design with nonequivalent groups (treatment group: X1 O; control group: X2 O). Pretest-posttest design with nonequivalent groups (treatment group: O X1 O; control group: O X2 O).
  • Slide 15
  • Quasi-Experimental Designs: Counterbalanced design: all groups are exposed to all treatments, but in a different order; the order in which the groups receive the treatments should be determined randomly; the number of groups and treatments must be equal; the average scores for all groups on the posttest for each treatment are compared. Group I: X1 O X2 O X3 O; Group II: X3 O X1 O X2 O; Group III: X2 O X3 O X1 O (a sketch of generating such orders follows).
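
One simple way to produce the counterbalanced ordering shown above is to rotate the treatment list so each treatment appears once in each position; the code below is only an illustrative sketch using the slide's X1-X3 labels.

```python
# Sketch: generate a Latin-square style order for three treatments and three groups.
treatments = ["X1", "X2", "X3"]
n = len(treatments)

for g in range(n):
    order = treatments[n - g:] + treatments[:n - g]   # rotated order for group g + 1
    print(f"Group {g + 1}: " + " O ".join(order) + " O")
```

This prints the same Group I-III sequence as the slide; in practice the assignment of groups to orders would itself be randomized.
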
  • Slide 16
  • Quasi-Experimental Designs: Time-series design: involves repeated measurements or observations over time (until scores are stable), both before and after treatment (O O O O X O O O O). It uses a single group of participants and examines possible changes over time.
  • Slide 17
  • Factorial Designs: factorial designs extend the number of relationships that may be examined in an experimental study. They incorporate two or more factors; the additional factor can be a treatment variable or a subject characteristic. This enables the researcher to detect interaction effects (effects apparent only for certain combinations of levels of the IVs). Notation (the randomized pretest-posttest design repeated for each level of the second factor): treatment group: R O X1 O; control group: R O X2 O; treatment group: R O X1 O; control group: R O X2 O.
  • Slide 18
  • A 2 X 2 factorial design: the factors are instructional method (traditional vs. game-based learning) and gender (boy vs. girl), giving four cells: Group 1 (traditional, boy), Group 2 (traditional, girl), Group 3 (game-based, boy), Group 4 (game-based, girl).
  • Slide 19
  • A 2 X 2 factorial design: [Figure: two plots of attitudes toward learning for boys and girls under traditional vs. game-based instruction, one showing no interaction between the factors and one showing interacting factors.]
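
To make the idea of interaction concrete, here is a hedged sketch of how a 2 x 2 factorial design might be analyzed with a two-way ANOVA; the data, column names, and group labels are all invented, and it assumes pandas and statsmodels are available.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical attitude scores for the four cells of the 2 x 2 design.
df = pd.DataFrame({
    "gender": ["boy", "girl", "boy", "girl"] * 5,
    "method": ["traditional", "traditional", "game_based", "game_based"] * 5,
    "attitude": [3.1, 3.4, 4.0, 3.5, 2.9, 3.3, 4.2, 3.6, 3.0, 3.2,
                 4.1, 3.4, 3.1, 3.5, 3.9, 3.7, 2.8, 3.2, 4.3, 3.5],
})

# The gender:method interaction term tests for differential effects, i.e. whether
# the effect of the instructional method differs for boys and girls.
model = smf.ols("attitude ~ C(gender) * C(method)", data=df).fit()
print(anova_lm(model, typ=2))
```
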
  • Slide 20
  • Validity: validity means the experiment tests the variable(s) that it purports to test. If threats are not controlled for, they may introduce error into the study, which will lead to misleading conclusions. Threats to validity: internal (factors other than the IV that affect the DV) and external (factors that affect the generalizability of the study to groups and settings beyond those of the experiment).
  • Slide 21
  • Threats to Internal Validity: History: uncontrolled events that occur during the study and may have an influence on the observed effect other than the IV. Maturation: factors that influence a participant's performance because of time passing rather than specific incidents (e.g., the physical, intellectual, and emotional changes that occur naturally). Test practice: the effects of participants taking a test that influence how they score on a subsequent test. Instrumentation: influences on scores due to calibration changes in any instrument used to measure participant performance. Statistical regression: a problem that occurs when participants have been assigned to a particular group on the basis of atypical or incorrect scores.
  • Slide 22
  • Threats to Internal Validity: Bias in group composition: systematic differences between the composition of the groups in addition to the treatment under study. Experimental mortality: a differential loss of participants. Hawthorne effect: a change in the sensitivity or performance of the participants that may occur merely as a function of being part of the study. Novelty effect: participant interest, motivation, or engagement increases simply because they are doing something different. Placebo effect: the participants receive no treatment but believe they are receiving one.
  • Slide 23
  • Threats to External Validity: Population-sample differences: the degree to which the participants in a study are representative of the population to which generalization is desired. Artificial research arrangements: the degree to which the research setting deviates from the participants' usual routine. Multiple-treatment interference: more than one treatment is administered to the same participants, producing cumulative effects that may not be similar to the outside world and may threaten generalization of the results. Treatment diffusion: the situation in which different treatment groups communicate with and learn from each other.
  • Slide 24
  • Validity of Different Experimental Designs
  • Slide 25
  • Control of Extraneous Variables: Confounding: the effects of the IV may intertwine with extraneous variables, such that it is difficult to determine the unique effects of each variable. Common ways to control for extraneous variables: randomization; holding certain variables constant; matching; comparing homogeneous groups or subgroups; analysis of covariance (ANCOVA), sketched below.
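
As a hedged sketch of the last option, ANCOVA can statistically adjust for an extraneous variable such as a pretest score; the data and column names below are invented, and the example assumes pandas and statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pretest/posttest scores for a treatment and a comparison group.
df = pd.DataFrame({
    "group": ["treatment"] * 5 + ["control"] * 5,
    "pre":   [42, 55, 61, 48, 50, 44, 53, 60, 47, 52],
    "post":  [70, 78, 85, 72, 75, 58, 66, 71, 60, 64],
})

# Including the pretest as a covariate removes its influence, so the group
# coefficient estimates the treatment effect adjusted for pretest differences.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)
```
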
  • Slide 26
  • Single-Subject Research: most commonly used to study the changes in behavior an individual exhibits after exposure to a treatment or intervention of some sort. It can be applied in settings where group designs are difficult to put into play and involves extensive collection of data on one subject at a time. Researchers primarily use line graphs to present their data and to illustrate the effects of a particular intervention or treatment. The designs are adaptations of the basic time-series design.
  • Slide 27
  • Single-Subject Research: A-B design: baseline measurements (O) are repeatedly made until stability is established, then the treatment (X) is introduced and an appropriate number of measurements (O) are made during treatment implementation. Notation: O O O | X O X O X O (baseline phase A, then treatment phase B).
  • Slide 28
  • Single-Subject Research: Reversal (A-B-A) design: baseline measurements (O) are repeatedly made until stability is established, then the treatment (X) is introduced and an appropriate number of measurements (O) are made during treatment implementation, followed by an appropriate number of baseline measurements (O) to determine the stability of the treatment effect. Notation: O O O | X O X O X O | O O (baseline phase A, treatment phase B, baseline phase A).
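
A minimal sketch of the line graph typically used to display A-B-A data; the session counts and behaviour scores are invented, and it assumes matplotlib.

```python
import matplotlib.pyplot as plt

sessions = list(range(1, 12))
# Baseline (A): sessions 1-3, treatment (B): sessions 4-7, return to baseline (A): 8-11.
scores = [2, 3, 2, 6, 7, 8, 8, 3, 2, 3, 2]

plt.plot(sessions, scores, marker="o")
plt.axvline(x=3.5, linestyle="--")   # A -> B phase change
plt.axvline(x=7.5, linestyle="--")   # B -> A phase change
plt.xlabel("Observation session")
plt.ylabel("Target behaviour")
plt.title("Reversal (A-B-A) design")
plt.show()
```
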
  • Slide 29
  • Other Single-Subject Research Designs: A-B-A-B design: two baseline periods are combined with two treatment periods. B-A-B design: used when an individual's behavior is so severe or disturbing that a researcher cannot wait for a baseline to be established. A-B-C-B design: the "C" condition refers to a variation of the intervention in the "B" condition; the intervention is changed during the "C" phase, typically to control for any extra attention the subject may have received during the "B" phase.
  • Slide 30
  • Threats to Validity in Single-Subject Research: Internal validity depends on the length of the baseline and intervention conditions, the number of variables changed when moving from one condition to another, the degree and speed of any change that occurs, whether or not the behavior returns to baseline levels, the independence of behaviors, and the number of baselines. External validity is weak when it comes to generalizability; it is important to replicate single-subject studies to determine whether they are worthy of generalization.
  • Slide 31
  • Controlling Threats in Single-Subject Studies: single-subject designs are most effective in controlling for subject characteristics, mortality, testing, and history threats. They are less effective with location, data-collector characteristics, maturation, and regression threats. They are especially weak when it comes to instrument decay, data-collector bias, attitude, and implementation threats.
  • Slide 32
  • Causal-Comparative Research: explores the possibility of cause-and-effect relationships when experimental and quasi-experimental approaches are not feasible. It differs from experimental and quasi-experimental research in that the IV is not manipulated (manipulation is not ethical or not possible) and the study focuses first on the effect, then tries to determine possible causes. Relationships can be identified in a causal-comparative study, but causation cannot be fully established.
  • Slide 33
  • Steps in Causal-Comparative Research: Formulating a problem: identify and define the particular phenomena of interest, and then consider possible causes for, or consequences of, these phenomena. Selecting a sample: define carefully the characteristic to be studied and then select groups that differ in this characteristic. Instrumentation: there are no limits to the kinds of instruments that can be used. Design: select two groups that differ on a particular variable of interest and then compare them on another variable or variables.
  • Slide 34
  • Threats to Internal Validity in Causal-Comparative Research: Weaknesses: lack of randomization and inability to manipulate an IV. A major threat is the possibility of subject selection bias; procedures used to reduce this threat include matching subjects on a related variable, creating homogeneous subgroups, and the technique of statistical matching. Other threats to internal validity: location, instrumentation, and loss of subjects.
  • Slide 35
  • Data Analysis in Causal-Comparative Studies: the first step is to construct frequency polygons. Means and SDs are usually calculated if the variables involved are quantitative. The most commonly used test is a t-test for differences between means; ANCOVA is particularly useful in causal-comparative studies. The results of causal-comparative studies should always be interpreted with caution, because they do not prove cause and effect.
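
A hedged sketch of the between-groups t-test mentioned above, with invented scores and assuming SciPy is available.

```python
from scipy import stats

group_a = [72, 78, 85, 70, 75, 80]   # e.g. scores for one intact group
group_b = [58, 66, 71, 60, 64, 69]   # e.g. scores for the comparison group

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```
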
  • Slide 36
  • Common quantitative measures in learning and education: Learning gain: post - pre; (post - pre) / (1 - pre) (Hake's normalized gain); adjusted post score (through ANCOVA). Learning efficacy: does the intervention help reduce the time spent on problem solving? User attitude. Teach-backs: how well can the learner teach the material back? (See the gain sketch below.)
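
A small sketch of the learning-gain measures listed above, treating scores as proportions between 0 and 1; the example values are invented.

```python
def raw_gain(pre: float, post: float) -> float:
    return post - pre

def normalized_gain(pre: float, post: float) -> float:
    # Hake's normalized gain: actual gain divided by the maximum possible gain.
    return (post - pre) / (1.0 - pre)

pre, post = 0.40, 0.70
print(f"raw gain: {raw_gain(pre, post):.2f}")               # 0.30
print(f"normalized gain: {normalized_gain(pre, post):.2f}") # 0.50
```
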
  • Slide 37
  • Quantitative Content Analysis: content analysis is a quantitative research instrument for a systematic and intersubjective description of content. It is (usually) a form of textual analysis that categorizes chunks of text according to a code, based on the social-science principles of measuring and counting. It reduces the complexity of content by bringing out the central patterns of the coverage; one objective is to examine large amounts of content with statistical methods.
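
A toy sketch of the "categorize and count" step; the coding scheme, keyword indicators, and messages are entirely invented for illustration (real studies rely on trained coders or validated category definitions).

```python
# Count how often each code in a simple keyword-based scheme occurs in text chunks.
from collections import Counter

coding_scheme = {
    "question":  ["why", "how", "what", "?"],
    "agreement": ["agree", "yes", "good point"],
    "new_idea":  ["what if", "we could", "another way"],
}

messages = [
    "Why does the simulation slow down?",
    "Good point, I agree with that.",
    "We could try another way to model it.",
]

counts = Counter()
for message in messages:
    text = message.lower()
    for code, indicators in coding_scheme.items():
        if any(indicator in text for indicator in indicators):
            counts[code] += 1   # each code counted at most once per message

print(counts)   # frequency of each code across the sample of messages
```
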
  • Slide 38
  • Rough History: Classical content analysis was used as early as the 1930s in military intelligence, analyzing items such as communist propaganda and military speeches for themes, and creating matrices searching for the number of occurrences of particular words/phrases. (New) content analysis has moved into social science research: it studies trends in media and politics, provides a method for analyzing open-ended questions, can include visual documents as well as texts, and focuses more on phrasal/categorical entities than on simple word counting.
  • Slide 39
  • Procedure
  • Slide 40
  • The Sample: which types of content? Which period? Which characteristics? Elements of the research instrument: sampling units; units of analysis (the unit of content on which our measurements are based); the categories, which describe the properties of the media content relevant to our research question.
  • Slide 41
  • Validity in Quantitative Research: Definition: the extent to which any measuring instrument measures what it is intended to measure. Types of validity: Construct validity: examines the fit between the conceptual definitions and operational definitions of the variables. Content validity: verifies that the method of measurement actually measures the expected outcomes. Predictive validity: determines the effectiveness of the instrument as a predictor of a future event. Statistical conclusion validity: concerned with whether the conclusions about relationships and/or differences drawn from statistical analysis are an accurate reflection of the real world.
  • Slide 42
  • Reliability in Quantitative Research: Definition: refers to the accuracy and consistency of information obtained in a study; it is important in interpreting the results of statistical analyses and refers to the probability that the same results would be obtained with different samples (generalizability). Three common methods to check reliability: Test-retest method: administering the same instrument twice to the same group of individuals after a certain time interval has elapsed. Equivalent-forms method: administering two different, but equivalent, forms of an instrument to the same group of individuals at the same time. Internal-consistency method: comparing responses to different sets of items that are part of an instrument. (A sketch of two of these checks follows.)
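
A hedged sketch of two of these checks with invented scores, assuming NumPy: test-retest reliability as the correlation between two administrations, and internal consistency via Cronbach's alpha (one common internal-consistency statistic).

```python
import numpy as np

# Test-retest: correlate the same instrument administered twice to the same group.
test1 = np.array([12, 15, 9, 14, 11, 13])
test2 = np.array([13, 14, 10, 15, 10, 12])
print("test-retest r =", np.corrcoef(test1, test2)[0, 1])

# Internal consistency (Cronbach's alpha): rows = respondents, columns = items.
items = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print("Cronbach's alpha =", round(alpha, 2))
```
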
  • Slide 43
  • Summary: logic of quantitative research; constructing hypotheses; types of quantitative research methods (survey research, experimental research, single-subject research, causal-comparative research, others); validity and reliability in quantitative research
  • Slide 44