
Chapter 10 - Part 1

Factorial Experiments

Two-way Factorial Experiments

In Chapter 10, we are studying experiments with two independent variables, each of which will have multiple levels.

We call each independent variable a factor. The first IV is called Factor 1 or F1. The second IV is called Factor 2 or F2.

So, in this chapter we will study two factor, unrelated groups experiments.

Conceptual Overview

• In Chapter 9 you learned to do the F test comparing two estimates of sigma2, MSB and MSW.

• F = MSB/MSW

• That is one version of a more generic formula. The generic formula tells us what to do in the two factor case.

• Here is the generic formula:

Generic formula for the unrelated groups F test

• F = (SAMPLING FLUCTUATION + (?) ONE SOURCE OF VARIANCE) / (SAMPLING FLUCTUATION)

• Which can also be written as

• F = (ID + MP + (?) ONE SOURCE OF VARIANCE) / (ID + MP)

Let me explain why.

The denominator of the F test

• The denominator in the F test reflects only random sampling fluctuation. It can not reflect the effect of any independent variable or combination of independent variables.

• In the unrelated groups F and t test, our best index of sampling fluctuation is MSW, our consistent, least squares, unbiased estimate of sigma2.

The denominator of the F test

• Sigma2 underlies sampling fluctuation. If we know sigma2, we can easily tell how far any sample mean should fall from the overall mean (M) and how far multiple means should fall from each other, on the average.

• Remember, sigma2 itself is a function of how different each individual score is from the average score. That is, sigma2 reflects that each score is not identical to the others in the same population (or, in this case, in the same group), because of random individual differences and random measurement problems (ID + MP).

Denominator = MSW

• In the F tests we are doing, MSW serves as our best estimate of sigma2.

• Everyone in each specific group is treated the same.

• So the only reasons that scores differ from their own group means are that people differ from each other (ID) and that there are always measurement problems (MP).

• MSW = ID + MP

Numerator of the F ratio: Generic formula

• Numerator of the F ratio is an estimate of sigma2 that reflects sampling fluctuation + the possible effects of one difference between the groups.

• In Ch. 9, there was only one difference among the ways the groups were treated: the different levels of the independent variable (IV).

• MSB reflected the effects of random individual differences (there are different people in each group), random measurement problems, and the effects of the independent variable.

• We can write that as MSB = ID + MP + (?)IV

So, in the single factor analysis of variance

F = (ID + MP + ?IV) / (ID + MP)

• Both the numerator and denominator reflect the same elements underlying sampling fluctuation

• The numerator includes one, and only one, systematic source of variation not found in the denominator.

This allows us to conclude that:

• IF THE NUMERATOR IS SIGNIFICANTLY LARGER THAN THE DENOMINATOR, THE SIZE DIFFERENCE MUST BE CAUSED BY THE ONE ADDITIONAL THING PUSHING THE MEANS APART, the IV.

• But notice there has to be only one thing different in the numerator and the denominator to make that conclusion inevitable.

Why we can’t use MSB as the numerator in the multifactor analysis of variance

• In the two factor analysis of variance, the means can be pushed apart by:

– The effects of the first independent variable (F1).

– The effects of the second independent variable (F2)

– The effects of combining F1 and F2 that are above and beyond the effects of either variable considered alone (INT)

– Sampling fluctuation (ID + MP)

So if we compared MSW to MSB in a two factor experiment, here is what we would have.

• F = (ID + MP + ?F1 + ?F2 + ?INT) / (ID + MP)

That’s not an F test. In an F test the numerator must have one and only one source of variation beyond sampling fluctuation. HERE THERE ARE THREE OF THEM! Each of these three things besides sampling fluctuation could be pushing the means apart.

So, in the multifactor F test, a ratio between MSB and MSW is meaningless.

We must create 3 numerators to compare to MSW, each comprising ID + MP + one other factor

What can we do? We must take apart (analyze) the sums of squares and degrees of freedom between groups (SSB and dfB) into component parts. Each part must contain only one factor along with ID and MP. Then each component will yield an estimate of sigma2 that can be compared to MSW in an F test.

We start with SSB and dfB and subdivide them – slide 1

• First, we create a way to study the effects of the first factor, ignoring the presence of Factor 2.

• We combine groups so that the resulting, larger aggregates of participants differ only because they received different levels of F1.

• Each such combined group will include an equal number of people who received the different levels of F2.

• So the groups are the same in that regard.

• They differ only in terms of how they were treated on the first independent variable, F1.

• Each combined group has different people than the other combined group(s).

• So the groups differ because of random individual differences (ID).

• Different random measurement problems will appear in each group (MP).

• Each combined group received a specific level of F1 and each group has a mean.

• Thus the groups will differ from each other because of ID + MP + the possible effects of Factor 1.

Computing MSF1, one of the three numerators in a two factor F test

• So, if we find the differences between each person’s mean for his/her combined group and the overall mean, then square and sum the differences, we will have a sum of squares for the first independent variable (SSF1).

• Call the number of levels of an independent variable L. The df for Factor 1 equals the number of levels of the factor minus one (dfF1 = LF1 – 1).

• An estimate of sigma2 that includes only ID + MP + (?) F1 can be computed by dividing this sum of squares by its degrees of freedom, as usual.

• MSF1 = SSF1/dfF1 = (ID + MP + ?F1)

Once you have MSF1, you have one of the three F tests you do in a two factor ANOVA

• F = MSF1/MSW
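Here is a minimal Python sketch of that procedure. The level names, ratings, and the MSW value are hypothetical placeholders, not the chapter's data (the worked example comes later).

```python
# Minimal sketch of the Factor 1 F test: pool the groups by F1 level, ignoring F2.
# The ratings and the MSW value below are hypothetical, for illustration only.
scores_by_f1_level = {            # each key is one level of Factor 1
    "level_1": [6, 7, 7, 8],
    "level_2": [5, 5, 6, 4],
    "level_3": [3, 4, 2, 3],
}

all_scores = [x for scores in scores_by_f1_level.values() for x in scores]
grand_mean = sum(all_scores) / len(all_scores)

# SSF1: each person's combined-group (level) mean minus the overall mean,
# squared and summed over everyone in that level.
ss_f1 = sum(
    len(scores) * (sum(scores) / len(scores) - grand_mean) ** 2
    for scores in scores_by_f1_level.values()
)
df_f1 = len(scores_by_f1_level) - 1     # LF1 - 1
ms_f1 = ss_f1 / df_f1                   # estimate of sigma2 reflecting ID + MP + F1

ms_w = 1.5                              # MSW would come from the within-groups step
f_f1 = ms_f1 / ms_w                     # F = MSF1 / MSW
print(round(ss_f1, 2), df_f1, round(ms_f1, 2), round(f_f1, 2))
```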

Then you do the same thing to find MSF2

• You combine groups so that you have groups that differ only on F2.

• You compare each person’s mean for this combined group to the overall mean, squaring and summing the differences for each person to get SSF2.

• Degrees of freedom = the number of levels of Factor 2 minus 1 (dfF2 = LF2 – 1).

• Then MSF2 = SSF2/dfF2

• FF2 = MSF2/MSW

The wholes are equal to the sum of their parts.

• Remember, SSB and dfB are the total between group sum of squares and degrees of freedom. Each is composed of three parts, SSF1, SSF2, SSINT and dfF1, dfF2, and dfINT. We are subdividing SSB and dfB into their 3 component parts.

• We have already computed SSF1, SSF2, dfF1, and dfF2.

• If we subtract the parts of SSB that we have already accounted for (SSF1 & SSF2), the remainder will be THE SUM OF SQUARES FOR THE INTERACTION (SSINT).

• If we subtract the parts of dfB that we have already accounted for (dfF1 & dfF2), the remainder will be the degrees of freedom for the interaction (dfINT).

Here are the formulae

• SSB= SSF1 + SSF2 + SSINT

• So

• SSINT= SSB – (SSF1 + SSF2)

• dfB= dfF1 + dfF2 + dfINT

• So

• dfINT= dfB – (dfF1 + dfF2)

MSINT = SSINT/dfINT

• Once you find the sum of squares and degrees of freedom for the interaction, you can compute a mean square, as usual, by dividing SS by df.

• The mean square for the interaction reflects ID + MP plus the possible effects of combining two variables that are not accounted for by the effects of either one alone or by simply adding the effects of the two.
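A tiny sketch of the subtraction and division just described, using purely hypothetical sums of squares and degrees of freedom:

```python
# Sketch of obtaining the interaction term by subtraction (hypothetical values).
ss_b,  df_b  = 30.0, 5    # between-groups totals
ss_f1, df_f1 = 12.0, 2    # main effect of Factor 1
ss_f2, df_f2 = 6.0,  1    # main effect of Factor 2

ss_int = ss_b - (ss_f1 + ss_f2)    # SSINT = SSB - (SSF1 + SSF2) -> 12.0
df_int = df_b - (df_f1 + df_f2)    # dfINT = dfB - (dfF1 + dfF2) -> 2
ms_int = ss_int / df_int           # MSINT = SSINT / dfINT      -> 6.0
print(ss_int, df_int, ms_int)
```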

For example,

• For example, alcohol and tranquilizers both can make you intoxicated.

• Combine them and you don’t just get more intoxicated.

• You can easily wind up dead.

• Their effects multiply each other, they don’t just add together.

• So the effect of the two variables together is different from the effect of either considered alone. That’s an interaction of two variables.

What’s next?

• We have two things to do:

• 1. Learn to compute the ANOVA and test three null hypotheses.

• 2. Learn more theory

Like Ch. 9, we’ll learn to compute ANOVA first.

• Many students find that it begins to make sense when you see how the numbers come together.

• After the computation, we’ll return to theory.

A two factor experiment

• Introductory Psychology students are asked to perform an easy or difficult task after they have been exposed to a severely embarrassing, mildly embarrassing, or non-embarrassing situation.

• The experimenter believes that people use whatever they can to feel good about themselves.

• Therefore, those who have been severely embarrassed will welcome the chance to work on a difficult task.

• The experimenter hypothesizes that in a non-embarrassing situation, participants will enjoy the easy task more than the difficult task.

Example: Experiment Outline

• Population: Introductory Psychology students

• Subjects: 24 participants divided equally among 6 treatment groups.

• Independent Variables:

– Factor 1: Embarrassment levels: severe, mild, none.

– Factor 2: Task difficulty levels: hard, easy

• Groups: 1=severe, hard task; 2=severe, easy task; 3=mild, hard task; 4=mild, easy task; 5=none, hard; 6=none, easy.

• Dependent variable: Subject rating of task enjoyment, where 1 = hating the task and 9 = loving it.

Effects

• We are interested in the main effects of embarrassment and task difficulty. Do participants like easy tasks better than hard ones? Do people like tasks differently when embarrassed or unembarrassed?

• We are also interested in assessing how combining different levels of both factors affects the response in ways beyond those that can be predicted by considering the effects of each IV separately. This is called the interaction of the independent variables.

Testing the null hypotheses.

• Do people undergoing different levels of embarrassment have differential responses to any task?

• H0-F1: Levels of embarrassment will not cause differences in liking for the task above and beyond those accounted for by sampling fluctuation.

• Do people like easy tasks better than hard ones (or the reverse) irrespective of how embarrassed they are?

• H0-F2: Differences in task difficulty will not cause differences in liking for the task above and beyond those accounted for by sampling fluctuation.

Testing the null hypotheses.

• Do embarrassment and task difficulty interact such that unembarrassed participants prefer easy tasks while embarrassed ones prefer hard tasks?

• H0-INT: There will be no differences in liking for the task caused by combining task difficulty and embarrassment that can not be attributed to the effects of each independent variable considered alone or to sampling fluctuation.

Computational steps

• Outline the experiment. (Done)

• Define the null and experimental hypotheses. (Done)

• Compute the Mean Squares within groups.

• Compute the Sum of Squares between groups.

• Compute the main effects.

• Compute the interaction.

• Set up the ANOVA table.

• Check the F table for significance.

• Interpret the results.

A 3X2 STUDY

[Design table: Task Difficulty (Easy, Hard) crossed with Embarrassment (Severe, Mild, None). Each of the six cells holds one group and its mean; the margins hold the task means, the embarrassment means, and the overall mean M.]

MSW

[Same design table. To compute MSW, compare each score to the mean for its own group.]

Mean Squares Within Groups

Group (4 participants)   Scores     Group mean   Deviations from group mean   Squared deviations
1 = severe, hard         6 4 8 6        6         0 -2  2  0                  0 4 4 0
2 = severe, easy         4 3 4 5        4         0 -1  0  1                  0 1 0 1
3 = mild, hard           3 5 7 5        5        -2  0  2  0                  4 0 4 0
4 = mild, easy           5 4 4 7        5         0 -1 -1  2                  0 1 1 4
5 = none, hard           3 3 6 4        4        -1 -1  2  0                  1 1 4 0
6 = none, easy           5 6 7 6        6        -1  0  1  0                  1 0 1 0

Overall mean M = 5

SSW = 32

n = 24, k = 6, dfW = n – k = 18

MSW = SSW/dfW = 32/18 = 1.78
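A minimal Python sketch of this within-groups computation, using the 24 ratings and six groups shown above:

```python
# Sketch of the MSW computation for the 24 ratings in the table above.
groups = {
    "severe_hard": [6, 4, 8, 6],   # group 1, mean 6
    "severe_easy": [4, 3, 4, 5],   # group 2, mean 4
    "mild_hard":   [3, 5, 7, 5],   # group 3, mean 5
    "mild_easy":   [5, 4, 4, 7],   # group 4, mean 5
    "none_hard":   [3, 3, 6, 4],   # group 5, mean 4
    "none_easy":   [5, 6, 7, 6],   # group 6, mean 6
}

# SSW: squared deviation of every score from its own group mean, summed.
ss_w = sum(
    (x - sum(scores) / len(scores)) ** 2
    for scores in groups.values()
    for x in scores
)
n = sum(len(scores) for scores in groups.values())   # 24 participants
k = len(groups)                                       # 6 groups
df_w = n - k                                          # 18
ms_w = ss_w / df_w                                    # 32 / 18 = 1.78
print(ss_w, df_w, round(ms_w, 2))                     # 32.0 18 1.78
```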

Then we compute a sum of squares and df between groups

• This is the same as in Chapter 9

• The difference is that we are going to subdivide SSB and dfB into component parts.

• Thus, we don’t use SSB and dfB in our ANOVA summary table; rather, we use them in an intermediate calculation.

Sum of Squares Between Groups (SSB)

[Same design table. Compare each group mean to the overall mean.]

Each participant is assigned his or her own group’s mean; the overall mean is M = 5.

Group              Group mean   Deviation from M   Squared deviation (for each of the 4 members)
1 = severe, hard       6              1              1 1 1 1
2 = severe, easy       4             -1              1 1 1 1
3 = mild, hard         5              0              0 0 0 0
4 = mild, easy         5              0              0 0 0 0
5 = none, hard         4             -1              1 1 1 1
6 = none, easy         6              1              1 1 1 1

SSB = 16

dfB = k – 1 = 6 – 1 = 5
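A matching sketch for SSB, using the six group means and the overall mean of 5 from the table above:

```python
# Sketch of the SSB computation from the six group means above (M = 5, n = 4 per group).
group_means = {"severe_hard": 6, "severe_easy": 4,
               "mild_hard": 5,   "mild_easy": 5,
               "none_hard": 4,   "none_easy": 6}
n_per_group = 4
overall_mean = 5

# SSB: squared deviation of each group mean from M, counted once per group member.
ss_b = sum(n_per_group * (m - overall_mean) ** 2 for m in group_means.values())
df_b = len(group_means) - 1        # k - 1
print(ss_b, df_b)                  # 16, 5
```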

Next, we answer the questions about each factor having an overall effect.

• To get proper between groups mean squares we have to divide the sums of squares and df between groups into components for factor 1, factor 2, and the interaction.

• Let’s just look at factor 1. Our question about factor 1 was “Do people undergoing different levels of embarrassment have differential responses to any task?”

• We can group participants into all those who were severely embarrassed, all those who were mildly embarrassed, and all those who were not embarrassed.

SSF1: Main Effect of Embarrassment

[Same design table, with the groups combined by embarrassment level. Compare each score’s embarrassment mean to the overall mean.]

Calculate Embarrassment Means

Severe (groups 1 & 2): scores 6 4 8 6 4 3 4 5   sum = 40, n = 8, mean = 5
Mild (groups 3 & 4):   scores 3 5 7 5 5 4 4 7   sum = 40, n = 8, mean = 5
None (groups 5 & 6):   scores 3 3 6 4 5 6 7 6   sum = 40, n = 8, mean = 5

Sum of squares and Mean Square for Embarrassment (F1)

For every participant, the embarrassment-level mean is 5 and the overall mean is 5, so every deviation from the overall mean and every squared deviation is 0.

SSF1 = 0

dfF1 = LF1 – 1 = 3 – 1 = 2

MSF1 = SSF1/dfF1 = 0

Factor 2

Then proceed as you just did for Factor 1 and obtain SSF2 and MSF2, where dfF2 = LF2 – 1.

SSF2: Main Effect of Task Difficulty

[Same design table, with the groups combined by task difficulty. Compare each score’s difficulty mean to the overall mean.]

Calculate Difficulty Means

Hard (groups 1, 3, 5): sum = 60, n = 12, mean = 5
Easy (groups 2, 4, 6): sum = 60, n = 12, mean = 5

Sum of squares and Mean Square – Task Difficulty

For every participant, the difficulty mean is 5 and the overall mean is 5, so every deviation from the overall mean and every squared deviation is 0.

SSF2 = 0

dfF2 = LF2 – 1 = 2 – 1 = 1

MSF2 = SSF2/dfF2 = 0
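A short sketch of both main-effect computations, using the combined-group means above (all equal to the overall mean of 5):

```python
# Sketch of both main-effect terms, using the combined-group means above (M = 5).
overall_mean = 5

# Factor 1 (embarrassment): three combined groups of 8, each with mean 5.
emb_means, n_per_emb_level = {"severe": 5, "mild": 5, "none": 5}, 8
ss_f1 = sum(n_per_emb_level * (m - overall_mean) ** 2 for m in emb_means.values())   # 0
df_f1 = len(emb_means) - 1                                                            # 2
ms_f1 = ss_f1 / df_f1                                                                 # 0

# Factor 2 (task difficulty): two combined groups of 12, each with mean 5.
diff_means, n_per_diff_level = {"hard": 5, "easy": 5}, 12
ss_f2 = sum(n_per_diff_level * (m - overall_mean) ** 2 for m in diff_means.values())  # 0
df_f2 = len(diff_means) - 1                                                            # 1
ms_f2 = ss_f2 / df_f2                                                                  # 0
print(ss_f1, ms_f1, ss_f2, ms_f2)
```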

Computing the sum of squares and df for the interaction.

• SSB contains all the possible effects of the independent variables in addition to the random factors, ID and MP. Here is that statement in equation form

• SSB= SSF1 + SSF2 + SSINT

• Rearranging the terms:

SSINT = SSB - (SSF1+SSF2) or SSINT = SSB- SSF1-SSF2

SSINT is what’s left from the sum of squares between groups (SSB) when the main effects of the two IVs are accounted for.

So, subtract SSF1 and SSF2 from overall SSB to obtain the sum of squares for the interaction (SSINT).

Then, subtract dfF1 and dfF2 from dfB to obtain dfINT.

Mean Squares - Interaction

SSB = SSEmbarrassment + SSDifficulty + SSInteraction. Rearrange:

SSInteraction = SSB – (SSEmbarrassment + SSDifficulty) = 16 – (0 + 0) = 16

dfInteraction = dfB – (dfEmbarrassment + dfDifficulty) = 5 – (2 + 1) = 2

MSInteraction = SSInteraction/dfInteraction = 16/2 = 8.00

Testing 3 null hypotheses in the two way factorial Anova

Hypotheses for Embarrassment

• Null Hypothesis - H0: There is no effect of embarrassment. Except for sampling fluctuation, the means for liking the task will be the same for the severe, mild, and no embarrassment treatment levels.

• Experimental Hypothesis - H1: Embarrassment considered alone will affect liking for the task.

Hypotheses for Task Difficulty

• Null Hypothesis - H0: There is no effect of task difficulty. The means for liking the task will be the same for the easy and difficult task treatment levels except for sampling fluctuation.

• Experimental Hypothesis - H1: Task difficulty considered alone will affect liking for the task.

Hypotheses for the Interaction of Embarrassment and Task Difficulty

• Null Hypothesis - H0: There is no interaction effect. Once you take into account the main effects of embarrassment and task difficulty, there will be no differences among the groups that can not be accounted for by sampling fluctuation.

• Experimental Hypothesis - H1: There are effects of combining task difficulty and embarrassment that can not be predicted from either IV considered alone. Such effects might be that:

– Those who have been severely embarrassed will enjoy the difficult task more than the easy task.

– Those who have not been embarrassed will enjoy the easy task more than the difficult task.

Theoretically relevant predictions

• In this experiment, the investigator predicted a pattern of results specifically consistent with her theory.

• The theory said that people will use any aspect of their environment that is available to avoid negative emotions and enhance positive ones.

• In this case, she predicted that the participants would like the hard task better when it allowed them to avoid focusing on feelings of embarrassment. Otherwise, they should like the easier task better.

Computational steps

• Outline the experiment.

• Define the null and experimental hypotheses.

• Compute the Mean Squares within groups.

• Compute the Sum of Squares between groups.

• Compute the main effects.

• Compute the interaction.

• Set up the ANOVA table.

• Check the F table for significance.

• Interpret the results.

Steps so far

• Outline the experiment.

• Define the null and experimental hypotheses.

• Compute the Mean Squares within groups.

• Compute the Sum of Squares between groups.

• Compute the main effects.

• Compute the interaction.

What we know to this point

• SSF1=0.00, dfF1=2

• SSF2=0.00, dfF2=1

• SSINT=16.00, dfINT=2

• SSW=32.00, dfW=18
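A minimal sketch that turns these sums of squares and degrees of freedom into the three mean squares and F ratios (the 3.55 critical value is the tabled F for 2 and 18 df at the .05 level, cited below):

```python
# Sketch: turning the sums of squares and df above into the three F tests.
ss_w,   df_w   = 32.0, 18
ss_f1,  df_f1  = 0.0,  2     # embarrassment
ss_f2,  df_f2  = 0.0,  1     # task difficulty
ss_int, df_int = 16.0, 2     # interaction

ms_w   = ss_w   / df_w       # 1.78
ms_f1  = ss_f1  / df_f1      # 0
ms_f2  = ss_f2  / df_f2      # 0
ms_int = ss_int / df_int     # 8.00

f_f1  = ms_f1  / ms_w        # 0.00, n.s.
f_f2  = ms_f2  / ms_w        # 0.00, n.s.
f_int = ms_int / ms_w        # 4.50; exceeds the .05 critical value of 3.55 for F(2, 18)
print(round(f_f1, 2), round(f_f2, 2), round(f_int, 2))
```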

Steps remaining

• Set up the ANOVA table.

• Check the F table for significance.

• Interpret the results.

ANOVA summary table

Source            SS    df    MS     F      p
Embarrassment      0     2     0     0      n.s.
Task Difficulty    0     1     0     0      n.s.
Interaction       16     2     8     4.50   <.05
Error             32    18     1.78

FInt(2,18) = 4.50, p < .05. The critical values of F with 2 and 18 df are 3.55 at the .05 level and 6.01 at the .01 level.

Means for Liking a Task

Task            Severe   Mild   None   Row mean
Easy               4       5      6       5
Hard               6       5      4       5
Column mean        5       5      5       M = 5

To interpret the results, always Plot the Means

[Line graph: Task Enjoyment (0–7) plotted against Embarrassment (Severe, Mild, None), with separate lines for the Easy and Hard tasks.]
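Here is a minimal plotting sketch of that graph; it assumes matplotlib is available and uses the cell means from the table above:

```python
# Minimal sketch of the means plot described above (assumes matplotlib is installed).
import matplotlib.pyplot as plt

levels = ["Severe", "Mild", "None"]
easy_means = [4, 5, 6]     # cell means from the "Means for Liking a Task" table
hard_means = [6, 5, 4]

plt.plot(levels, easy_means, marker="o", label="Easy")
plt.plot(levels, hard_means, marker="s", label="Hard")
plt.ylim(0, 7)
plt.xlabel("Embarrassment")
plt.ylabel("Task enjoyment")
plt.legend(title="Task")
plt.show()
```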

State Results

• Consistent with the experimenter’s theory, neither the main effect of embarrassment nor the main effect of task difficulty was significant.

• The interaction of the levels of embarrassment and the levels of task difficulty was significant, FInt(2,18) = 4.50, p < .05.

Interpret Significant Results

• Examination of the group means reveals that subjects in the hard task condition most liked the task when severely embarrassed, and least liked it when not embarrassed at all.

• Those in the easy task condition liked it most when not embarrassed and least when severely embarrassed.

Describe pattern of means.

Interpret Significant Results

• These findings are consistent with the hypothesis that people use everything they can, even adverse aspects of their environment, to feel as good as they can.

Reconcile statistical findings with the hypotheses.

Back to theory

• Notice some things about the experiment

Design of the experiment

• Each possible combination of F1 and F2 creates an experimental group that is treated differently from all other groups, in terms of one or both factors. Participants are randomly assigned to each of the treatment groups.

• For example, if there are 2 levels of the first variable (Factor 1 or F1) and 2 of the second (F2), we will need to create 4 groups (2x2). If F1 has 2 levels and F2 has 3 levels, we need to create 6 groups (2x3). If F1 has 3 levels and F2 has 3 levels, we need 9 groups. Etc.

Identifying experimental designs

• Two factor designs are identified by simply stating the number of levels of each variable. So a 2x4 design (called “a 2 by 4 design”) has 2 levels of F1 and 4 levels of F2. A 3x2 design has 3 levels of F1 and 2 levels of F2. Etc.

• Which factor is called F1 and which is called F2 is arbitrary (and up to the experimenter).

Example of a 2 x 3 design

To make it more concrete, assume we are testing new treatments for Generalized Anxiety Disorder. In a two factor design we examine the effects of cognitive behavior therapy vs. a social support group among GAD patients who receive Ativan, Zoloft, or Placebo. Thus, F1 has 2 levels (CBT/SocSup) while F2 has 3 levels (Ativan/Zoloft/Placebo). So, we would form 2 x 3 = 6 groups to do this experiment. Half the patients would get CBT; the other half get social support. A third of the CBT patients and one-third of the Social Support patients also get Ativan. Another third of those who receive CBT and one-third of those who get social support also receive Zoloft. The final third in each psychotherapy condition get a pill placebo.

A 2 x 3 design yields 6 groups. Let’s say you have 48 participants. Eight are randomly assigned to each group.

• Here are the six treatment groups:

– CBT + Ativan

– CBT + Zoloft

– CBT + Placebo

– Social support + Ativan

– Social support + Zoloft

– Social support + Placebo
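A small sketch of how such a crossing and random assignment might be set up; the factor and level names simply mirror the example above, and the assignment scheme shown is illustrative, not the only way to do it:

```python
# Sketch: crossing the two factors to form the 2 x 3 = 6 groups and randomly
# assigning 48 participants, 8 per group (variable names are illustrative).
import itertools
import random

therapy = ["CBT", "Social support"]        # Factor 1: 2 levels
drug = ["Ativan", "Zoloft", "Placebo"]     # Factor 2: 3 levels

groups = list(itertools.product(therapy, drug))   # the six combinations listed above

participants = list(range(1, 49))          # 48 participants
random.shuffle(participants)
assignment = {g: participants[i * 8:(i + 1) * 8] for i, g in enumerate(groups)}

for g, members in assignment.items():
    print(g, members)
```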

Remember, each group is randomly chosen from an (already) random sample.

• So, all the groups have similar means and variances on variables measuring any kind of potential difference.

• Thus, the group means on all variables will differ at the start of a two factor experiment solely because of random sampling fluctuation.

• The question is whether that will still be true after we treat the groups differently in a systematic, preplanned manner.

Again, on what measures will the groups tend to resemble each other at the beginning of the experiment?

• Answer: On every conceivable and inconceivable measure. If we were to make a bridge of playing cards from here to Pluto and put one measure on each card, our cards could only constitute a small percentage of the measurements that could be made. And on each and every one we would expect the group means to be fairly close to each other and to the population mean.

Here’s an example of a 2 x 2 study

• We are looking at the effects of stress and communication on liking for others. Participants were placed for half an hour in a crowded or uncrowded environment. So, half the participants were stressed, half were not. (Crowding is a simple, nonharmful way to stress people.) Half of each group then had communication encouraged; the other half had communication discouraged. Here is a diagram of the study.

Again, each combination of the two independent variables becomes a group, all of whose members get the same level of both factors.

STRESS LEVEL crossed with COMMUNICATION gives four groups:

– Stressed, Communication encouraged

– Stressed, Communication discouraged

– Not stressed, Communication encouraged

– Not stressed, Communication discouraged

This is a 2X2 study.

REVIEW

Analysis of Variance

• In our statistical analysis, we will determine whether differences among the treatment groups’ means are consistent with the null hypothesis that they differ solely because of random sampling fluctuation.

• As we did in Chapter 9, we will compare two estimates of sigma2. One estimate will be based on the variation of group means around the overall mean.

• The other estimate is MSW, derived from the variation of scores around their own group means.

As you know…

• Sigma2 is estimated either by comparing a score to a mean (the within group estimate) or by comparing a mean to another mean. This is done by:

– Calculating the deviations.

– Squaring the deviations.

– Summing the squared deviations.

– Dividing by degrees of freedom.

H0 vs. H1

• Again, the null will predict that a ratio of the two estimates of sigma2 will be about 1.00.

• Again, the experimental hypothesis says the ratio will be greater than one.

• But we can’t just compare MSB to MSW as we did in Chapter 9. We have a problem that needs to be solved before any estimates of sigma2 can be compared.

The Problem

• Unlike the one-way ANOVA of Chapter 9, we now have two variables that may push the means of the experimental groups apart.

• Moreover, combining the two variables may have effects beyond those that would occur were each variable presented alone. We call such effects the interaction of the two variables.

• Such effects can be multiplicative as opposed to additive.

• Example: Moderate levels of drinking can make you high. Barbiturates can make you sleep. Combining them can make you dead. The effect (on breathing in this case) is multiplicative.

The difference between analyzing single and multifactor designs – a review

• If we simply did the same thing as in Ch. 9, our between group term could be affected by Factor 1, Factor 2, and/or their interaction, as well as by sampling fluctuation.

• An F test requires two estimates of sigma2: one that indexes only sampling fluctuation, and one that indexes sampling fluctuation plus one other variable or interaction of variables.

• So we are going to have to do something different to get appropriate between groups terms.

Next, we create new between groups mean squares by combining the original experimental groups.

• To get proper between groups mean squares we have to divide the sums of squares and df between groups into components for factor 1, factor 2, and the interaction.

• We calculate sums of squares and df for the main effects of factors 1 and 2 first.

• We obtain the sum of squares and df for the interaction by subtraction.

To analyze the data we will again estimate the population variance (sigma2) with mean squares and compute F tests.

• The denominator of the F ratio will be the mean square within groups (MSW)

• where MSW = SSW/(n – k). (AGAIN!)

• In the multifactorial analysis of variance, the problem is obtaining proper mean squares for the numerator.

• We will study the two way analysis of variance for independent groups.

Computing SS for Factor 1

• Pretend that the experiment was a simple, single factor experiment in which the only difference among the groups was the first factor (that is, the degree to which a group is embarrassed). Create groups reflecting only differences on Factor 1.

• So, when computing the main effect of Factor 1 (level of embarrassment), ignore Factor 2 (whether the task was hard or easy). Divide participants into three groups depending solely on whether they were not embarrassed, mildly embarrassed, or severely embarrassed.

• Next, find the deviation of the mean of the severely, mildly, and not embarrassed participants from the overall mean. Then square and sum those differences. The total of the squared deviations of each person’s group mean from the overall mean in the severely, mildly, and not embarrassed conditions is the sum of squares for Factor 1 (SSF1).

dfF1 and MSF1

• Compute a mean square that takes only differences on Factor 1 into account by dividing SSF1 by dfF1.

• dfF1= L – 1 where L equals the number of levels (or different variations) of the first factor (F1).

• For example, in this experiment, embarrassment was either absent, mild or severe. These three ways participants are treated are called the three “levels” of Factor 1.

Then

• Do the same for factor 2.

• Subtract the sums of squares and df for factors 1 & 2 from the sum of squares and degrees of freedom between groups to obtain SSINT & dfINT.

• Once you have all the sums of squares and degrees of freedom, compute ANOVA and determine significance with the F table.

• Finally, plot the means and carefully interpret the results.