Structural Equation Models Introduction – SEM with Observed Variables Exercises – AMOS
Quantitative Research, March 2012
TOPICS
1. Terminology – SEM & AMOS
2. Rules and conventions – SEM & AMOS
3. Model degrees of freedom and model identification
4. Estimating variances and covariances (AMOS example)
5. Regression model in the SEM framework (AMOS example)
6. Decomposition of total effects in recursive models
7. Models with multiple endogenous variables (AMOS example)
8. Model goodness of fit
ADDITIONAL INFO
1. SPSS & AMOS availability (see the last page)
Databases for the lab examples are available here: http://web.me.com/paula.tufis/QR/2012.html
Required readings: Arbuckle, J. L. (2009). Amos 18 User's Guide. Chicago, IL: SPSS Inc. Tutorial (pp. 7-22), Example 1 (pp. 23-40), Example 4 (pp. 67-80).
Optional readings: Maruyama, G. (1998). Basics of Structural Equation Modeling. Thousand Oaks, Calif.: Sage Publications, Chapter 2 (pp. 15-25).
Terminology – SEM & AMOS
Exogenous variables – variables used as predictors that are not predicted by other variables
Endogenous variables – variables that are predicted by other variables
Mediating variables – endogenous variables in the middle of a causal chain
Observed/manifest variables – variables that are directly measured
Unobserved/latent variables – variables that are not directly measured but constructed based on relationships among other observed variables
Recursive models – models in which causality goes in one direction only and error terms are uncorrelated
Non-recursive models – models which include bi-directional causal relationships (two or more variables that influence each other) and/or correlations between two or more error terms
Model parameter – a coefficient (mean/variance/covariance/regression coefficient) of a structural equation model, unknown at the beginning of the analysis
Sample statistics – variances and covariances (and sometimes means as well) observed in the sample data; these are used as input data in the analysis
Path diagram – a structural equation model represented according to the graphical conventions listed below
Free parameters – unknown coefficients estimated by the model
Fixed parameters – coefficients set equal to a particular value by the researcher prior to model estimation
Constrained parameters – coefficients set equal to one another by the researcher prior to model estimation
Rules and Conventions – SEM & AMOS
In AMOS default estimation procedures (ML estimation), all endogenous variables should be interval or ratio-level variables (in practice, ordinal level endogenous variables are accepted and procedures for interval and ratio-level data in AMOS are robust when using ordinal level data with 4 or more categories in large samples).
Usually, in SEM diagrams the variables are arranged from left to right in temporal/causal order (causes before effects)
Representation of variables:
Observed variables are represented by rectangles/squares.
Unobserved variables are represented by ellipses/circles in SEM diagrams.
Representation of relationships: There is a clear distinction between association relationships and causal relationships (both in theoretical terms and in terms of the graphical representation of a structural equation model).
Non-causal (association) relationships are represented by curved double-headed arrows.
Causal relationships are represented by straight single-headed arrows (with the arrow pointing towards the effect).
Association relationships can only be modeled among exogenous variables. Associations between two endogenous variables, or between an endogenous variable and an exogenous variable, are not possible in the SEM framework.
The model should include all possible influences on the endogenous variable(s). The influences that cannot be accounted for by the predictors included in the model are included in the error term. The error term also includes measurement errors in the endogenous variable.
In short, this means that each endogenous variable in the model must have an associated error term.
Exogenous variables are assumed to be measured without error; in other words, exogenous variables do not have an associated error term.
Error terms are considered unobserved variables and as such are represented by ellipses.
All variables in the model (including latent variables) must have a unique variable name. Observed variables' names are their names in the data set. For latent variables you can choose any name you like, provided these names are different from the names of observed variables in the active dataset.
All latent variables in the model must have a scale. This is accomplished by imposing some constraints either on the variance of the latent variable (not recommended in multi-group analyses), or on a regression coefficient associated with the latent variable (the default option in AMOS).
For a latent variable with multiple indicators, by default one of the loadings is set to 1. This makes the latent variable “borrow” the scale of the indicator for which the loading has been set to 1.
For latent variables representing error terms, by default the associated error path is set to 1.
AMOS is a WYSIWYG (what you see is what you get) program:
The covariance between two exogenous variables is assumed to be 0 if the diagram does not include a curved double-headed arrow between these two variables.
The effect of one variable on another is 0 if the diagram does not include a straight single-headed arrow between these two variables.
A model will not output results if it has negative degrees of freedom (DF < 0). DF ≥ 0 is a necessary but not sufficient condition for the model to be identified and for results to be estimated.
Introduction to AMOS
Preparing the Data:
Use SPSS/R/SAS/STATA … to arrange your data before starting the analysis in AMOS.
View descriptive statistics:
SPSS: Analyze → Descriptive Statistics → Frequencies
Replace "Don't Know" / "No Answer" / "Not Applicable" with missing values:
SPSS: Transform → Recode Into Same (or Different) Variables → Select Variables (fill in new variable names and click Change) → Old and New Values → assign System Missing to DK/NA values → Add → Continue → OK
Note: the same commands can be adapted to recode variable values (e.g., when reversing or collapsing the scale of a variable, or when constructing dummy variables).
Note: if you use R/SAS/STATA you will have to export the data into a format that is readable by AMOS (see the list of formats AMOS reads below).
AMOS requires a complete data set (no missing values) in order to run the analysis. There are several ways of dealing with missing data:
You can select a subsample that contains complete data (listwise deletion).
SPSS: Compute a filter variable that has value 1 for all cases in which your variables of interest have missing values: Transform → Compute → filter=1 → If → Include if case satisfies condition: SYSMIS(var1) or SYSMIS(var2) or … → Continue → OK
Exclude all cases for which the filter variable equals 1: Data → Select Cases → If Condition is Satisfied → If → SYSMIS(filter) → Continue → Unselected cases are deleted → OK
Note: the same commands can be adapted to select a subsample of your data for further analyses (e.g., if you intend to analyze a subsample containing only males).
You can replace missing values with the mean of the variable.
SPSS: Transform → Replace Missing Values → Method: Series Mean → select the variables you want to modify → OK
You can use a missing values imputation method:
AMOS: Regression imputation / Stochastic regression imputation / Bayesian imputation
You can use the FIML method for dealing with missing data that AMOS provides (in this case you can use the data with missing values).
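The listwise-deletion and mean-substitution strategies above can also be sketched outside SPSS. The following Python sketch uses hypothetical toy data and made-up variable names, purely to illustrate the two operations:

```python
from statistics import mean

# Hypothetical toy data: None marks a missing value (e.g., DK/NA recoded to missing).
cases = [
    {"age": 34, "satis": 7},
    {"age": None, "satis": 5},
    {"age": 51, "satis": None},
    {"age": 28, "satis": 9},
]

# Listwise deletion: keep only cases complete on all variables of interest.
complete = [c for c in cases if None not in c.values()]

def impute_mean(cases, var):
    """Mean substitution: replace missing values of var with its observed mean."""
    observed = [c[var] for c in cases if c[var] is not None]
    m = mean(observed)
    return [dict(c, **{var: c[var] if c[var] is not None else m}) for c in cases]

imputed = impute_mean(impute_mean(cases, "age"), "satis")
```

Note that mean substitution shrinks variances and attenuates covariances, which is one reason the FIML option is generally preferable when it is available.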
Types of files AMOS reads:
dBase III/IV/5 (*.dbf)
Excel 3/4/5/8 (*.xls)
FoxPro 2/2.5/2.6 (*.dbf)
Lotus (*.wk1, *.wk3, *.wk4)
MS Access (*.mdb)
SPSS/PASW (*.sav) – raw data or variance-covariance matrices
Text (*.txt, *.csv)
AMOS Toolbar Guide
Draw observed variables
Select data file(s)
Draw latent variables
Analysis properties
Draw a latent variable and add an indicator to it
Calculate estimates
Draw causal path
Copy path diagram to clipboard
Draw covariance/ correlation
View text output
Add a unique variable to an existing variable
Save current path diagram
Figure captions
Object properties
List variables in the model
Drag object properties
List variables in the data set
Preserve symmetries
Select one object at a time
Zoom in on an area that you select
Select all objects
View a smaller area of the diagram
Deselect all objects
View a larger area of the diagram
Duplicate objects
Show entire page on screen
Move objects
Resize diagram to fit page
Delete objects
Examine diagram with a loupe
Change the shape of objects
Display degrees of freedom
Rotate the indicators of a latent variable
Multiple-Group Analysis
Reflect the indicators of a latent variable
Print diagram/output results
Move parameter values
Undo
Reposition the path diagram on the screen
Redo
Touch up a variable
Specification search
Usual Analysis Steps¹
1. Handle data manipulations and missing data
2. Get the data into AMOS:
Specify data files for the model
View the variable list in the file you have specified
3. Set up your interface: View/Set → Interface Properties
4. Draw the diagram:
Draw observed variables
Draw latent variables (also the symbol for error terms)
Draw a latent variable with its indicators
Draw causal relationships
Draw covariances/correlations
Add an error term to an endogenous (dependent) variable
5. Name your variables: for observed variables, click and drag the variable name from the list; for latent variables, name the variables yourself (right click → Object Properties → fill in a name in the Variable Name box)
6. Editing functions:
Add a title to your diagram
Select one object / select all objects / deselect all objects
Copy object
Move object
Delete object
Modify object size
Rotate indicators of a latent variable
Drag properties from one object to another
Undo and Redo
Redraw (refresh) the path diagram
Resize diagram to fit the page
7. Set Analysis Properties
8. Run the model
9. View the results (in text form or in diagram form)
¹ Instructions are described for AMOS version 5. With very small exceptions, they are applicable to later versions of AMOS as well.
Computation of DF (degrees of freedom) in a model
# of sample moments = (# of sample covariances) + (# of sample variances)
# of sample covariances = p(p-1)/2, where p = number of observed variables
# of sample variances = p
# of parameters to estimate = # of free variances, covariances, and regression coefficients in the model
DF = (# of sample moments) – (# of parameters to estimate)
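The DF computation above can be written as a short function; a minimal Python sketch (the function name is ours, not AMOS terminology):

```python
def model_df(p, n_free_params):
    """Degrees of freedom for a SEM with p observed variables (no mean structure)."""
    sample_covariances = p * (p - 1) // 2   # off-diagonal sample moments
    sample_variances = p                    # diagonal sample moments
    sample_moments = sample_covariances + sample_variances
    return sample_moments - n_free_params

# The simple regression example below has 4 observed variables and
# 3 covariances + 4 variances + 3 regression coefficients = 10 free parameters:
print(model_df(4, 10))  # → 0 (just-identified)
```

The same function reproduces the DF of the multiple-endogenous-variables example later in the handout: `model_df(6, 20)` gives 1.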
Model identification
If DF > 0, the model is over-identified: the number of pieces of information > the number of unknowns, so there are multiple possible solutions for at least some of the model parameters, and model goodness-of-fit measures can be computed. The "best" solution is chosen through maximization of the likelihood function.
If DF = 0, the model is just-identified: the number of pieces of information = the number of unknowns, so there is a unique solution for the model parameters, but measures of goodness of fit cannot be computed.
If DF < 0, the model is under-identified: the number of pieces of information < the number of unknowns, so neither model parameters nor measures of goodness of fit can be computed; there are an infinite number of solutions for the model, each fitting the data equally well, and there is no criterion for choosing the "best" solution among them.
AMOS EXAMPLE #1 – ESTIMATING VARIANCES AND COVARIANCES Data: Public Opinion Barometer data file [Barometrul de Opinie Publică], Fall 2005. Survey designed and
executed by Soros Foundation, Romania, available at:
http://www.soros.ro/ro/program_articol.php?articol=107 .
Note: The survey was part of the World Values Survey 2005. If you are interested in these variables for other countries, you can download the WVS survey data (see details on the last page).
Dataset for examples: qr2011_data_amoslab1.sav (available on the class website)
Variables used:
V46 – freedom of choice and control over life (1=no freedom of choice & control at all … 10=a great deal of freedom of choice & control), recoded into CONTR_1
V237 – age, recoded into AGE_1
V238 – education (1=no schooling … 14=MA, Ph.D.), recoded into EDUC_1
V22 – satisfaction with life (1=dissatisfied … 10=satisfied), recoded into SATIS_1
Recodes: missing on 99 + replacing missing values with means
Analysis Properties: Minimization history, Standardized Estimates
Model Results:
Notes: Estimated variable variances – above variable boxes, top right corner, in the unstandardized solution. Estimated variable covariances – above curved double-headed arrows, in the unstandardized solution.
Note: Estimated correlations among variables – above curved double-headed arrows, in the standardized solution.
AMOS EXAMPLE #4 – SIMPLE REGRESSION MODEL
Variables used: same variables as in the prior example V46, V237, V238, V22 (recoded into CONTR_1, AGE_1, EDUC_1, SATIS_1)
Analysis Properties: Minimization history, Standardized Estimates, Squared Multiple Correlations
# sample moments = # covariances (6) + # variances (4) = 10
# parameters to estimate = # covariances (3) + # variances (4) + # regression coefficients (3) = 10
DF = 10 – 10 = 0
Model Results
OUTPUT INTERPRETATION (REGRESSION MODELS)
State your hypotheses about relationships in the model. What kind of relationships do you expect and
why do you think these relationships exist? You might have hypotheses about the causal relationships in
your model, but also about the non-causal relationships (associations among exogenous variables).
Interpret model goodness of fit for over-identified models (details to follow …)
Present and interpret correlations and/or covariances in your model. The interpretation implies a
discussion of the direction and intensity of associations among variables, and of the statistical
significance of the presented coefficients. Do the model results support your theoretical hypotheses or
not?
Notes: Covariances – near the curved arrows in the unstandardized solution; Variances – above variable boxes, top right corner; Unstandardized regression coefficients – above the straight arrows.
Notes: Correlations – near the curved arrows in the standardized solution; Standardized regression coefficients – above the straight arrows in the standardized solution; R squared – above endogenous variable boxes, top right corner.
Present and interpret regression coefficients (both unstandardized and standardized if they are both
informative, or choose the type of coefficient that is appropriate to your particular purposes), focusing
on relationships of interest in your model.
Regression weights (unstandardized regression coefficients) – represent the average amount of change² in the dependent variable for a one raw-score-unit increase in the predictor variable (controlling for the other predictors in the model). Unstandardized regression coefficients allow you to compare the intensity of one effect across different groups (e.g., you can tell whether the effect of age on satisfaction is stronger among women than among men – in this case, the same model has to be estimated in the subsample of women and again in the subsample of men).
Standardized regression weights (standardized regression coefficients) – represent the average amount of change² in the dependent variable (Y), expressed in standard deviations of Y, for a one standard deviation increase in the predictor variable (controlling for the other predictors in the model). Standardized regression coefficients allow you to compare the intensity of the effects of different predictor variables on the dependent variable within the same group of respondents (e.g., you can tell whether the effect of age on satisfaction is stronger than the effect of education on satisfaction).
Report how much of the variation in your dependent variables is explained by the predictors you included in the model
Squared Multiple Correlation (R2) – represents the proportion of variance in the dependent
variable that is explained by the collective set of predictors
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Interpreting model results implies a bit more than reproducing the standard phrases above (for example, "X and Y are positively associated" or "a one unit increase in X leads to a b unit decrease in the mean of Y, controlling for …"). Focus on the substantive meaning of your model results. For example, in simple terms, the model above tells us that:
People who believe that they have more freedom of choice and control over their lives tend to be
more satisfied with their lives.
As people get older they tend to become less satisfied with their lives.
More educated people tend to be more satisfied with their lives.
Interpret your model results in relation to the theory and the hypotheses you used for model building.
Decomposition of Effects in Recursive Models
Sources: Maruyama, Geoffrey M. 1998. Basics of Structural Equation Modeling. Thousand Oaks, Calif.: Sage Publications. Alwin, Duane F. & Hauser, Robert M. 1975. "The Decomposition of Effects in Path Analysis." American Sociological Review 40 (1): 37-47.
The relationship between 2 variables can be decomposed into:
Non-causal effects
Causal effects
Total effect (=direct effect + indirect effects) – represents the average overall amount of change in the dependent variable for a one unit/ one standard deviation change in the predictor variable
Indirect effects – the effects of one variable on another variable through mediating variables (e.g., changes in X1 trigger changes in X3, which in turn trigger changes in X5)
Direct effect – the net effect of one variable on another variable or the part of the causal effect that is not transmitted through mediating variables.
² Increase if the regression coefficient is positive and decrease if the regression coefficient is negative.
Example:
Source: G.M. Maruyama, 1998, ”Basics of Structural Equation Modeling”, p.38
The model above is translated into the following system of equations:
X3 = p31·X1 + p32·X2 + e3
X4 = p41·X1 + p42·X2 + p43·X3 + e4
X5 = p51·X1 + p52·X2 + p53·X3 + p54·X4 + e5
In the prediction of X5:
X1, X2, X3, and X4 have direct effects on X5
X1, X2, and X3 also have indirect effects on X5
For example, X1 has indirect causal effects on X5 through/mediated by:
X3: X1 → X3 → X5
X4: X1 → X4 → X5
X3 and X4: X1 → X3 → X4 → X5
Each direct effect "uses up" a degree of freedom, but indirect effects do not "use up" additional degrees of freedom in the model.
Computing indirect effects based on direct effects:
The value of an indirect effect is computed by multiplying the direct effects corresponding to the arrows describing the trajectory of the indirect effect.
The total indirect effect of one variable on another variable is the sum of all indirect effects existing between these two variables.
For example, the effect of X1 on X5 is decomposed into:
Direct causal effect: DE = p51
Indirect causal effects:
X1 → X3 → X5: IE1 = p53·p31
X1 → X4 → X5: IE2 = p54·p41
X1 → X3 → X4 → X5: IE3 = p54·p43·p31
Total indirect causal effect: IE = IE1 + IE2 + IE3
Total causal effect: TE = DE + IE
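The multiplication-and-summation rule can be sketched in a few lines of Python. The path coefficients below are hypothetical values invented for illustration, not estimates from any fitted model:

```python
# Hypothetical standardized path coefficients for the five-variable model above.
p = {"p31": 0.40, "p41": 0.25, "p43": 0.30, "p51": 0.20, "p53": 0.35, "p54": 0.15}

DE = p["p51"]                          # direct effect: X1 -> X5
IE1 = p["p53"] * p["p31"]              # X1 -> X3 -> X5
IE2 = p["p54"] * p["p41"]              # X1 -> X4 -> X5
IE3 = p["p54"] * p["p43"] * p["p31"]   # X1 -> X3 -> X4 -> X5
IE = IE1 + IE2 + IE3                   # total indirect effect
TE = DE + IE                           # total causal effect
```

Each product corresponds to one trajectory of single-headed arrows from X1 to X5, and the sum over trajectories gives the total indirect effect.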
[Path diagram (Maruyama 1998, p. 38): exogenous variables X1 and X2 (correlated, r12) predict X3 (paths p31, p32); X1, X2, and X3 predict X4 (paths p41, p42, p43); X1, X2, X3, and X4 predict X5 (paths p51, p52, p53, p54); error terms e3, e4, e5 attach to X3, X4, X5.]
Non-causal effects are:
Effects due to a shared antecedent (e.g., a part of the non-causal effect of X3 on X4 is due to the fact that both X3 and X4 are caused by X1)
Effects due to prior associations of antecedent exogenous variables ("effects transmitted through the curved arrows")
The total association between 2 variables (bivariate covariance) = the causal effect + the non-causal effect
The non-causal effect = total association – causal effect
METHODS FOR DECOMPOSING CAUSAL EFFECTS IN RECURSIVE MODELS
Method I
Estimate direct causal effects in the model.
Calculate indirect causal effects based on the direct causal effects (using the computation method described previously).
Compute total effects as the sum of direct and indirect effects.
Non-causal effects can be computed as the difference between the total association (bivariate covariance/correlation) and the total causal effect.
Method II
The total causal effect is estimated in a reduced-form model (a model containing only the endogenous variable of interest, the predictor of interest as an exogenous variable, plus all variables in the complete model that are causally antecedent or contemporaneous with the predictor of interest as exogenous variables)
The direct causal effect is estimated in the complete model The indirect causal effect is computed as the difference between the total causal effect and the
direct causal effect Non-causal effects can be computed as above (the difference between total association and the
total causal effect) For example: to decompose the effect of X1 on X5
The bivariate covariance between X1 and X5 is the total association between the two variables (Q1)
Estimate the reduced-form model with X1 and X2 as predictors and X5 as the dependent variable. The effect of X1 on X5 in this model is the total causal effect (Q2)
Estimate the complete model with X1, X2, X3, and X4 as predictors and X5 as the dependent variable. The effect of X1 on X5 in this model is the direct causal effect (Q3)
The indirect causal effect is computed as the difference between the total causal effect and the direct causal effect (Q2-Q3)
The non-causal effect can be computed as the difference between total association and the total causal effect (Q1-Q2)
AMOS EXAMPLE – MULTIPLE ENDOGENOUS VARIABLES AND DECOMPOSITION OF EFFECTS
The same data as in the previous example, subsample: employed respondents (values 1 – employed full time, 2 – employed part time, 3 – self employed on variable v241)
Variables used:
Exogenous variables:
v253 (household income) and b65 (number of household members) – for the construction of a variable measuring household income per capita in hundreds of RON (HHINCPC=v253/(b65*100)), recoded into HHINCPC_1
v235 – gender (1=male, 0=female) recoded into GENDER_1 v237 – age recoded into AGE_1
Mediating variables:
V46 – freedom of choice and control over life (1=no freedom at all … 10=complete freedom of choice), recoded into CONTR_1
v246 – freedom of decision at the workplace (1=no freedom at all … 10=complete freedom of decision), recoded into WRKDEC_1
Final endogenous variable/endogenous variable of interest:
b10 – how do you think your life will be a year from now (5=much worse … 1=much better), recoded into OPTIM_1 with reversal of scaling to measure optimism
Data recodes: missing on 8, 9 + missing values replaced with mean; reversal of scale (b10)
Analysis Properties: Minimization History, Standardized Estimates, Squared Multiple Correlations, Indirect, Direct, & Total Effects
# sample moments = # covariances (15) + # variances (6) = 21
# parameters to estimate = # covariances (3) + # variances (6) + # regression coefficients (11) = 20
DF = 21 – 20 = 1
Model Results
[Path diagram: exogenous variables HHINCPC_1, GENDER_1, and AGE_1 predict the mediating variables CONTR_1 (error term e1) and WRKDEC_1 (error term e2), which in turn, together with the exogenous variables, predict OPTIM_1 (error term e3). Standardized estimates on the paths from income: .066+ (to CONTR_1), .160*** (to WRKDEC_1), .118** (to OPTIM_1); from the mediators to OPTIM_1: .147*** (CONTR_1) and .140*** (WRKDEC_1).]
DECOMPOSITION OF CAUSAL EFFECTS – METHOD I
COMPLETE MODEL – RESULTS (STANDARDIZED COEFFICIENTS)

                                  MODEL 1             MODEL 1               MODEL 1
                                  Control over life   Freedom of decision   Optimism
                                  (CONTR_1)           (WRKDEC_1)            (OPTIM_1)
Income (HHINCPC_1)                0.066 +             0.160 ***             0.118 **
Control over life (CONTR_1)       –                   –                     0.147 ***
Freedom of decision (WRKDEC_1)    –                   –                     0.140 ***
Men (GENDER_1)                    0.058               0.061 +               -0.001
Age (AGE_1)                       -0.044              0.084 *               -0.152 ***
R²                                0.009               0.038                 0.082
Note: *** p < 0.001; ** p < 0.01; * p < 0.05; + p < 0.1
Decomposition of the effect of income on optimism
Direct causal effect (DE) = 0.118
Indirect causal effects:
Through "control over life": IE1 = 0.066 * 0.147 = 0.009702
Through "freedom of decision": IE2 = 0.160 * 0.140 = 0.0224
Total indirect effect (IE) = IE1 + IE2 = 0.032102
Total causal effect (TE) = DE + IE = 0.150102
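Using the standardized coefficients reported in the table above, the Method I arithmetic can be checked in a few lines of Python:

```python
# Standardized coefficients from the complete model (Method I).
DE = 0.118            # direct effect of income on optimism
IE1 = 0.066 * 0.147   # indirect effect through "control over life"
IE2 = 0.160 * 0.140   # indirect effect through "freedom of decision"
IE = IE1 + IE2        # total indirect effect
TE = DE + IE          # total causal effect
print(round(IE, 6), round(TE, 6))  # → 0.032102 0.150102
```

Rounded to three decimals, these match the indirect effect (0.032) and total effect (0.150) that AMOS reports in its effects tables below.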
DECOMPOSITION OF CAUSAL EFFECTS – METHOD II
EXPLAINING THE RELATIONSHIP BETWEEN INCOME AND OPTIMISM – ADDING EFFECTS OF MEDIATING VARIABLES (STANDARDIZED COEFFICIENTS)

                                  MODEL 1      MODEL 2      MODEL 3
                                  Optimism     Optimism     Optimism
                                  (OPTIM_1)    (OPTIM_1)    (OPTIM_1)
Income (HHINCPC_1)                0.150 ***    0.139 ***    0.118 **
Control over life (CONTR_1)       –            0.164 ***    0.147 ***
Freedom of decision (WRKDEC_1)    –            –            0.140 ***
Men (GENDER_1)                    0.016        0.006        -0.001
Age (AGE_1)                       -0.147 ***   -0.139 ***   -0.152 ***
R²                                0.041        0.068        0.082
Note: *** p < 0.001; ** p < 0.01
Decomposition of the effect of income on optimism:
Direct causal effect (DE) = 0.118 (from MODEL 3)
Total causal effect (TE) = 0.150 (from MODEL 1)
Indirect causal effect (IE) = TE – DE = 0.150 – 0.118 = 0.032
AMOS OUTPUT FOR DIRECT, INDIRECT, AND TOTAL EFFECTS Standardized Direct Effects (Group number 1 - Default model)
AGE_1 GENDER_1 HHINCPC_1 WRKDEC_1 CONTR_1
WRKDEC_1 0.084 0.061 0.160 0.000 0.000
CONTR_1 -0.044 0.058 0.066 0.000 0.000
OPTIM_1 -0.152 -0.001 0.118 0.140 0.147
Standardized Indirect Effects (Group number 1 - Default model)
AGE_1 GENDER_1 HHINCPC_1 WRKDEC_1 CONTR_1
WRKDEC_1 0.000 0.000 0.000 0.000 0.000
CONTR_1 0.000 0.000 0.000 0.000 0.000
OPTIM_1 0.005 0.017 0.032 0.000 0.000
Standardized Total Effects (Group number 1 - Default model)
AGE_1 GENDER_1 HHINCPC_1 WRKDEC_1 CONTR_1
WRKDEC_1 0.084 0.061 0.160 0.000 0.000
CONTR_1 -0.044 0.058 0.066 0.000 0.000
OPTIM_1 -0.147 0.016 0.150 0.140 0.147
Interpreting model goodness of fit
Report overall model fit (if chi-square says model is not a good fit for the data, it may be due to large sample size, and you have to look at other fit statistics – TLI, GFI, RMSEA).
Chi-square (CMIN) – shows how well the model fits the data (it is a measure of model goodness of fit); it compares the sample variance-covariance matrix with the implied variance-covariance matrix (the matrix implied by the estimated model parameters)
H0: the model perfectly fits the data; p is the significance test associated with this H0.
Using significance level α = 0.05, if p < 0.05, H0 is rejected (the model does not perfectly fit the data).
GFI is also a measure of goodness of fit, based on the discrepancy between the implied and sample variances and covariances.
GFI varies between 0 and 1, where 1 indicates perfect model fit.
Conventionally, GFI values greater than 0.85 indicate good model fit.
AGFI - GFI adjusted for model complexity AGFI has an upper bound of 1 (indicating perfect fit), but no lower bound Conventionally, values greater than 0.90 indicate good model fit
Tucker-Lewis Index: is a goodness of fit measure adjusted for model complexity and estimates goodness of fit of the model of interest in relation to a baseline model (the independence model)
Usually, TLI varies between 0 and 1, but can sometimes take values outside this range If the model perfectly fits the data, TLI has value 1 Conventionally, a value of at least 0.90 indicates acceptable model fit Conventionally, a value of at least 0.95 is needed to judge the model as good fitting
RMSEA (Root Mean Square Error of Approximation) takes into account the population error of approximation and the number of degrees of freedom in the model (adjusts for model complexity)
If the approximation is good, RMSEA should be small.
Conventionally, a value of approximately 0.05 or less indicates a close fit of the model.
H0: RMSEA in the population is no greater than 0.05; the p test for close fit (PCLOSE) is the significance test associated with this H0.
Using significance level α = 0.05, if p < 0.05, H0 is rejected (RMSEA in the population is greater than 0.05, i.e., the model is not a close fit to the data).
Note: You can find a description of all the goodness-of-fit tests reported in the AMOS output in Appendix C of the AMOS User's Guide.
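Two of these indices can be computed by hand from the chi-square values AMOS reports. The sketch below assumes the common single-group formulas (RMSEA point estimate from the model chi-square, TLI relative to the independence model); the numeric inputs are hypothetical, for illustration only:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))), n = sample size."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index of model m relative to the independence (baseline) model b."""
    return (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1.0)

# Hypothetical values: model chi2 = 3.2 on 1 df, baseline chi2 = 250 on 15 df, N = 800.
print(round(rmsea(3.2, 1, 800), 3))
print(round(tli(3.2, 1, 250, 15), 3))
```

Because `max(chi2 - df, 0)` is used, RMSEA is 0 whenever the chi-square is no larger than the degrees of freedom, which is why a well-fitting model can report RMSEA = 0.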
SPSS & AMOS availability
IBM SPSS:
Webpage: http://www.spss.com/software/statistics/
Latest version: 20
Free 14-day trial version available (registration required):
http://www14.software.ibm.com/download/data/web/en_US/trialprograms/W110742E06714B29.html?S_CMP=rnav
AMOS:
Webpage: http://www.spss.com/amos/
Latest version: 20
Free trial version (14 days?) available (registration required):
http://www14.software.ibm.com/download/data/web/en_US/trialprograms/G556357A25118V85.html?S_CMP=rnav
Free student version (version 5.01), limited to 8 observed variables and 54 parameters:
http://amosdevelopment.com/download/index.htm