3620 Lecture 07


  • Factor Analysis

    There are two main types of factor analysis:

    Confirmatory Factor Analysis, and Exploratory Factor Analysis.

    Here, we consider only the second type.

  • Factor Analysis (cont.)

    Introduction
    Model representation
    Data type
    Steps to perform

  • Introduction: Exploratory Factor Analysis

    Definition
    Objective
    Assumption

  • Definition

    Analyze the structure of the interrelationships (correlations) among a large set of decision variables to determine whether the information can be summarized into a smaller set of factors. That is, decision variables that are correlated with one another but largely independent of the others are combined into factors.

  • Example

    For example, a retail store manager wants to assess store performance in order to develop an action plan. He identifies 80 variables that describe store performance. However, developing an action plan around 80 specific variables would be troublesome. So, can the manager summarize those 80 specific variables into a smaller set of factors? Yes! He or she can adopt exploratory factor analysis to identify the underlying dimensions of those variables, and then group together the variables that are interrelated with each other --- statistically, that is.

  • Objective

    To summarize the information contained in a number of decision variables into a smaller set of factors, subject to minimum loss of information. It serves as a tool to better interpret the results of observations when a large number of decision variables is grouped into a smaller set of factors.

  • Assumption

    Given a correlation matrix for all decision variables, a general guideline is that a correlation with a value greater than 0.3 among some decision variables is considered significant. Note: later, we will examine the different criteria that can be applied to these correlations.

  • Model representation

    The basic model representation of exploratory factor analysis is:

    R = F F^T

    where,
    R   = Correlation Matrix
    F   = Factor Matrix
    F^T = Factor Matrix Transpose

  • Example

    Correlation Matrix
             A      B      C      D
    A            -.953  -.055  -.130
    B     -.953         -.091  -.036
    C     -.055  -.091          .990
    D     -.130  -.036   .990

    Factor Matrix
               Factor I   Factor II
    A           -.400       .900
    B            .251      -.947
    C            .932       .348
    D            .956       .286
    Eigenvalue   2.00       1.91

  • Written out, R = F F^T is:

        | -.400   .900 |
        |  .251  -.947 |     | -.400   .251   .932   .956 |
    R = |  .932   .348 |  x  |  .900  -.947   .348   .286 |
        |  .956   .286 |

    Because the factor matrix is multiplied by its own transpose, the product recovers the pairwise correlations one by one. The correlation matrix is thus reproduced, which is a critical element of factor analysis.
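    The check above can be reproduced in a few lines of numpy (a minimal sketch; the variable names are ours, the numbers are from the example):

    ```python
    import numpy as np

    # Rotated factor matrix from the example above
    # (rows: variables A-D; columns: Factor I, Factor II).
    F = np.array([
        [-0.400,  0.900],   # A
        [ 0.251, -0.947],   # B
        [ 0.932,  0.348],   # C
        [ 0.956,  0.286],   # D
    ])

    # Basic model: R = F F^T reproduces the correlations
    # implied by the two factors.
    R = F @ F.T
    print(np.round(R, 3))
    ```

    The off-diagonal entries of the product, e.g. r(A,B) near -.953 and r(C,D) near .990, match the correlation matrix of the example to within rounding.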

  • Data type

    Variables for exploratory factor analysis should be represented in metric form. When the variables are in non-metric form, a data transformation should be adopted (Hair et al., 1998).

  • Steps to perform

    Five simple steps to follow:
    1. Testing assumptions
    2. Selecting a proper sample size
    3. Extracting factors
    4. Rotating factors
    5. Refining and labeling factors

    Other references: see the attached files 3620factor and 3620factor21, and Vincent S. Lai, "Intraorganizational Communication with Intranets", Communications of the ACM, July 2001, Vol. 44, No. 7, pp. 95-100.

  • 1. Testing assumptions

    Two tests:
    a. Bartlett test of sphericity
       A statistical test for the presence of correlations among the variables. It determines whether the correlation matrix contains significant correlations among some of the decision variables.
    b. K.M.O. (Kaiser-Meyer-Olkin) measure of sampling adequacy
       A measure calculated for the entire correlation matrix, evaluating the appropriateness of applying exploratory factor analysis. This value should be greater than 0.5.
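    Both tests can be computed directly from a correlation matrix. Below is a hedged sketch using the textbook formulas (the function names are ours, and the sample size n = 103 is taken from the worked example later in the lecture), not the API of any particular statistics package:

    ```python
    import numpy as np
    from scipy import stats

    def bartlett_sphericity(R, n):
        """Bartlett's test: H0 says the variables are uncorrelated
        (i.e. the correlation matrix is an identity matrix)."""
        p = R.shape[0]
        chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
        df = p * (p - 1) / 2
        return chi2, stats.chi2.sf(chi2, df)

    def kmo(R):
        """Kaiser-Meyer-Olkin measure of sampling adequacy (want > 0.5)."""
        inv = np.linalg.inv(R)
        # Anti-image (partial) correlations from the inverse matrix.
        partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
        np.fill_diagonal(partial, 0.0)
        off = R - np.eye(R.shape[0])      # off-diagonal correlations
        return (off**2).sum() / ((off**2).sum() + (partial**2).sum())

    # Example correlation matrix from the earlier slide, n = 103.
    R = np.array([
        [1.000, -.953, -.055, -.130],
        [-.953, 1.000, -.091, -.036],
        [-.055, -.091, 1.000,  .990],
        [-.130, -.036,  .990, 1.000],
    ])
    chi2, p_value = bartlett_sphericity(R, n=103)
    print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}, KMO = {kmo(R):.3f}")
    ```

    A large chi-square (small p-value) rejects the "no correlations" hypothesis, so factor analysis is worth attempting.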

  • 2. Selecting a proper sample size

    Regarding sample size, Hair et al. (1998) indicate that a sample of fewer than 50 observations is questionable. As a general rule, the sample size should be at least 10 times the number of variables; for example, with 10 variables the sample size should be at least 100 observations.

    Tabachnick and Fidell (1996), in contrast, give a general guideline for sample size regardless of the number of variables when adopting factor analysis: 50 observations is very poor, 100 poor, 200 fair, 300 good, 500 very good, and 1000 excellent.

    Tabachnick, B.G. and Fidell, L.S. (1996), Using Multivariate Statistics, Third edition, HarperCollins College Publishers.

  • 3. Extracting factors

    Two common methods:
    1. Principal component analysis
       Transforms the original set of variables into a smaller set of linear combinations that account for most of the variance of the original set.
    2. Common factor analysis
       Transforms the original set of variables into a smaller set of factors whose variances are common among the original variables.

  • In practice the two methods usually give very similar results, so one simply picks one of them; principal component analysis is the usual choice.

  • Criteria for extraction (i.e., to determine the number of factors)

    There are several criteria for extracting factors: the Latent Root Criterion, A Priori Criterion, Percentage of Variance Criterion, and Scree Test Criterion. The most commonly used is the latent root criterion. Its rationale is that any individual factor should account for the variance of at least a single variable if it is to be retained for interpretation. Each variable contributes a value of one to the total eigenvalue; thus, only factors with an eigenvalue greater than one are considered significant.
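    As a sketch of the latent root criterion applied to the example correlation matrix from the earlier slide (variable names are ours):

    ```python
    import numpy as np

    # Example correlation matrix from the earlier slide.
    R = np.array([
        [1.000, -.953, -.055, -.130],
        [-.953, 1.000, -.091, -.036],
        [-.055, -.091, 1.000,  .990],
        [-.130, -.036,  .990, 1.000],
    ])

    # Latent root (Kaiser) criterion: keep factors with eigenvalue > 1.
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending
    n_factors = int((eigenvalues > 1).sum())
    print(eigenvalues.round(2), "-> retain", n_factors, "factors")
    ```

    Two eigenvalues exceed one (consistent with the two factors, with eigenvalues 2.00 and 1.91, shown in the example), so two factors are retained.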

  • 4. Rotating factors

    When the factors are extracted, factor loadings are obtained. Factor loadings are the correlations between each variable and the factor. When the factors are rotated, the variance is redistributed so that the factor loading pattern and the percentage of variance for each factor are different.

    The objective of rotating is to redistribute the variance from earlier factors to later ones in order to achieve a simpler, theoretically more meaningful factor pattern and make the result easier to interpret. There are three rotation methods (Hair et al., 1998): 1) Quartimax, 2) Varimax, 3) Equimax.

  • 1. The quartimax rotation simplifies the rows of a factor matrix, i.e., it rotates the initial factors so that a variable loads high on one factor and as low as possible on all other factors.
    2. The varimax rotation simplifies the columns of the factor matrix. With this approach, the maximum possible simplification is reached if there are only 1s and 0s in a column.
    3. The equimax rotation is a compromise between quartimax and varimax.
    In practice, the first two are the most commonly applied.
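    The slides do not spell out the varimax computation, so here is a hedged numpy sketch of Kaiser's classic SVD-based algorithm (a standard textbook formulation, not taken from the course materials):

    ```python
    import numpy as np

    def varimax(loadings, max_iter=100, tol=1e-6):
        """Orthogonal varimax rotation of a (variables x factors) loading
        matrix: simplifies the columns so each factor has a few high
        loadings and the rest near zero. Communalities (row sums of
        squared loadings) are unchanged by any orthogonal rotation."""
        L = np.asarray(loadings, dtype=float)
        n, k = L.shape
        rotation = np.eye(k)
        criterion = 0.0
        for _ in range(max_iter):
            B = L @ rotation
            # SVD step of Kaiser's algorithm: maximizes the variance
            # of the squared loadings within each column.
            u, s, vt = np.linalg.svd(
                L.T @ (B**3 - B @ np.diag((B**2).sum(axis=0)) / n)
            )
            rotation = u @ vt
            if s.sum() < criterion * (1 + tol):
                break
            criterion = s.sum()
        return L @ rotation

    # Rotating the example loading matrix leaves each variable's
    # communality unchanged.
    F = np.array([[-0.400, 0.900], [0.251, -0.947],
                  [0.932, 0.348], [0.956, 0.286]])
    rotated = varimax(F)
    ```

    Because the rotation matrix is orthogonal, the communalities before and after rotation are identical; only how the variance is split between the two factors changes.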

  • Factor Loadings

    After the rotation, a redistribution of factor loadings is obtained. Hair et al. (1998) provide guidelines for identifying significant factor loadings based on sample size, as shown in Table 2.


  • 5. Refining and labeling factors

    Refining: this can be based on the reliability, construct validity, and convergent validity measures that we discussed.

    Labeling factors: finally, when a factor solution has been obtained, i.e., all variables have loaded significantly onto a factor, label or give a name to each factor based on the variables that make it up. Here, variables with higher loadings are considered more important and have greater influence on the name or label of the factor.

  • Example

  • Step 1

  • Step 2

    Assume that the sample size is 103 observations, which is more than 10 times the number of variables (Hair et al., 1998). The sample size is thus quite good for exploratory factor analysis.

  • Step 3

  • Step 4

  • Step 5