Some Additional Notes for SEM using AMOS


On confirmatory factor analysis (CFA)
  • A homogeneity of variance test checks whether data gathered at different times (i.e., groups 1 and 2) are the same or not.
  • Assess model fit without the marker variable.
  • Configural, metric and scalar invariance:
    • Configural invariance. This is established simply by a good model fit across the groups.
    • Metric invariance: check "estimate means and intercepts", click the multigroup icon, and run the model.
      • Done using the chi-square difference test (a minimal sketch follows this section).
        • The chi-square measures the discrepancy between the observed data and the model-implied (predicted) covariances.
        • It compares two nested models and checks whether the chi-square difference between them is significant.
        • In the model comparison output, all p-values should be non-significant for the model to be considered the same in both groups.
    • Full and partial scalar invariance 
      • Copy the intercepts from the unconstrained model and get the absolute differences between the groups.
      • All factors should be invariant (the same); if some factors are not, you only have partial invariance.
        • You can check which variables make the groups differ from each other.
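
A quick way to see what the chi-square difference test does, outside of AMOS: a minimal Python sketch, where the chi-square and df values are hypothetical placeholders you would copy from the unconstrained and constrained model outputs.

```python
from scipy.stats import chi2

# Hypothetical fit values copied from the AMOS output (placeholders, not real results).
chisq_unconstrained, df_unconstrained = 312.4, 150   # loadings freely estimated
chisq_constrained, df_constrained = 325.9, 162       # loadings constrained equal across groups

# The difference of two nested chi-squares is itself chi-square distributed,
# with df equal to the difference in degrees of freedom.
diff_chisq = chisq_constrained - chisq_unconstrained
diff_df = df_constrained - df_unconstrained
p_value = chi2.sf(diff_chisq, diff_df)

# A non-significant p-value means constraining the loadings did not worsen fit,
# i.e., the groups can be treated as invariant.
print(f"chi-square diff = {diff_chisq:.2f}, df = {diff_df}, p = {p_value:.3f}")
```
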
Building a causal model using latent factors in SEM-AMOS
  • On common method bias:
    • AKA specific bias or common bias
    • This tests the variance due to the way respondents react to your measures.
      • In research, the value respondents report is not always the true value.
      • This tests the extent to which all variables were inflated by some external cause.
      • This relates some common cause to every single observed item, one at a time.
      • If a variable is objective (not perceptual), that helps prevent method bias.
      • If you don't account for bias, your results can be false positives.
    • Run the specific bias test and name the common factor that is created.
      • It runs the constrained model (all paths set equal to zero) and the unconstrained model (paths freely estimated).
      • Compares the two models using the chi-square difference test.
        • If significant - there is a difference — there is bias.
        • If not significant, there is no difference (the models are invariant, or the same), so there is no bias.
      • If there is bias, go back to the model and account for it by retaining the specific bias (SB) constructs.
      • Check if the model is broken:
        • If there are negative standardized loadings.
        • Check what causes the break:
          • Some of the p-values become non-significant.
          • Look for the pattern (e.g., which constructs remain positive) and delete the offending construct if that fixes it.
          • If you have no marker variable, report that the model was completely unstable: the SB construct broke the model.
        • If you have a marker variable and there is bias:
          • Treat the marker variable as the common latent factor.
          • So you'll remove the covariances attached to it and move it to the left, just as you would treat a CLF.
          • If it worked (the model is invariant or the same, i.e., the p-value is not significant), proceed with the causal model but retain the SB constructs as control variables.
          • Make sure the model is not broken — look at the values — check the reliability and validity.
          • Report the final correlation matrix; ignore the SB markers (they are not part of your interest, just marker variables) and make sure there are no validity concerns.
          • Save and impute the factor scores (adjusted for any potential bias); this creates a new dataset, saved in the folder where you are working, containing the new factor variables.
    • A CLF (common latent factor) is an added factor, connected to every observed item, that accounts for the common variance shared among all variables.

On R squared:

  • This is the percentage of variance explained in that endogenous variable (a small sketch of the calculation follows this list).
    • Example: 44% of the variance in Intention was explained by the model.
  • In the social sciences, aim for 20% or 25% and above.
  • Some papers present less than 10%, but this is already weak for the social sciences.
  • For observed variables, the R-squared can be low.
    • In finance, 2% of variance is good.
    • Example: Revenue change. 
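
A minimal sketch (in Python, with made-up data) of what R-squared means, i.e., the share of variance in the endogenous variable explained by its predictors:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)      # true effect plus noise

# Fit a simple regression and compute R-squared = 1 - SS_residual / SS_total.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"R-squared = {r_squared:.2f}")   # proportion of variance in y explained by x
```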

What is the effect of a mediator between the independent and dependent variable?

  • You may look at the effect size.
  • Input the R-squared of both models (including and not including the mediator).
  • This determines to what extent the mediator variable has a meaningful impact (see the sketch below).
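
A minimal sketch of that comparison using Cohen's f-squared, a standard effect-size formula for a change in R-squared; the specific R-squared values below are hypothetical.

```python
def cohens_f2(r2_with: float, r2_without: float) -> float:
    """Cohen's f-squared for the variance added by the mediator:
    f2 = (R2_with - R2_without) / (1 - R2_with)."""
    return (r2_with - r2_without) / (1.0 - r2_with)

# Hypothetical R-squared values for the DV with and without the mediator in the model.
f2 = cohens_f2(r2_with=0.44, r2_without=0.36)
print(f"f-squared = {f2:.3f}")  # rough guidelines: ~0.02 small, ~0.15 medium, ~0.35 large
```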

On model fit:

  • Ignore the covariances.
  • Look at the regression weights.
    • Account for the paths or direction.
    • If you don't account for them, your theory is useless.
  • Compare the R-squared before and after; if it improves, that is good.
  • Achieve the model fit before going to the p-value and estimates.
  • Write in your paper that you accounted for the relationships observed during the run; adjust your theory and add citations.
    • Some relationships are implied by the modification indices.
    • If there is no theoretical basis, you should not add the covariance, because it is not theoretically sound.
    • You need to explain what you did and why you removed or added paths in your model.
    • Find citations if you account for a new relationship.
    • A logical argument may also justify it.
  • If some paths are not significant, you can delete paths to free up degrees of freedom.
    • More than 3 or 4 df is alright.
    • If you have enough df, don't delete paths; keep them.
    • Likewise, if during the deletion process you already have enough df, stop deleting non-significant paths.

On imputed factor scores:

  • These are weighted averages of the indicator variables.
  • The weights used are the regression weights.
    • A regression-weighted average is taken across each factor's items to create the factor score (a small sketch follows this list).
  • When imputing factor scores:
    • Make sure you are using the CLF, with no model fit issues.
    • You've already accounted for the modification indices (if there are any).
    • There are no validity or reliability concerns.
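
To make "regression-weighted average" concrete, here is a minimal sketch with made-up loadings and item scores; AMOS's own imputation algorithm is more involved, this only illustrates the idea of weighting items by their regression weights.

```python
import numpy as np

# Hypothetical standardized regression weights (loadings) for a 4-item factor,
# and one respondent's answers on those items.
weights = np.array([0.82, 0.76, 0.69, 0.71])
items = np.array([4.0, 5.0, 4.0, 3.0])

# Regression-weighted average: each item contributes in proportion to its weight.
factor_score = np.dot(weights, items) / weights.sum()
print(f"imputed factor score = {factor_score:.2f}")
```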

On testing all direct effects:

  • Report standardized estimates.
  • Report effect sizes if you can.
    • If the effect size is less than 0.02, it is not a meaningful effect, even with a significant p-value.
    • If the R-squared is 0.05 or some similarly small value, the effect is not meaningful even if the p-value is significant.

On mediation analysis:

  • Go to analysis properties — Bootstrap — in the output tab click on indirect, direct and total effects.
  • Run the bootstrap — Estimate matrices — it produces the total effects, direct effects and indirect effects.
  • Scroll down to the bias-corrected results to see the p-values.
  • To estimate every possible indirect effect, you may use two methods:
    • Plugins and estimands:
    • First, use the estimand SpecificIndirectEffects (for path model).
      • What is an estimand? It is an estimation of a measure that is not estimated by default by AMOS.
      • It is extra information that AMOS is not producing by default.
      • AMOS doesn't produce specific indirect effects, so the estimand will be used.
    • Second, use the plugin IndirectEffects.
      • The estimand and the plugin should be used together.
      • Don't run it in a latent causal model, only in a path model (imputed factor scores).
      • It will create an HTML file with the results.
      • The file shows all possible indirect effects in your model, with lower and upper bounds at the 90% confidence level.
      • This process is way better than doing it manually.
      • Just report the one you hypothesized.
    • Manually (using latent model)
    • Set up the latent model.
    • Go to analysis properties — Standardized estimates, specific direct effect, R-square
    • Perform bootstrap, etc.
    • Run the model, then proceed.
    • Select estimand — MyIndirectEffects (for latent model).
      • This one expects to find two paths, A and B.
      • Name the specific paths A and B.
      • Run — then proceed.
      • Estimand will produce extra output in your output file.
    • Go to Outputs — Estimates — Scalars — User-defined estimands; this produces A x B.
      • It shows the estimate, the lower and upper bounds, and the p-value.
      • Do the same with the other paths; just delete and replace the path names (a bootstrap sketch of the A x B logic follows below).
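
A minimal sketch of the A x B indirect-effect logic with a percentile bootstrap, on made-up data; AMOS's bias-corrected bootstrap is similar in spirit but not identical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # path A: X -> M
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # path B: M -> Y, plus a direct effect of X

def indirect_effect(x, m, y):
    # Path A: slope of M on X; path B: coefficient of M when Y is regressed on M and X.
    a = np.polyfit(x, m, 1)[0]
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
    return a * b

# Percentile bootstrap of the specific indirect effect A x B.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [5, 95])        # 90% confidence interval, as in the AMOS output
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 90% CI [{lo:.3f}, {hi:.3f}]")
```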

On interaction or moderation analysis:

  • Moderation analysis is better performed using the path model (imputed factors).
  • Standardizing the variables gives you the same regression weights but removes the multicollinearity.
    • So you have to standardize the constituent variables.
  • An interaction is a form of moderation, because it changes the relationship between the IV and the DV.
  • Do one interaction at a time to avoid weakening the interactions.
    • That means you have to report model fit for every interaction.
  • Always check whether the model fit is still good.
  • Check the regression effects.
  • Plot the significant effects in the stats tools package.
  • If done with one interaction, repeat the analysis by just dragging another interaction term to replace the previous one.
  • Record the R-squared value.
  • You can also check the effect size using Excel (a sketch of this calculation follows this list).
    • Just input the R-squared with and without the interaction term.
  • If you want to use latent factors, it is possible to test interactions with latent factors, but it is a lot messier. It requires you to standardize the individual items of both factors and then multiply every pair of items across the factors. You can see what that might look like in this SmartPLS video: https://youtu.be/upEf1brVvXQ
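
A minimal sketch (Python, made-up data) of the standardize-then-multiply step and the effect-size comparison described above; the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 250
iv = rng.normal(2.0, 1.0, n)                  # independent variable, raw scale
mod = rng.normal(5.0, 2.0, n)                 # moderator, raw scale
dv = 0.4 * iv + 0.3 * mod + 0.2 * iv * mod + rng.normal(size=n)

def zscore(v):
    return (v - v.mean()) / v.std()

def r_squared(predictors, y):
    X = np.column_stack(predictors + [np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

# Standardize the constituent variables first, then form the product term;
# this keeps the regression weights comparable and reduces multicollinearity.
iv_z, mod_z = zscore(iv), zscore(mod)
interaction = iv_z * mod_z

r2_without = r_squared([iv_z, mod_z], dv)
r2_with = r_squared([iv_z, mod_z, interaction], dv)
f2 = (r2_with - r2_without) / (1 - r2_with)   # same effect-size formula as in the mediator note
print(f"R2 without = {r2_without:.3f}, R2 with = {r2_with:.3f}, f2 = {f2:.3f}")
```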

On multigroup analysis:

  • Remove interaction terms.
  • Use the plugin Multigroup.
  • Use the estimand MyGroupDifferencesEstimandVB.
  • It uses the chi-square difference approach.
  • Always report model fit.
These are just additional notes. Check this main page for the step-by-step process. Hope this helps.
