Title: | Latent Interaction (and Moderation) Analysis in Structural Equation Models (SEM) |
---|---|
Description: | Estimation of interaction (i.e., moderation) effects between latent variables in structural equation models (SEM). The supported methods are: the constrained approach (Algina & Moulder, 2001); the unconstrained approach (Marsh et al., 2004); the residual centering approach (Little et al., 2006); the double centering approach (Lin et al., 2010); the latent moderated structural equations (LMS) approach (Klein & Moosbrugger, 2000); and the quasi-maximum likelihood (QML) approach (Klein & Muthén, 2007) (temporarily unavailable). The constrained, unconstrained, residual centering, and double centering approaches are estimated via 'lavaan' (Rosseel, 2012), whilst the LMS and QML approaches are estimated by modsem itself. Alternatively, the model can be estimated via 'Mplus' (Muthén & Muthén, 1998-2017). References: Algina, J., & Moulder, B. C. (2001). "A note on estimating the Jöreskog-Yang model for latent variable interaction using 'LISREL' 8.3." <doi:10.1207/S15328007SEM0801_3>. Klein, A., & Moosbrugger, H. (2000). "Maximum likelihood estimation of latent interaction effects with the LMS method." <doi:10.1007/BF02296338>. Klein, A. G., & Muthén, B. O. (2007). "Quasi-maximum likelihood estimation of structural equation models with multiple interaction and quadratic effects." <doi:10.1080/00273170701710205>. Lin, G. C., Wen, Z., Marsh, H. W., & Lin, H. S. (2010). "Structural equation models of latent interactions: Clarification of orthogonalizing and double-mean-centering strategies." <doi:10.1080/10705511.2010.488999>. Little, T. D., Bovaird, J. A., & Widaman, K. F. (2006). "On the merits of orthogonalizing powered and product terms: Implications for modeling interactions among latent variables." <doi:10.1207/s15328007sem1304_1>. Marsh, H. W., Wen, Z., & Hau, K. T. (2004). "Structural equation models of latent interactions: Evaluation of alternative estimation strategies and indicator construction." <doi:10.1037/1082-989X.9.3.275>. Muthén, L. K., & Muthén, B. O. (1998-2017). "'Mplus' User's Guide. Eighth Edition." <https://www.statmodel.com/>. Rosseel, Y. (2012). "'lavaan': An R Package for Structural Equation Modeling." <doi:10.18637/jss.v048.i02>. |
Authors: | Kjell Solem Slupphaug [aut, cre] , Mehmet Mehmetoglu [ctb] , Matthias Mittner [ctb] |
Maintainer: | Kjell Solem Slupphaug <[email protected]> |
License: | MIT + file LICENSE |
Version: | 1.0.4 |
Built: | 2024-11-22 09:29:23 UTC |
Source: | https://github.com/kss2k/modsem |
Wrapper for coef, to be used with modsem::coef_modsem_da, since coef is in the namespace of stats, not modsem.
coef_modsem_da(object, ...)
object |
fitted model to inspect |
... |
additional arguments |
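A minimal sketch of how coef_modsem_da might be used, assuming a model estimated with one of the distributional analytic approaches and the oneInt example data used elsewhere in this documentation:

```r
library(modsem)

m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'

# Estimate the model with the LMS approach, then extract its coefficients
est <- modsem_da(m1, oneInt, method = "lms")
coef_modsem_da(est)
```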
Compare the fit of two models using a likelihood ratio test. 'estH0' represents the null hypothesis model, and 'estH1' the alternative hypothesis model. Importantly, the function assumes that 'estH0' does not have more free parameters (i.e., no fewer degrees of freedom) than 'estH1'.
compare_fit(estH0, estH1)
estH0 |
object of class 'modsem_da' representing the null hypothesis model |
estH1 |
object of class 'modsem_da' representing the |
## Not run:
H0 <- "
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z
"
estH0 <- modsem(H0, oneInt, "lms")

H1 <- "
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
"
estH1 <- modsem(H1, oneInt, "lms")

compare_fit(estH0, estH1)
## End(Not run)
This function returns the default settings for the LMS and QML approaches.
default_settings_da(method = c("lms", "qml"))
method |
which method to get the settings for |
list
library(modsem)
default_settings_da()
This function returns the default settings for the product indicator approaches.
default_settings_pi(method = c("rca", "uca", "pind", "dblcent", "ca"))
method |
which method to get the settings for |
list
library(modsem)
default_settings_pi()
Extract the lavaan object from a modsem object estimated using product indicators.
extract_lavaan(object)
object |
modsem object |
lavaan object
library(modsem)
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'
est <- modsem_pi(m1, oneInt)
lav_est <- extract_lavaan(est)
Calculates the chi-square test and p-value, as well as the RMSEA, for LMS and QML models. Note that the chi-square-based fit measures should be calculated for the baseline model, i.e., the model without the interaction effect. Thereafter, you can use 'compare_fit()' to assess the comparative fit of the models. If the interaction effect improves the model, and, e.g., the RMSEA is good for the baseline model, the interaction model likely has a good RMSEA as well.
fit_modsem_da(model, chisq = TRUE)
model |
fitted model |
chisq |
should Chi-Square based fit-measures be calculated? |
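A sketch of the recommended workflow (fit measures on the baseline model, then 'compare_fit()' against the interaction model), assuming the oneInt example data; the model syntax here mirrors the examples elsewhere in this documentation:

```r
library(modsem)

baseline <- '
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z
'
interaction <- '
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z + X:Z
'

est_h0 <- modsem_da(baseline, oneInt, method = "lms")
est_h1 <- modsem_da(interaction, oneInt, method = "lms")

fit_modsem_da(est_h0, chisq = TRUE)  # chi-square and RMSEA for the baseline model
compare_fit(est_h0, est_h1)          # likelihood ratio test of the interaction
```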
get_pi_data() is a function for creating the data (including product indicators) used for estimating latent interaction models with one of the product indicator approaches in lavaan.
get_pi_data(model.syntax, data, method = "dblcent", match = FALSE, ...)
model.syntax |
lavaan syntax |
data |
data to create product indicators from |
method |
method to use: "rca", "uca", "pind", "dblcent", or "ca" |
match |
should the product indicators be created by using the match-strategy |
... |
arguments passed to other functions (e.g., modsem_pi) |
data.frame
library(modsem)
library(lavaan)
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'
syntax <- get_pi_syntax(m1)
data <- get_pi_data(m1, oneInt)
est <- sem(syntax, data)
Generate lavaan syntax for product indicator approaches. get_pi_syntax() is a function for creating the lavaan syntax used for estimating latent interaction models using one of the product indicator approaches in lavaan.
get_pi_syntax(model.syntax, method = "dblcent", match = FALSE, ...)
model.syntax |
lavaan syntax |
method |
method to use: "rca", "uca", "pind", "dblcent", or "ca" |
match |
should the product indicators be created by using the match-strategy |
... |
arguments passed to other functions (e.g., modsem_pi) |
character
vector
library(modsem)
library(lavaan)
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'
syntax <- get_pi_syntax(m1)
data <- get_pi_data(m1, oneInt)
est <- sem(syntax, data)
The data stem from the large-scale assessment study PISA 2006 (Organisation for Economic Co-Operation and Development, 2009), in which competencies of 15-year-old students in reading, mathematics, and science are assessed using nationally representative samples in 3-year cycles. In this example, data from the student background questionnaire of the Jordan sample of PISA 2006 were used. Only data of students with complete responses to all 15 items (N = 6,038) were considered.
A data frame of fifteen variables and 6,038 observations:
enjoy1 indicator for enjoyment of science, item ST16Q01: I generally have fun when I am learning <broad science> topics.
enjoy2 indicator for enjoyment of science, item ST16Q02: I like reading about <broad science>.
enjoy3 indicator for enjoyment of science, item ST16Q03: I am happy doing <broad science> problems.
enjoy4 indicator for enjoyment of science, item ST16Q04: I enjoy acquiring new knowledge in <broad science>.
enjoy5 indicator for enjoyment of science, item ST16Q05: I am interested in learning about <broad science>.
academic1 indicator for academic self-concept in science, item ST37Q01: I can easily understand new ideas in <school science>.
academic2 indicator for academic self-concept in science, item ST37Q02: Learning advanced <school science> topics would be easy for me.
academic3 indicator for academic self-concept in science, item ST37Q03: I can usually give good answers to <test questions> on <school science> topics.
academic4 indicator for academic self-concept in science, item ST37Q04: I learn <school science> topics quickly.
academic5 indicator for academic self-concept in science, item ST37Q05: <School science> topics are easy for me.
academic6 indicator for academic self-concept in science, item ST37Q06: When I am being taught <school science>, I can understand the concepts very well.
career1 indicator for career aspirations in science, item ST29Q01: I would like to work in a career involving <broad science>.
career2 indicator for career aspirations in science, item ST29Q02: I would like to study <broad science> after <secondary school>.
career3 indicator for career aspirations in science, item ST29Q03: I would like to spend my life doing advanced <broad science>.
career4 indicator for career aspirations in science, item ST29Q04: I would like to work on <broad science> projects as an adult.
This version of the dataset, as well as its description, was gathered from the documentation of the 'nlsem' package (https://cran.r-project.org/package=nlsem); the only difference is that the names of the variables were changed.
Originally the dataset was gathered by the Organisation for Economic Co-Operation and Development (2009). Pisa 2006: Science competencies for tomorrow's world (Tech. Rep.). Paris, France. Obtained from: https://www.oecd.org/pisa/pisaproducts/database-pisa2006.htm
## Not run:
m1 <- "
  ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
  CAREER =~ career1 + career2 + career3 + career4
  SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
  CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
"
est <- modsem(m1, data = jordan)
## End(Not run)
modsem()
is a function for estimating interaction effects between latent variables
in structural equation models (SEMs).
Methods for estimating interaction effects in SEMs can basically be split into two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind")
2. Distributionally based approaches ("lms", "qml")
For the product indicator-based approaches, modsem() is essentially a fancy wrapper for lavaan::sem() which generates the necessary syntax and variables for the estimation of models with latent product indicators. The distributionally based approaches are implemented separately and are not estimated using lavaan::sem(), but rather using custom functions (largely written in C++ for performance reasons). For greater control, it is advised that you use one of the sub-functions (modsem_pi, modsem_da, modsem_mplus) directly, as passing additional arguments to them via modsem() can lead to unexpected behavior.
modsem(model.syntax = NULL, data = NULL, method = "dblcent", ...)
model.syntax |
lavaan syntax |
data |
dataframe |
method |
method to use: "rca", "uca", "pind", "dblcent", "ca", "lms", or "qml" |
... |
arguments passed to other functions depending on the method (see modsem_pi, modsem_da, and modsem_mplus) |
modsem
object with class modsem_pi
, modsem_da
, or modsem_mplus
library(modsem)
# For more examples, check README and/or GitHub.

# One interaction
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'

# Double centering approach
est1 <- modsem(m1, oneInt)
summary(est1)

## Not run:
# The Constrained Approach
est1_ca <- modsem(m1, oneInt, method = "ca")
summary(est1_ca)

# LMS approach
est1_lms <- modsem(m1, oneInt, method = "lms", EFIM.S = 1000)
summary(est1_lms)

# QML approach
est1_qml <- modsem(m1, oneInt, method = "qml")
summary(est1_qml)
## End(Not run)

# Theory Of Planned Behavior
tpb <- '
  # Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

  # Inner Model (Based on Steinmetz et al., 2011)
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC
  BEH ~ INT:PBC
'

# Double centering approach
est_tpb <- modsem(tpb, data = TPB)
summary(est_tpb)

## Not run:
# The Constrained Approach
est_tpb_ca <- modsem(tpb, data = TPB, method = "ca")
summary(est_tpb_ca)

# LMS approach
est_tpb_lms <- modsem(tpb, data = TPB, method = "lms")
summary(est_tpb_lms)

# QML approach
est_tpb_qml <- modsem(tpb, data = TPB, method = "qml")
summary(est_tpb_qml)
## End(Not run)
modsem_da()
is a function for estimating interaction effects between latent variables
in structural equation models (SEMs) using distributional analytic (DA) approaches.
Methods for estimating interaction effects in SEMs can basically be split into two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind")
2. Distributionally based approaches ("lms", "qml")
modsem_da() handles the latter, and can estimate models using both the QML and LMS approaches.
NOTE: Run default_settings_da to see default arguments.
modsem_da(
  model.syntax = NULL,
  data = NULL,
  method = "lms",
  verbose = NULL,
  optimize = NULL,
  nodes = NULL,
  convergence = NULL,
  optimizer = NULL,
  center.data = NULL,
  standardize.data = NULL,
  standardize.out = NULL,
  standardize = NULL,
  mean.observed = NULL,
  cov.syntax = NULL,
  double = NULL,
  calc.se = NULL,
  FIM = NULL,
  EFIM.S = NULL,
  OFIM.hessian = NULL,
  EFIM.parametric = NULL,
  robust.se = NULL,
  max.iter = NULL,
  max.step = NULL,
  fix.estep = NULL,
  start = NULL,
  epsilon = NULL,
  quad.range = NULL,
  n.threads = NULL,
  ...
)
model.syntax |
|
data |
dataframe |
method |
method to use: "lms" or "qml" |
verbose |
should estimation progress be shown |
optimize |
should starting parameters be optimized |
nodes |
number of quadrature nodes (points of integration) used in |
convergence |
convergence criterion. Lower values give better estimates but slower computation. |
optimizer |
optimizer to use, can be either |
center.data |
should data be centered before fitting model |
standardize.data |
should data be scaled before fitting the model? Will be overridden by standardize if it is set. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardized_estimates. |
standardize.out |
should the output be standardized? Note that this will alter the relationships of parameter constraints, since parameters are scaled unevenly even if they have the same label. It does not alter the estimation of the model, only the output. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardized_estimates. |
standardize |
will standardize the data before fitting the model, remove the mean structure of the observed variables, and standardize the output. NOTE: It is recommended that you estimate the model normally and then standardize the output using standardized_estimates. |
mean.observed |
should the mean structure of the observed variables be estimated? Will be overridden by standardize if it is set. NOTE: Not recommended unless you know what you are doing. |
cov.syntax |
model syntax for implied covariance matrix (see |
double |
try to double the number of dimensions of integration used in LMS. This will be extremely slow, but should be more similar to 'Mplus'. |
calc.se |
should standard errors be computed? NOTE: If |
FIM |
should the Fisher information matrix be calculated using the observed or expected values? Must be either "observed" or "expected". |
EFIM.S |
if the expected Fisher information matrix is computed, |
OFIM.hessian |
should the observed Fisher information be computed using the Hessian? If |
EFIM.parametric |
should data for calculating the expected Fisher information matrix be
simulated parametrically (simulated based on the assumptions and implied parameters
from the model), or non-parametrically (stochastically sampled)? If you believe that
normality assumptions are violated, non-parametric sampling may be the better option. |
robust.se |
should robust standard errors be computed? Meant to be used for QML; can be unreliable with the LMS approach. |
max.iter |
maximum number of iterations. |
max.step |
maximum steps for the M-step in the EM algorithm (LMS). |
fix.estep |
if |
start |
starting parameters. |
epsilon |
finite difference for numerical derivatives. |
quad.range |
range in z-scores to perform numerical integration in LMS using
Gaussian-Hermite Quadratures. By default |
n.threads |
number of cores to use for parallel processing. If |
... |
additional arguments to be passed to the estimation function. |
modsem_da
object
library(modsem)
# For more examples, check README and/or GitHub.

# One interaction
m1 <- "
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
"

## Not run:
# QML Approach
est1 <- modsem_da(m1, oneInt, method = "qml")
summary(est1)

# Theory Of Planned Behavior
tpb <- "
  # Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

  # Inner Model (Based on Steinmetz et al., 2011)
  # Covariances
  ATT ~~ SN + PBC
  PBC ~~ SN
  # Causal Relationships
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC
  BEH ~ INT:PBC
"

# LMS Approach
estTpb <- modsem_da(tpb, data = TPB, method = "lms", EFIM.S = 1000)
summary(estTpb)
## End(Not run)
Function used to inspect a fitted object, similar to 'lavInspect()'. The argument 'what' decides what to inspect.
modsem_inspect(object, what = NULL, ...)
object |
fitted model to inspect |
what |
what to inspect |
... |
Additional arguments passed to other functions |
For 'modsem_lavaan' objects, it is just a wrapper for 'lavInspect()'. For 'modsem_da' objects, 'what' can either be "all", "matrices", "optim", or just the name of what to extract.
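A brief sketch of inspecting a fitted object, assuming a model estimated with the QML approach on the oneInt example data:

```r
library(modsem)

m1 <- '
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z + X:Z
'
est <- modsem_da(m1, oneInt, method = "qml")

modsem_inspect(est)                    # inspect everything ("all")
modsem_inspect(est, what = "matrices") # only the model matrices
```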
Estimation of latent interactions through Mplus.
modsem_mplus(
  model.syntax,
  data,
  estimator = "ml",
  type = "random",
  algorithm = "integration",
  process = "8",
  ...
)
model.syntax |
lavaan/modsem syntax |
data |
dataset |
estimator |
estimator argument passed to mplus |
type |
type argument passed to mplus |
algorithm |
algorithm argument passed to mplus |
process |
process argument passed to mplus |
... |
arguments passed to other functions |
modsem_mplus object
# Theory Of Planned Behavior
tpb <- '
  # Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

  # Inner Model (Based on Steinmetz et al., 2011)
  # Covariances
  ATT ~~ SN + PBC
  PBC ~~ SN
  # Causal Relationships
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC
  BEH ~ INT:PBC
'
## Not run:
estTpbMplus <- modsem_mplus(tpb, data = TPB)
summary(estTpbMplus)
## End(Not run)
modsem_pi()
is a function for estimating interaction effects between latent variables,
in structural equation models (SEMs), using product indicators.
Methods for estimating interaction effects in SEMs can basically be split into
two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind"), and
2. Distributionally based approaches ("lms", "qml").
modsem_pi()
is essentially a fancy wrapper for lavaan::sem()
which generates the
necessary syntax and variables for the estimation of models with latent product indicators.
Use default_settings_pi()
to get the default settings for the different methods.
modsem_pi(
  model.syntax = NULL,
  data = NULL,
  method = "dblcent",
  match = NULL,
  standardize.data = FALSE,
  center.data = FALSE,
  first.loading.fixed = TRUE,
  center.before = NULL,
  center.after = NULL,
  residuals.prods = NULL,
  residual.cov.syntax = NULL,
  constrained.prod.mean = NULL,
  constrained.loadings = NULL,
  constrained.var = NULL,
  constrained.res.cov.method = NULL,
  auto.scale = "none",
  auto.center = "none",
  estimator = "ML",
  group = NULL,
  run = TRUE,
  na.rm = NULL,
  suppress.warnings.lavaan = FALSE,
  suppress.warnings.match = FALSE,
  ...
)
model.syntax |
lavaan syntax |
data |
dataframe |
method |
method to use: "rca", "uca", "pind", "dblcent", or "ca" |
match |
should the product indicators be created by using the match-strategy |
standardize.data |
should data be scaled before fitting model |
center.data |
should data be centered before fitting model |
first.loading.fixed |
Should the first factor loading in the latent product be fixed to one? |
center.before |
should indicators in products be centered before computing products (overwritten by |
center.after |
should indicator products be centered after they have been computed? |
residuals.prods |
should indicator products be centered using residuals (overwritten by |
residual.cov.syntax |
should syntax for residual covariances be produced (overwritten by |
constrained.prod.mean |
should syntax for product mean be produced (overwritten by |
constrained.loadings |
should syntax for constrained loadings be produced (overwritten by |
constrained.var |
should syntax for constrained variances be produced (overwritten by |
constrained.res.cov.method |
method for constraining residual covariances |
auto.scale |
methods which should be scaled automatically (usually not useful) |
auto.center |
methods which should be centered automatically (usually not useful) |
estimator |
estimator to use in |
group |
group variable for multigroup analysis |
run |
should the model be run via |
na.rm |
should missing values be removed (case-wise)? Default is |
suppress.warnings.lavaan |
should warnings from |
suppress.warnings.match |
should warnings from |
... |
arguments passed to other functions, e.g., |
modsem
object
library(modsem)
# For more examples, check README and/or GitHub.

# One interaction
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'

# Double centering approach
est1 <- modsem_pi(m1, oneInt)
summary(est1)

## Not run:
# The Constrained Approach
est1Constrained <- modsem_pi(m1, oneInt, method = "ca")
summary(est1Constrained)
## End(Not run)

# Theory Of Planned Behavior
tpb <- '
  # Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

  # Inner Model (Based on Steinmetz et al., 2011)
  # Covariances
  ATT ~~ SN + PBC
  PBC ~~ SN
  # Causal Relationships
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC
  BEH ~ INT:PBC
'

# Double centering approach
estTpb <- modsem_pi(tpb, data = TPB)
summary(estTpb)

## Not run:
# The Constrained Approach
estTpbConstrained <- modsem_pi(tpb, data = TPB, method = "ca")
summary(estTpbConstrained)
## End(Not run)
Generate a parameter table for lavaan syntax.
modsemify(syntax)
syntax |
model syntax |
data.frame
with columns lhs, op, rhs, mod
library(modsem)
m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner model
  Y ~ X + Z + X:Z
'
modsemify(m1)
Multiply indicators
multiplyIndicatorsCpp(df)
df |
A DataFrame |
A NumericVector
Extract parameterEstimates from an estimated model
parameter_estimates(object, ...)
object |
An object of class modsem_pi, modsem_da, or modsem_mplus |
... |
Additional arguments passed to other functions |
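A minimal sketch, assuming a model estimated with the product indicator approach on the oneInt example data:

```r
library(modsem)

m1 <- '
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z + X:Z
'
est <- modsem_pi(m1, oneInt)
parameter_estimates(est)  # table of parameter estimates
```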
Plot Interaction Effects
plot_interaction(
  x,
  z,
  y,
  xz = NULL,
  vals_x = seq(-3, 3, 0.001),
  vals_z,
  model,
  alpha_se = 0.15,
  ...
)
x |
The name of the variable on the x-axis |
z |
The name of the moderator variable |
y |
The name of the outcome variable |
xz |
The name of the interaction term. If the interaction term is not specified, it
will be created using x and z. |
vals_x |
The values of the x variable to plot; more values give a smoother standard error area |
vals_z |
The values of the moderator variable to plot. A separate regression
line (y ~ x) is plotted for each value of the moderator |
model |
An object of class modsem_pi, modsem_da, or modsem_mplus |
alpha_se |
The alpha (transparency) level for the standard error area |
... |
Additional arguments passed to other functions |
A ggplot
object
library(modsem)
## Not run:
m1 <- "
  # Outer Model
  X =~ x1
  X =~ x2 + x3
  Z =~ z1 + z2 + z3
  Y =~ y1 + y2 + y3

  # Inner model
  Y ~ X + Z + X:Z
"
est1 <- modsem(m1, data = oneInt)
plot_interaction("X", "Z", "Y", "X:Z", -3:3, c(-0.2, 0), est1)

tpb <- "
  # Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

  # Inner Model (Based on Steinmetz et al., 2011)
  # Causal Relationships
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC
  # BEH ~ ATT:PBC
  BEH ~ PBC:INT
  # BEH ~ PBC:PBC
"
est2 <- modsem(tpb, TPB, method = "lms")
plot_interaction(x = "INT", z = "PBC", y = "BEH", xz = "PBC:INT",
                 vals_z = c(-0.5, 0.5), model = est2)
## End(Not run)
Get standardized estimates
standardized_estimates(object, ...)
standardized_estimates(object, ...)
object |
An object of class |
... |
Additional arguments passed to other functions |
For modsem_da and modsem_mplus objects, the interaction term is not standardized such that var(xz) = 1. The interaction term is not an actual variable in the model, meaning that it does not have a variance. Its variance must therefore be calculated from the other parameters in the model. Assuming normality and zero means, it is computed as var(xz) = var(x) * var(z) + cov(x, z)^2. Thus, setting the variance of the interaction term to 1 would only be 'correct' if the correlation between x and z is zero. This means that the standardized estimates for the interaction term will differ from those obtained via lavaan, where the interaction term is an actual latent variable in the model with a standardized variance of 1.
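The formula var(xz) = var(x) * var(z) + cov(x, z)^2 can be checked numerically with a simulation. The following is a base-R sketch (not part of modsem); the sample size, seed, and correlation value are arbitrary choices for illustration.

```r
# Numerical check of var(xz) = var(x) * var(z) + cov(x, z)^2
# under normality and zero means (base R only; not part of modsem).
set.seed(123)
n   <- 1e6
rho <- 0.4                                  # chosen correlation between x and z
x <- rnorm(n)
z <- rho * x + sqrt(1 - rho^2) * rnorm(n)   # cor(x, z) is approximately rho

implied   <- var(x) * var(z) + cov(x, z)^2  # model-implied variance of xz
empirical <- var(x * z)                     # observed variance of the product

round(c(implied = implied, empirical = empirical), 3)
# both approximately 1 * 1 + 0.4^2 = 1.16
```

Note that with rho = 0 the implied variance reduces to var(x) * var(z), which is the only case where fixing var(xz) = 1 after standardizing x and z would be exact.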
summary for modsem objects
## S3 method for class 'modsem_da'
summary(
  object,
  H0 = TRUE,
  verbose = interactive(),
  r.squared = TRUE,
  adjusted.stat = FALSE,
  digits = 3,
  scientific = FALSE,
  ci = FALSE,
  standardized = FALSE,
  loadings = TRUE,
  regressions = TRUE,
  covariances = TRUE,
  intercepts = !standardized,
  variances = TRUE,
  var.interaction = FALSE,
  ...
)

## S3 method for class 'modsem_mplus'
summary(
  object,
  scientific = FALSE,
  standardize = FALSE,
  ci = FALSE,
  digits = 3,
  loadings = TRUE,
  regressions = TRUE,
  covariances = TRUE,
  intercepts = TRUE,
  variances = TRUE,
  ...
)

## S3 method for class 'modsem_pi'
summary(object, ...)
object |
modsem object to be summarized |
H0 |
should a null model be estimated (used for comparison) |
verbose |
print progress for the estimation of null model |
r.squared |
calculate R-squared |
adjusted.stat |
should sample-size corrected/adjusted AIC and BIC be reported? |
digits |
number of digits to print |
scientific |
print p-values in scientific notation |
ci |
print confidence intervals |
standardized |
print standardized estimates |
loadings |
print loadings |
regressions |
print regressions |
covariances |
print covariances |
intercepts |
print intercepts |
variances |
print variances |
var.interaction |
if FALSE (default) variances for interaction terms will be removed (if present) |
... |
arguments passed to lavaan::summary() |
standardize |
standardize estimates |
## Not run:
m1 <- "
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + X:Z
"
est1 <- modsem(m1, oneInt, "qml")
summary(est1, ci = TRUE, scientific = TRUE)
## End(Not run)
A simulated dataset based on the Theory of Planned Behaviour
tpb <- "
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN  =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2

# Inner Model (Based on Steinmetz et al., 2011)
INT ~ ATT + SN + PBC
BEH ~ INT + PBC + INT:PBC
"
est <- modsem(tpb, data = TPB)
A simulated dataset based on the Theory of Planned Behaviour, where INT is a higher order construct of ATT, SN, and PBC.
tpb <- '
# First order constructs
ATT =~ att1 + att2 + att3
SN  =~ sn1 + sn2 + sn3
PBC =~ pbc1 + pbc2 + pbc3
BEH =~ b1 + b2

# Higher order constructs
INT =~ ATT + PBC + SN

# Higher order interaction
INTxPBC =~ ATT:PBC + SN:PBC + PBC:PBC

# Structural model
BEH ~ PBC + INT + INTxPBC
'
## Not run:
est <- modsem(tpb, data = TPB_2SO, method = "ca")
summary(est)
## End(Not run)
A simulated dataset based on the Theory of Planned Behaviour, where INT is a higher order construct of ATT and SN, and PBC is a higher order construct of PC and PB.
tpb <- "
# First order constructs
ATT =~ att1 + att2 + att3
SN  =~ sn1 + sn2 + sn3
PB  =~ pb1 + pb2 + pb3
PC  =~ pc1 + pc2 + pc3
BEH =~ b1 + b2

# Higher order constructs
INT =~ ATT + SN
PBC =~ PC + PB

# Higher order interaction
INTxPBC =~ ATT:PC + ATT:PB + SN:PC + SN:PB

# Structural model
BEH ~ PBC + INT + INTxPBC
"
## Not run:
est <- modsem(tpb, data = TPB_2SO, method = "ca")
summary(est)
## End(Not run)
A dataset based on the Theory of Planned Behaviour from a UK sample. Four variables with high communality were selected for each latent variable (ATT, SN, PBC, INT, BEH), from two time points (t1 and t2).
Gathered from a replication study of the original by Hagger et al. (2023). Obtained from https://doi.org/10.23668/psycharchives.12187
tpb_uk <- "
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att3 + att2 + att1 + att4
SN  =~ sn4 + sn2 + sn3 + sn1
PBC =~ pbc2 + pbc1 + pbc3 + pbc4
INT =~ int2 + int1 + int3 + int4
BEH =~ beh3 + beh2 + beh1 + beh4

# Inner Model (Based on Steinmetz et al., 2011)
# Causal Relationships
INT ~ ATT + SN + PBC
BEH ~ INT + PBC
BEH ~ INT:PBC
"
est <- modsem(tpb_uk, data = TPB_UK)
This function estimates the path from x to y using the path tracing rules. Note that it only works with structural parameters, so "=~" relations are ignored unless measurement.model = TRUE. If you want to use the measurement model, "~" should be in the mod column of pt.
trace_path(
  pt,
  x,
  y,
  parenthesis = TRUE,
  missing.cov = FALSE,
  measurement.model = FALSE,
  maxlen = 100,
  ...
)
pt |
A data frame with columns |
x |
Source variable |
y |
Destination variable |
parenthesis |
If |
missing.cov |
If |
measurement.model |
If |
maxlen |
Maximum length of a path before aborting |
... |
Additional arguments passed to trace_path |
A string with the estimated path (simplified if possible)
library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + X:Z
'
pt <- modsemify(m1)
trace_path(pt, x = "Y", y = "Y", missing.cov = TRUE) # variance of Y
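The path tracing rules can also be carried out by hand. For a structural model Y ~ X + Z, the variance of Y is b_x^2 * var(X) + b_z^2 * var(Z) + 2 * b_x * b_z * cov(X, Z) plus the residual variance of Y. A base-R sketch with hypothetical parameter values (none of these numbers come from a fitted model):

```r
# Hand-applied path tracing for Y ~ X + Z (hypothetical values).
b_x <- 0.5; b_z <- 0.3        # structural coefficients
var_x <- 1; var_z <- 1        # exogenous variances
cov_xz <- 0.2                 # covariance between X and Z
resid_y <- 0.4                # residual variance of Y

var_y <- b_x^2 * var_x + b_z^2 * var_z +
  2 * b_x * b_z * cov_xz + resid_y
var_y  # 0.25 + 0.09 + 0.06 + 0.4 = 0.8
```

This is the same quantity that trace_path expresses symbolically when tracing from "Y" to "Y".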
Extract or modify the parameter table (parTable) from an estimated model, with estimated variances of the interaction terms included
var_interactions(object, ...)
object |
An object of class |
... |
Additional arguments passed to other functions |
Wrapper for vcov, to be used with modsem::vcov_modsem_da, since the vcov generic lives in the stats namespace rather than in modsem's
vcov_modsem_da(object, ...)
object |
fitted model to inspect |
... |
additional arguments |
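The returned matrix behaves like any other vcov output: standard errors are the square roots of its diagonal. A generic base-R sketch, using lm purely as a stand-in for a fitted model (it is not a modsem object):

```r
# Standard errors from a variance-covariance matrix of parameter
# estimates; lm is only a stand-in for a fitted model here.
fit <- lm(mpg ~ wt + hp, data = mtcars)
V   <- vcov(fit)       # symmetric matrix, one row/column per parameter
se  <- sqrt(diag(V))   # one standard error per estimated coefficient
round(se, 3)
```

The same diag/sqrt pattern applies to the matrix returned by vcov_modsem_da for a fitted modsem_da model.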