
mirt's People

Contributors

awmeade, jpritikin, netique, philchalmers, rdebelak, sebastianueckert, seonghobae, sumny, wibeasley


mirt's Issues

installation trouble

I'm getting errors trying to install the latest version. Here's my R code and output (with some formatting to enhance readability):

> library('devtools')
> install_github('mirt', 'philchalmers', quick = TRUE)
Installing github repo(s) mirt/master from philchalmers
Installing mirt.zip from https://github.com/philchalmers/mirt/archive/master.zip
Installing mirt
"C:/PROGRA1/R/R-2151.3/bin/x64/R" --vanilla CMD build
"C:\Users\USERNAME\AppData\Local\Temp\RtmpI1eE0M\mirt-master" --no-manual
--no-resave-data --no-vignettes

  • checking for file 'C:\Users\USERNAME\AppData\Local\Temp\RtmpI1eE0M\mirt-master/DESCRIPTION' ... OK
  • preparing 'mirt':
  • checking DESCRIPTION meta-information ... OK
  • cleaning src
  • checking for LF line-endings in source and make files
  • checking for empty or unneeded directories
  • looking to see if a 'data/datalist' file should be added
  • building 'mirt_0.6.1.tar.gz'

"C:/PROGRA1/R/R-2151.3/bin/x64/R" --vanilla CMD INSTALL
"C:\Users\USERNAME\AppData\Local\Temp\RtmpI1eE0M/mirt_0.6.1.tar.gz"
--library="C:/Program Files/R/R-2.15.3/library" --with-keep.source --no-docs
--no-multiarch --no-demo

  • installing source package 'mirt' ...
    • libs

g++ -m64 -I"C:/PROGRA1/R/R-2151.3/include" -DNDEBUG -I"C:/Program Files/R/R->2.15.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c Estep.cpp -o Estep.o
g++ -m64 -I"C:/PROGRA1/R/R-2151.3/include" -DNDEBUG -I"C:/Program Files/R/R-2.15.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c Misc.cpp -o Misc.o
gcc -m64 -I"C:/PROGRA1/R/R-2151.3/include" -DNDEBUG -I"C:/Program Files/R/R-2.15.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -std=gnu99 -mtune=core2 -c dgroup.c -o dgroup.o
g++ -m64 -I"C:/PROGRA1/R/R-2151.3/include" -DNDEBUG -I"C:/Program Files/R/R-2.15.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c dpars.cpp -o dpars.o
g++ -m64 -I"C:/PROGRA1/R/R-2151.3/include" -DNDEBUG -I"C:/Program Files/R/R-2.15.3/library/Rcpp/include" -I"d:/RCompile/CRANpkg/extralibs64/local/include" -O2 -Wall -mtune=core2 -c traceLinePts.cpp -o traceLinePts.o
g++ -m64 -shared -s -static-libgcc -o mirt.dll tmp.def Estep.o Misc.o dgroup.o dpars.o traceLinePts.o C:/Program Files/R/R-2.15.3/library/Rcpp/lib/x64/libRcpp.a -Ld:/RCompile/CRANpkg/extralibs64/local/lib/x64 -Ld:/RCompile/CRANpkg/extralibs64/local/lib -LC:/PROGRA1/R/R-2151.3/bin/x64 -lR
g++.exe: error: C:/Program: No such file or directory
g++.exe: error: Files/R/R-2.15.3/library/Rcpp/lib/x64/libRcpp.a: No such file or directory
ERROR: compilation failed for package 'mirt'

  • removing 'C:/Program Files/R/R-2.15.3/library/mirt'
  • restoring previous 'C:/Program Files/R/R-2.15.3/library/mirt'
    Error: Command failed (1)

The problem appears to be with the file names on my system, specifically the space in "C:/Program Files" (i.e., between "Program" and "Files"). The relevant piece of the error, repeated here, is:

g++.exe: error: C:/Program: No such file or directory
g++.exe: error: Files/R/R-2.15.3/library/Rcpp/lib/x64/libRcpp.a: No such file or directory
ERROR: compilation failed for package 'mirt'

Can you help?

I'm using R version 2.15.3 (2013-03-01) on a laptop running Windows 7 x64 Image v2. I installed Rtools30.exe yesterday (from http://cran.r-project.org/bin/windows/Rtools/).

Thanks -

Kevin Stanford
Cincinnati, Ohio USA

Relative efficiency plot

Construct a relative efficiency plot that works for MultipleGroupClass objects and across different nested model types (e.g., 1PL vs. 2PL). RE(\theta) = I(\theta; x) / I(\theta; y), with a baseline at 1.
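In the meantime, a minimal sketch of how this can be computed by hand from two nested fits of the same data, using testinfo() (the LSAT7 data and the model pairing below are purely illustrative):

library(mirt)
dat <- expand.table(LSAT7)
mod1PL <- mirt(dat, 1, itemtype = 'Rasch')   # constrained (baseline) model
mod2PL <- mirt(dat, 1, itemtype = '2PL')     # less constrained model
Theta <- matrix(seq(-4, 4, length.out = 200))
RE <- testinfo(mod2PL, Theta) / testinfo(mod1PL, Theta)
plot(Theta, RE, type = 'l', xlab = 'theta', ylab = 'Relative efficiency')
abline(h = 1, lty = 2)   # baseline at 1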

mirt and confmirt with model parameter > 1

I just installed mirt_0.4.1-2 and tried some of the examples from the reference manual. For exploratory IRT, the mirt and confmirt methods produce the following error if the model argument (number of factors) is greater than 1:

Error in get(get(name, envir = exports, inherits = FALSE), envir = ns) :
internal error -3 in R_decompress1
Calls: mirt ... :: -> getExportedValue -> getInternalExportName -> get
In addition: Warning message:
In get(get(name, envir = exports, inherits = FALSE), envir = ns) :
restarting interrupted promise evaluation
Error in get(get(name, envir = exports, inherits = FALSE), envir = ns) :
internal error -3 in R_decompress1
Calls: mirt ... :: -> getExportedValue -> getInternalExportName -> get

I got this error with every sample data set I tried, for example when I call
pmod2 <- mirt(Science, 2)
(page 30 of the reference manual).

PS: I don't know whether this is related, but during installation I got the following warnings:
...
** inst
** byte-compile and prepare package for lazy loading
in method for 'itemplot.internal' with signature 'object="ExploratoryClass"': no definition for class "ExploratoryClass"
in method for 'itemplot.internal' with signature 'object="ConfirmatoryClass"': no definition for class "ConfirmatoryClass"
in method for 'itemplot.internal' with signature 'object="MultipleGroupClass"': no definition for class "MultipleGroupClass"
in method for 'fscores.internal' with signature '"ExploratoryClass"': no definition for class "ExploratoryClass"
in method for 'fscores.internal' with signature '"ConfirmatoryClass"': no definition for class "ConfirmatoryClass"
in method for 'fscores.internal' with signature '"MultipleGroupClass"': no definition for class "MultipleGroupClass"
Creating a generic function for 'residuals' from package 'stats' in package 'mirt'
...

Covariance Estimation Issue

Hello Phil,
I recently downloaded your latest version of “mirt” from GitHub (version 1.10.1), and I am running into a discrepancy when using the package on my PC versus on Linux. I'm fitting between-item MIRT 2PL/GPCM models for my data and am specifying that the covariances between the dimensions be freely estimated, using the “COV” option in the “mirt.model” syntax (e.g., COV = F1*F2*F3). However, when I run on Linux, all between-dimension covariances are always output as “0.25”, whereas when I run on my PC the covariances are freely estimated as I specified. I'm including a simple example below (a modified version of one of your examples in the mirt documentation). Do you know what could be causing this discrepancy? I tried updating my version of R on Linux to the most recent version (3.2.1) and then installing “mirt” from GitHub, but that did not resolve the problem. Thank you very much for your help.
Sincerely,
Katherine

Example
library(mirt)
set.seed(52486)

# simulate data

a <- matrix(c(
1.5,NA,NA,
0.5,NA,NA,
1.0,NA,NA,
NA,1.0,NA,
NA,1.5,NA,
NA,NA,0.5,
NA,NA,1.0,
NA,NA,1.0),ncol=3,byrow=TRUE)

d <- matrix(c(
-1.0,
-1.5,
1.5,
0.0,
3.0,
2.5,
2.0,
1.0),ncol=1,byrow=TRUE)

sigma <- diag(3)
sigma[upper.tri(sigma)] <- sigma[lower.tri(sigma)] <- .8
items <- rep('dich',8)
dataset <- simdata(a,d,2000,items,sigma)

# analyses

model.1 <- mirt.model('
F1 = 1-3
F2 = 4-5
F3 = 6-8
COV = F1*F2*F3')

mod1 <- mirt(dataset, model.1, method = 'MHRM')
coef(mod1,simplify=TRUE)$cov

Results from my PC

coef(mod1,simplify=TRUE)$cov
F1 F2 F3
F1 1.000 NA NA
F2 0.637 1.000 NA
F3 0.686 0.657 1

mod1

Call:
mirt(data = dataset, model = model.1, method = "MHRM")

Full-information item factor analysis with 3 factor(s).
Converged within 0.001 tolerance after 197 MHRM iterations.
mirt version: 1.10.1
M-step optimizer: NR

Log-likelihood = -7691.483, SE = 0.021
AIC = 15420.97; AICc = 15421.35
BIC = 15527.38; SABIC = 15467.02
G2 (236) = 207.6, p = 0.9087
RMSEA = 0, CFI = 1, TLI = 1.073

Results from linux

coef(mod1,simplify=TRUE)$cov
F1 F2 F3
F1 1.00 NA NA
F2 0.25 1.00 NA
F3 0.25 0.25 1

mod1

Call:
mirt(data = dataset, model = model.1, method = "MHRM")

Full-information item factor analysis with 3 factor(s).
Converged within 0.001 tolerance after 285 MHRM iterations.
mirt version: 1.10.1
M-step optimizer: NR

Log-likelihood = -7736.41, SE = 0.021
AIC = 15510.82; AICc = 15511.2
BIC = 15617.24; SABIC = 15556.87
G2 (236) = 297.28, p = 0.0042

One column 'covdata' argument to mixedmirt()

When the 'covdata' argument to mixedmirt() is a data frame with only one column, an error occurs. It seems that such a data frame is converted to a vector while mixedmirt() performs some operations on it. A simple workaround is to include additional columns in the data frame provided as 'covdata', even when only one of them is used. Nevertheless, it would be nice if a one-column data frame could also be provided as 'covdata'.
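For reference, a minimal sketch of the underlying R behaviour that is probably responsible (the covdata object below is hypothetical):

covdata <- data.frame(group = rep(c('a', 'b'), 50))
class(covdata[, 1])                 # single-column indexing drops to a bare vector
class(covdata[, 1, drop = FALSE])   # "data.frame"; drop = FALSE keeps the structure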

Parallel processing for MH sampler

Add a parallel processing option for draw.thetas(); this should work better now that the draws are done mainly in R. It may be especially useful in multilevel models.
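For reference, the package's existing parallel entry point looks roughly like this (a sketch only; whether draw.thetas() can take advantage of it is exactly what this item is about):

library(mirt)
mirtCluster(4)               # start 4 worker processes for functions that support them
# ... fit models, e.g. with method = 'MHRM' ...
mirtCluster(remove = TRUE)   # shut the workers down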

Request: Specify lbound and ubound in mirt

It would be very convenient to be able to specify upper and lower bounds for parameters when calling a model with mirt() or the other fitting functions, similar to how priors can be specified. At present, the three-step process of extracting pars = 'values', changing the bounds, and passing the table back to the model is rather cumbersome to code.
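For context, a sketch of the current three-step workflow that this request would streamline (the 3PL guessing-parameter bounds below are purely illustrative):

library(mirt)
dat <- expand.table(LSAT7)
pars <- mirt(dat, 1, itemtype = '3PL', pars = 'values')   # step 1: extract the parameter table
pars$lbound[pars$name == 'g'] <- 0.05                     # step 2: edit the bounds
pars$ubound[pars$name == 'g'] <- 0.35
mod <- mirt(dat, 1, itemtype = '3PL', pars = pars)        # step 3: pass the table back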

Thanks!

Error in fscores() for the MultipleGroupClass

When using "scores.only=T" in "fscores()" for MultipleGroupClass the returned estimates are not sorted in the same way as rows in data provided to the "multipleGroup()" function.
Please compare lines 247 and 250 of the "R/fscores.internal.R".

model1 = mirt(myData, 1)
model2 = multipleGroup(myData, 1, groups, invariance=c('slopes', 'intercepts'))
scores1 = fscores(model1, full.scores=T)
scores2 = fscores(model2, full.scores=T)
scoresOnly1 = fscores(model1, full.scores=T, scores.only=T)
scoresOnly2 = fscores(model2, full.scores=T, scores.only=T)
plot(scores1[, 'F1'], scores2[, 'F1'])
plot(scoresOnly1, scoresOnly2)

leads to the plots below (attached images: scores1_scores2, scoresonly1_scoresonly2).

Fit statistics in multigroup models with missing data

When I try to compute fit statistics for a multigroup model with missing data, I encounter an error:
Error in mod2values(x) :
cannot get a slot ("est") from an object of type "S4"

The problem does not occur in multigroup models without missing data, nor in single-group models with missing data, but the combination of these two features (multigroup and missing data) causes problems.

See:

# code from examples in help for itemfit()
set.seed(1234)
a <- matrix(rlnorm(20, meanlog=0, sdlog = .1),ncol=1)
d <- matrix(rnorm(20),ncol=1)
items <- rep('dich', 20)
data <- simdata(a,d, 2000, items)

# further part of the example in which missing data is added
data[sample(1:prod(dim(data)), 500)] <- NA
raschfit <- mirt(data, 1, itemtype='Rasch')

Theta <- fscores(raschfit, method = 'ML', full.scores=TRUE)
itemfit(raschfit, impute = 10, Theta=Theta)

# new code - multigroup Rasch model (with full measurement invariance)
raschGrFit = multipleGroup(data, 1,
                           group = c(rep("gr1", 1000), rep("gr2", 1000)),
                           invariance = c("free_means", "free_var",
                                          "slopes", "intercepts"),
                           itemtype = "Rasch")
# trying to compute itemfit (causes an error...)
ThetaGr <- fscores(raschGrFit, method = 'ML', full.scores=TRUE)
itemfit(raschGrFit, impute = 10, Theta=ThetaGr)

Using Custom items

Hi, long time no see. I've actually been working on an experimental itemtype, but I forgot to fork, so here I am.

I succeeded in using createItem(), but I am having some trouble with using PRIOR and with setting starting values, especially when categories are collapsed.

Here's my code. The funny thing is that sometimes the estimates are quite accurate even with n.subj = 100 and n.item = 10, but at other times they are awful, and I don't know why; it happens even when I delete all variables with rm(list=ls()).

I actually have troublesome data stored in test_egrm.RData, but with n.subj = 100 and n.item = 10 the categories are quite often collapsed, so...

PS) In my model the second factor is called xi, and it moderates the first factor, so I don't think it can really be called a factor; when I summary() the model, the result doesn't make sense.

Here's the code =====

Create responses with n.subj, n.item, ncat, a one-dimensional latent variable, and xi, both drawn from N(0,1):

createR=function(n.subj, n.item, ncat, Theta) {

#

a = runif(n.item, 1, 2)
d = matrix(runif(n.item*(ncat-1), -3, 3), ncol=ncat-1)
d = t(apply(d, 1, sort))

#P2.egrm for creating response data R
P2.egrm <- function(par, Theta, ncat) {
    th1 = Theta[,1]; xi1 = Theta[,2];
    a = par[1]
    d = par[2:ncat]
    d.mean=mean(d);
    D.star = matrix(exp(Theta[,2]), nrow=nrow(Theta), ncol=ncat-1) *
        matrix((d - d.mean) + d.mean, nrow=nrow(Theta), ncol=ncat-1, byrow=T)
    TH1 = matrix(th1, nrow=nrow(Theta), ncol=ncat-1)
    A = matrix(a, nrow=nrow(Theta), ncol=ncat-1)
    P = 1/(1+exp(-1*(A*(TH1-D.star))))

    return(P)
}

# P.egrm for creating Items
P.egrm <- function(par, Theta, ncat=4) {
    th1 = Theta[,1]; xi1 = Theta[,2];
    a = par[1]
    d = par[2:ncat]
    d.mean=mean(d);
    D.star = matrix(exp(Theta[,2]), nrow=nrow(Theta), ncol=ncat-1) *
        matrix((d - d.mean) + d.mean, nrow=nrow(Theta), ncol=ncat-1, byrow=T)
    TH1 = matrix(th1, nrow=nrow(Theta), ncol=ncat-1)
    A = matrix(a, nrow=nrow(Theta), ncol=ncat-1)
    P = 1/(1+exp(-1*(A*(TH1-D.star))))
    P.star=cbind(1, P)-cbind(P, 0)

    # Is this correct or justifiable?
    P.star <- ifelse(P.star < 1e-20, 1e-20, P.star)
    P.star <- ifelse(P.star > (1 - 1e-20), (1 - 1e-20), P.star)
    #

    return(P.star)
}

R = matrix(0, nrow=n.subj, ncol=n.item)
R2 = matrix(0, nrow=n.subj, ncol=n.item)
for (i.item in 1:n.item) {
    u <- runif(n.subj, 0, 1)
    U <- matrix(u, ncol=ncat, nrow=n.subj)
    P <- cbind(1, P2.egrm(c(a[i.item], d[i.item,]), Theta, ncat))
    R[, i.item]=apply(U-P <= 0 , 1, function(x) max(which(x)))
    R2[, i.item]=apply(U-P <=0, 1, sum)  # in this case, all categories should be number 1,2,3,4,...
}

return(R)

}

Data(R) Generation

n.subj = 100; n.item = 10; ncat=4; # we need this for following scripts
Theta = matrix(rnorm(n.subj*2), ncol=2)
R <- createR(n.subj = n.subj, n.item = n.item, ncat=ncat, Theta=Theta)

save(R, file= "test_egrm.RData")

load("test_egrm.RData")

Using customItems to estimate the egrm model

name <- "c.egrm"
par <- c(a=1, d1=-1, d2=0, d3=1)
est <- c(T, T, T, T)

P.egrm for creating Items

P.egrm <- function(par, Theta, ncat) {
th1 = Theta[,1]; xi1 = Theta[,2];
a = par[1]
d = par[2:ncat]
d.mean=mean(d);
D.star = matrix(exp(Theta[,2]), nrow=nrow(Theta), ncol=ncat-1) *
matrix((d - d.mean) + d.mean, nrow=nrow(Theta), ncol=ncat-1, byrow=T)
TH1 = matrix(th1, nrow=nrow(Theta), ncol=ncat-1)
A = matrix(a, nrow=nrow(Theta), ncol=ncat-1)
P = 1/(1+exp(-1*(A*(TH1-D.star))))
P.star=cbind(1, P)-cbind(P, 0)

# Is this correct or justifiable?
P.star <- ifelse(P.star < 1e-20, 1e-20, P.star)
P.star <- ifelse(P.star > (1 - 1e-20), (1 - 1e-20), P.star)
#

return(P.star)

}
item.egrm <- createItem(name, par=par, est=est, P=P.egrm)

mirt(R, 2, rep('egrm', n.item), customItems=list(egrm=item.egrm), pars='values')

system.time(mod1 <- mirt(R, 2, rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm)))

Are the estimates accurate?

cor(fscores(mod1, full.scores=T)[,1], Theta[,1])
cor(fscores(mod1, full.scores=T)[,2], Theta[,2])

Estimating with PRIORS

s.mirt.model1 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (1-10, d3, norm, 0, 1)"

s.mirt.model2 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a1, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (1-10, d3, norm, 0, 1)"

s.mirt.model3 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a1, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (2-10, d3, norm, 0, 1)"

s.mirt.model4 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a1, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1)"

system.time(mod1.prior <- mirt(R, mirt.model(s.mirt.model1), rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm)))

Works fine, but item1 has parameter "d3"

coef(mod1.prior)

Are the estimates accurate?

cor(fscores(mod1.prior, full.scores=T)[,1], Theta[,1])
cor(fscores(mod1.prior, full.scores=T)[,2], Theta[,2])

And estimates are poor

s.mirt.model11 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (1-10, d3, norm, 0, 1)
"

system.time(mod21.prior <- mirt(R, mirt.model(s.mirt.model11), rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm)))
cor(fscores(mod21.prior, full.scores=T)[,1], Theta[,1])
cor(fscores(mod21.prior, full.scores=T)[,2], Theta[,2])

Estimates distorted?

Is it because of d3 parameter of item1?

s.mirt.model12 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (2-10, d3, norm, 0, 1)"

system.time(mod22.prior <- mirt(R, mirt.model(s.mirt.model12), rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm)))
cor(fscores(mod22.prior, full.scores=T)[,1], Theta[,1])
cor(fscores(mod22.prior, full.scores=T)[,2], Theta[,2])

Or maybe the starting point?

s.mirt.model13 <- "F1 = 1-10
F2 = 1-10
PRIOR = (1-10, a, lnorm, 0, 1), (1-10, d1, norm, 0, 1), (1-10, d2, norm, 0, 1), (1-10, d3, norm, 0, 1)
START = (1-10, a, 1.0), (1-10, d1, -1.0), (1-10, d2, 0), (1-10, d3, 1.0)"

mirt(R, mirt.model(s.mirt.model13), rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm), pars='values')
temp <- mirt(R, mirt.model(s.mirt.model13), rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm), technical=list(NCYCLES=1))
coef(temp)

If you look at the result of coef(temp), START doesn't seem to work.

THE LAST RESORT

parprior=list()
parnum=1:40
itemtype=rep("norm", 40); par1=rep(0, 40); par2=rep(1, 40);
itemtype[seq(1,37,4)]="lnorm";
for (i.parnum in parnum) {
parprior=c(parprior, list(c(i.parnum, itemtype[i.parnum],par1[i.parnum],par2[i.parnum])))
}

system.time(mod23.prior <- mirt(R, 2, rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm), parprior=parprior))
mirt(R, 2, rep('c.egrm', n.item), customItems=list(c.egrm=item.egrm), pars="values")

Are the estimates accurate?

cor(fscores(mod23.prior, full.scores=T)[,1], Theta[,1])
cor(fscores(mod23.prior, full.scores=T)[,2], Theta[,2])

Not so good, .16, .10

Maybe the d3 parameter of item 1 should be deleted?

Quite a long script; anyway, to summarize: I think that when a PRIOR is specified for parameters that have become unused through category collapsing, those priors should be deleted, and START = does not seem to work when using customItems. I have already created a built-in itemtype, but since I didn't fork, I don't know how I can show it to you; maybe pull, fork, and move the files over. I hope you can give some advice on managing RStudio, GitHub, forking, and branching.

Cheers

Warning in mac during install

Hello, Phil.

I found a warning message on a Mac while installing the dev version. Cheers!

Best regards,
Seongho Bae

> try(library('devtools'), silent = T)
> try(install_github('philchalmers/mirt'), silent = T)
Downloading github repo philchalmers/mirt@master
Installing mirt
'/Library/Frameworks/R.framework/Resources/bin/R' --vanilla CMD INSTALL  \
  '/private/var/folders/sw/y1nzt3cj2dv0qbp6_tmd8v7w0000gn/T/RtmpNAeypC/devtools3e7722b5a11b/philchalmers-mirt-3141f85'  \
  --library='/Library/Frameworks/R.framework/Versions/3.2/Resources/library' --install-tests 

* installing *source* package 'mirt' ...
** libs
clang++ -I/Library/Frameworks/R.framework/Resources/include -DNDEBUG  -I/usr/local/include -I/usr/local/include/freetype2 -I/opt/X11/include -DPLATFORM_PKGTYPE='"mac.binary.mavericks"' -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/RcppArmadillo/include"   -fPIC  -Wall -mtune=core2 -g -O2  -c Estep.cpp -o Estep.o
clang++ -I/Library/Frameworks/R.framework/Resources/include -DNDEBUG  -I/usr/local/include -I/usr/local/include/freetype2 -I/opt/X11/include -DPLATFORM_PKGTYPE='"mac.binary.mavericks"' -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/RcppArmadillo/include"   -fPIC  -Wall -mtune=core2 -g -O2  -c Misc.cpp -o Misc.o
clang++ -I/Library/Frameworks/R.framework/Resources/include -DNDEBUG  -I/usr/local/include -I/usr/local/include/freetype2 -I/opt/X11/include -DPLATFORM_PKGTYPE='"mac.binary.mavericks"' -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/RcppArmadillo/include"   -fPIC  -Wall -mtune=core2 -g -O2  -c dpars.cpp -o dpars.o
dpars.cpp:183:15: warning: unused variable 'npars2' [-Wunused-variable]
    const int npars2 = nfact + nfact * (nfact + 1) / 2;
              ^
1 warning generated.
clang++ -I/Library/Frameworks/R.framework/Resources/include -DNDEBUG  -I/usr/local/include -I/usr/local/include/freetype2 -I/opt/X11/include -DPLATFORM_PKGTYPE='"mac.binary.mavericks"' -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/3.2/Resources/library/RcppArmadillo/include"   -fPIC  -Wall -mtune=core2 -g -O2  -c traceLinePts.cpp -o traceLinePts.o
clang++ -dynamiclib -Wl,-headerpad_max_install_names -undefined dynamic_lookup -single_module -multiply_defined suppress -L/Library/Frameworks/R.framework/Resources/lib -L/usr/local/lib -o mirt.so Estep.o Misc.o dpars.o traceLinePts.o -framework Accelerate -L/usr/local/lib/gcc/x86_64-apple-darwin13.0.0/4.8.2 -lgfortran -lquadmath -lm -F/Library/Frameworks/R.framework/.. -framework R -Wl,-framework -Wl,CoreFoundation
installing to /Library/Frameworks/R.framework/Versions/3.2/Resources/library/mirt/libs
** R
** data
*** moving datasets to lazyload DB
** inst
** tests
** byte-compile and prepare package for lazy loading
Creating a generic function for 'print' from package 'base' in package 'mirt'
Creating a generic function for 'anova' from package 'stats' in package 'mirt'
Creating a generic function for 'residuals' from package 'stats' in package 'mirt'
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (mirt)
> sessionInfo()
R version 3.2.0 (2015-04-16)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.4 (Yosemite)

locale:
[1] ko_KR.UTF-8/ko_KR.UTF-8/ko_KR.UTF-8/C/ko_KR.UTF-8/ko_KR.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] devtools_1.7.0

loaded via a namespace (and not attached):
[1] httr_0.6.1     magrittr_1.5   tools_3.2.0    RCurl_1.95-4.6 stringi_0.4-1  knitr_1.10     stringr_1.0.0  bitops_1.0-6  
> 

Warning: M-step optimimizer converged immediately.

Hello,

I am having this warning and don't know how to fix it...

Iteration: 1, Log-Lik: -3129.128, Max-Change: 0.00000
Warning: M-step optimimizer converged immediately. Solution is either at the ML or
starting values are causing issues and should be adjusted.

I tried different starting values but got the same result.
Here's my code

fit <- mirt(R, 1, itemtype="grsm", TOL=1e-14)
pars <- mirt(R, 1, itemtype="grsm", TOL=1e-14, pars='values')
pars$value[1:21] = 1
fit <-  mirt(R, 1, itemtype="grsm", TOL=1e-14, pars=pars)

and here's the data for reproducing the result:

R <-
structure(c(4, 4, 5, 4, 4, 5, 5, 5, 3, 5, 4, 4, 5, 4, 3, 4, 4, 
4, 4, 3, 5, 5, 4, 5, 3, 5, 4, 3, 5, 4, 5, 4, 5, 5, 5, 5, 4, 4, 
5, 5, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 5, 5, 4, 3, 4, 4, 5, 4, 4, 
5, 5, 4, 5, 4, 5, 5, 4, 4, 4, 4, 5, 5, 3, 5, 4, 5, 5, 5, 5, 4, 
4, NA, 4, 5, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 5, 
5, 5, 5, 4, 3, 5, 5, 3, 3, 5, 5, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 
5, 4, 4, 4, 4, 5, 4, 5, 5, 5, 5, 4, 5, 4, 5, 5, 4, 2, 5, 5, 5, 
5, NA, 4, 4, 5, 3, 5, 4, 5, 5, 4, 5, 4, 4, 4, 4, 4, 4, 5, 4, 
5, 5, 4, 5, 3, 4, 5, 4, 4, 5, 4, 5, 5, 4, 5, 3, 4, 3, 5, 4, 5, 
4, 4, 4, 5, 5, 5, 5, 5, 4, 4, 5, 5, 5, 4, 5, 4, 4, 4, 5, 4, 4, 
5, 4, 5, 5, 4, 5, 5, 4, 4, 3, 5, 4, 5, 4, 4, 5, 4, 5, 4, 4, 4, 
5, 4, 4, 4, 5, 5, 4, 5, 5, 5, 4, 5, 4, 5, 4, 5, 5, 5, 4, 5, 5, 
4, 3, 4, 4, 4, 3, 4, 4, 5, 4, 3, 5, 3, 5, 5, 3, 5, 5, 5, 5, 4, 
3, 4, 4, 4, 3, 3, 4, 4, 5, 5, 5, 4, 4, 4, 4, 5, 5, 5, 5, 1, 5, 
5, 4, 4, 4, 4, 4, 5, 5, 5, 3, NA, 4, 5, 4, 4, 5, 5, 3, 3, 5, 
4, 2, 5, 4, 3, 4, 3, 4, 5, 2, 5, 5, 5, 5, 4, 5, 4, 4, 3, 4, 4, 
4, 4, 3, 5, 5, 3, 4, 4, 5, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 5, 5, 
4, 2, 4, 3, 5, 4, 4, 4, 5, 4, 4, 4, 4, 5, 4, 3, 4, 4, 5, 4, 5, 
5, 4, 5, 5, 4, 4, 4, 3, NA, 3, 4, 4, 5, 5, 5, 4, 2, 5, 5, 5, 
4, 4, 4, 4, 3, 5, 5, 5, 3, 5, 4, 2, 4, 5, 4, 3, 4, 5, 5, 5, 3, 
4, 5, 4, 3, 5, 5, 5, 4, 4, 3, 4, 3, 5, 4, 5, 5, 5, 4, 5, 5, 5, 
4, 5, 4, 3, 5, 4, 5, 4, 4, 5, 4, 4, 3, 5, 4, 5, 4, 4, 5, 4, 5, 
4, 4, 5, 4, 3, 3, 4, 4, 4, 5, 3, 4, 4, 3, 2, 4, 5, 5, 4, 5, 5, 
NA, 3, 3, 5, 4, 5, 4, 4, 5, 5, 5, 5, 5, 5, 2, 3, 4, 4, 4, 4, 
4, 5, 4, 3, 5, 4, 4, 4, 5, 5, 5, 4, 5, NA, 2, 4, 3, 4, 4, 4, 
4, 2, 5, 4, 4, 3, 4, 4, 5, 3, 4, 3, 4, 5, 4, 3, 5, 4, 3, 5, 4, 
4, 4, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4, 3, 3, 4, 5, 5, 3, 4, NA, 
5, 3, 4, 5, 3, 4, 5, 3, 4, 4, 4, 4, 3, 4, 3, 4, 4, 4, 5, 4, 4, 
4, 2, 5, 5, 4, 3, 2, 5, 5, 4, 4, 5, 4, 4, 5, 5, 4, 5, NA, 4, 
3, 4, 4, 5, 5, 5, 3, 5, 4, 4, 5, 4, 3, 4, 2, 4, 4, 2, 5, 5, 4, 
5, 4, 5, 4, 4, 5, 5, 4, 4, 4, 3, 5, 5, 4, 4, 3, 5, 4, 5, 4, 4, 
5, 4, 4, 5, 4, 3, 4, 4, 4, 4, 4, 4, 5, 4, 4, 4, 5, 4, 5, 4, 4, 
5, 4, 4, 4, 4, 3, 4, 5, 5, 4, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 
5, 5, 5, 3, 4, 5, 5, 5, 4, 4, 3, 4, 5, 5, 5, 3, 5, 4, 3, 4, 5, 
3, 3, 4, 5, 5, 5, 4, 3, 5, 4, 4, 5, 5, 5, 4, 4, 4, 4, 3, 4, 4, 
5, 5, 5, 4, 5, 5, 3, 5, 5, 4, 2, 4, 5, 5, 4, 4, 5, 4, 4, 3, 5, 
4, 5, 4, 4, 5, 4, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 5, 3, 4, 4, 3, 
4, 5, 5, 5, 5, 4, 5, 2, 3, 5, 5, 4, 5, 4, 4, 3, 5, 5, 4, 5, 5, 
4, 4, 5, 4, 4, 4, 5, 5, 4, 3, 5, 4, 4, 4, 4, 5, 4, 4, 5, 4, 3, 
4, 3, 4, 4, 5, 4, 3, 5, 4, 5, 4, 5, 4, 5, 4, 5, 4, 4, 5, NA, 
4, 5, 4, 3, 5, 4, 4, 5, 5, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4, 3, 3, 
4, 4, 5, 3, 4, 3, 5, 5, 4, 5, 5, 4, 4, 3, 3, 4, 4, 5, 4, 4, 4, 
4, 4, 4, 5, 3, 4, 5, 4, 5, 4, 5, 4, 3, 5, 5, 5, 4, 4, 4, 4, 5, 
5, 5, 5, NA, 4, 3, 4, 3, 5, 5, 5, 3, 5, 4, 4, 5, 4, NA, 4, 2, 
4, 4, 2, 5, 5, 4, 5, 5, 5, 4, 4, 4, 4, 4, 4, 3, 3, 5, 5, 3, 4, 
3, 5, 5, 5, 4, 3, 5, 4, 4, 5, 4, 2, 4, 4, 4, 4, 4, 4, 5, 4, 4, 
4, 5, 5, 4, 4, 4, 5, 5, 4, 3, 4, 4, 5, 5, 5, 4, 5, 4, 5, 5, 4, 
5, NA, 5, 4, 4, 4, 5, 4, 4, 3, 4, 5, 5, 5, 4, 4, 4, 4, 4, 5, 
5, 2, 4, 5, 3, 5, 5, 3, 3, 4, 4, 5, 5, 4, 3, 5, 4, 4, 5, 4, 5, 
4, 4, 4, 4, 4, 4, 4, 5, 5, 4, 4, 5, 5, 4, 3, 5, 4, 3, 4, 5, 5, 
2, 5, 5, 4, 4, 3, 5, 4, 4, 4, 4, 5, 4, 4, 4, 4, 5, 4, 4, 3, 4, 
5, 5, 5, 3, 3, 4, 3, 4, 4, 4, 5, 5, 5, 5, 4, 3, 3, 5, 4, 5, 4, 
4, 4, 5, 4, 4, 5, 5, 3, 3, NA, 4, 4, 4, 4, 5, 4, 3, 5, 4, 4, 
5, 4, 5, 5, 4, 5, NA, 3, 5, 3, 4, 4, 4, 4, 4, 5, 5, 5, 4, 5, 
4, 5, 3, 5, 4, 4, 5, 4, 2, 5, 4, 3, 4, 4, 4, 5, 5, 5, 4, 4, 3, 
5, 2, NA, 4, 4, 5, 3, 2, 4, 5, 5, 3, 4, 4, 5, 5, 3, 5, 5, 4, 
4, 2, 3, 4, 5, 5, 4, 4, 4, 4, 4, 5, 5, 3, 4, 5, 4, 5, 4, 5, 4, 
4, 5, 5, 5, 4, 4, 4, 4, 5, 5, 5, 4), .Dim = c(298L, 4L), .Dimnames = list(
    c("1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", 
    "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", 
    "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", 
    "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", 
    "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", 
    "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", 
    "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", 
    "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", 
    "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", 
    "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", 
    "102", "103", "104", "105", "106", "107", "108", "109", "110", 
    "111", "112", "113", "114", "115", "116", "117", "118", "119", 
    "120", "121", "122", "123", "124", "125", "126", "127", "128", 
    "129", "130", "131", "132", "133", "134", "135", "136", "137", 
    "138", "139", "140", "141", "142", "143", "144", "145", "146", 
    "147", "148", "149", "150", "151", "152", "153", "154", "155", 
    "156", "157", "158", "159", "160", "161", "162", "163", "164", 
    "165", "166", "167", "168", "169", "170", "171", "172", "173", 
    "174", "175", "176", "177", "178", "179", "180", "181", "182", 
    "183", "184", "185", "186", "187", "188", "189", "190", "191", 
    "192", "193", "194", "195", "196", "197", "198", "199", "200", 
    "201", "202", "203", "204", "205", "206", "207", "208", "209", 
    "210", "211", "212", "213", "214", "215", "216", "217", "218", 
    "219", "220", "221", "222", "223", "224", "225", "226", "227", 
    "228", "229", "230", "231", "232", "233", "234", "235", "236", 
    "237", "238", "239", "240", "241", "242", "243", "244", "245", 
    "246", "247", "248", "249", "250", "251", "252", "253", "254", 
    "255", "256", "257", "258", "259", "260", "261", "262", "263", 
    "264", "265", "266", "267", "268", "269", "270", "271", "272", 
    "273", "274", "275", "276", "277", "278", "279", "280", "281", 
    "282", "283", "284", "285", "286", "287", "288", "289", "290", 
    "291", "292", "293", "294", "295", "296", "297", "298"), 
    c("1", "2", "3", "4")))

If I fit other models (graded, gpcm, Rasch), it works fine.
Here's the result from other models...

          logLik      AIC     AICc      BIC    SABIC      DIC
graded -1002.291 2038.581 2040.767 2101.432 2047.519 2038.581
grsm   -3129.128 6274.257 6274.755 6303.833 6278.462 6274.257
gpcm   -1010.193 2054.387 2056.572 2117.237 2063.324 2054.387
Rasch  -1025.173 2078.346 2079.830 2130.106 2085.706 2078.346

Do FIX and START work for experimental item?

Since I have to use experimental items, I pushed my modified package to my site.

Here's the code to replicate the problem:


rm(list=ls())

R <-
structure(c(1, 2, 2, 1, 1, 2, 2, 1, 1, 1, 3, 1, 2, 1, 2, 1, 1,
2, 1, 2, 1, 2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2, 1, 2, 2, 1,
1, 1, 2, 2, 3, 3, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 3, 2, 1, 2, 4,
1, 2, 2, 2, 1, 1, 4, 1, 1, 2, 1, 1, 3, 2, 1, 4, 1, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 1, 2, 1, 1, 1, 2, 2, 2, 1, 2, 1, 1, 2, 3,
2, 4, 1, 3, 2, 2, 1, 1, 1, 3, 3, 2, 3, 2, 3, 3, 2, 1, 2, 3, 3,
3, 1, 1, 1, 3, 3, 2, 2, 3, 1, 3, 3, 1, 2, 2, 3, 1, 3, 2, 2, 3,
3, 1, 2, 3, 1, 3, 3, 2, 3, 2, 3, 3, 2, 1, 2, 4, 3, 2, 3, 2, 2,
1, 4, 1, 3, 3, 1, 3, 3, 2, 3, 4, 1, 3, 3, 3, 2, 2, 3, 3, 3, 2,
2, 3, 1, 2, 1, 1, 1, 2, 2, 3, 3, 3, 1, 3, 2, 3, 2, 1, 4, 2, 2,
2, 1, 1, 1, 4, 1, 4, 4, 2, 3, 3, 2, 3, 2, 4, 3, 4, 4, 1, 1, 4,
4, 3, 2, 3, 1, 3, 1, 1, 2, 2, 2, 1, 3, 2, 2, 2, 4, 1, 4, 1, 1,
4, 3, 1, 3, 3, 4, 3, 2, 1, 2, 4, 3, 4, 4, 2, 1, 1, 4, 1, 3, 3,
1, 1, 3, 2, 4, 4, 1, 3, 3, 3, 2, 4, 3, 4, 1, 2, 2, 2, 1, 1, 1,
1, 1, 2, 2, 1, 2, 3, 1, 1, 2, 3, 2, 1, 3, 1, 2, 1, 1, 1, 1, 4,
1, 4, 1, 2, 3, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 3, 1, 2, 2, 1,
3, 1, 1, 2, 3, 1, 1, 2, 2, 3, 3, 4, 1, 2, 1, 1, 1, 1, 1, 4, 1,
3, 3, 2, 1, 2, 1, 3, 4, 3, 1, 3, 1, 4, 1, 3, 1, 1, 1, 3, 2, 1,
2, 1, 2, 2, 1, 2, 3, 1, 3, 1, 2, 2, 1, 1, 3, 1, 1, 1, 2, 2, 2,
1, 2, 1, 3, 2, 1, 2, 1, 1, 2, 2, 1, 1, 1, 1, 4, 4, 1, 4, 2, 2,
1, 1, 1, 2, 1, 1, 4, 1, 1, 1, 1, 3, 1, 2, 3, 1, 1, 1, 1, 2, 3,
3, 1, 2, 1, 3, 2, 4, 1, 2, 1, 1, 3, 1, 1, 4, 1, 3, 3, 2, 1, 2,
1, 3, 4, 1, 1, 1, 1, 2, 1, 1, 3, 1, 1, 3, 2, 1, 1, 1, 4, 4, 2,
2, 4, 1, 3, 3, 2, 2, 2, 1, 3, 1, 1, 1, 2, 2, 1, 1, 2, 1, 3, 2
), .Dim = c(100L, 5L))


ncat = function(R) {
stopifnot(is.matrix(R))
max(apply(R, 2, function(x) length(unique(x[!is.na(x)]))))
}


n.item <- ncol(R); ncat <- ncat(R)

# Do START and FIXED work?

model <- paste("F1 = 1-",n.item,
"\nCONSTRAIN = (1-",n.item,",d1)",
paste(",(1-",n.item,",d",2:(ncat-1),")",sep="", collapse=""),
"\nSTART = (1, c, 0.0)",
"\nFIXED = (1, c)",
sep="")

fit.grsm3_ <- mirt(R, mirt.model(model), itemtype=rep("grsm3", n.item))
coef2mat = function(cf) {
    stopifnot(is.list(cf))
    do.call(rbind, cf[1:(length(cf)-1)])
}

coef2mat(coef(fit.grsm3_))

Check if the c for first item is 0...

Ability estimation

From Okan Bulut:

"I'm wondering if I can use "fscores" function to estimate abilities by providing item parameters and response data. I don't want to estimate item parameters. I will use real item parameters and simulated response data to estimate abilities. I know that "fscores" function uses mirt class objects. Can I manually create input for this function? Also, I have another question. Does EAP and MAP take the correlations among dimensions into account when estimating abilities?"

fscores() 2-dim inputs fail

Hello Phil,

When working with a unidimensional model, if I provided a response pattern for only a few of the items and set the rest to NA, fscores() still returned a result.

When I tried the same thing with a 2-dimensional model, it throws an error.

Here is an example to demonstrate what is going wrong:

data(LSAT7)
data_LSAT7=expand.table(LSAT7)



modL <- mirt(data_LSAT7, 1)



> fscores(modL, response.pattern = c(1,1,1,1,NA), method="EAP")
     Item.1 Item.2 Item.3 Item.4 Item.5       F1     SE_F1
[1,]      1      1      1      1     NA 0.683704 0.8074962



> fscores(modL, response.pattern = c(1,NA,NA,NA,NA), method="EAP")
     Item.1 Item.2 Item.3 Item.4 Item.5        F1     SE_F1
[1,]      1     NA     NA     NA     NA 0.1491945 0.9488469

As one can see, as long as I provide at least one response I get a score.

Whereas when I try to do the same thing with multidimensional models, it doesn't work and throws an error.

For a 2-dimensional model:

modL2_noRot = mirt(data_LSAT7, 2, rotate = "none")


fscores(modL2_noRot, response.pattern = c(1,1,1,1,NA), method="ML")
Error in `rownames<-`(`*tmp*`, value = c("Item.1", "Item.2", "Item.3",  : 
  length of 'dimnames' [1] not equal to array extent

The same happens for 3-dimensional models:

modL3_noRot = mirt(data_LSAT7, 3, rotate = "none")


fscores(modL3_noRot, response.pattern = c(1,1,1,1,NA), method="ML")
Error in `rownames<-`(`*tmp*`, value = c("Item.1", "Item.2", "Item.3",  : 
  length of 'dimnames' [1] not equal to array extent

Is there any way to fix this error?

Thanking you,
Irshad

mirt bugs

Hello there

I've been using the mirt package (version 0.4.2). It's been really useful and seems to work very well, but I have spotted a couple of bugs (I think) that I thought I ought to report in case you're interested.

I think the “read.mirt” function divides the item difficulty parameters by “D” when it doesn't need to.

Under the initial item parameterization (as written in the 1-4PL section on page 29 of the manual), D is already factored out of the location parameters, and so the bit in the function read.mirt that divides by D

abc[2] <- -abc[2]/(abc[1] * D)

isn’t necessary.

I'm not sure the item information function works quite as it should. I fitted a unidimensional model with some 3PL items and found that some of the item information functions generated by “extract.item” and “iteminfo” looked very odd. Specifically, they didn't peak at the item location and in fact seemed to get larger and larger with increasing ability. I'm happy to send over my data and code if you're interested in investigating this.
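For anyone wanting to look into this, a minimal sketch of how a single item's information curve can be inspected (the data and model here are illustrative, not Tom's):

library(mirt)
dat <- expand.table(LSAT7)
mod <- mirt(dat, 1, itemtype = '3PL')
it <- extract.item(mod, 1)                      # pull out one fitted item
Theta <- matrix(seq(-4, 4, length.out = 200))
plot(Theta, iteminfo(it, Theta), type = 'l', xlab = 'theta', ylab = 'Item information')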

Best wishes

Tom

Tom Benton
Principal Research Officer
Assessment Research and Development Division

Cambridge Assessment
1 Regent Street, Cambridge CB1 2EU
Telephone: +44 (0) 1223 558706

www.cambridgeassessment.org.uk

Cambridge Assessment is the brand name of the University of Cambridge Local Examinations Syndicate, a department of the University of Cambridge. Cambridge Assessment is a not-for-profit organisation.

fscores(): response pattern containing NA causes: Error in tabdata[i, ] : subscript out of bounds

Could you please let me know if I am doing something wrong. Thanks in advance.

Executing following:

data <- expand.table(LSAT7)
mod <- mirt(data, 1)   # model fit omitted from the original report; added here for completeness
fscores(mod, response.vector = c(0,NA,0,0,0))

causes an error:

Error in tabdata[i, ] : subscript out of bounds

p.s. I tried fscores() with different method parameters and all of them produced the same error, with the exception of EAPsum (although that method is probably not using the response argument).

LSAT7 does not contain NA responses, but the same error occurs with data that does, e.g.:

fscores(mod, method='MAP') # ability estimates

Method:  MAP

Empirical Reliability:
    F1 
0.0193 
      V1 V2 V3   Freq            F1     SE_F1
 [1,]  0  0  0    222 -1.396151e+00 0.7691358
 [2,]  0  0  1    268 -1.090677e+00 0.7731981
 [3,]  0  0 NA    219 -1.215985e+00 0.7853614
 [4,]  0  1  0    263 -9.076819e-01 0.7803819
 [5,]  0  1  1    378 -5.867751e-01 0.8001421
 [6,]  0  1 NA    379 -6.998354e-01 0.8076445
 [7,]  0 NA  0     80 -1.091278e+00 0.8120889
 [8,]  0 NA  1    176 -7.463413e-01 0.8269984
 [9,]  0 NA NA    928 -8.744946e-01 0.8376759
...

Warning messages in Windows during install but maybe harmful

Hello, Phil.

I'm reporting some warning messages that appeared on Windows while installing the dev version; I'm not sure whether they are harmful. Cheers!

Best regards,
Seongho Bae

> try(library('devtools'), silent = T)
> try(install_github('philchalmers/mirt'), silent = T)
Downloading github repo philchalmers/mirt@master
Installing mirt
"C:/PROGRA~1/RRO/R-32~1.0/bin/x64/R" --vanilla CMD INSTALL  \
  "C:/Users/Seongho/AppData/Local/Temp/Rtmpoxp5M1/devtoolsd2546ca626ca/philchalmers-mirt-dbe2a10"  \
  --library="C:/Users/Seongho/Documents/R/win-library/3.2" --install-tests 

* installing *source* package 'mirt' ...
** libs
g++ -m64 -I"C:/PROGRA~1/RRO/R-32~1.0/include" -DNDEBUG    -I"C:/Users/Seongho/Documents/R/win-library/3.2/Rcpp/include" -I"C:/Users/Seongho/Documents/R/win-library/3.2/RcppArmadillo/include" -I"c:/applications/extsoft/include"     -O2 -Wall  -mtune=core2 -c Estep.cpp -o Estep.o
g++ -m64 -I"C:/PROGRA~1/RRO/R-32~1.0/include" -DNDEBUG    -I"C:/Users/Seongho/Documents/R/win-library/3.2/Rcpp/include" -I"C:/Users/Seongho/Documents/R/win-library/3.2/RcppArmadillo/include" -I"c:/applications/extsoft/include"     -O2 -Wall  -mtune=core2 -c Misc.cpp -o Misc.o
g++ -m64 -I"C:/PROGRA~1/RRO/R-32~1.0/include" -DNDEBUG    -I"C:/Users/Seongho/Documents/R/win-library/3.2/Rcpp/include" -I"C:/Users/Seongho/Documents/R/win-library/3.2/RcppArmadillo/include" -I"c:/applications/extsoft/include"     -O2 -Wall  -mtune=core2 -c dpars.cpp -o dpars.o
dpars.cpp: In function 'void _dgroup(std::vector<double, std::allocator<double> >&, Rcpp::NumericMatrix&, const NumericMatrix&, const mat&, const mat&, const vec&, const vec&, const bool&)':
dpars.cpp:271:34: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
dpars.cpp:272:38: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
dpars.cpp: In function 'void _dgroup_pre(std::vector<double, std::allocator<double> >&, Rcpp::NumericMatrix&, Rcpp::S4&, const NumericMatrix&, const bool&)':
dpars.cpp:296:38: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
g++ -m64 -I"C:/PROGRA~1/RRO/R-32~1.0/include" -DNDEBUG    -I"C:/Users/Seongho/Documents/R/win-library/3.2/Rcpp/include" -I"C:/Users/Seongho/Documents/R/win-library/3.2/RcppArmadillo/include" -I"c:/applications/extsoft/include"     -O2 -Wall  -mtune=core2 -c traceLinePts.cpp -o traceLinePts.o
g++ -m64 -shared -s -static-libgcc -o mirt.dll tmp.def Estep.o Misc.o dpars.o traceLinePts.o -LC:/PROGRA~1/RRO/R-32~1.0/bin/x64 -lRlapack -LC:/PROGRA~1/RRO/R-32~1.0/bin/x64 -lRblas -lgfortran -Lc:/applications/extsoft/lib/x64 -Lc:/applications/extsoft/lib -LC:/PROGRA~1/RRO/R-32~1.0/bin/x64 -lR
installing to C:/Users/Seongho/Documents/R/win-library/3.2/mirt/libs/x64
** R
** data
*** moving datasets to lazyload DB
** inst
** tests
** byte-compile and prepare package for lazy loading
Creating a generic function for 'print' from package 'base' in package 'mirt'
Creating a generic function for 'anova' from package 'stats' in package 'mirt'
Creating a generic function for 'residuals' from package 'stats' in package 'mirt'
** help
*** installing help indices
  converting help for package 'mirt'
    finding HTML links ... done
    Bock1997                                html  
    DIF                                     html  
    DTF                                     html  
    DiscreteClass-class                     html  
    LSAT6                                   html  
    LSAT7                                   html  
    M2                                      html  
    MDIFF                                   html  
    MDISC                                   html  
    MixedClass-class                        html  
    MultipleGroupClass-class                html  
    PLCI.mirt                               html  
    SAT12                                   html  
    Science                                 html  
    SingleGroupClass-class                  html  
    anova-method                            html  
    averageMI                               html  
    bfactor                                 html  
    boot.mirt                               html  
    coef-method                             html  
    createItem                              html  
    deAyala                                 html  
    expand.table                            html  
    expected.item                           html  
    expected.test                           html  
    extract.group                           html  
    extract.item                            html  
    fixef                                   html  
    fscores                                 html  
    imputeMissing                           html  
    itemGAM                                 html  
    itemfit                                 html  
    iteminfo                                html  
    itemplot                                html  
    key2binary                              html  
    marginal_rxx                            html  
    mdirt                                   html  
    mirt-package                            html  
    mirt                                    html  
    mirt.model                              html  
    mirtCluster                             html  
    mixedmirt                               html  
    mod2values                              html  
    multipleGroup                           html  
    personfit                               html  
    plot-method                             html  
    print-method                            html  
    probtrace                               html  
    randef                                  html  
    residuals-method                        html  
    show-method                             html  
    simdata                                 html  
    summary-method                          html  
    testinfo                                html  
    wald                                    html  
** building package indices
** installing vignettes
** testing if installed package can be loaded
* DONE (mirt)
> sessionInfo()
R version 3.2.0 (2015-04-16)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=Korean_Korea.949  LC_CTYPE=Korean_Korea.949    LC_MONETARY=Korean_Korea.949
[4] LC_NUMERIC=C                 LC_TIME=Korean_Korea.949    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] devtools_1.7.0

loaded via a namespace (and not attached):
[1] httr_0.6.1     magrittr_1.5   tools_3.2.0    RCurl_1.95-4.6 stringi_0.4-1  knitr_1.10    
[7] stringr_1.0.0  bitops_1.0-6  
> 

SE and GPCM

Hi Phil,

I tried to post this to the Google group but somehow my posts don't get posted :(

I have some trouble estimating SEs in a GPCM, where estimation of the information matrix fails. I have no idea why; I don't think the model is particularly uncommon, and I use a pars table. Can you have a look at this?

Thanks a lot, Felix

dev installation issue, failing on tex dependency

Running R 2.15.2 on Mac OS X 10.7.5, encountered the following error while trying to install development version:

install_github('mirt','philchalmers')
Installing github repo(s) mirt/master from philchalmers
Installing mirt.zip from https://api.github.com/repos/philchalmers/mirt/zipball/master
Installing mirt
/Library/Frameworks/R.framework/Resources/bin/R --vanilla CMD build '/private/var/folders/pn/2z45qwvs7257v8v31w62vq7r0000gn/T/RtmpvuaSlu/philchalmers-mirt-ce9cfae' --no-manual
--no-resave-data

  • checking for file '/private/var/folders/pn/2z45qwvs7257v8v31w62vq7r0000gn/T/RtmpvuaSlu/philchalmers-mirt-ce9cfae/DESCRIPTION' ... OK
  • preparing 'mirt':
  • checking DESCRIPTION meta-information ... OK
  • cleaning src
  • installing the package to re-build vignettes
  • creating vignettes ... ERROR
    Error in texi2dvi(file = file, pdf = TRUE, clean = clean, quiet = quiet, :
    Running 'texi2dvi' on 'mirt-presentation-2012.tex' failed.
    Calls: -> texi2pdf -> texi2dvi

Execution halted

Error: Command failed (1)

Error in itemfit()

itemfit() returns "Fehler in itemfit(fit.mirt) : Ersetzung hat Länge 0", i.e. "Error in itemfit(fit.mirt) : Replacement has length 0".

LD and Cramer's V

From Paula Elosua:

"First at all, let me congratulate you for your excellent work doing mirt.

I’m professor of psychometrics in Spain, and I’m interested in the statistics that mirt uses for assessing LD. You mention Chen and Thissen, and Cramer’s V…but could you please give me more precise references about those, - theoretical and practical??

Thanks is advance"
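For reference, a short sketch of where these statistics surface in mirt: the residuals() method with type = 'LD' returns the pairwise local dependence matrix, with the signed Chen & Thissen statistics in one triangle and Cramer's V values in the other (see ?residuals-method for the exact layout in your version):

library(mirt)
dat <- expand.table(LSAT7)
mod <- mirt(dat, 1)
residuals(mod, type = 'LD')   # pairwise LD / Cramer's V matrix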

itemtype = 'partial credit'

From D. Alian:

I want to fit a 3-factor partial credit model using "confmirt". What "itemtype" is appropriate for a PCM? Is it "Rasch"? When I use "Rasch" with my data I get an error message, even if I fix the number of factors to 1.

u2.confmirt<-confmirt(data,1,itemtype="Rasch")

Error in LoadPars(itemtype = itemtype, itemloc = itemloc, lambdas = lambdas,  :
  attribute 'names' [4] is not the same length as the vector [3] (...my translation...)

Does that mean that I specified something incorrectly?
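For context, a hedged sketch of how a partial credit model is typically fit in more recent mirt versions, where itemtype = 'Rasch' applied to polytomous items constrains the slopes to 1 and yields a PCM parameterization (confmirt() has since been folded into mirt()):

library(mirt)
pcm <- mirt(Science, 1, itemtype = 'Rasch')   # Science is the package's built-in polytomous example data
coef(pcm, simplify = TRUE)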

coef(..., IRTpars = TRUE) doesn't work in multipleGroup

Hi Phil,

apparently coef(..., IRTpars = TRUE) doesn't work with multipleGroup objects:

set.seed(12345)
a <- matrix(abs(rnorm(15,1,.3)), ncol=1)
d <- matrix(rnorm(15,0,.7),ncol=1)
d <- cbind(d, d-1, d-2)
itemtype <- rep('graded', nrow(a))
N <- 1000
dataset1 <- simdata(a, d, N, itemtype)
dataset2 <- simdata(a, d, N, itemtype, mu = .1, sigma = matrix(1.5))
dat <- rbind(dataset1, dataset2)
group <- c(rep('D1', N), rep('D2', N))
model <- mirt.model('F1 = 1-15')

mod_configural <- multipleGroup(dat, model, group = group)

coef(mod_configural, IRTpars = FALSE)
coef(mod_configural, IRTpars = TRUE)

Best wishes

RMSEA for missing data

From Wen-Ta Tseng, PhD:

"I've got a question: Why couldn't I obtain G*2 value and RMSEA value as in your MIRT package, although I used a no-missing data set?

Your answer and guidance will be highly appreciated! Below is the outcome message from R 2.15.1

> SCORE=read.csv(file='D:\\Attitude.csv', header = TRUE, sep = ",", quote="")
> (mod1 <- mirt(SCORE, 1))

Call:
mirt(data = SCORE, nfact = 1)

Full-information factor analysis with 1 factor
Converged in 11 iterations using 40 quadrature points.
Log-likelihood = -14045.37
AIC = 28234.74
BIC = 28578.64
G^2 = NA, df = 725, p = NA, RMSEA = NA

Best,"

Processing of plausible.draws in imputeMissing()

Hi, Phil.

I want to impute values for the missing responses in my variables using multiple imputations.

So I tried score <- fscores(mod1, method = 'MAP', plausible.draws = 100, MI = 100) followed by imputeMissing(mod1, score), but it failed.

What went wrong?

Best regards,
Seongho Bae

> dat <- expand.table(LSAT7)
> (original <- mirt(dat, 1))
Iteration: 28, Log-Lik: -2658.805, Max-Change: 0.00010
Call:
mirt(data = dat, model = 1)

Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 28 EM iterations.
mirt version: 1.10.2 
M-step optimizer: BFGS 
EM acceleration: Ramsay
Number of rectangular quadrature: 61

Log-likelihood = -2658.805
AIC = 5337.61; AICc = 5337.833
BIC = 5386.688; SABIC = 5354.927
G2 (21) = 31.7, p = 0.0628
RMSEA = 0.023, CFI = 0.939, TLI = 0.924
> NAperson <- sample(1:nrow(dat), 20, replace = TRUE)
> NAitem <- sample(1:ncol(dat), 20, replace = TRUE)
> for(i in 1:20)
+   dat[NAperson[i], NAitem[i]] <- NA
> (mod <- mirt(dat, 1))
Iteration: 33, Log-Lik: -2648.603, Max-Change: 0.00010
Call:
mirt(data = dat, model = 1)

Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 33 EM iterations.
mirt version: 1.10.2 
M-step optimizer: BFGS 
EM acceleration: Ramsay
Number of rectangular quadrature: 61

Log-likelihood = -2648.603
AIC = 5317.206; AICc = 5317.429
BIC = 5366.284; SABIC = 5334.523
> scores <- fscores(mod, method = 'MAP', full.scores = TRUE, plausible.draws = 100, MI = 100)
> TESTfulldata <- imputeMissing(mod, scores)
> (TESTfullmod <- mirt(TESTfulldata, 1))
Error: data argument is required
> 
> require(psych)
> describe(data.frame(TESTfulldata))
          vars    n mean   sd median trimmed mad min max range  skew kurtosis   se
Item.1       1 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2       2 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3       3 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4       4 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5       5 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.1     6 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.1     7 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.1     8 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.1     9 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.1    10 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.2    11 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.2    12 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.2    13 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.2    14 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.2    15 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.3    16 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.3    17 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.3    18 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.3    19 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.3    20 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.4    21 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.4    22 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.4    23 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.4    24 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.4    25 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.5    26 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.5    27 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.5    28 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.5    29 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.5    30 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.6    31 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.6    32 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.6    33 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.6    34 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.6    35 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.7    36 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.7    37 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.7    38 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.7    39 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.7    40 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.8    41 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.8    42 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.8    43 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.8    44 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.8    45 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.9    46 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.9    47 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.9    48 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.9    49 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.9    50 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.10   51 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.10   52 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.10   53 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.10   54 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.10   55 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.11   56 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.11   57 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.11   58 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.11   59 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.11   60 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.12   61 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.12   62 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.12   63 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.12   64 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.12   65 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.13   66 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.13   67 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.13   68 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.13   69 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.13   70 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.14   71 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.14   72 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.14   73 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.14   74 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.14   75 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.15   76 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.15   77 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.15   78 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.15   79 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.15   80 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.16   81 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.16   82 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.16   83 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.16   84 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.16   85 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.17   86 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.17   87 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.17   88 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.17   89 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.17   90 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.18   91 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.18   92 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.18   93 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.18   94 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.18   95 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.19   96 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.19   97 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.19   98 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.19   99 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.19  100 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.20  101 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.20  102 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.20  103 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.20  104 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.20  105 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.21  106 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.21  107 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.21  108 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.21  109 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.21  110 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.22  111 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.22  112 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.22  113 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.22  114 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.22  115 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.23  116 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.23  117 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.23  118 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.23  119 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.23  120 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.24  121 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.24  122 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.24  123 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.24  124 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.24  125 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.25  126 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.25  127 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.25  128 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.25  129 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.25  130 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.26  131 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.26  132 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.26  133 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.26  134 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.26  135 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.27  136 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.27  137 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.27  138 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.27  139 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.27  140 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.28  141 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.28  142 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.28  143 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.28  144 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.28  145 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.29  146 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.29  147 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.29  148 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.29  149 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.29  150 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.30  151 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.30  152 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.30  153 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.30  154 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.30  155 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.31  156 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.31  157 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.31  158 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.31  159 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.31  160 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.32  161 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.32  162 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.32  163 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.32  164 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.32  165 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.33  166 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.33  167 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.33  168 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.33  169 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.33  170 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.34  171 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.34  172 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.34  173 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.34  174 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.34  175 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.35  176 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.35  177 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.35  178 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.35  179 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.35  180 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.36  181 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.36  182 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.36  183 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.36  184 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.36  185 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.37  186 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.37  187 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.37  188 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.37  189 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.37  190 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.38  191 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.38  192 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.38  193 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.38  194 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.38  195 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.39  196 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.39  197 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.39  198 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.39  199 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.39  200 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.40  201 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.40  202 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.40  203 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.40  204 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.40  205 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.41  206 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.41  207 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.41  208 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.41  209 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.41  210 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.42  211 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.42  212 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.42  213 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.42  214 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.42  215 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.43  216 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.43  217 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.43  218 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.43  219 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.43  220 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.44  221 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.44  222 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.44  223 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.44  224 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.44  225 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.45  226 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.45  227 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.45  228 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.45  229 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.45  230 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.46  231 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.46  232 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.46  233 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.46  234 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.46  235 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.47  236 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.47  237 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.47  238 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.47  239 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.47  240 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.48  241 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.48  242 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.48  243 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.48  244 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.48  245 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.49  246 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.49  247 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.49  248 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.49  249 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.49  250 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.50  251 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.50  252 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.50  253 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.50  254 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.50  255 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.51  256 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.51  257 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.51  258 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.51  259 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.51  260 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.52  261 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.52  262 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.52  263 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.52  264 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.52  265 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.53  266 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.53  267 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.53  268 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.53  269 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.53  270 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.54  271 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.54  272 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.54  273 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.54  274 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.54  275 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.55  276 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.55  277 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.55  278 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.55  279 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.55  280 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.56  281 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.56  282 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.56  283 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.56  284 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.56  285 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.57  286 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.57  287 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.57  288 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.57  289 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.57  290 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.58  291 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.58  292 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.58  293 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.58  294 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.58  295 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.59  296 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.59  297 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.59  298 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.59  299 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.59  300 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.60  301 1000 0.82 0.38      1    0.91   0   0   1     1 -1.71     0.92 0.01
Item.2.60  302 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.60  303 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.60  304 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.60  305 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.61  306 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.61  307 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.61  308 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.61  309 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.61  310 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.62  311 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.62  312 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.62  313 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.62  314 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.62  315 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.63  316 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.63  317 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.63  318 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.63  319 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.63  320 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.64  321 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.64  322 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.64  323 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.64  324 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.64  325 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.65  326 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.65  327 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.65  328 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.65  329 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.65  330 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.66  331 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.66  332 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.66  333 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.66  334 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.66  335 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.67  336 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.67  337 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.67  338 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.67  339 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.67  340 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.68  341 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.68  342 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.68  343 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.68  344 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.68  345 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.69  346 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.69  347 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.69  348 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.69  349 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.69  350 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.70  351 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.70  352 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.70  353 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.70  354 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.70  355 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.71  356 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.71  357 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.71  358 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.71  359 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.71  360 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.72  361 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.72  362 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.72  363 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.72  364 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.72  365 1000 0.84 0.36      1    0.93   0   0   1     1 -1.88     1.55 0.01
Item.1.73  366 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.73  367 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.73  368 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.73  369 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.73  370 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.74  371 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.74  372 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.74  373 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.74  374 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.74  375 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.75  376 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.75  377 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.75  378 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.75  379 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.75  380 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.76  381 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.76  382 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.76  383 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.76  384 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.76  385 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.77  386 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.77  387 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.77  388 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.77  389 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.77  390 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.78  391 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.78  392 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.78  393 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.78  394 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.78  395 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.79  396 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.79  397 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.79  398 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.79  399 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.79  400 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.80  401 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.80  402 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.80  403 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.80  404 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.80  405 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.81  406 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.81  407 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.81  408 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.81  409 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.81  410 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.82  411 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.82  412 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.82  413 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.82  414 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.82  415 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.83  416 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.83  417 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.83  418 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.83  419 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.83  420 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.84  421 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.84  422 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.84  423 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.84  424 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.84  425 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.85  426 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.85  427 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.85  428 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.85  429 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.85  430 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.86  431 1000 0.82 0.38      1    0.91   0   0   1     1 -1.71     0.92 0.01
Item.2.86  432 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.86  433 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.86  434 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.86  435 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.87  436 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.87  437 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.87  438 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.87  439 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.87  440 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.88  441 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.88  442 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.88  443 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.88  444 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.88  445 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.89  446 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.89  447 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.89  448 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.89  449 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.89  450 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.90  451 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.90  452 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.90  453 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.90  454 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.90  455 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.91  456 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.91  457 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.91  458 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.91  459 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.91  460 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.92  461 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.92  462 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.92  463 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.32 0.01
Item.4.92  464 1000 0.61 0.49      1    0.63   0   0   1     1 -0.43    -1.81 0.02
Item.5.92  465 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.93  466 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.93  467 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.93  468 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.93  469 1000 0.61 0.49      1    0.63   0   0   1     1 -0.44    -1.81 0.02
Item.5.93  470 1000 0.84 0.36      1    0.93   0   0   1     1 -1.89     1.59 0.01
Item.1.94  471 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.94  472 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.94  473 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.94  474 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.94  475 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.95  476 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.95  477 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.95  478 1000 0.77 0.42      1    0.84   0   0   1     1 -1.29    -0.34 0.01
Item.4.95  479 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.95  480 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.96  481 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.96  482 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.96  483 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.39 0.01
Item.4.96  484 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.96  485 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.97  486 1000 0.83 0.38      1    0.91   0   0   1     1 -1.72     0.95 0.01
Item.2.97  487 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.97  488 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.97  489 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.97  490 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.98  491 1000 0.83 0.38      1    0.91   0   0   1     1 -1.73     0.98 0.01
Item.2.98  492 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.56 0.02
Item.3.98  493 1000 0.77 0.42      1    0.84   0   0   1     1 -1.28    -0.36 0.01
Item.4.98  494 1000 0.61 0.49      1    0.64   0   0   1     1 -0.45    -1.80 0.02
Item.5.98  495 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
Item.1.99  496 1000 0.83 0.38      1    0.91   0   0   1     1 -1.74     1.01 0.01
Item.2.99  497 1000 0.66 0.47      1    0.70   0   0   1     1 -0.67    -1.55 0.01
Item.3.99  498 1000 0.77 0.42      1    0.84   0   0   1     1 -1.27    -0.38 0.01
Item.4.99  499 1000 0.61 0.49      1    0.64   0   0   1     1 -0.44    -1.81 0.02
Item.5.99  500 1000 0.84 0.36      1    0.93   0   0   1     1 -1.90     1.63 0.01
>
> sessionInfo()
R version 3.1.2 (2014-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=ko_KR.UTF-8       LC_NUMERIC=C               LC_TIME=ko_KR.UTF-8       
 [4] LC_COLLATE=ko_KR.UTF-8     LC_MONETARY=ko_KR.UTF-8    LC_MESSAGES=ko_KR.UTF-8   
 [7] LC_PAPER=ko_KR.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
[10] LC_TELEPHONE=C             LC_MEASUREMENT=ko_KR.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
 [1] grid      parallel  stats4    stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] psych_1.5.6           semTools_0.4-9        lavaan_0.5-18         GPArotation_2014.11-1
 [5] TAM_1.9-0             CDM_4.2-12            mvtnorm_1.0-2         car_2.0-25           
 [9] rsm_2.7-2             pracma_1.8.6          psychometric_2.2      rmeta_2.16           
[13] metafor_1.9-7         Matrix_1.1-5          meta_4.3-0            lsr_0.5              
[17] multilevel_2.5        MASS_7.3-37           nlme_3.1-121          plyr_1.8.3           
[21] latticeExtra_0.6-26   RColorBrewer_1.1-2    bfa_0.3.1             gWidgets_0.0-54      
[25] RGtk2_2.20.31         rgenoud_5.7-12        rrcovNA_0.4-7         rrcov_1.3-8          
[29] robustbase_0.92-3     SQUAREM_2014.8-1      stringr_1.0.0         RCurl_1.95-4.6       
[33] bitops_1.0-6          mirt_1.10.2           lattice_0.20-29      

loaded via a namespace (and not attached):
 [1] DEoptimR_1.0-2            Rcpp_0.11.6               RcppArmadillo_0.5.200.1.0 SparseM_1.6              
 [5] WrightMap_1.1             cluster_2.0.1             coda_0.17-1               expm_0.99-1.1            
 [9] lme4_1.1-7                magrittr_1.5              mgcv_1.8-4                minqa_1.2.4              
[13] mnormt_1.5-3              msm_1.5                   nloptr_1.0.4              nnet_7.3-9               
[17] norm_1.0-9.5              pbivnorm_0.6.0            pbkrtest_0.4-2            pcaPP_1.9-60             
[21] polycor_0.7-8             quadprog_1.5-5            quantreg_5.11             sfsmisc_1.0-27           
[25] splines_3.1.2             stringi_0.4-1             survival_2.37-7           tensor_1.5               
[29] tools_3.1.2              
> 
> 

fscores() on new response patterns

From Adilson dos Angos:

"I was using the fscores function and I have a question: is it possible to
estimate the ability for new response patterns?"
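
Yes; fscores() accepts a response.pattern argument for scoring new data (the same usage appears in the cross-validation issue further down). A minimal sketch, assuming the new patterns contain the same items the model was calibrated on:

library(mirt)

dat <- expand.table(LSAT7)
mod <- mirt(dat, 1, verbose = FALSE)

# two hypothetical new response patterns over the same five items
new_pat <- rbind(c(1, 0, 1, 1, 0),
                 c(0, 0, 1, 0, 1))
colnames(new_pat) <- colnames(dat)   # column names must match the calibration items

fscores(mod, response.pattern = new_pat)   # EAP estimates (and SEs) for the new patterns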

multimirt

Multilevel structure function using lme4 or nlme via a multimirt extension

Suggested stats to include

Some statistics that users have contacted me about and that should be developed in the package. Feel free to add more by commenting.

  • EAP sum score conversions (Thissen et al., 1995)
  • The M2* global fit statistic, which Cai and Hansen (2012) show is superior to the M2 statistic currently available in the fitIndices() function in terms of Type I error control, power, and computational efficiency.
  • The S-X^2 statistic for item fit, given that it seems to outperform many other item fit indices (Kang & Chen, 2008; Orlando & Thissen, 2000, 2003) (see the sketch following this list).
  • Fit statistics based on the information matrix in Ranger and Kuhn (2012)
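
Several of these have since been exposed by the package itself; a minimal sketch, assuming a recent mirt version in which M2(), itemfit() (with S-X^2 as its default statistic), and fscores(method = 'EAPsum') are available:

library(mirt)

mod <- mirt(expand.table(LSAT7), 1, verbose = FALSE)

M2(mod)                                   # limited-information global fit statistics
itemfit(mod)                              # item fit; S-X^2 by default in current versions
fscores(mod, method = 'EAPsum',           # EAP-for-sum-score conversion table
        full.scores = FALSE)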

bug in read.mirt

hi phil,

i have problems using read.mirt() with a multiple groups model:

set.seed(12345)
a <- matrix(abs(rnorm(15,1,.3)), ncol=1)
d <- matrix(rnorm(15,0,.7),ncol=1)
itemtype <- rep('dich', nrow(a))
N <- 1000
dataset1 <- simdata(a, d, N, itemtype)
dataset2 <- simdata(a, d, N, itemtype, mu = .1, sigma = matrix(1.5))
dat <- rbind(dataset1, dataset2)
group <- c(rep('D1', N), rep('D2', N))
models <- mirt.model('F1 = 1-15')

mod_configural <- multipleGroup(dat, models, group = group) #completely separate analyses

read.mirt(mod_configural)

gives:

Error in read.mirt(mod_configural) :
vector: cannot create a vector of mode '2'.

(i.e., it cannot make a vector with mode = 2.)

best, felix

Error in checkAtAssignment("NULL", "nfact", "integer") : ‘nfact’ is not a slot in class “NULL”

p.s. Thanks so much for the great package (and nice documentation).

Would very much appreciate help with the following.

I get an error:

Error in checkAtAssignment("NULL", "nfact", "integer") : ‘nfact’ is not a slot in class “NULL”

When executing:

dat = structure(list(V1 = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), V2 = c(1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), V3 = c(1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L
), V4 = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, NA, 
NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 
NA, NA, NA, NA, NA), V5 = c(1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 
0L, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 0L, 0L, 
0L, NA, NA, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)), .Names = c("V1", "V2", 
"V3", "V4", "V5"), row.names = c(NA, 50L), class = "data.frame")

library(mirt)
mod1 <- mirt(dat, 1)
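
One plausible (unconfirmed) cause is that V1-V3 contain only a single observed response category, which cannot be modeled. A minimal defensive filter before fitting, offered as a hedged workaround rather than a fix for the cryptic error message itself:

library(mirt)

# keep only items with at least two observed response categories
n_cats <- sapply(dat, function(x) length(unique(na.omit(x))))
dat_ok <- dat[, n_cats > 1, drop = FALSE]

# only two usable items remain here, so a constrained model such as Rasch is safer
mod1 <- mirt(dat_ok, 1, itemtype = 'Rasch')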

Tests

mirt is one of the first R packages for IRT capable of handling seriously large data sets.

But to use it not only for experiments but also in "production environments", it should be not only fast but also reliable.

To achieve reliability, a good set of tests and testing data should be prepared. Such tests are useful not only for finding existing bugs, but also as regression tests (making sure that, after changes are made, previously working functionality still works as it did before).

I hope I will be able to prepare some high-level tests (tests that call only functions available to the user) this spring, and I hope this issue will help me remember that :)
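
A minimal sketch of such a high-level regression test, assuming the testthat package and a reference file stored once from a trusted run (the file path is hypothetical):

library(testthat)
library(mirt)

test_that("2PL fit on LSAT7 is reproducible", {
  mod <- mirt(expand.table(LSAT7), 1, itemtype = '2PL', verbose = FALSE)
  ref <- readRDS("tests/reference/lsat7_2pl_items.rds")   # stored reference parameters
  expect_equal(coef(mod, simplify = TRUE)$items, ref, tolerance = 1e-3)
})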

Can I increase MHRM iterations in boot.mirt?

Hello, Phil.

Can I increase MHRM iterations in boot.mirt?
I tried to pass technical arguments to the boot.mirt function, but it did not work.

Additionally, I want to know whether group:items expands to group + items + group:items within the random argument of mixedmirt. I cannot find any description of this in the manual.

entrepreneur.mixed <- mixedmirt(data = trainingset, covdata = person.info, model = entrepreneur.cfa.syntax, random = list(~1+coworker + employees + age + operation_date|industry+items+industry:items), itemtype = 'gpcm', technical = list(NCYCLES = 1e+5))

entrepreneur.mixed.boot <- boot.mirt(entrepreneur.mixed, R = 1000)
MHRM terminated after 2000 iterations.
Error in draw.thetas(theta0 = gtheta0[[g]], pars = pars[[g]], fulldata = Data$fulldata[[g]],  : 
  NAs are not allowed in subscripted assignments
additional information: warnings:
In log(eigen(sigma, symmetric = TRUE, only.values = TRUE)$values) :
  NaN was generated
Error in draw.thetas(theta0 = gtheta0[[g]], pars = pars[[g]], fulldata = Data$fulldata[[g]],  : 
  NAs are not allowed in subscripted assignments
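
A possible workaround, sketched here with a plain mirt() call rather than the full mixedmirt() model above: resample cases manually and refit with the desired technical settings, then summarize the replicate parameters yourself. This assumes the MHRM estimator and the NCYCLES setting from the original call.

library(mirt)

dat <- expand.table(LSAT7)
R <- 100                                    # number of bootstrap replicates
boot_pars <- vector("list", R)

for (r in seq_len(R)) {
  idx <- sample(nrow(dat), replace = TRUE)                  # nonparametric resample of cases
  fit <- mirt(dat[idx, ], 1, method = 'MHRM',
              technical = list(NCYCLES = 1e5), verbose = FALSE)
  boot_pars[[r]] <- mod2values(fit)$value                   # collect the parameter vector
}

boot_mat <- do.call(rbind, boot_pars)
apply(boot_mat, 2, sd)                      # bootstrap standard errors, one per parameter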

about grsm parametrization

Hello,

I am trying the grsm model, and I think its parameterization should be made clearer.

First, for an object from mirt( , itemtype="grsm"),
coef(obj, IRTpars=T) doesn't work.

When I saw that the results from coef() and coef( , IRTpars=T) were the same,
I thought the model was parameterized in the traditional IRT way.

But after trying several times, I found out it wasn't. So this is for anyone else who might get confused as I did...

Of course, transforming the coefficients is not so hard.
Anyway, do you know of any reference on the "rating scale graded response model"?

Most people use GRSM to mean the Generalized Rating Scale Model, which is a nested model of the GPCM;
I have tried searching and googling with no results so far...

Crossvalidation

Hi,

I've been trying to cross-validate an estimated model. I guess the respective functionality is implemented? However, I seem to stumble into an error:

# example data
dat <- expand.table(deAyala)

# generate two subsamples
set.seed(1529)
sel <- sample(nrow(dat), 10000, replace=F)
d1 <- dat[sel,]   # calibration sample
d2 <- dat[-sel,]  # validation sample

# estimate model based on subsample d1
fit <- mirt(d1, model= mirt.model('G = 1-5'), itemtype="2PL")

# estimate thetas for subsample d2
fs <- fscores(fit, full.scores=T, scores.only=T, response.pattern=d2)

Note that the argument scores.only is ignored when specifying a response.pattern.

The following works fine:

# item fit for subsample 1 (calibration sample)
itemfit(fit)
residuals(fit, type="Q3")

However, this won't work:

# item fit for subsample 2 (validation sample)
itemfit(fit, Theta=fs[,"F1"])
residuals(fit, type="Q3", Theta=fs[,"F1"])

Actually, residuals() produces a series of warnings. Did I misunderstand the function, or is there a glitch?

Best, Timo

Weird Model Fit Indices in the M2() function.

Hello Phil,

I've got weird model fit indices from the M2() function with the 1.10 release when trying to calculate confirmatory models.

With version 1.9 the fit indices looked reasonable (screenshot omitted), but with version 1.10 I get:

            M2   df p     RMSEA   RMSEA_5  RMSEA_95       TLI CFI    SRMSR
stats 20722.69 1059 0 0.2677532 0.2640676 0.2704207 -1.205233   0 0.190344

And my syntax input was like this:

entrepreneur.cfa.syntax <- mirt.model('
                                        readiness = 1-5
                                        C = 6-11
                                        O = 12-15
                                        Grit_consistency = 16-19
                                        Grit_perservance = 20-25
                                        EA = 26-41
                                        performance = 42-48
                                        CMV = 1-48

                                        COV = readiness*C*O*Grit_consistency*Grit_perservance*EA*performance
                                        ')

Was my syntax wrong? Were there any changes from 1.9 to 1.10 in the confirmatory model syntax?

Best regards,
Seongho Bae

Error messages occurred during install when using the GitHub repo.

Please check these error messages (screenshot of the error output omitted).

session_info()
Session info--------------------------------------------------------------------------------------------
setting value
version R version 3.1.2 (2014-10-31)
system x86_64, darwin13.4.0
ui RStudio (0.98.1102)
language (EN)
collate ko_KR.UTF-8
tz Asia/Seoul

Packages------------------------------------------------------------------------------------------------
package * version date source
bfa * 0.3.1 2014-02-11 CRAN (R 3.1.0)
bitops 1.0.6 2013-08-17 CRAN (R 3.1.0)
car * 2.0.22 2014-11-18 CRAN (R 3.1.2)
CDM * 4.0 2014-11-22 CRAN (R 3.1.2)
coda 0.16.1 2012-11-06 CRAN (R 3.1.0)
DEoptimR 1.0.2 2014-10-19 CRAN (R 3.1.1)
devtools * 1.6.1 2014-10-07 CRAN (R 3.1.1)
evaluate 0.5.5 2014-04-29 CRAN (R 3.1.0)
foreign * 0.8.61 2014-03-28 CRAN (R 3.1.0)
formatR 1.0 2014-08-25 CRAN (R 3.1.1)
GPArotation * 2014.11.1 2014-11-25 CRAN (R 3.1.2)
httr 0.5 2014-09-02 CRAN (R 3.1.1)
knitr 1.8 2014-11-11 CRAN (R 3.1.2)
lattice * 0.20.29 2014-04-04 CRAN (R 3.1.2)
latticeExtra * 0.6.26 2013-08-15 CRAN (R 3.1.0)
lavaan * 0.5.18.788 2015-03-01 local
lsr * 0.3.2 2014-01-31 CRAN (R 3.1.0)
MASS * 7.3.35 2014-09-30 CRAN (R 3.1.2)
Matrix 1.1.4 2014-06-15 CRAN (R 3.1.2)
meta * 4.0.1 2014-11-19 CRAN (R 3.1.2)
metafor * 1.9.5 2014-11-24 CRAN (R 3.1.2)
mirt 1.8.5 2015-03-11 Github (9f2eddb)
mnormt 1.5.1 2014-06-30 CRAN (R 3.1.1)
multilevel * 2.5 2013-04-10 CRAN (R 3.1.0)
mvtnorm * 1.0.1 2014-11-13 CRAN (R 3.1.2)
nlme * 3.1.118 2014-10-07 CRAN (R 3.1.2)
nnet 7.3.8 2014-03-28 CRAN (R 3.1.2)
norm 1.0.9.5 2013-02-28 CRAN (R 3.1.0)
pbivnorm 0.5.1 2012-10-31 CRAN (R 3.1.0)
pcaPP * 1.9.60 2014-10-22 CRAN (R 3.1.2)
plyr * 1.8.1 2014-02-26 CRAN (R 3.1.0)
polycor 0.7.8 2010-04-03 CRAN (R 3.1.0)
pracma * 1.7.9 2014-11-15 CRAN (R 3.1.2)
psych * 1.4.8.11 2014-08-12 CRAN (R 3.1.1)
psychometric * 2.2 2010-08-08 CRAN (R 3.1.0)
quadprog 1.5.5 2013-04-17 CRAN (R 3.1.0)
RColorBrewer * 1.0.5 2011-06-17 CRAN (R 3.1.0)
Rcpp 0.11.3 2014-09-29 CRAN (R 3.1.1)
RcppArmadillo 0.4.550.1.0 2014-11-28 CRAN (R 3.1.2)
RCurl 1.95.4.4 2014-11-29 CRAN (R 3.1.2)
rmeta * 2.16 2012-10-29 CRAN (R 3.1.0)
robustbase * 0.92.2 2014-11-22 CRAN (R 3.1.2)
rrcov * 1.3.4 2013-08-26 CRAN (R 3.1.0)
rrcovNA * 0.4.4 2013-08-29 CRAN (R 3.1.0)
rsm * 2.7 2014-10-02 CRAN (R 3.1.1)
rstudioapi 0.1 2014-03-27 CRAN (R 3.1.0)
semTools * 0.4.6 2014-10-03 CRAN (R 3.1.1)
sfsmisc 1.0.26 2014-06-16 CRAN (R 3.1.0)
stringr 0.6.2 2012-12-06 CRAN (R 3.1.0)
TAM * 1.2 2014-11-22 CRAN (R 3.1.2)
tensor 1.5 2012-05-05 CRAN (R 3.1.0)

Seongho Bae

SE for IRTpars = TRUE

Thank you for the excellent resource with the mirt package.

When outputting coef(data, IRTpars=TRUE, printSE=TRUE), no standard errors are printed. If I change IRTpars=TRUE to FALSE, the SEs are printed.

My model was fitted with SE = TRUE.

Thank You
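
For context, the slope/intercept standard errors cannot simply be relabeled under the traditional parameterization; a transformation such as the delta method is needed. A minimal sketch for a 2PL item with b = -d/a, using illustrative numbers rather than mirt output:

# delta method for b = -d/a:  db/da = d/a^2,  db/dd = -1/a
a <- 1.2; d <- 0.5                          # illustrative parameter estimates
V <- matrix(c(0.04, 0.01,                   # illustrative covariance matrix of (a, d)
              0.01, 0.03), 2, 2)

grad <- c(d / a^2, -1 / a)
se_b <- sqrt(drop(t(grad) %*% V %*% grad))
c(b = -d / a, SE = se_b)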

'data inputations'

Hello Phil,

I found the phrase 'data inputations' in M2(). How about changing 'data inputations' to 'data imputations'?

Best regards,
Seongho Bae

> findM2(mod5, Theta = mod5_theta)
quadpts: 15000 / iteration: 1

 Error: Fit statistics cannot be computed when there are missing data. Pass a suitable
                 impute argument to compute statistics following multiple
                 data inputations 

IRT parameters

Hi Phil,

I've been using the mirt package and it's really a tremendous resource - thanks for all the work you put into it. It does appear, however, that "IRTparm=TRUE" does not return transformed item parameter estimates.

I'm not sure if I'm supposed to provide reproducible code or what the proper protocol here is.

Thanks.
Charlie

Survey weights

Feature Request: Allow survey weights to be included in estimation.

parameter data.frame depends on order

dear phil,

when one changes the parameters of a mirt model via the parameter data.frame, mirt throws an error if the row order of the parameters has been changed (this happened to me after merging in another column with known parameters).

pars = mirt(Science, 1, pars="values")
pars = pars[sample(18),]
mirt(Science, 1, pars = pars)

i guess mirt could sort the parameter data.frame internally in order to prevent this problem.

best, felix
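
Until that happens, a small sketch of the fix on the user side, assuming the parnum column that the pars = 'values' data frame carries:

library(mirt)

pars <- mirt(Science, 1, pars = "values")
pars <- pars[sample(nrow(pars)), ]          # row order scrambled, e.g. after a merge()

pars <- pars[order(pars$parnum), ]          # restore the expected row order
mod  <- mirt(Science, 1, pars = pars)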

obtaining IRF value for a specific theta (ability)

Could you please let me know if there is a way to obtain the IRF value for a specific theta (ability)?

itemplot(mod, 1) provides a nice IRF plot.
However, if possible, I would like to obtain a numeric value of P(\theta) for a specific theta; something like:
> irf(mod, item = 1, theta = 0.2)
0.81
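
There is no irf() helper like the hypothetical call above, but the same number can be obtained by pulling out the item object and evaluating its trace line; a minimal sketch, assuming the extract.item() and probtrace() functions available in recent mirt versions:

library(mirt)

mod <- mirt(expand.table(LSAT7), 1, verbose = FALSE)

item1 <- extract.item(mod, 1)                 # the first item as a stand-alone object
probtrace(item1, Theta = matrix(0.2))         # category probabilities at theta = 0.2;
                                              # the second column is P(theta) for a correct response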

