Student names are indicated in bold.

Ames, A. J., & Leventhal, B. C. (in press). Modeling changes in response style with longitudinal IRTree models. Multivariate Behavioral Research.

Ames, A. J., Leventhal, B. C., & Ezike, N. C. (2020). Monte Carlo simulation in item response theory applications using SAS. Measurement: Interdisciplinary Research and Perspectives, 18(2), 55-74. https://doi.org/10.1080/15366367.2019.1689762

Bandalos, D. L. (2021). Item meaning and order as causes of correlated residuals in confirmatory factor analysis. Structural Equation Modeling: A Multidisciplinary Journal, 28(6), 903-913. https://doi.org/10.1080/10705511.2021.1916395

Bao, Y., Shen, Y., Wang, S., & Bradshaw, L. (2021). Flexible computerized adaptive tests to detect misconceptions and estimate ability simultaneously. Applied Psychological Measurement, 45(1), 3-21.

Chod, S. M., Goldberg, A., Muck, B., Pastor, D., & Whaley, C. O. (2021). Can we get an upgrade? How two college campuses are building the democracy we aspire to be. In E. C. Matto, A. R. M. McCartney, E. A. Bennion, A. Blair, T. Sun, & D. Whitehead (Eds.), Teaching civic engagement globally. American Political Science Association.

DeMars, C. E. (2020). Comparing causes of dependency: Shared latent trait or dependence on observed response. Journal of Applied Measurement, 21(4), 400-419.

DeMars, C. E. (2020). Alignment as an alternative to anchor purification in DIF analyses. Structural Equation Modeling, 27, 56-72. https://doi.org/10.1080/10705511.2019.1617151

DeMars, C. E. (2020). Multilevel Rasch modeling: Does misfit to the Rasch model impact the regression model? Journal of Experimental Education, 88, 605-619. https://doi.org/10.1080/00220973.2019.1610859

Finney, S. J., & Buchanan, H. A. (2021). A more efficient path to learning improvement: Using repositories of effectiveness studies to guide evidence-informed programming. Research & Practice in Assessment, 16(1), 36-48.

Finney, S. J., Gilmore, G. R., & Alahmadi, S. (in press). “What’s a good measure of that outcome?” Resources to find existing and psychometrically sound measures. Research & Practice in Assessment.

Finney, S. J., Perkins, B. A., & Satkus, P. (2020). Examining the simultaneous change in emotions during a test: Relations with expended effort and test performance. International Journal of Testing, 20, 274-298. https://doi.org/10.1080/15305058.2020.1786834

Finney, S. J., Satkus, P., & Perkins, B. A. (2020). The effect of perceived test importance and examinee emotions on expended effort during a low-stakes test: A longitudinal panel model. Educational Assessment, 25, 159-177. https://doi.org/10.1080/10627197.2020.1756254

Finney, S. J., Wells, J. B., & Henning, G. W. (2021). The need for program theory and implementation fidelity in assessment practice and standards (Occasional Paper No. 51). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Fulcher, K. H., & Leventhal, B. C. (2020). James Madison University: Assessing and planning during a pandemic. Assessment Update, 32(6), 4-5.

Fulcher, K. H., & Prendergast, C. O. (2020). Equity-related outcomes, equity in outcomes, and learning improvement. Assessment Update, 32(5), 10-11.

Fulcher, K. H., & Prendergast, C. O. (2021). Learning improvement at scale: A how-to guide for higher education. Sterling, VA: Stylus.

Gregg, N., & Leventhal, B. C. (2020). Data visualizations: Effective evidence-based practices [Digital ITEMS Module 17]. Educational Measurement: Issues and Practice, 39(3), 239-240. https://doi.org/10.1111/emip.12387

Horst, S. J., Finney, S. J., Prendergast, C. O., Pope, A., & Crewe, M. (2021). The credibility of inferences from program effectiveness studies published in student affairs journals: Potential impact on programming and assessment. Research & Practice in Assessment, 16(2), 17-32.

Leite, W., Bandalos, D. L., & Shen, Z. (in press). Simulation methods in structural equation modeling. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed.). New York: Guilford Publications.

Leventhal, B. C., & Ames, A. J. (2020). Monte Carlo simulation studies in IRT [Digital ITEMS Module 13]. Educational Measurement: Issues and Practice, 39(2), 109-110. https://doi.org/10.1111/emip.12342

Leventhal, B. C., & Grabovsky, I. (2020). Adding objectivity to standard setting: Evaluating consequence using the conscious and subconscious weight methods. Educational Measurement: Issues and Practice, 39(1), 30-36. https://doi.org/10.1111/emip.12316

Leventhal, B. C., Schubert, L., & Trybus, M. (in press). The SELAP continuum: How to assess student employee learning and job performance. Assessment Update.

Linder, G. F., Ames, A. J., Hawk, W. J., Pyle, L. K., Fulcher, K. H., & Early, C. E. (2020). Teaching ethical reasoning: Program design and initial outcomes of Ethical Reasoning in Action, a university-wide ethical reasoning program. Teaching Ethics.

Lúcio, P. S., Lourenço, F. C., Cogo-Moreira, H., Bandalos, D. L., Ferreira de Carvalho, C. A., Batista Kida, A., & de Ávila, C. R. (2021). Reading comprehension tests for children: Test equating and specific age-interval reports. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2021.662192

Myers, A. J., & Finney, S. J. (2021). Change in self-reported motivation before to after test completion: Relation with performance. Journal of Experimental Education, 89, 74-94. https://doi.org/10.1080/00220973.2019.1680942

Myers, A. J., & Finney, S. J. (2021). Does it matter if examinee motivation is measured before or after a low-stakes test? A moderated mediation analysis. Educational Assessment, 26, 1-19. https://doi.org/10.1080/10627197.2019.1645591

Myers, A. J., Ames, A. J., Leventhal, B. C., & Holzman, M. A. (2020). Validating rubric scoring processes: An application of an item response tree model. Applied Measurement in Education, 33(4), 293-308. https://doi.org/10.1080/08957347.2020.1789143

Pastor, D. A., & Love, P. D. (2020). University-wide assessment during COVID-19: An opportunity for innovation. AALHE Intersection, 2(1), 1-3.

Perkins, B. A., Pastor, D. A., & Finney, S. J. (in press). Between- and within-examinee variability in test-taking effort and test emotions during a low-stakes test. Applied Measurement in Education.

Perkins, B. A., Satkus, P., & Finney, S. J. (2020). Examining the factor structure and measurement invariance of test emotions across testing platform, gender, and time. Journal of Psychoeducational Assessment, 38, 969-981. https://doi.org/10.1177/0734282920918726

Pope, A., Finney, S. J., & Crewe, M. (in press). Evaluating the effectiveness of an academic success program: Showcasing the importance of theory to practice. Journal of Student Affairs Inquiry.

Prendergast, C., Satkus, P., Alahmadi, S., & Bao, Y. (in press). Fostering the assessment processes of academic programs in higher education during COVID-19: An example from James Madison University. Assessment Update.

Satkus, P., & Finney, S. J. (in press). Antecedents of examinee motivation during low-stakes tests: Examining the variability in effects across different research designs. Assessment & Evaluation in Higher Education.

Sauder, D. C., & DeMars, C. E. (2020). Applying a multiple comparison control to IRT item-fit testing. Applied Measurement in Education, 33, 362-377. https://doi.org/10.1080/08957347.2020.1789138

Smith, K. L., & Finney, S. J. (2020). Elevating program theory and implementation fidelity in higher education: Modeling the process via an ethical reasoning curriculum. Research & Practice in Assessment, 15, 1-13.

Spratto, E., Leventhal, B. C., & Bandalos, D. (2021). Seeing the forest and the trees: Comparison of two IRTree models to investigate the impact of full vs. endpoint-only response option labeling. Educational and Psychological Measurement, 81(1), 39-60. https://doi.org/10.1177/0013164420918655

Tang, H., & Bao, Y. (in press). Latent class analysis of K-12 teachers’ barriers in implementing OER. Distance Education.

Tang, H., & Bao, Y. (2020). Social justice and K-12 teachers’ effective use of OER: A cross-cultural comparison by nations. Journal of Interactive Media in Education, 2020(1).
