A Review of Introductory Physics Laboratories and Hybrid Lab Classrooms Including Difficulties, Goals, and Assessments

By Dedra Demaree
Received: 2011-10-9 / Accepted: 2011-12-7 / Published: 2011-12-30
Abstract This paper reviews introductory physics labs and is intended as a broad introduction to laboratory possibilities and considerations. The focus is on laboratory curricula developed since the advent of computers, in part because this coincides with the timing of early papers in Physics Education Research. The discussion of labs is broadened to include activity-based learning environments that use physical equipment. Difficulties associated with labs, both in administering them and in student learning, are discussed, along with the typical goals these activity-based learning environments address. Finally, a discussion of assessments is provided.

Comparisons of Item Response Theory Algorithms on Force Concept Inventory

By Li Chen, Jing Han, Jing Wang, Yan Tu, Lei Bao
Received: 2011-8-23 / Accepted: 2011-12-19 / Published: 2011-12-30
Abstract Item Response Theory (IRT) is a popular assessment method widely used in educational measurement, and several software packages are commonly used to perform IRT analysis. In the field of physics education, using IRT to analyze concept tests is gaining popularity. It is therefore useful to understand whether, and to what extent, software packages perform differently on physics concept tests. In this study, we compare the results of the 3-parameter IRT model in R and MULTILOG using data from college students on a physics concept test, the Force Concept Inventory. The results suggest that, while both methods generally produce consistent estimates of the item parameters, some systematic variations can be observed. For example, both methods produce nearly identical estimates of item difficulty, whereas the discrimination estimated with R is systematically higher than that estimated with MULTILOG. The guessing parameters, which depend on whether “pre-processing” is implemented in MULTILOG, also vary observably. This variability raises concerns about the validity of IRT methods for evaluating students’ scaled abilities. Therefore, further analysis was conducted to determine the range of differences between the two models in the student abilities estimated with each. A comparison of the goodness of fit of the various estimations is also discussed. It appears that R produces better fits at low proficiency levels but falls behind at the high end of the ability spectrum.
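
For reference, the 3-parameter logistic (3PL) IRT model compared in this study gives the probability that a student of ability \theta answers item i correctly in terms of the three item parameters named in the abstract:

P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}

where a_i is the item's discrimination, b_i its difficulty, and c_i its guessing parameter. On the R side, a minimal sketch of such a fit, assuming the 'ltm' package (the abstract does not name the package used, and the data frame 'fci' is an illustrative stand-in for the FCI response data):

library(ltm)  # one common R package for IRT; the paper's actual package choice is not stated

# 'fci' is assumed: a data frame of dichotomous (0/1) responses,
# one row per student, one column per FCI item.
fit <- tpm(fci)          # fit Birnbaum's three-parameter logistic (3PL) model
coef(fit)                # per-item guessing (c), difficulty (b), and discrimination (a)
factor.scores(fit)       # scaled ability (theta) estimates by response pattern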

Predicting Student Achievement: A Path Analysis Model on a Mathematics Coaching Program

By Scott A Zollinger, Patti Brosnan
Received: 2011-9-25 / Accepted: 2011-11-28 / Published: 2011-12-30
Abstract In response to calls for mathematics education reform, researchers at The Ohio State University, working with the Ohio Department of Education, developed the Mathematics Coaching Program (MCP). Coaches from 164 schools have participated in this classroom-embedded professional development program designed to promote standards-based instructional practices. Preliminary results indicated that MCP has a positive impact on student achievement. To provide supporting evidence for these results, researchers developed a path analysis model consisting of seven components to determine which factors predicted student achievement both before and after schools participated in MCP. The dependent-variable components were pre- and post-MCP test scores; the independent-variable components represented economically disadvantaged students, non-white students, students with disabilities, the number of years in MCP, and coach mathematical content knowledge. Results indicated that the initial and modified theoretical models were not acceptable fits for our fourth-grade sample data, but many parameter values were consistent with previous research. Disability, SES, and ethnicity were significant, negatively correlated predictors of pre-MCP test scores. For post-MCP test scores, the t-values of disability and ethnicity decreased to non-significant levels, but the t-values for SES nearly doubled. The number of years a school participated in the program was a significant predictor of post-MCP test scores, but coach mathematical content knowledge was not. Overall, the path analysis model did not test as an acceptable fit for these data, but the final version represents a starting point for future testing.
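
As a rough illustration of the kind of model described, the following is a minimal path analysis sketch in R using the 'lavaan' package; the package choice and all variable names are assumptions, since the abstract specifies neither:

library(lavaan)

# Hypothetical path model mirroring the abstract: demographic components
# predict pre-MCP scores; post-MCP scores are predicted by prior achievement,
# years of program participation, and coach mathematical content knowledge.
model <- '
  pre_score  ~ ses + ethnicity + disability
  post_score ~ pre_score + ses + ethnicity + disability + years_in_mcp + coach_knowledge
'

fit <- sem(model, data = school_data)  # 'school_data' assumed: one row per school
summary(fit, fit.measures = TRUE)      # path coefficients plus fit indices (CFI, RMSEA, etc.)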