2011 Issues

January 2011 ISSUE: 1


A Formal Model Simulation of Group Learning Behaviors: Group Size and Collaborative Performance
Florin Bocaneala and Lei Bao, Pages 1-6

Recently, group study has become a common educational strategy employed by educators throughout the country. In this paper, we attempt to construct theoretical models to support and help us understand the reputed efficiency of group learning over traditional instruction. Specifically, we concentrate on one aspect of group dynamics: the dynamics of the group's emergent semantic agreement.
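
The abstract does not specify the model's form; as a rough, purely illustrative sketch of how emergent semantic agreement might be simulated as a function of group size, one could use a simple voter-model style dynamic (an assumption for illustration, not the authors' model):

    # Hypothetical illustration only: a minimal voter-model style simulation in
    # which group members repeatedly adopt a peer's "meaning" until the whole
    # group agrees. This is NOT the model from the paper, just a sketch of the
    # kind of agreement dynamics the abstract describes.
    import random

    def interactions_to_agreement(n_members, n_meanings=4, max_steps=100_000, seed=0):
        """Count pairwise interactions until every member holds the same meaning."""
        rng = random.Random(seed)
        meanings = [rng.randrange(n_meanings) for _ in range(n_members)]
        if len(set(meanings)) == 1:                  # group agrees from the start
            return 0
        for step in range(1, max_steps + 1):
            listener, speaker = rng.sample(range(n_members), 2)
            meanings[listener] = meanings[speaker]   # listener adopts the speaker's meaning
            if len(set(meanings)) == 1:              # full semantic agreement reached
                return step
        return max_steps

    if __name__ == "__main__":
        for size in (2, 4, 8, 16):
            trials = [interactions_to_agreement(size, seed=s) for s in range(20)]
            print(f"group size {size:2d}: mean interactions to agreement = {sum(trials) / len(trials):.1f}")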

Effectiveness of Two Interactive Learning Techniques in Introductory Astronomy
Jessica C. Lair and Jing Wang, Pages 7-11

As part of the shift to active learning environments in the Department of Physics and Astronomy at Eastern Kentucky University, we implemented a clicker system in all of the introductory astronomy courses. The clickers were used in class on a daily basis to allow students to participate actively in lectures. Several of the astronomy courses at Eastern Kentucky University also include interactive laboratory sessions. Here we present pre- and post-test data from the solar system astronomy class, using the Astronomy Diagnostic Test (ADT), comparing the first semester of clicker use with previous semesters. We also compare the laboratory and non-laboratory sections of the introductory astronomy course using their ADT results. In both cases the students’ normalized gain on the ADT is higher when the concepts are taught using the interactive techniques.
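
The normalized gain referred to here is commonly computed as Hake's class-average gain, g = (post − pre) / (100 − pre) for percentage scores; the short sketch below assumes that convention (the example score values are invented, not data from the paper):

    # Minimal sketch of the class-average normalized gain often used with
    # pre/post concept inventories such as the ADT. The example scores are
    # placeholders, not results reported in the paper.

    def normalized_gain(pre_mean: float, post_mean: float) -> float:
        """Normalized gain g = (post - pre) / (100 - pre) for percentage scores."""
        if pre_mean >= 100:
            raise ValueError("pre-test mean must be below the maximum score")
        return (post_mean - pre_mean) / (100.0 - pre_mean)

    # Example: a section whose class average moves from 35% to 55% on the ADT
    print(f"g = {normalized_gain(35.0, 55.0):.2f}")   # prints g = 0.31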

Experimental Designs for Inquiry with Problem-Based Strategy
Shaona Zhou, Yau-Yuen Yeung, Hua Xiao, and Xiaojun Wang, Pages 12-16

Considering that current teaching methods often fail to help students see the connections between science and society, and that existing experimental courses have several drawbacks, we have designed a series of experiments with Integrated Circuit (IC) cards that guide students through a problem-based strategy. The experiments are developed to help students gain a richer learning experience through inquiry. Several questions are provided for each experiment as a guide to start the investigation. These questions have no standard answers, so students are encouraged to explore on their own. Using the experimental results, we can also ask students to reflect on the practical limits of how IC cards can be used in daily life.

 

December 2011 ISSUE: 2


A Review of Introductory Physics Laboratories and Hybrid Lab Classrooms Including Difficulties, Goals, and Assessments
Dedra Demaree, Pages 17-25

This paper reviews introductory physics labs and is intended as a broad introduction to laboratory possibilities and considerations. The focus is on laboratory curricula developed since the advent of computers, in part because this timing coincides with the earliest papers in Physics Education Research. The discussion of labs is broadened to include activity-based learning environments that use physical equipment. Difficulties associated with labs, both in administering them and in student learning, are discussed, along with the typical goals that these activity-based learning environments address. Finally, assessments are discussed.

Comparisons of Item Response Theory Algorithms on Force Concept Inventory
Li Chen, Jing Han, Jing Wang, Yan Tu, and Lei Bao, Pages 27-34

Item Response Theory (IRT) is a popular assessment method widely used in educational measurement. Several software packages are commonly used to perform IRT analysis. In the field of physics education, using IRT to analyze concept tests is gaining popularity. It is therefore useful to understand whether, and to what extent, software packages may perform differently on physics concept tests. In this study, we compare the results of the 3-parameter IRT model in R and MULTILOG using data from college students on a physics concept test, the Force Concept Inventory. The results suggest that, while both methods generally produce consistent estimates of the item parameters, some systematic variations can be observed. For example, both methods produce nearly identical estimates of item difficulty, whereas the discrimination estimated with R is systematically higher than that estimated with MULTILOG. The guessing parameters, which depend on whether “pre-processing” is implemented in MULTILOG, also vary observably. The variability of the estimates raises concerns about the validity of IRT methods for evaluating students’ scaled abilities. Therefore, further analysis was conducted to determine the range of differences between the two models in the student abilities estimated with each. A comparison of goodness of fit across the various estimations is also discussed. It appears that R produces better fits at low proficiency levels, but falls behind at the high end of the ability spectrum.
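
For readers unfamiliar with the 3-parameter logistic (3PL) model compared here, its item response function is P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))), with discrimination a, difficulty b, and guessing parameter c; the sketch below evaluates it with made-up item parameters (not estimates from this study):

    # Minimal sketch of the 3-parameter logistic (3PL) item response function:
    # P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))), where a is the
    # discrimination, b the difficulty, and c the guessing parameter. The
    # parameter values below are illustrative, not FCI estimates from the study.
    import math

    def p_correct(theta: float, a: float, b: float, c: float) -> float:
        """Probability that a student with ability theta answers the item correctly."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    # One hypothetical FCI-like item evaluated across the ability scale
    a, b, c = 1.2, 0.5, 0.20
    for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"theta = {theta:+.1f}   P(correct) = {p_correct(theta, a, b, c):.2f}")

Differences between packages show up as shifts in the estimated (a, b, c) triplet for each item, which is the kind of variation the comparison above examines.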

Predicting Student Achievement: A Path Analysis Model on A Mathematics Coaching Program
Scott A. Zollinger and Patti Brosnan, Pages 35-45

In response to calls for mathematics education reform, researchers at The Ohio State University, working with the Ohio Department of Education, developed the Mathematics Coaching Program (MCP). Coaches from 164 schools have participated in this classroom-embedded professional development program designed to promote standards-based instructional practices. Preliminary results indicated that MCP has a positive impact on student achievement. To provide supporting evidence for these results, researchers developed a path analysis model consisting of seven components to determine which factors predicted student achievement both before and after schools participated in MCP. The dependent-variable components were pre- and post-MCP test scores, and the independent-variable components represented economically disadvantaged students, non-white students, students with disabilities, the number of years in MCP, and coach mathematical content knowledge. Results indicated that the initial and modified theoretical models were not acceptable fits for our fourth-grade sample data, but many parameter values were consistent with previous research. Disability, SES, and ethnicity were significant predictors of pre-MCP test scores, with negative correlations. For post-MCP test scores, the t-values for disability and ethnicity decreased to non-significant levels, while the t-values for SES nearly doubled. The number of years a school participated in the program was a significant predictor of post-MCP test scores, but coach mathematical content knowledge was not. Overall, the path analysis model did not test as an acceptable fit for these data, but the final version represents a starting point for future testing.
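
As a purely illustrative aside, a recursive path model of the kind described can be estimated as a series of ordinary least-squares regressions; the sketch below uses randomly generated placeholder data and hypothetical variable names, not the MCP data set or the authors' estimation procedure:

    # Rough sketch only: estimating a two-stage recursive path model as a pair
    # of OLS regressions. The data are randomly generated placeholders, and the
    # path coefficients printed here have no relation to the MCP study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 164  # one row per school, echoing the sample size mentioned above

    # Hypothetical standardized exogenous variables.
    ses, ethnicity, disability, years_mcp, coach_ck = rng.standard_normal((5, n))

    def ols(y, *predictors):
        """Return OLS coefficients (intercept first) for y regressed on the predictors."""
        design = np.column_stack([np.ones(len(y)), *predictors])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return coef

    # Stage 1: pre-program achievement predicted by demographic variables.
    pre = -0.3 * ses - 0.2 * ethnicity - 0.25 * disability + rng.standard_normal(n)
    print("pre-program paths :", ols(pre, ses, ethnicity, disability).round(2))

    # Stage 2: post-program achievement predicted by pre scores and program variables.
    post = 0.6 * pre + 0.3 * years_mcp + rng.standard_normal(n)
    print("post-program paths:", ols(post, pre, ses, years_mcp, coach_ck).round(2))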