Different methods, different results? Alternative statistical approaches for estimating the impact of a comprehensive school reform model
Through the No Child Left Behind Act, the Bush administration refocused the attention of policy researchers on the quality of education research. This legislation emphasizes the importance of scientifically based research using high-quality experimental or nonexperimental designs to estimate the impact of reforms on student performance. This dissertation explores different nonexperimental approaches to estimating treatment effects because of their frequency of use in education, their range in quality, and their potential biases.

Specifically, it empirically compares alternative comparison-group matching and statistical methods for estimating the impact of the America's Choice Comprehensive School Reform model on the change in fourth-grade reading performance, using publicly available school-level data from Florida. The adequacy of propensity score and multivariate matching methods in minimizing the average differences between treatment and comparison matches and in balancing observed pre-treatment characteristics between these groups is compared. Different approaches for estimating treatment impacts are also compared, including pre-post, repeated measures, fixed-effects, and random-effects methods, as are matching and model-based adjustment.

Findings revealed that the comparison group selected with fixed one-to-one propensity score matching was the most similar, on average, to the America's Choice schools, and that the different impact methods produced mostly similar impact estimates. Specifically, the pre-post and repeated measures methods, as well as the fixed-effects and random-effects repeated measures methods, produced mostly similar impact estimates when using model-based adjustment for both initial status and change characteristics (i.e., all controls). However, matching only and model-based adjustment only with all controls produced mostly different impact estimates.
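The one-to-one propensity score matching referenced above can be illustrated with a minimal sketch. This is not the dissertation's implementation: the school identifiers, scores, and the greedy nearest-neighbor rule are illustrative assumptions (the propensity scores are assumed to be precomputed, e.g., from a logistic regression of treatment status on pre-treatment characteristics).

```python
# Hypothetical sketch of one-to-one (without replacement) nearest-neighbor
# matching on precomputed propensity scores. All ids and scores are made up.

def match_one_to_one(treated, comparison):
    """Greedily pair each treated unit with the unmatched comparison unit
    whose propensity score is closest. Inputs: {unit_id: score} dicts."""
    available = dict(comparison)
    pairs = {}
    # Visit treated units in score order so tight pairs are formed first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break  # comparison pool exhausted
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs[t_id] = c_id
        del available[c_id]  # without replacement: each comparison used once
    return pairs

treated = {"AC_school_1": 0.62, "AC_school_2": 0.41}
pool = {"comp_a": 0.60, "comp_b": 0.39, "comp_c": 0.85}
print(match_one_to_one(treated, pool))
```

After matching, balance would be assessed by comparing the observed pre-treatment characteristics of the matched pairs, as the abstract describes.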
Furthermore, matching only and matching combined with model-based adjustment with all controls produced different impact estimates for the fixed-effects methods, but similar ones for the random-effects methods. Yet matching combined with model-based adjustment, and model-based adjustment only with all controls using the full comparison pool, produced mostly similar impact estimates. This research suggests that when treatment and potential comparison schools are largely dissimilar, the fixed-effects repeated measures method using the full comparison pool and model-based adjustment with all controls should be used to estimate treatment effects, rather than any other combination of the pre-post or repeated measures methods or comparison groups examined.
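The pre-post comparison at the core of these impact methods can be sketched in arithmetic terms: the treatment group's mean gain minus the comparison group's mean gain. This is a simplified illustration with invented scores, not the dissertation's model, which additionally applies fixed- or random-effects specifications and model-based covariate adjustment.

```python
# Hypothetical numbers: fourth-grade reading scores before and after reform
# adoption, for treatment and comparison schools. All values are made up.

def mean(xs):
    return sum(xs) / len(xs)

def pre_post_impact(treat_pre, treat_post, comp_pre, comp_post):
    """Pre-post (difference-in-differences style) impact estimate:
    mean treatment-group gain minus mean comparison-group gain."""
    treat_gain = mean(treat_post) - mean(treat_pre)
    comp_gain = mean(comp_post) - mean(comp_pre)
    return treat_gain - comp_gain

treat_pre, treat_post = [280.0, 290.0], [295.0, 301.0]
comp_pre, comp_post = [282.0, 288.0], [290.0, 292.0]
print(pre_post_impact(treat_pre, treat_post, comp_pre, comp_post))
```

The model-based versions discussed above replace this raw contrast with regression estimates that adjust for initial status and change characteristics.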
| Year of publication: | 2005-01-01 |
|---|---|
| Authors: | Taylor, Brooke Snyder |
| Publisher: | ScholarlyCommons |