The Brookings Institution has released a report that calls into question the belief that state pre-K programs significantly improve student achievement. Despite advocates’ calls for state pre-K as a way to boost students’ academic and social achievement in later life, the Brookings analysis finds that state pre-K programs do not correlate with student academic progress.
To date, there has been only one randomized study of pre-K effectiveness. Parents who wanted to enroll their children in the Tennessee Voluntary Pre-K Program (TVPK) agreed to have their children randomly assigned either to the program or to a control group.
The students’ outcomes were tracked through the third grade, and the findings were:
Positive achievement effects at the end of pre-K reversed and began favoring the control children by 2nd and 3rd grade;
TVPK participants had more disciplinary infractions and special education placements by 3rd grade than control children; and
No effects of TVPK were found on attendance or retention in the later grades.
While this is only one study, it is suggestive. Therefore, the author of the present study examined levels of enrollment in state pre-K and compared them to states’ performance on the National Assessment of Educational Progress (NAEP) five years later. The author provides the following caveat: “The analyses I carry out are simple, descriptive, and rely entirely on publicly available data. I do not apply the usual array of statistical tools for analyzing panel data because the assumptions those techniques require are not well met with the data at hand, the presentation of their results would interfere with my effort to be transparent to a general audience about the logic of the analysis, and I do not require precise estimates to draw conclusions.”
After examining the data, the author found (quote):
no association between states’ federally reported scores on the fourth grade National Assessment of Educational Progress (NAEP) in various years and differences among states in levels of enrollment in their state’s pre-K program five years earlier than each of those years (when the fourth-graders taking NAEP would have been preschoolers);
positive associations (small and typically not statistically significant) between NAEP scores and earlier pre-K enrollment, when the previous analysis is conducted using NAEP scores that are statistically adjusted to account for differences between the states in the demographic characteristics of students taking NAEP; and
no association between differences among states in their gains in state pre-K enrollment and their gains in adjusted NAEP scores.
The author found a variety of relationships between states’ pre-K enrollment and their NAEP scores. For example, in Florida, the rate of improvement in the NAEP reading score was higher before the introduction of the state’s voluntary pre-K program than after. Crucially, the author found that, although states with larger pre-K enrollments appeared to have higher NAEP scores, an increase in enrollment by 10 percent had a statistically insignificant effect on later NAEP scores. This finding calls into question whether the state pre-K programs themselves have a causal relationship with the NAEP scores; rather, it is possible that “states that have invested in larger state pre-K programs are also engaged in other education reforms that affect NAEP scores independent of pre-K.”
In offering his conclusions, the author does not say that there are no other, non-academic, benefits to state pre-K; however, he does point out that the “fadeout” effect observed in later grades has been well documented. Thus, the report presents a clear reminder that the question of how to make pre-K consistently effective over the long term is still unsettled. The overall data may mask variations in a state’s programs, meaning that some programs have a far greater positive impact than others; if this is so, then, as early childhood education researchers have recognized, more study is needed to understand which program elements need improvement, and which ones merit replication.