The Long View on Student Testing: Ups and Downs, but Achievement Gap Persists

Posted by Erika Rosenberg & filed under CGR Staff.

Summer’s finally here, but just weeks ago students in schools across New York completed state tests that carry bigger stakes than ever before. This is the first year that test scores will feed into teacher evaluations, and with the tests now aligned with the new Common Core curriculum, many observers expect passing rates to decline.

The push-back against testing and increased accountability has grown, and it’s easy to see why. Students, families, and schools have seen passing rates decline, felt mounting pressure to improve performance, and wondered whether testing now occupies too large a place in education. It’s worth revisiting why and how these changes came about, and examining the long-term trend in performance.


The “dirty little secrets” of program evaluation

Posted by Kirstin Pryor & filed under CGR Staff.

Dirty little secret #1: when you say “We do program evaluation,” typical reactions include polite but confused nods and rolling or glazed-over eyes. Unfortunately, the real value of evaluation is often crowded out by rhetoric about investing only in “evidence-based” programs on the one hand and the pressures of grant compliance on the other.

CGR’s clients operate in the real world, where program evaluation isn’t quite as pristine as it is in academia. Some of CGR’s evaluations have formal designs with control groups, sophisticated statistical analysis, and measurable “hard” outcomes. But the vast majority of evaluations are not so “pure,” and this is where the fun begins. Real programs serve real people who are affected by many things besides the program in question. They have real, limited budgets, real funders seeking different outcomes, real challenges getting data, and they operate in real, dynamic contexts. Evaluation often gets a bum rap: either it’s watered down to the point of public relations or it’s done purely for compliance. So what’s the point?