Benchmark time again

This article was originally published in The Notebook. In August 2020, The Notebook became Chalkbeat Philadelphia.

This week teachers across the School District will administer the benchmark tests.

This process doesn’t always yield great results. There have been questions on material that sits well ahead of where we are on the planning and scheduling timeline, and reconciling different curricula means concepts sometimes appear on the benchmark differently than the way I taught them. I believe this is why benchmarks are always a hot topic whenever a group of science educators gets together at professional development.

Last year I administered five benchmark tests; this year, only three. I believe last year’s provider was Kaplan, but they have been replaced by CTB/McGraw-Hill. I’m not sure whether the switch in provider caused the reduction in testing, or whether that was an Office of Teaching and Learning decision.

These tests cover reading, mathematics, and science. Two weeks ago, students were given a predictive version of the reading and mathematics test to give teachers baseline data. It seems different principals and regional superintendents put different emphasis on the results from these tests.

I am expected to scrutinize the data from these tests with my students. I go over questions where we did significantly better than the District average, and questions where we did significantly worse. My hope is to figure out what I taught well and what I didn’t teach as well. The goal is to continue pedagogy that works, and to try something new when I re-teach.

Typically, the results of our benchmarks come back a week after we take them. I put a breakdown of how the class, school, region, and School District performed on each question on the projector. As a class, we find questions where we performed ten percent better or ten percent worse than the SDP as a whole.

Once we select questions to cover, I can put a digital copy of the test on the projector. We also look over the distribution of wrong answers to the questions. I find it helpful to target the most popular wrong answer when we look over the test. The whole class then reads the question and the answers and tries to figure out where the confusion lay.

Sometimes I realize I need to teach vocabulary differently. Sometimes I find out that a particular concept needs to be role-played or further scaffolded. The students also try to figure out why some material came more easily to them than to the rest of the District. I really do appreciate their feedback about how they learned a concept well. Getting away from "It was an easy question" to having the kids do some metacognition always brings a smile to my face.

I’m interested in hearing from other teachers how they view the benchmarks.

  • Do they inform instruction?
  • Do your scores, and especially your science scores, matter to your principal/regional superintendent?
  • Do the families of students know or care about the results of these tests?
  • What could be done differently to improve the value of these tests? (Or should we continue with what we’re doing?)
  • The average score for a third grader on the last science benchmark was only 43.5 percent, and fourth graders mustered only 46 percent. Does this tell us that our students are poor at science, or that the test is poor at measuring what our students know about science? Or is it both?
