County test coordinators talk about district-mandated tests, test quality, and data
Educators know this all too well: mandated testing has gone through the roof, and the impacts are far-reaching. Much of the problem is at the local level, where district-mandated assessments account for 74% of all required testing in K–12 schools. It's time to put an end to the culture of over-testing.
MSEA members are leading the effort to reduce testing. The Less Testing More Learning Act of 2017 would cap student testing at 2% of annual instructional time. This week, test coordinators Celia Burton and Dawn Pipkin, from Prince George's and St. Mary's counties respectively, testified before the Senate Education, Health and Environmental Affairs Committee.
As they waited for their turn in front of the senators, Celia and Dawn spent time in front of the cameras in interviews with MPT, WTOP, WJZ, and WBFF, and sat for a Q&A with Newsfeed about the quality of local tests, alignment to content and standards, and the fallout of over-testing for students and teachers.
Dawn Pipkin, a National Board Certified Teacher and test coordinator at Leonardtown Middle School in St. Mary’s County:
Test development is a skill, and not one taught in teacher preparation programs, so when we think about the quality of tests, it really rests on how well the district is staffed: do we have people who are skilled at writing the tests? We have to think about what that looks like for districts that can't afford that kind of staffing or don't have the means to do the sort of work required to create strong assessments.
It takes multiple, multiple iterations of a test to get something that we can have confidence in. A test of poor quality erodes educators' faith in the time and effort we spend on those tests. In any district that doesn't have people who are really skilled in that, it takes years before really good assessments emerge.
When we're spending a lot of time on local assessments that we don't have faith in, or aren't sure are of good quality, it's very frustrating.
Celia Burton is test coordinator at Benjamin Tasker Middle School in Prince George’s County:
Three to five years. That's how long it takes to develop strong assessments. Teachers have to be trained to deliver the level of instruction that students are expected to receive from them, and the types of questions need to be aligned with the standards. If the writers of the questions have had ineffective training on the standards and curriculum, and thus have no basic knowledge, let alone depth of knowledge, of those standards and curriculum, there are often errors in the questions. When that happens, students have to be retested, or the tests have to be cancelled and reassigned to the student the following year.
It absolutely takes three to five years to get great in-depth questions, data, and confidence in the data.
Celia: Distrust. Teachers have no trust in the process of assessment creation or in whether the questions will be rigorous. They lose faith that the quality and rigor of the questions can elicit the knowledge the students have gained.
Dawn: When there's not a really sound curriculum in place and no adequate supports for educators, there's a learning curve for teachers to understand what those new standards look like and how to teach and assess them.
It depends on the quality of training they got when they started teaching the standards: they teach the new standards at the level of their understanding. As their knowledge deepens, they might find that what they were doing before wasn't as well aligned as they thought.
It requires constant retooling of instruction, and as that happens, a teacher is constantly looking at kids who may have knowledge gaps based on their own, or a previous teacher's, best understanding of that standard at the time.
Celia: From a data perspective, you have poor data because students are not performing as well as they should, due to inaccuracies on the assessment, and parents are not given good information. It's embarrassing when our higher-achieving students are finding the truly correct answer, or seeing the flaws in the questions, and/or noticing that the questions aren't aligned with what's being taught in the classroom. It's really unfortunate.
Dawn: When we haven't done a great job of keeping things aligned and we jump in at different starting points, students get mixed messages about what the expectations are. In an early iteration of the assessment, they may have done really well; but as understanding develops of the depth and complexity of the content, the test evolves, and perhaps that same student does poorly.
The biggest issue for students is that when you get to multiple iterations of the test across the year, you have to ask yourself: are my students able to have enough rich experiences with the kind of instructional sequences, aligned to the standards, that they feel prepared? That's the real danger in over-testing: there's not enough time for students to have all the rich learning experiences that would help them develop the critical thinking to meet the challenge of the standard.
Celia: There are way too many assessments to stop and really look at the data and see where students are strong and weak. If students are weak in a standard, a teacher should be able to go back and reteach it, not continue to teach strictly to the curriculum and compound the problem.
Our teachers don't have the time to do deep data analysis. Further, the data is old, there often isn't the time or the skill set to really examine and interpret it, and the support isn't there for them to get it.
We really need to have someone examine the types of questions, and we need to cut back on the number of questions in an assessment. A student should not have to go to math class and answer 40 questions, then head to a reading class for a 60-question test, followed by a 30-question social studies test. I want to ask legislators: how would you like it if that was your child or grandchild?
Over-testing is ineffective instruction.
Dawn: Ultimately, teachers wind up not being able to use that data in a meaningful way, because they have to think about the next assessment they need to prepare their students for, so the students don't end up sitting in front of a test they don't feel prepared for.
What's most important is to go back and look at our assessments and decide which are good and which are not. And we must be able to shore up teachers' confidence in those assessments and make sure teachers understand what the end goal is.
If we were able to have teachers really engaged in thoughtful discussion around the data, and then plan instruction accordingly, students and teachers would have a lot more investment in what's going on, and I think we would be reaping the benefits.