Let’s talk about Karla. Karla is a real person, although “Karla” is not her real name. A tenth-grade honors student in a heavily tested school district, Karla positively brims with enthusiasm for learning. Ask her for a poster and slide presentation; she will give you those things, plus a handcrafted, three-dimensional model. All of her teachers agree that Karla will undoubtedly thrive in a university setting and go on to flourish in any profession she chooses.

On her recent ELA benchmark practice test, Karla answered only 30 percent of the questions correctly. She was devastated. Her teacher was perplexed, frustrated. If Karla could only answer 30 percent of the questions correctly, what might that suggest about the overall validity of the test for measuring student knowledge?


The fact is that there are simply too many other variables that can affect student test scores for us to consider the data a complete picture of student achievement. Maybe it’s an unfamiliar graphing tool in the online testing application. Maybe it’s the lawn mower that cranked up outside the classroom door just as testing began. Maybe it’s a lack of sleep or a lack of breakfast. Often, it’s the test itself. One of the questions Karla missed required her to infer the meaning of a word based on its context in a sentence. Easy! Karla is great at that. However, of the four possible synonyms listed as multiple-choice options, Karla didn’t recognize three. How can we expect students to demonstrate their content knowledge and aptitude on tests full of unfamiliar vocabulary?

Still, even as educators continue to express their doubts about the validity of the results and bemoan the increasing emphasis on standardized testing, many school districts seem to be doubling down. In addition to the battery of assessments looming over students as their courses conclude, many districts have now imposed recurring benchmark tests (such as the one Karla took) to measure student growth along the way. These tests, designed to mimic the end-of-course assessments, further reduce overall instruction time and often place stress on students, parents, and teachers alike. Do the pros of this increased testing outweigh the cons? Let’s consider the data on the data.

Recent studies in five states, as presented by The Conversation, demonstrate that accurate predictions can be made about student proficiency based solely on characteristics of students’ communities, without consideration of the schools or teachers themselves. Simply by examining community and family data available through the U.S. Census, analysts managed, with astounding accuracy, to predict the percentage of students who would score proficient or above on their state tests. Specifically, these studies focused on variables such as the percentage of community members living in poverty and the percentage of community members with bachelor’s degrees.

This won’t surprise many teachers who have been struggling to overcome the “achievement gap” for years. What teachers could tell you, though, that the test data might not, is that their students ARE making great gains. Karla happens to be excellent at drawing inferences from context clues, plus she can design and deliver an oral presentation that would knock your socks off. Donny, a high school sophomore who reads at a third-grade level and vastly prefers video games, snickered the other day while quietly reading a book he had chosen. Katie, the impossibly shy girl whose voice had never before been heard in class, made everyone’s jaws drop when she interjected her insightful perspective during a Socratic seminar.


All of this is progress: the burgeoning communication skills, a newfound pleasure in reading, the gathering of courage to share. Sweeping, transformative, wonderful progress… the kind that is mostly imperceptible on a multiple-choice assessment.