We know that our habits and affinities are now finely parsed and shaped by algorithms; yet we collude in the act of becoming data points because we, too, gain something from the transaction. The data we collect and use in schools is likewise subject to parsing that sometimes lacks a clear relationship between means and ends. The pressure to collect it can steal time from other vital aspects of our work. But if a metric is truly well-designed, we can gain a greater wealth of insight about how students learn than ever before. In my own work, then, I am trying to find a middle ground between fully embracing the value of specific kinds of data and maintaining a healthy skepticism about our obsession with developing ever more complex layers of metrics.
Secondary education, until recently, has had an ambivalent relationship with data. Great teachers have a preference for the human stories that schools tell us about children: the case-study narratives about learners, the descriptions of innovative curriculum that can change how kids learn, the written comments home. Much of this narrative and observational evidence is relegated to the category of “qualitative data,” which takes a back seat to quantitative metrics. There are both good and bad reasons for this shift in preference. Without the revelations hard data provides, less visionary and committed educators have been able to hide behind the subjectivity of classroom experience to fog up outcomes.
So the school world in the last few decades has been chastised, rightfully, for failing to gather and act on numbers that tell stark stories about inequality, specifically about early literacy, chronic absence, high school completion, suspensions, college matriculation, and so on. And here is where data can induce change. But the other side of this coin is the application of the for-profit world’s model of metrics to schools in order to assess funding, outcomes, and teaching. That’s a far more troubling trend. Regardless, few school leaders or consultants in the current climate challenge the value of collecting data as a way of holding schools “accountable” (a word so overused that it has a numbing effect). The question therefore has changed from “Do we need to rely on data to succeed?” to “Do the effort required to collect data, the targets set to galvanize those efforts, and the quality of the outcomes justify our reliance on numbers?”
I work with independent, charter, and district schools, so a fascinating part of my work is examining how these different school cultures use data.
Independent schools, where I spent a significant portion of my career, used to be (and often still are) data-averse, at least from the vantage point of classroom teachers. Except for admissions, exmissions (college or next-level acceptance), and annual giving—metrics that directly affect income and enrollment—requests for data related to students and learning often aroused suspicion and cynicism. The autonomy and relative homogeneity of learners in most independent schools make the extensive use of data a choice. The state asks for numbers on employment and diversity, but not much else, and one’s job performance in a classroom is not directly tied to outcomes. Ironically, freed from public reporting requirements, independent schools have opportunities to use metrics creatively in a way that could provide innovative models of data use for all schools, if they so choose.
The direct foil is the charter school. Many of these schools live and die by data, because they have to justify their existence by outperforming district school counterparts with the same population of students. Balancing students’ social-emotional health and authentic engagement with the relentless reliance on test prep and scripted curriculum remains a huge dilemma in charters. The attrition of young teachers offers another twist: inexperienced teachers who barely know their students are often asked to act on evidence for which they have limited context.
District schools are, increasingly, under the same pressures, but the amount of that pressure rises or falls in proportion to the socio-economic conditions of the students they serve. Public schools in more affluent neighborhoods have a less consequential relationship with data. While interventions are more common for schools with low scores, making judgments about the pace of change remains a challenge. The problem here can be lack of urgency, of positive mindset, or leaders unable to refine objectives in the absence of a clear aspiration—a mission problem. Where the pressure to provide accurate metrics could be most useful is where schools have the greatest distance to go and are still held least accountable: improving outcomes for students with disabilities and second language learners.
Imagine the value of these three kinds of schools conferring and sharing their own stories about the design and use of metrics! Yet this kind of interaction is exceedingly rare.
Here are four types of data I encounter in my work. They will be familiar to most of you reading this, and are intended to provoke some debate:
Misleading Data:
Example—the written requirement on Individualized Education Plans (IEPs) that a student with disabilities “will be able to do a specific task X number of times” within a defined period of time, often a one-size-fits-all refrain that is not only disconnected from the student but rarely receives adequate follow-up; use of data in hiring that assumes college pedigree predicts quality of teaching; data that connects lack of third-grade reading proficiency to incarceration rates (a dramatic myth that has gained street cred through blunt rather than refined analysis).
Seriously Superficial Data:
Example—private school boards measuring a school’s success by the number of Ivy League acceptances per year; in public elementary schools, the overall aggregated attendance rate as a monthly average (as long as it doesn’t fall below 92%, you are OK); college data for first-generation college students that cite matriculation but not 2- or 4-year graduation rates.
Aspirational Data:
Example—a collective-impact organization writing that, by 2020, X% of 3rd grade kids will read at proficient levels, or X% of high school students will matriculate at college.
Really Actionable Data (RAD):
Example—which children, and how many per teacher, score 2s vs. 3s on reading proficiency; teacher attrition rates over a five-year period, linked to exit surveys; student-generated data tracking and measuring their own engagement in a classroom.
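The aggregated-attendance example in the superficial category is worth pausing on, because the arithmetic of averages is exactly what makes the metric superficial. Here is a minimal sketch, with invented numbers, of how a class can clear a 92% aggregate threshold while a quarter of its students are chronically absent (a common definition: missing 10% or more of school days):

```python
# Hypothetical illustration: an aggregate attendance average can mask chronic absence.
# All numbers below are invented for the sketch.

# Attendance rates (fraction of days present) for a 20-student class:
# 15 students with perfect attendance, 5 who miss 15-35% of days.
rates = [1.0] * 15 + [0.85, 0.80, 0.75, 0.70, 0.65]

average = sum(rates) / len(rates)                     # 0.9375 -- a "healthy" 93.75%
chronically_absent = [r for r in rates if r <= 0.90]  # missed 10%+ of days

print(f"aggregate attendance: {average:.1%}")
print(f"chronically absent: {len(chronically_absent)} of {len(rates)} students")
```

The same monthly average that satisfies the 92% threshold says nothing about which students are missing school, and that is the actionable question.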
While there is nothing earth-shattering about these categories, the work becomes challenging when aspirational data is confused with actionable data, and superficial data is misused to argue that a school is serving students well. Misleading data is sometimes taken as aspirational, but sometimes it is just… misleading.
It’s worth asking four questions:
-Do the metrical tools and the data they collect have a direct impact on student life and learning?
-Is there enough bandwidth at the school (or foundation) to manage the data, so that those who can make the best use of it have the capacity to do so?
-Is there clarity about the audience for the data—and what that audience will use it for?
-Do older students, beginning in Middle School, have an opportunity to assess and measure their own experience in school as partners in learning?
Collecting and using data in school is a means, not an end in itself, and of course this is where things get troublesome. The distinction is not trivial, because schools, in the rush to establish credibility, reputation, and support, can get a bad case of tunnel vision while the objects of their work, each one a unique story, sit in a story circle, waiting to bloom.