Accountability, with standards and measurements for students and teachers, is a major concern for all education stakeholders, including the local business community and taxpayers. If our region is to be known for its strong public school system, there needs to be an effective system of accountability and assessment tools to ensure that the right focus is in place to raise student achievement.
School districts elsewhere offer troubling examples of assessment data being abused, an especially critical problem in today's education environment, in which single and imperfect measures of student achievement are used to size up the effectiveness of our schools.
I am a strong advocate of accountability in our public schools and using valid and reliable testing methodologies to gauge student progress and achievement levels. But when it comes to student learning, no single test series by itself, not even a good one, can render a full picture of what students understand and can do in relation to national and local standards and curricula. Many, if not most, teachers regard our present set of standardized tests among the least useful of the data sources they can use to gain insights into how to improve student learning.
What is needed is for schools to use testing methodologies that make use of "longitudinal" data; that is, testing data gathered from specific groups of students over a period of time. What good is it to compare a snapshot of one group of 10th-grade students taking the test this year with another group of 10th graders who took it the prior year? One group may have transfer students who are particularly bright or not so bright, or a particular set of circumstances that affects the outcome. Effective assessment is best achieved when testing data for specific students is examined over time, as those students progress through the grades.
Longitudinal data that tracks specific students' progress over a period of time is essential for finding out whether achievement is actually getting better or worse.
Aside from using the proper methodology, the effective use of assessment data depends on several factors that need to be in place well before testing begins.
TERC, a math and science education think tank in Cambridge, Mass., recently published a report on "Uses and Abuses of Data" that identified several elements contributing to effective data use.
Locally, the Business Roundtable for Education is working with 30 charter schools in San Diego County to develop and use effective measures of assessing student achievement, principally the use of longitudinal data. Charter schools are friendly environments for educational reform; in this case, reforming the way other public schools assess student achievement.
Funded by the La Jolla-based Girard Foundation, the Roundtable's three-year Data Analysis and Accountability Project is putting into place the infrastructure needed to collect and analyze data at the participating charter school sites. This will enable those charter schools to take what they know about their students' achievement levels, monitor it over time to watch for individual growth and slips in achievement, and respond early on.
The end result is that valid testing methodologies not only give teachers, administrators and parents a more accurate picture of their students' academic progress, but also end up raising test scores.
It's a case in which all parties involved win.
Hovenic, Ed.D., is president and chief executive officer of the San Diego Regional Chamber of Commerce Foundation and executive director of the foundation's Business Roundtable for Education. E-mail her at firstname.lastname@example.org.