How do I get the most out of my HS teacher evaluations?


Q - How do I get the most out of my HS teacher evaluations?

A - Ask your students!

Guest post by Dr. David Balik 

I’ll never forget when one of my doctoral professors warned our cohort that we’d better choose a dissertation topic we were really interested in, so that when “the going gets tough” and we’re grinding our way through the writing and the research, our interest in and excitement about the subject matter would carry the day and keep us going. After ditching my first topic because it didn’t meet that litmus test, I continued to search for a meaningful topic that would actually add something to the overall “conversation” in education today. I soon realized that it was right under my nose!

Three years ago, with the support and guidance of my Superintendent (Dr. Barrett Mosbacker), I developed a “Student Feedback Survey” that we determined would be carefully incorporated into our faculty evaluation process at the high school level. Part of this decision was driven by my reading and research on teacher evaluations and their relative uselessness where instructional improvement and student learning are concerned. Case in point: the Department of Education recently released data showing that 96.8 percent of teachers and 93.8 percent of principals evaluated received satisfactory or “proficient” ratings. While most teachers and principals across the country received a “satisfactory” rating from their state, officials, including the Secretary of Education, say that this means there is something wrong with the evaluation system used to rate them. One spokesman said, “It is very difficult for me to rationalize how a state can have virtually 100 percent of educators evaluated as satisfactory when, based on the statewide assessment, one-in-four students are scoring below proficient in reading, and one-in-three are scoring below proficient in math.” What is more disturbing, based on the National Assessment of Educational Progress (NAEP), more than half of our 4th and 8th grade students are scoring below proficient in math and reading. I believe these results are a clear indication that our current evaluation system is in major need of change.

Herein lies the problem: across much of the United States, the system of teacher evaluation is old and outdated, and it does not assess or evaluate teachers in a way that truly promotes better instruction and improved student learning.

Teacher ratings are most commonly associated with student evaluations at the college or university level. Student evaluations can be used in both formative and summative systems (Peterson, 2000). That distinction is critically important because the two goals require different techniques and personnel. Student evaluations are formative when their purpose is to help teachers improve and enhance their skills. This seems to work especially well when used during a semester to determine what practices are working well and which are not, to pinpoint needed changes, and to guide those changes. Student evaluations are summative when they are used to assess the overall effectiveness of an instructor, particularly for tenure and/or promotion decisions.

The use of student ratings in assessing teacher performance has received considerable attention in the literature for many years. Student ratings began in the 1920s, when Harvard students published assessments of their professors’ effectiveness. The first published form for collecting student ratings, the Purdue Rating Scale of Instruction, was released in 1926.

Important, useful, and reliable data about teacher performance can be obtained through student feedback. Students are good sources of information because they are the direct recipients of instruction, have closely and recently observed a number of teachers, know their own experience of the classroom (however subjective), and benefit directly from good teaching.

According to Peterson (2000), “seventy years of empirical research on teacher evaluation shows that current practices do not improve teachers or accurately tell what happens in classrooms. Administrator reports do not increase good teachers’ confidence or reassure the public about teacher quality” (p. 18). Peterson (2000) goes on to assert that teacher evaluation as presently practiced does not identify innovative teaching so that it can be adopted by other teachers. Despite these obvious and long-standing problems, many schools continue to rely on principal reports.

Common sense suggests that the most effective form of student evaluation for formative purposes would combine ongoing assessment with teacher response over the course of a semester or year. Several studies have explored the impact of student feedback with consultation on teacher performance, student attitudes, and student learning. For instance, two meta-analyses, one by Cohen and one by L’Hommedieu, Menges, and Brinko (1990), indicate that teachers who received mid-term student ratings feedback along with peer or administrative consultation showed significant improvement in teaching effectiveness. In a more recent study (Hampton & Reiser, 2004), final student ratings revealed significant differences in favor of the assessment/feedback/assessment model on teaching practices, ratings of teaching effectiveness, and student motivation. Similarly, another study found that feedback with consultation produced statistically significant changes in the overall effectiveness of instructors.

Research also shows that students of teachers who received feedback and consultation demonstrated more positive attitudes than students whose teachers did not (Hampton & Reiser, 2004). Teachers receiving student feedback and consultation earned higher ratings from their students on how interesting their subject area was. In another study at a large university, which examined the ratings of 263 teachers, the treatment groups showed significant differences in students’ personal interest in their courses. Furthermore, teachers in the feedback-and-consultation group were rated higher on the overall value of the course.

Today, student evaluation is being promoted by the Measures of Effective Teaching (MET) Project, funded by the Bill & Melinda Gates Foundation and led by more than a dozen organizations, including Dartmouth, Harvard, Stanford, the University of Chicago, the University of Michigan, the University of Virginia, the University of Washington, the Educational Testing Service, the RAND Corporation, the National Math and Science Initiative, the New Teacher Center, Cambridge Education, Teachscape, Westat, and the Danielson Group.

Partnering with nearly 3,000 volunteer teachers in six school districts around the country, the MET Project is based on three simple premises:

1. when feasible, an evaluation should include students’ achievement gains,

2. any additional components of the evaluation (e.g., classroom observations, student feedback) should be demonstrably related to student achievement gains, and

3. most importantly, the measure should include feedback on specific practices that can support professional development.

The MET Project was launched in 2009, and its preliminary findings stated:

any measure of teacher effectiveness should support the continued growth of teachers, by providing actionable data on specific strengths and weaknesses. Even if value-added measures are valid measures of a teacher’s impact on student learning, they provide little guidance to teachers (or their supervisors) on what they need to do to improve. Therefore, the goal is to identify a package of measures, including student feedback and classroom observations, which would not only help identify effective teaching, but also point all teachers to the areas where they need to become more effective teachers themselves. (Bill & Melinda Gates Foundation, 2011, p. 5)

Students in the MET classrooms were asked to report their perceptions of the classroom instructional environment. The Tripod survey, developed by Harvard researcher Ron Ferguson and administered by Cambridge Education, assesses the extent to which students experience the classroom environment as engaging, demanding, and supportive of their intellectual growth. The survey asks students in each of the MET classrooms whether they agree or disagree with a variety of statements, including “My teacher knows when the class understands, and when we do not”; “My teacher has several good ways to explain each topic that we cover in this class”; and “When I turn in my work, my teacher gives me useful feedback that helps me improve.”

The goal is for students to give feedback on specific aspects of a teacher’s practice, so that teachers can improve their use of class time, the quality of the comments they give on homework, their pedagogical practices, and their relationships with their students.
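To make the idea of item-level feedback concrete, here is a minimal illustrative sketch (in Python) of how agree/disagree responses to statements like those above might be tallied into per-item averages that a teacher could act on. The item names, the 1-5 agreement scale, and the aggregation shown are hypothetical stand-ins for illustration only; they are not the actual Tripod instrument or its scoring method.

```python
# Illustrative sketch only: tallying Likert-style survey responses into
# per-item averages for a single teacher. The item names and the 1-5
# agreement scale are hypothetical stand-ins, not the actual Tripod
# instrument or its scoring method.
from collections import defaultdict

# Each student response maps a survey statement to a rating:
# 1 = strongly disagree ... 5 = strongly agree
responses = [
    {"knows_when_we_understand": 4, "explains_several_ways": 5, "gives_useful_feedback": 3},
    {"knows_when_we_understand": 5, "explains_several_ways": 4, "gives_useful_feedback": 4},
    {"knows_when_we_understand": 3, "explains_several_ways": 4, "gives_useful_feedback": 2},
]

def summarize(responses):
    """Average the ratings for each survey item across all students."""
    ratings = defaultdict(list)
    for response in responses:
        for item, score in response.items():
            ratings[item].append(score)
    return {item: round(sum(scores) / len(scores), 2) for item, scores in ratings.items()}

for item, average in sorted(summarize(responses).items()):
    print(f"{item}: {average}")
# explains_several_ways: 4.33
# gives_useful_feedback: 3.0
# knows_when_we_understand: 4.0
```

Reported this way, a teacher can see at a glance which specific practices students rate highest and lowest, which is the kind of actionable, practice-level feedback described above.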

Despite the work of the MET Project, the vast majority of the research on student evaluations has been done at the college and university level, and even there, research on the impact of midterm feedback to instructors is almost nonexistent (Mertler, 1996). In an exhaustive literature review of these studies, Finley and Crawley (1993) found that about 80% of studies concern higher education. Less research has been done at the high school level (Peterson, 2000; Smith & Brown, 1976; Traugh & Duell, 1980), and even less real application of this method occurs in high schools (Levin, 1979). Hanna, Hoyt, and Aubrecht (1983) stated that student evaluations at the high school level have been largely neglected. That is why initiatives like the MET Project, and this study, are critical to research involving high school students.

Teacher evaluation is an integral component of a teacher’s professional career. Nevo noted that evaluations are usually perceived as a means to control, motivate, and hold teachers accountable, including firing them for poor performance. He also concluded that evaluations have the reputation of being harmful rather than helpful to teachers.

Current evaluation methods are seriously flawed. The system often relies on untrained evaluators who lack the time, expertise, and resources needed to accomplish the task. Most current teacher evaluations serve only a summative function and thus have little effect on professional development. Many researchers recommend methods that provide better feedback to meet this formative function.

Student evaluations are not the only basis for instructional improvement, but they are a cost-effective, readily available technique that provides a unique perspective: that of the education consumer. As Cashin noted, “… extensive review of literature indicates that in general student ratings tend to be statistically reliable, valid, relatively free of bias, and useful, probably more so than any other form of data used for teacher evaluations.” Therefore, when properly constructed and administered, student ratings can provide valid and reliable data for both formative and summative purposes.

Teachers exposed to student feedback should understand how it can provide a valuable and useful review of their present practices, and a basis for modifying those practices to improve instruction.