Rating rubrics, such as the Charlotte Danielson framework, are often the standard for evaluating teacher performance. However, they can also be a source of frustration for both teachers and administrators. A common issue arises from a lack of alignment on specific descriptions within these rubrics. For instance, an administrator may have a different understanding than a teacher of what an “effective” rating looks like in action. This disconnect can leave teachers uncertain about what is expected of them and hinder their professional growth.
Clear expectations are essential. When teachers know exactly what is expected for each rating, they gain clarity on what they are aspiring to achieve. For example, if your school or district defines an “effective” rating as scaffolding for different levels of learners, that expectation must be clearly articulated. If the standard is four distinct student groups engaging in varied activities aligned to the learning target, then teachers should know this is the benchmark; anything less would fall short. Similarly, vague rubric language can create confusion. A word like “most” can mean different things: one administrator may take “most” to mean 75%, while another may interpret it as 51%. The same goes for descriptors like “engaging.” Engagement shouldn’t just mean that students are on task; it should mean they are thinking critically, solving problems, or comprehending deeply. Agreeing on what these terms mean in practice is just as important as agreeing on the teacher moves that demonstrate them.
Conversely, providing low ratings without clear guidance on how to achieve a higher rating can be incredibly frustrating. Teachers may feel demoralized if they receive feedback that lacks actionable steps for improvement. This ambiguity can stifle motivation and impede professional growth. If the goal is to help teachers develop, then expectations for what constitutes an “effective” or “highly effective” rating must be unambiguous. Administrators should be able to walk into a classroom, observe specific teacher actions, and connect them to clearly defined rating levels—so much so that teachers can predict their rating before the administrator leaves the room.
During my time as a principal, we addressed this by taking a collaborative approach. We posted all the rubric components we would use to evaluate teachers and asked teachers to do a gallery walk, adding specific descriptions of actions that would earn an effective or highly effective rating. Administrators then built on this list, ensuring that all voices were represented while refining clarity where needed. We also agreed on definitions for vague terms, so that teachers and administrators alike knew what each rating specifically required. This process not only built trust but also created a shared language for success. It also provided a reference point for feedback: administrators could cite where a teacher fell short or suggest additional moves that could lift their practice.
To foster alignment, administrators should invest time in unpacking what effective ratings look like in practice. Professional development sessions or team meetings should focus not just on evaluation but on clarifying expectations, including what vague terms like “most” or “engagement” truly mean in action. With clear benchmarks, teachers can set tangible goals—for example, moving from differentiating for two groups to four—and request the support needed to get there.
In summary, clarity in expectations empowers teachers to strive for excellence while cultivating a culture of trust and collaboration. By working together to define and refine what effective teaching looks like—down to the meaning of the words in the rubric—we can remove ambiguity, increase motivation, and create a transparent path for teacher growth that ultimately improves student outcomes.