and a teacher was interviewed. The administrator who was interviewed has extensive experience in teaching and counseling and is currently a first-year assistant principal; this is also her first year using the Tulsa Model of observations. The teacher who was interviewed is a fourth-year teacher who serves as a 6th-12th grade Band Director.
Question 1
The first question asked whether the teacher evaluation instrument captures the important aspects of a teacher's role. When I asked the administrator, she said that she believes the model evaluates all aspects of a teacher's work; if the tool is used correctly, she added, the instrument looks for a wide range of tasks. The teacher, by contrast, believed that the instrument did not take into consideration many unseen aspects of teaching. For example, he mentioned that the evaluation does not monitor roles such as classroom preparation and mentoring moments with students. Comparing the responses, there is a clear difference in how effective each believes the tool is at monitoring all aspects of a teacher's role. The teacher focuses only on the few aspects he believes are not analyzed, while the administrator looks at the tool and sees that every function of a teacher can fit into one of the 20 dimensions. After analyzing the responses, it seems that the classroom teacher is upset less with the instrument itself than with how the evaluator is using it.
Question 2
The next question regarded the expected outcome of the evaluation process. From the administrator's perspective, the evaluation process provides a platform for dialogue that will increase the effectiveness of teachers; from there, a conversation about improvement can be established. When the teacher was asked this question, he responded that he believed it was a way for the administration to monitor teachers. The teacher's attitude was much more negative when responding; the model did not seem like a positive experience to him and felt more like a pass-or-fail situation. Comparing their responses, there is a fundamental difference in how each interprets the function of the model. The assistant principal believed the system is used to create a dialogue in which teachers can be commended for positive behaviors and work on areas that are less effective. The teacher saw the model as a pass-fail situation and did not believe it represented a teacher's ability to educate, since the evaluations were only 20 minutes long.
Question 3
Next, both interviewees were asked whether the evaluation instrument and process are used differently for teachers with different levels of experience. The principal said that she believes the evaluation instrument is used differently based on experience. Since a teacher with more experience should make fewer mistakes, the tool can be used to improve more specific items. An experienced teacher will have higher scores in many domains, which allows that teacher to focus on only the few lower scores. A newer teacher, on the other hand, may receive lower scores in many domains, and it is difficult for a new teacher to address every domain at the same time. The teacher said that the tool is absolutely used differently based on experience, but his reasons were very different from the administrator's. Rather than pointing to differing uses of the tool, he believes that administrators do not effectively evaluate teachers who are more experienced; a personal connection between a teacher and an administrator may skew the results. The teacher gave multiple examples of colleagues he knew who were never fully evaluated. Examples included a "pat on the back, good job and left after five minutes." The resulting analysis is that the administrator sees the importance of utilizing the evaluation tool in diverse ways. By treating experience as a factor in how the tool is used, the evaluation process can almost be customized to effectively monitor all teachers.
Question 4
The fourth question asked whether there is room for improvement in the evaluation tool.
The assistant principal believed the evaluation tool has been crafted very effectively. Looking at the tool, she believed that every aspect of effective teaching is included. Her concern was not with the tool itself but with how it is used; specifically, her concerns were founded in the training administrators receive to utilize the tool. The teacher believed the evaluation tool was ineffective at giving feedback. His concerns came from past experiences: he described receiving the same comments year after year, with feedback never adjusted based on his performance. He also gave an example of being sent an observation report after talking to the evaluator in the hallway for only a brief time. Had the evaluator properly referenced past evaluations, the teacher would not have felt that the process lacked effectiveness; the evaluator should have altered the comments in a way that reflected progress in the teacher's effectiveness.
Question 5
The final question regarded when it is appropriate to administer additional support and remediation. The assistant principal said that before she takes any official steps, she seeks out the advice of other principals. Since there are, fortunately, four other principals, she consults them first. She does so because any improvement plan stays on the teacher's record, and the right decision must be made when the outcome follows a person. From there, the principal references the district contract for the regulations and procedures for implementing an improvement plan. On the other side, the teacher said he had no idea of the process by which a teacher would be placed on an improvement plan. He said he had never been addressed about improvement plans. I followed this response by mentioning that he may check his certified contract for details. The teacher seemed very opposed to the idea of being put on an improvement plan; in his eyes, improvement plans seem negative and do not help a teacher improve.
When analyzing the evaluation instrument, I discovered that many domains are evaluated. There is a wide array of dimensions addressed in the Tulsa Model; if the tool is used correctly, an evaluator can look at aspects of curriculum, professionalism, teacher effectiveness, and other domains. Still, there seems to be a disconnect between evaluators and the people they are evaluating. During these interviews and my study of the Tulsa Model, I have learned that it is imperative for an educational leader to have mastery of the evaluation tool. ELCC 1.2 states that "Candidates understand and can collect and use data to identify school goals, assess organizational effectiveness, and implement plans to achieve school goals." Assessing organizational effectiveness and implementing plans to achieve school goals relate directly to my research on the Tulsa Model. As an education leader, I will be responsible for monitoring teacher effectiveness. If I am unable to utilize the evaluation tool, I will not be able to successfully monitor that effectiveness. Not only is it my responsibility to evaluate the organization's effectiveness, I must also be able to explain the reasoning behind the results. I discovered in the interviews that some teachers believe their concerns are with the model, when in fact the issue is with how the evaluator utilized the model. A clear understanding of the model and the process can help a teacher recognize that unclear results may come from the evaluator rather than from the evaluation tool.