Meeting Minutes Session #1 (9/1/11)
I. Group Norms:
Start and Stop on Time
Confidentiality (don’t attribute things to any one person—okay to talk broadly about the
discussions)
One person speaks at a time
Cellphones/electronics: put on vibrate and use for emergency only (or on breaks). Computers
should be used only to support the “work” of the training session.
Be respectful
Stay on Topic
Don’t monopolize the time
Be prepared, do your homework
Be on time from lunch and other breaks.
II. Hopes and Fears:
Hopes:
o Change the “status quo” and get some goals for us to work on.
o That the NBTC will be an opportunity for conversations that will increase trust and
improve the system so that it can offer supports for teachers.
o Find out what we are looking for when we say “effective” teaching.
o The evaluation process that we revise will be used, enforced and meaningful.
o That we will have good formative assessments that we can use.
o Evaluation systems account for differences in students (high‐need, EL, etc.)
o Good Administrator training for those performing the evaluations.
o Good time lines (that are followed)
o Assessments CAN measure growth. We need multiple measures.
o Evaluations lead to improved teaching that can increase student achievement and
narrow gaps.
o Identify places where we can improve and provide student support.
o Will be a self‐reflective environment
o Realistic evaluations.
Fears:
o Will misuse standardized test data.
o Will be a barrier between Administrators and Teachers.
o Lack of resources to implement a new system
o Will scare new teachers away from entering or remaining in the field. We need to retain
new teachers.
III. Summary of Key Takeaways from the Internal Scan Discussion
A mentoring and coaching model is an area that we need to work on and expand.
How can the evaluation system support the hiring process?
Teacher evaluation is outdated and not effective.
We need to develop trust and understanding around how data is collected and used before we can think about incorporating it into an evaluation system.
We are doing a lot of good work in the instructional support area, but it is not aligned to the
evaluation systems.
HR systems are perceived as fair.
The current support system is reactive: We have some proof points where it is working but this is an aspect of reform that will be more long‐term than redesigning the evaluation system.
The group found the internal scan tool useful for the discussion and by and large the results
seemed to reflect most districts. It also allowed groups to identify gaps.
We need incentives for teachers, trust, and shared decision‐making.
We had a good discussion around the history of supports/evaluation/hr processes.
Slide #12—surprise that the majority of respondents reported consequences for poor
performers. We are not doing that.
Slide #33—Student needs aren’t driving placement decisions.
We have a lot of the tools in place but there is inconsistency in how they are being used/
understood.
We need to figure out how to link professional development to needs that are identified in the evaluation process. We do a lot of PD for big groups but it is not individualized. Evaluation should be
connected to continuous improvement.
Came up with a dream plan for our district:
o Trust and mentoring
o Shifting the culture of the district—inspiring reflection, strong morale, a belief in the
system, and engagement in one’s own growth
o Need agreement on what is effective
o Need on‐going team evaluations with team mentoring.
Need to do a better job of mentoring the new teachers and finding ways for teams to help each
other.
We need a system that continues to support good teachers in becoming great.
We need a broader measure of effectiveness, and we need to identify the metrics for
effectiveness. We don’t want it to be a conversation of he said/she said.
We want to use formative on‐going data. Not post‐mortem data. We need to use data that is
timely enough that you can still do something about it.
We want to establish a culture of trust, so that colleagues can coach/mentor each other.
IV. Key Takeaways from the Article Review of “The Widget Effect”
1. What is the author’s point of view and/or theory of action?
- We can make evaluations more meaningful by linking to student learning data and
connecting to professional development.
- Good teachers will improve student learning. Systems that help improve the quality of
teaching will improve student learning.
- If we do a better job at distinguishing levels of performance, we can take the necessary
steps to improve performance through differentiated professional development.
2. What evidence does the author present to support his/her point of view and theory of action?
- Solutions don’t have evidence. The problem has evidence.
- Remediation and dismissal are the only things being done with the evaluation.
- Poor performers in CA who want out of the system don’t have a way out. Exit strategies
are needed.
- Moral imperative to be rigorous. The implications are severe. Effective teachers can make a
real impact.
- The article demonstrates that we don’t focus on adequate teachers and make them great. The
current system focuses only on poor performers, who make up a small proportion of the
workforce.
- Evaluation systems tend to give you an overall grade and don’t give you specific things to
work on.
3. What, in your judgment, are the most important ideas in the article?
‐ We need to be careful of the “blame game.”
- The first thing you need to do is a needs assessment, in which the needs for
achieving the goals of teaching and learning are identified.
- Should be a culture of building on the strengths and improving the weaknesses. If we just
focus on the evaluation we might miss out on that opportunity.
- I want to improve; how do I do that without getting a “gotcha”?
- There are different types of “good” teachers. Some may be rigid; some may be more flexible.
The concept of what you think is the best teacher may not be true.
- There still needs to be standards. I want to know when I am being evaluated, what are they
looking for. There needs to be a method.
- The values are the same, but the methods are different. And we play to our strengths, but
we have to be able to trust the other methods.
- You can be effective in different ways.
- Teaching is an art form. A purely scientific approach to deciding what makes an effective teacher will not
work. We need to be careful to not have a “one‐size‐fits‐all” approach.
- Part of evaluation has to be identifying the range of strategies that effective teachers
employ. There’s a part of it that is not art. Good teachers should have a set of tools to use.
- If I am using strategies that are not on the list, it should be the responsibility of the
evaluator to look at them with an open mind.
- It is critical that administrators be trained on how to do this. We need to transition slowly
with a lot of support for a long time. We cannot lose sight of this.
- We recommend that districts focus on both teacher and principal/site administrator
effectiveness.
V. Summary of Key Takeaways from the Article Review Discussions
Common Themes (All Articles):
Status Quo is not working
Differentiation
Firing teachers is not going to solve the problem—need to improve the quality of teachers
More frequent observations
Regular feedback so the evaluation isn’t “post‐mortem”
Multiple measures (sources of information), and multiple models for evaluation (e.g.
observation, student data, artifacts, etc.)
Focus on student learning (growth) not student achievement
Student data, if used, should be used over a period of time (multiple years)
Comprehensive understanding of the process (training for administrators and teachers to
understand)
Collaboration
Support side: follow through, professional learning, coaching/mentoring
Escape route for teachers who want to leave.
Build in time to train evaluators and transition smoothly to a new system
Alexander Valley: The New Teacher Project: Evaluation 2.0
Teacher evaluation as it is now is not adequate, doesn’t fulfill the mission of improving student
achievement.
New evidence? No
Main points of the article:
o Evaluation should be annual
o Clear rigorous expectations
o Multiple measures
o Multiple rating levels
o Regular feedback during the process
o Frequent observation
o Significant impact on careers/consequence
Sets the standards high. It is a long‐term project; how would you get there? A district would have
to move slowly and will need a long period of time, 3‐4 years, to get there.
Authors identified pitfalls for each of the six main points. It was nice to see balance for the
recommendations.
Their recommendations would probably engender quite a bit of resistance. Represents a
paradigm shift. People have anxiety about this. We need to understand where they are coming
from, and work together with them.
Bellevue: The New Teacher Project: Rating a Teacher Observation Tool
They recommend that districts ask themselves 5 questions when rating an evaluation tool.
You can use these 5 questions to see if you are meeting these criteria.
The group liked that there was a question regarding high performance expectations: “Are
the performance expectations clear and precise and concise enough?”
Bennett Valley: NTCTQ: A Practical Guide for Teacher Evaluations
Good analysis of different evaluation tools. Gives a good description of each (opportunities and
challenges).
No one measure should be used. A combination should be used.
Well researched.
Great recommendations
Mark West: Incompetent Teachers or Dysfunctional Systems (WestEd)
Dismissal model is not going to help our system.
Problems with system:
o Mis-assignments
o Teachers working in high‐poverty schools judged the same as those in low‐need
schools
o No reciprocal accountability
Old Adobe: NEA: Teacher Evaluation Systems the Window for Opportunity and Reform
Hopeful article highlighting other places (states) that are doing innovative work in the area of
evaluation (e.g. TAPP, Pro Comp, Peer Assistance and Review, etc.)
Common themes from this innovative work:
o Involve multiple stakeholders
o Credible measures (multiple measures)
o Higher frequency
o Linked to embedded professional development
Ways to link evaluation to pay and student achievement data: need to evaluate over time with a higher frequency of observations.
Petaluma City Schools: Kim Marshall: It’s Time to Rethink Teacher Supervision and Evaluation
Teacher evaluation is ineffective and a poor use of teachers’ time
Teacher observations make up a tiny fraction (about one part in 1,000) of the time teachers are with
their kids.
Recommendation: Informal reviews and evaluations more frequently, and formally in depth
every few years.
Observations should be frequent and shorter—an on‐going dialogue, so that you are sharing
back and forth.
Get teachers to focus on 3‐4 things per year.
Piner‐Olivet: Hamilton Project: Identifying Effective Teachers Using Performance on the Job
Teacher certification is not a predictor of teacher effectiveness.
Certification itself doesn’t make a difference and they provided evidence for it (graphs)
Recommendation #3: Offer bonuses for teachers who choose to teach in the lower‐performing
schools.
For any of these evaluation systems to succeed, it needs to be fair.
The first two recommendations have to do with certification. They recommend lowering the bar
for those who enter the profession.
Upcoming teacher shortage? We have never seen one in Sonoma County.
#2: Make it harder to tenure teachers
#3: Evaluate people with various measures (objective and subjective), including student
achievement and teacher practice.
#4: Look at this over time, not just how you are doing this year.
Santa Rosa City Schools: Student Achievement Taskforce—NBPTS/Improve Evaluation
Defining terms: Student Achievement versus student learning. Student achievement is the
status of knowledge at one point in time. Student learning is the growth of student achievement
over a period of time. It is student learning not achievement that should be used to evaluate the
effectiveness of a teacher.
Critique of using standardized tests in evaluations.
Make sure the assessment is not just year‐to‐year.
Validity of tests (one size fits all).
Data systems that track students and link to teachers.
Skimpy on actual evidence.
SCOE (Alt Education): Policy 2.0 Hope Street Group
ARRA set the stage for meaningful reform
That might explain why there is a shortage of data
14 states evaluate once a year, 12 have state‐developed instruments, and in 22 states the state doesn’t have anything to do with it.
SCOE (Spec Ed): AFT: Randi Weingarten
Evaluation systems are not working
You can’t fire your way to having good teachers
This is why we have resisted systems that are just sorting good from bad (or different from what someone wants)
Abundant research that people improve over time and with support.
Can you imagine if they did this to doctors?
Due process takes too long to remove ineffective teachers.
Comes up with a list of proposals to help this process:
o Two pages of different ideas and suggestions. Among them:
Bar for teachers—peer review
Escape route for teachers, so they are not punished for trying the profession.
Administrators should be teachers
Due process aligned to the evaluation system
Sonoma Valley: PACE article on Value‐Added Measures
Pro value‐added measures, but cautions about how to use them.
The benefit of the value‐added approach is that it takes into account some of the factors, like socioeconomic
status. It is the best of the three measurement tools out there and preferable
to the other two in use.
Works better at the program level.
Very complex. If a district or state thinks it can easily implement this model, it is mistaken.
Nervous about how this should be used with individuals.
No data yet that it is going to work in evaluations
Windsor Unified: Bill and Melinda Gates Foundation: Empowering Effective Teachers
Conditions to support implementation
o Good data systems
o Stakeholder engagement
o Financial sustainability
Recommendations:
o Most effective teachers get paid more money
o More rigor in deciding tenure.
o Differentiated compensation and career pathways.
Who’s not represented here: the middle school (for our district). We are the hard chargers; we can’t get too far ahead.
VI. Key Characteristics of an Effective Evaluation System:
Common Themes (all districts):
Differentiated
Clear expectations, purpose
Collaborative (includes peers)
Standards‐based
There is a similar process for teachers and administrators
Training for administrators/evaluators
Resources for the new system
Rubric scoring
Emphasizes “realistic” goal setting for teachers
Includes objective criteria
Frequent observations that lead to an on‐going dialogue
Time for reflection and feedback
Staff and administrators are bought into the process
Clear, measurable, agreed‐upon goals that are understood by all.
Respectful/collegial process
Alexander Valley:
A process where administrators and teachers together review all steps and agree jointly on the focuses of the year
Spiral topics/focuses
Differentiated* for the range of career stages staff are in
Multiple informal observations*
Digital data collection
Agreed upon norms
Bellevue:
Clear expectations*
Commonality in practice
Collaborative*
Growth model for teachers
Standards‐based*
Supportive of change
Similar processes for administrators*
Training for admin*
A visible process
Supported with resources*
Reflective
Meaningful to professional growth
Teacher‐centered: teachers take responsibility for monitoring growth
Process is also evaluated
Student learning piece (currently undefined) but may include: (a) growth model; (b) multiple measures; (c) longitudinal data; (d) quantifiable; (e) meaningful
Bennett Valley
Rubric scoring
Recognition that is meaningful to teachers
Includes suggestions for improvement and provides opportunities and links to professional development
Emphasizes goal‐setting*
Uses multiple measures (not just observation).
Mark West
Informative not punitive
Objective criteria*
Growth focused on teachers and students
Mentoring/coaching peer model
Everyone accountable
Teacher‐selected goal setting*
On‐going/more frequent*
Old Adobe
Consistent
Collaborative
Standards‐based
Time for reflection and feedback*
Links to resources (e.g. mentors, peers, materials)
Petaluma City Schools:
Constructive
Differentiated*
Collaborative*
Self‐evaluation/reflection
Multiple observations (peer and admin)
Measures effectiveness
Promotes on‐going dialogue
Santa Rosa
Buy‐in by admin and teachers*
Improves student learning*
Clear measurable goals understood by all*
Method of assessing progress toward goals
Sonoma Valley:
Clearly defined purpose and process*
Respectful process
Predictable
Collegial*
Constructive
SCOE (alt ed)
Have more opportunities for feedback
Tied to school goals
Inquiry‐cycle design
SCOE (special ed)
Clear/transparent
Increased morale
Includes realistic/practical goal‐setting
Range of rubric scoring* to reflect teachers’ learning process
Includes some sort of measurement of student learning
Support to meet goals
Aligns with curriculum
Windsor
Stakeholders would buy in
Meaningful
Clear purpose
Aligns with the district’s mission
Carries over into the off‐cycle years
Specific, precise, continual self-reflection.
Help from administration
Open communication, reciprocal
Mentoring throughout the district
Fill in gaps of teachers who don’t have “history” with the district
Connect evaluation to something constant/systematic and on‐going
Honors teacher creativity and administration, while supporting student learning
How does your project impact your class?
VII. Meeting Evaluation
Pluses:
o Paperless worked well.
o Chance to come together—text‐based discussions, and listen to different perspectives
o Key characteristics activity
o Being in a group with our districts
o Variety of the professional articles
o Summaries were very nice
o We didn’t have to read all of them
o Jigsaw Approach
o Good to see county and district trends/ideas/perspectives
“Deltas”:
o Want to hear what is driving the teacher evaluation trend at the national level