Literacy Coaches: How do you assess your impact?


A few months ago I received an email from a curriculum director in a district where I sometimes do a bit of consulting: Did I have any data regarding my impact in schools? Did I know where I could find such data? The district needed to prove my impact in order to get the grant funding it was hoping for.

This email sparked a new area of inquiry for me: How do I measure my impact? How do I know if I’ve been effective as a coach?

I still consider myself at the beginning of this inquiry, but through trial and error, I’ve already learned a few things.

GOALS

Assessing my impact this school year would have been much easier if I had set clearer goals for my work from the beginning. Much of the work I’ve done this year was based on teachers’ personal professional goals, which was a good start, I think. In the future, however, I plan to work with each school community to set school-wide and system-wide goals that are just as explicit and clearly written as individual teacher goals.

This year, and in years past, educators and I spoke often about our shared goals for literacy instruction, but they weren’t written down anywhere, and there wasn’t a clear plan from the start for how to assess progress toward those goals, other than occasional surveys and evaluations. Next year I plan to make some changes.

STUDENT-CENTERED DATA

As a coach, I’m focused on whole schools, as well as individual classrooms and students. One of my plans for next year is to collect student data that demonstrates the growth of literacy instruction across each school. For example, an engagement inventory, like the one in Jen Serravallo’s book, is a simple tool I can use as a coach to observe entire classrooms at work and quickly assess how we’re doing with engagement on a whole-school level.

I often use checklists to help me assess how we’re doing in different areas of the classroom environment: strategy charts, writing centers, student writing on display, seating arrangements, etc. These checklists contain valuable school-wide data that could represent growth in different areas over time. We also use checklists and rubrics for looking at student writing. This data gives us another way to look across the whole school and assess how student work is coming along.
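
To make that idea concrete, here is a minimal sketch of how individual classroom checklists could roll up into a school-wide snapshot. The checklist items and observations below are hypothetical stand-ins for my own notes, not an actual tool I use:

```python
# Minimal sketch with hypothetical data: roll individual classroom
# checklists up into a school-wide snapshot. Each dict records whether
# an item was observed in one classroom; comparing snapshots across the
# year could show growth.
fall_checklists = [
    {"strategy charts": True,  "writing center": False, "student writing on display": True},
    {"strategy charts": False, "writing center": False, "student writing on display": True},
    {"strategy charts": True,  "writing center": True,  "student writing on display": False},
]

def school_wide(checklists):
    """Percentage of classrooms where each checklist item was observed."""
    items = checklists[0].keys()
    return {item: 100 * sum(c[item] for c in checklists) / len(checklists)
            for item in items}

for item, pct in school_wide(fall_checklists).items():
    print(f"{item}: {pct:.0f}% of classrooms")
```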

I also keep notes on all my classroom visits. These notes are mainly to help me remember each visit from week to week, but I can also look back through them for patterns and signs of growth.

I might also use photographs to document growth over time. I often wish I had before-and-after photos of the classrooms I work in. Photos could document classroom libraries, writing centers, charts, students at work, and more.

SURVEY DATA

At the start of this year, I began using Google Forms to gather feedback after each workshop and course I taught, and it has turned out to be invaluable. With all the responses automatically feeding into a spreadsheet, I can keep better track of the information, sort it in different ways, and use different tools to find key words, patterns, and trends in the responses.

What I liked about this year’s surveys was that three questions were always the same: 1) How did it go? 2) What could have been better? 3) What will the impact of this day/week/course be on your classroom? Asking those three questions consistently allowed me to scan all the responses from all the workshops for patterns and common themes. For example, people nearly always loved participating in classroom labsite demonstrations, but were less unanimous about having time to plan with same-grade-level colleagues.
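
For readers curious how that scanning can work in practice, here is a minimal sketch of one way to tally the most frequent words in the open-ended responses, assuming the Google Forms responses have been exported as a CSV. The file name and column headers are hypothetical; they would need to match the actual form:

```python
# Minimal sketch: count the most frequent meaningful words in each
# open-ended survey question, reading a CSV exported from Google Forms.
# "workshop_feedback.csv" and the question headers are hypothetical.
import csv
from collections import Counter

# Words too common to reveal anything about the feedback itself
STOP_WORDS = {"the", "a", "an", "and", "to", "of", "i", "it", "was",
              "is", "in", "for", "that", "my", "this", "with", "on"}

def top_words(rows, column, n=10):
    """Return the n most frequent meaningful words in one question's responses."""
    counts = Counter()
    for row in rows:
        for word in row.get(column, "").lower().split():
            word = word.strip(".,!?;:\"'()")
            if word and word not in STOP_WORDS:
                counts[word] += 1
    return counts.most_common(n)

with open("workshop_feedback.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for question in ("How did it go?",
                 "What could have been better?",
                 "What will the impact of this course be on your classroom?"):
    print(question)
    for word, count in top_words(rows, question):
        print(f"  {word}: {count}")
```

Even a rough word count like this surfaces recurring themes (like “labsite”) that would be easy to miss reading responses one at a time.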

I’ve also stopped asking participants to simply “rate the day” on a scale of 1-5. What I’ve found, after a decade of doing this work, is that the scores are almost always 4s and 5s, and I beat myself up over not getting 100% 5s. Maybe this sort of question is helpful at a larger conference with many presenters, but it isn’t so helpful for me individually. The open-ended feedback is far more revealing and useful, and I find that when a survey includes a scale, people tend to give less written feedback; they feel they’ve already rated the day, so why go to the trouble of also writing about it?

QUANTITATIVE DATA

Which brings me back to that curriculum director and his request for data. It would be wonderful if I could simply share all the survey feedback from across the years. In fact, I have done that in the past, and I hope it helps with grant applications. However, I know that many grant committees are looking not only for written recommendations and teacher evaluations of my work, but also for objective, quantitative, “hard data” to support the work.

I need not explain here why standardized test scores are problematic as a measure of success in literacy instruction. For a million complicated reasons, standardized tests are not great indicators of children’s authentic growth as readers and writers, and they tend to be unreliable evidence of the quality of teaching students received in any given year, which is the thing I’m most concerned with.

Yet states and other institutions are looking for scientific evidence that my coaching is having an impact, which leaves me with a challenge. No scientists will be researching me specifically any time soon, so I have begun to search for studies of the practices I support in schools, studies of the model of professional learning I provide, and studies of other literacy coaches like me.

ONGOING INQUIRY

This is my inquiry project for the summer and beyond: finding authentic ways to assess my impact in schools. I’m already thinking ahead to next year and the years to come.

Are you a literacy coach? How do you assess your impact? Share your thoughts below.