The Only Way to Evaluate Professional Development
This year at the elementary level, the majority of our professional development efforts have focused on Writing Workshop, with the idea that next school year all teachers (and students) will be able to hit the ground running with this method of instruction.
With these thoughts in mind, I regularly ask myself, “How will we know if this year’s Writing Workshop professional development has been successful?” (keeping in mind that the learning will continue throughout the upcoming years).
In Evaluating Professional Development, Tom Guskey presents five increasingly sophisticated levels for evaluating professional development: participants’ reactions to the professional development; how much participants learned; organizational support and change; how participants use their new knowledge and skills; and improvements in student learning.
The levels in this model for evaluating professional development are hierarchically arranged from simple to more complex. With each succeeding level, the process of gathering information is likely to require increased time and resources. More importantly, each higher level builds on the ones that come before. In other words, success at one level is necessary for success at the levels that follow.
Implications for Our Work
This book reaffirms what I have suspected for a while: the majority of schools and districts gauge the effectiveness of professional development with nothing more than teacher surveys. As Guskey informs the reader, “Sadly, the bulk of professional development today is evaluated only at Level 1 [participant reaction], if at all. Of the rest, the majority stop at Level 2 [participant learning].”
With regard to our Writing Workshop professional development (and any other form of PD), the “end game” is the impact we have on student learning. So, we should be able to indicate the different forms of evidence, both quantitative and qualitative, that will be used to determine whether student learning has improved as a result of Writing Workshop implementation. Some of these indicators may include: an analysis of student folders/notebooks, contrasts in on-demand writing prompts that function as pre- and post-assessments, students’ abilities to self-assess their work/progress, teacher anecdotal notes from student conferences, student and teacher “interviews,” and classroom observations/walkthroughs.
Ideally, an initiative’s look-fors should be determined before any professional development takes place, as we should always be planning with our intended goals and student learning outcomes in mind. Admittedly, for various reasons, this was not the path I followed with Writing Workshop… For the next topic on which we focus, I look forward to establishing the goals and outcomes proactively and with as many stakeholders as possible involved. At the same time, I have to keep in mind that we can’t consider only student learning, because “success at one level [of professional development evaluation] is necessary for success at the levels that follow.”
In the End
First, Evaluating Professional Development is a book I can highly recommend. Although it’s not overly “exciting,” I can confidently say all professional development I facilitate will now be more strategically planned and evaluated as a result of reading and highlighting my way through the book.
Second, while Writing Workshop has been cited as an example, the driving purpose of this post is to convey the message…
Fixating on surveys and participant reaction to evaluate the effectiveness of professional development is comparable to assessing classroom instruction based on nothing more than student engagement…
It’s all about student learning.
As John Hattie declares, “…we need to turn away from finding the ‘thing’ – the program, the resource, the teaching method, or the structure. When we become the ‘evaluators of our impact’, then we have the basis for the greatest single improvement in our schools.”