Monthly Archives: March 2012
I’ve finally wrapped up a few bigger tasks in the last few weeks (except for that whole Innovate 2013 conference planning, support for the HS 1:1 roll out, and putting together an accreditation plan), so I’ve been able to make it to classrooms more often. This isn’t about evaluating your work; it’s about evaluating mine. We’re devoting more time to professional learning structures, and they are only as successful as the impact they have on student learning.
You’ve set goals with your administrators that serve to anchor their observations, so I’m using a short, informal structure: a three-minute walk-through focusing only on the learning environment as it aligns to our Professional Growth and Supervision Plan principles. I’m also tracking the integrity of our documentation of student learning in Rubicon Atlas.
In three minutes, I’m building a school-wide body of evidence based on the following questions implicit in our teacher evaluation rubric:
- Do students know what they will (should) learn from engaging in the task?
- Why do I look for this? How can we empower kids to advocate for their learning if they don’t know the opportunities or the criteria for success?
- Is there accountability to high expectations of behavior and engagement in learning?
- Why do I look for this? The one doing the talking, the writing, the modeling, the problem-solving, the lab work (etc.) is the one doing the learning. How can kids construct meaning unless they dig into the work and grapple with ideas?
- What is the level of cognitive demand?
- Why do I look for this? How are we treating kids as thinkers? Are they engaging in tasks that require critical thought and the “higher levels” of Bloom’s taxonomy, or do we expect only lower-order thinking skills, such as simply collecting or recalling information?
- Is what is happening in the classroom aligned to what we say is happening in the classroom?
- Why do I look for this? Can we track the story of a cohort’s learning? Without the story, we cannot assess where (and why) there are strengths and challenges with understanding, determine what to replicate or replace, or hold ourselves and students accountable so we can continue to build a cohesive learning experience.
A two-week snapshot of 22 classrooms
Learning doesn’t happen from an experience; learning happens when one reflects on experience.
It’s been weeks since I returned from ASB-Unplugged, and this is really the first opportunity I’ve had to capture some reflection. One thing I haven’t quite figured out is how to create the space for all of us at Graded to share the learning that emerges from conferences. As a result, I probably owe an apology to those who may have found themselves stuck with me at the lunch table my first few days back. This, however, is a feeble attempt to begin to close the abyss. I owe it to you to share what I learned.
Although I felt like I got to explore a lot, I didn’t walk away inspired to try a new digital tool, to significantly alter structures of professional learning, or to change the expectations I have (we have) for relevant, engaging learning. I didn’t come away with a deeper understanding of technology’s role in learning or of how I can better serve Graded in moving beyond where we are to where we can be. I think we’re on the right track to figuring out solutions to some complex issues. My biggest takeaway is linked to the concepts of benchmarking and trust. I know. Odd.
I think we commonly enter a learning community with a lens of comparison, looking to see where we stand in relation to what others are doing. For those who know me, I’m by nature a bit of a case builder. I land on an idea and filter information to support my conclusions. I’m really trying to grow beyond this instinct. The first few hours of the conference, I found myself thinking… “well, we do that… many of our classrooms look like that… we have that in place…” After a session with Scott McLeod (click here to see our workshop resources), I began to grapple with a whole new idea. In an almost passing remark, he noted the importance of benchmarking not to organizations that match or extend our reach to excellence, but to the organization we WANT to be – and that may mean not having another, specific program to benchmark against, or measurement tools at the ready to evaluate what is valuable to our school. It may mean we need to benchmark to an ideal. This is a much more ambiguous, daunting task than, for example, identifying other international programs that are doing a “good job” and delivering graduates to the doors of ivy leagues.
In mere days, we will begin our accreditation process by first examining our mission and projecting a direction for our school. We will use the outcome to evaluate where we stand and to build steps toward becoming the school we want and can be. Admittedly, I’m curious to see where we land as a community. How aligned are we presently to a shared vision of schooling? Can we embrace a future we cannot define? Will we honestly question our assumptions and collectively commit to building a program that serves children well?
In my 20 months at Graded School, I continue to be surprised by the work. New questions continuously emerge and my learning curve remains steep (just the way I like it). I trust that if we engage in the process with integrity, we’ll land on the benchmarks that will help define Graded in the future.