Recently I’ve been really digging into Project-Based Learning. My last three posts have all revolved around this. Often when I talk with people about a shift to more learning that is project-based, inquiry-driven, choice-based, and experiential, I get pushback asking for the research that backs it up. The truth is, there is a lot of support for this type of learning. If you want to do a deep dive into that research, check out this great post from A.J. Juliani on The Research Behind PBL, Genius Hour, and Choice in the Classroom.
If you take the time to read through that post from Juliani, you’ll find research on engagement and achievement, success stories from fellow teachers, ways that PBL connects to standards, and some related reading. I’m thinking about this question of research because two authors I follow both recently shared posts questioning why we continue to do some of the same things in education. We’re so driven to think about what the research says about new practices that sometimes we don’t look at what the research says about the things we’re already doing.
Before I get into that too far, here’s what I have learned: research changes over time, and methods and strategies change with it. Things that were considered “best practice” in the past may not hold up anymore, and sometimes practices we once dismissed become best practice after further study. The other thing I’d say about best practice is that some practices we use are pretty good, but when we learn there are better ones, it might be time to make a shift. What is it that Maya Angelou says?
A recent post from Scott McLeod (here), and then a related post from AJ Juliani (here) both shared a link to this post from The Hechinger Report. As we spend time talking about transformative learning opportunities in our schools, I think the data that The Hechinger Report is sharing should drive us to think more deeply about why we do the things we currently do in education. Let me share some of the key points that stood out to me from this post.
As we all know, the No Child Left Behind Act of 2001 mandated that every student in grades 3 through 8 take an annual test to see who was performing at grade level.
In the years after the law went into effect, the testing and data industries flourished, selling school districts interim assessments to track student progress throughout the year along with flashy data dashboards that translated student achievement into colored circles and red warning flags. Policymakers and advocates said that teachers should study this data to understand how to help students who weren’t doing well.
Anyone who’s in education probably has spent significant amounts of time in the past 20-ish years analyzing student performance on tests. Here in Indiana that might include the IREAD-3 or ILEARN tests. It might also include time spent poring over data from NWEA, or other formative assessment data within your district. So, here’s the question. If these tests are supposed to help us identify the students who need the most support, and help teachers adjust to meet the needs of those students, why do we continue to see the same learning gaps from many of the same demographic groups?
According to Heather Hill, a professor at the Harvard Graduate School of Education, “Studying student data seems to not at all improve student outcomes in most evaluations I’ve seen.” A review of research by Hill (found here) backs this up: of 23 identified student outcomes, most were unaffected by data study; only two showed positive impacts, and in one case the result was negative.
So, if the time spent analyzing student data (something that seems like it would be beneficial and impactful for students) isn’t producing positive outcomes, we must ask: why?
According to these studies, teachers are using various assessments to identify content that they need to return to. Often, they then make plans to revisit those concepts using a combination of whole-group and small-group instruction. But we need to go a step further. We must take that data that’s been collected, along with what we know about kids, to deepen our understanding of how kids learn, identify the reasons behind misconceptions, and then adjust our instructional strategies.
If our strategy to support students on concepts that they are not currently grasping is to re-teach the topic the way we did the first time, hand the student a worksheet, or put the student on a technology-based program to practice, we’re not going to impact student learning. We can’t do the same thing again for a student who is struggling.
That is part of why I am on this path of pushing others to think about doing school differently. More inquiry, PBL, and design thinking will put our students in learning situations that are different. It forces students to move out of their comfort zone and to the growing edge. And that’s the reality – we all need to be a little bit outside of our comfort zone to grow. Trying new instructional strategies is going to force you out of your own comfort zone, too.
And I don’t want a takeaway from this post to be that we have been wasting our time with data-driven instruction, PLCs, RTI, etc. That work is valuable, but if that work doesn’t also change teacher instruction, the learning gaps are going to remain.
As McLeod closed his post, so will I: It is time we make schools different.