Education Week, which bills itself as the newspaper of record on education, decided to weigh in on the results from the most recent PISA exam, and it starts off by taking the bull by the horns:
Do the newly released results from the 2015 Programme for International Student Assessment (PISA) throw cold water on project-based learning? The results, issued this week, examined instructional practices in science and their correlations with student performance in the subject. The report used student questionnaires to determine how frequently the following techniques were used in science classrooms: teacher-directed instruction, perceived feedback, adaptive instruction, and inquiry-based instruction. The report then analyzed the relationship between these approaches and student scores on the assessment.
Apparently there was a negative correlation between scores achieved on PISA and the inquiry-based instructional techniques. Students were asked nine questions, including how often they spend time in a laboratory doing experiments, how often they are asked to draw conclusions from an experiment they conducted, and so on. As Ed Week reports:
The results were striking. Students in classrooms using teacher-directed instruction performed far better than those in classrooms using inquiry-based approaches.
Hold the presses! There must be some explanation! And being the newspaper of record on education, Ed Week provides several:
First is the old reliable chestnut that every self-respecting denizen of the internet uses to win any argument imaginable: “Correlation does not mean causation.” (Audience applause of recognition). Ed Week justifies further: “This was not an experiment in which two randomly selected groups of students were taught in different ways. Perhaps the results reflect a factor other than the method of instruction.”
True! So let’s just ignore all the other evidence that has been mounting over the years, like hundreds of years of evidence of the effectiveness of direct-teaching techniques, and the fact that outside tutors and learning centers rely on such techniques rather than the trendy fads du jour that pass for educational expertise amongst the ranks of the edu-literati. And let’s continue to ignore that top-scoring nations tend to rely on traditional techniques (including the use of tutors/learning centers/jukus), resulting in high scores on a test that uses open-ended, ill-posed questions more than recall- and procedure-based tests like TIMSS do. One would think that countries reliant on PBL, inquiry-based, student-centered approaches, in which critical thinking and learning how to learn take center stage, would flourish on a test whose questions resemble what students have experienced in their classrooms.
And in fact, it looks like the folks at OECD thought of this already, and Ed Week trots it out: “[OECD] notes that teacher-directed instruction is used more frequently in more-advantaged schools, and that inquiry-based instruction is more common in schools serving disadvantaged students. ‘Teachers,’ the report notes, ‘may be using hands-on activities to make science more attractive to disengaged students.’ That is, the students who most often experienced project-based learning were low-performing to begin with.”
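For readers who want to see what that confounding story amounts to, here is a minimal simulation. It is a hypothetical sketch with made-up numbers, not the PISA data: in it, instruction has zero causal effect on achievement, yet inquiry-based classrooms still show lower average scores, simply because disadvantaged students are assigned to them more often.

```python
# Hypothetical sketch: scores are driven only by a latent "advantage" variable,
# and disadvantaged students are more likely to receive inquiry instruction.
# Instruction itself has NO causal effect, yet the raw gap comes out negative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

advantage = rng.normal(size=n)            # latent socioeconomic advantage
p_inquiry = 1 / (1 + np.exp(advantage))   # less advantage -> more likely inquiry
inquiry = rng.random(n) < p_inquiry       # True = inquiry, False = teacher-directed

# Score depends on advantage plus noise; note there is no instruction term.
score = 500 + 40 * advantage + rng.normal(scale=30, size=n)

print("teacher-directed mean:", round(score[~inquiry].mean(), 1))
print("inquiry-based mean:   ", round(score[inquiry].mean(), 1))
print("corr(inquiry, score): ", round(float(np.corrcoef(inquiry, score)[0, 1]), 3))
```

The sketch only shows that the mechanism OECD describes is arithmetically possible; it says nothing about whether it actually accounts for the gap PISA found.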
Beautiful. More-advantaged students can handle traditional-type teaching, which goes to a theory of mine about how students not deemed to be “gifted” or “bright enough” are put on a protection-from-learning track. Traditional teaching, the thinking goes, works only for those bright enough to handle it. (I wrote about this in an article here; it brought in loads of hate mail, so feel free to add to the pile if you wish.) And of course, there is no mention of how those advantaged students may have benefited from tutors or learning centers.
Finally, Ed Week points out that OECD’s analysis of the results “says nothing about the quality of instruction. Teacher-directed instruction can be done well or poorly.” Yes, indeed. And traditional education done poorly has been used as the definition of traditional education for years, with no end in sight for this particular practice.
I can think of some other excuses that neither OECD nor Ed Week thought of. Like: this was a science test, not math or English, so let’s not extend the results to other disciplines. Or maybe they didn’t use a “balanced approach.” Or the results were based on student questionnaires, which are unreliable. There are MANY excuses to choose from. But here’s something of interest that people may wish to think about. It comes from Zalman Usiskin, an education specialist who was prominent in the design of Everyday Math, a K-6 math program that received initial funding from an NSF grant in the early ’90s and that adheres to inquiry-based, student-centered approaches, as well as a dizzying spiral approach to math topics. He is remarking on the tendency to naysay the results of tests deemed to be “inauthentic.” His words have particular meaning with respect to the arguments of PISA vs. TIMSS, as well as the most recent interpretation of the PISA results:
“Let us drop this overstated rhetoric about all the old tests being bad. Those tests were used because they were quite effective in fitting a particular mathematical model of performance – a single number that has some value to predict future performance. Until it can be shown that the alternate assessment techniques do a better job of prediction, let us not knock what is there. The mathematics education community has forgotten that it is poor performance on the old tests that rallied the public behind our desire to change. We cannot pick up the banner but then say the tests are no measure of performance. We cannot have it both ways.”
(Zalman Usiskin, “What Changes Should Be Made for the Second Edition of NCTM Standards,” UCSMP Newsletter, no. 12, p. 10, Winter 1993)
The writing has been on the wall about the ineffectiveness of problem-based instruction and discovery-learning types of instruction for a long time, based both on decades of experience and on research, from Project Follow Through to Kirschner, Sweller, and Clark (and others). Why do you think yet another set of scores will convince anyone?
Let the Common Core travesty unfold in its full glory, let most American students be led into ignorance by their teachers, and then, perhaps, parents will rebel.
I am tired of idiots.
“Why do you think yet another set of scores will convince anyone?”
I don’t. That’s why I wrote this.