Jobs I Would Never Want, Dept.

Just saw an advertisement for a math teaching position at a “new, progressive, independent” middle school in San Francisco. As I read it, I was reminded of what Stanford ed school professor David Labaree said about constructivism in his book “The Trouble with Ed Schools”: while constructivism rules the waves in ed schools, it has had little impact on education in general.

He goes on to say:

“[T]his form of progressivism has had an enormous impact on educational rhetoric but very little impact on educational practice. … Instruction in American schools is overwhelmingly teacher-centered; classroom management is the teacher’s top priority; traditional school subjects dominate the curriculum; textbooks and teacher talk are the primary means of delivering this curriculum; learning consists of recalling what texts and teachers say. … What signs exist of student-centered instruction and discovery learning tend to be superficial or short-lived.”

While what he says may be true for high school, it can certainly be rebutted for the lower grades, in which student-centered and inquiry-based/project-based learning are becoming increasingly commonplace. Case in point: the ad I mentioned for a math teacher at a private middle school. Here are some excerpts:

“We call our teachers “Guides” to indicate how we see the ideal learning relationship with students. We support the shift from “sage on the stage” to “guide by the side” and beyond.”

“We’re inspired by Jo Boaler’s approach to Math teaching, and aim for a mix of project-based, real-world math with a “Pure Math” complement that develops numeracy with approaches like Number Talks.”

And this is a private school. In public schools such approaches are also prevalent in lower grades.

As far as requirements for the job, this one grabbed my attention:

“You comfortably exist in and naturally model authenticity. Through experience and practice and perhaps your own mentors, you’ve developed an awareness of what authenticity looks like, and a comfort being yourself.”

Not sure what that means, other than that the person they’re looking for is so full of himself or herself, and so far out of tune with what students need in terms of learning, that they’re not even wrong.

Trying to be Charitable, Dept.

I came across this article, which appeared in the N.Y. Times over a year ago, about how we are not teaching students to solve math problems the right way. In it, the author features a quote from an email she received from Tracy Zager, an author of math ed books:

“I really feel I have no way to have an impact on this teacher’s blind spot since it is shared by all math teachers and so many other teachers: If you don’t understand, it’s your fault. It was never a sensible idea to try to have students memorize first and understand later; this approach to mathematics instruction is structurally flawed. I really feel for these parents and this kid, but the frustration they face is inevitable. If we teach kids math without understanding, we build on a house of cards.”

I try to be charitable on Christmas and ask myself what Jesus would do if He read something like this. I have to say, my first reaction is that He would say, “I don’t have time for this.”
I looked at the comments from a year ago and was pleased to see that I had addressed it already. So I’ll repeat what I wrote in my comment here so you can all rest easy and have a nice Christmas:

The email quoted at the beginning of the article states: ‘I really feel I have no way to have an impact on this teacher’s blind spot since it is shared by all math teachers and so many other teachers: If you don’t understand, it’s your fault.’

I do not agree with the generalization that ALL math teachers blame poor learning outcomes on students. I certainly do not, nor do many math teachers I know.
I also find that Tracy Zager [the person who wrote the email quoted in the article] offers a typical mischaracterization of how math is taught; i.e., that students memorize first and understand later. A glance at math texts from previous eras shows that there were explicit explanations of what is happening with specific procedures and algorithms. Also, sometimes understanding comes before the procedure, sometimes after. And for some students, depending on the procedure, it might occur many years later.

Traditional math taught poorly continues to serve as the definition of traditionally taught math; the mischaracterization prevails. For the record, I teach the conceptual context of particular procedures, as well as what’s going on in various types of word problems and how to analyze them mathematically. But despite “teaching the concepts,” students still gravitate to the procedural because they want, and need, to know “how to do the problems.” Who can blame them? Over time, some understanding of why certain procedures work does occur; for some students it may not. Case in point: I will demonstrate what a negative exponent means; i.e., x²/x³ can be represented by (x·x)/(x·x·x), which simplifies to 1/x. We can also apply the rule for quotients of powers and subtract exponents, which gives x⁻¹. But after a day or so, students have forgotten this and have to be reminded of the procedure. After about a week or so, it begins to sink in. But the “understanding” part, while helping some students, certainly doesn’t help all.
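
For anyone who wants the example above written out compactly, here is the same derivation in symbols (nothing new, just the two routes described in the paragraph):

$$\frac{x^2}{x^3} \;=\; \frac{x \cdot x}{x \cdot x \cdot x} \;=\; \frac{1}{x},
\qquad\text{and by the quotient rule}\qquad
\frac{x^2}{x^3} \;=\; x^{2-3} \;=\; x^{-1},$$

so the two results together give x⁻¹ = 1/x.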

In the words of the teachers at Michaela, the “no excuses” school in London that takes a traditional approach to teaching: “Just tell them.” And sometimes that means telling them a few times. It doesn’t mean skimping on the understanding; it means doing both. I have practiced this philosophy for years, as have other teachers I know. If the only way students can do a problem is to practice the procedure, then go with it. Stop whining that “they don’t truly understand.”
I majored in math, but didn’t understand why the invert and multiply rule for fractional division worked until 10 years ago. So shoot me.

New Boss, Old Boss, Dept.

In 2006 and 2007, the National Mathematics Advisory Panel (NMAP) was put together to review the current state of math education in the US and make recommendations on what should be taught (and learned) in K-8.  In the course of putting together its report, there was considerable argument over the word “the” as applied to “standard algorithms.”  Some members of the panel wanted the recommendation for students to learn multi-digit addition and subtraction to refer to “a” standard algorithm.  The use of “a” left open the definition of “standard algorithm”: some believed that any method relying on place value, be it pictures or alternatives to what we’ve come to know as “the” standard algorithm (adding or subtracting in columns, starting from the ones column and carrying and borrowing, or “regrouping” as some insist these operations be called), should be what students learn.

The knock-down, drag-out fight was finally resolved with “the standard algorithm” as the recommendation.

I was therefore surprised to read Arizona’s revision of the Common Core math standards.  The fourth grade Common Core (CC) standards require students to learn “the” standard algorithm for multi-digit addition and subtraction.  (The fact that it can be taught earlier than fourth grade and that CC does not prohibit such teaching is another topic; suffice to say that many schools and textbooks wait until fourth grade to teach it, relying on alternative methods in the preceding grades.) Arizona’s revised standards replace the word “the” with “a”, a step backward from the hard-fought battle that took place within the NMAP.

One can see a summary of what the public comments were in the linked document above.  One public comment stated:

“I LOVE the change from “the” to “a.” This small change reflects a bigger understanding we are trying to push!”  

The thought process here is that standard algorithms obscure the understanding of what’s going on when one does such operations.  It also embodies a philosophy of many progressivist types that I call the “students must understand or they will die” philosophy.

Other comments supported the use of the word “the”:

“This was the progression. 2nd grade used models and strategies, 3rd used strategies and algorithms (plural), 4th grade used the standard algorithm. There are many different algorithms…but only one STANDARD algorithm.”

“I understand the intention, but this standard is not distinct enough from the third-grade standard which states that students should fluently add and subtract whole numbers. The third-grade standard needs work!”

As if these weren’t enough, there were strenuous objections to the choice of “a” over “the” by two people Arizona called upon to review and comment upon the revisions: Jim Milgram (retired math professor from Stanford who has fought for clear and effective math education in K-12 for over 20 years) and Ze’ev Wurman, another stalwart fighter in the quest for proper math education.

They commented as follows:

Milgram: This is nonsense, and classic educationese. It is based on a complete misunderstanding of what algorithms actually are, and starts a process in which students in this country gradually lose the capacity to do the advanced mathematics that is essential in going into STEM and related areas.

Wurman: The removal of the “the” is a gross mistake. The National Mathematics Advisory Panel, certainly a much bigger authority than McREL, purposely inserted the “the” into its recommendations to teach “the standard algorithms.” While there are many possible algorithms for arithmetic, only a single set is “standard” and it deserves to have the definite article. All around the world people use the four standard (arithmetic) algorithms and the few differences one sees across the world are cosmetic, trivial, and non-essential. Pretending that there are multiple standard algorithms for the four arithmetic operations is mathematically ignorant or intentionally misleading.

Despite such warnings and reasoning, the progressivist approach has remained, with the Arizona workgroup commenting:

“No revision necessary. A standard algorithm is valuing all students and what they bring to the classroom as recognized by the public comments.”

And there you have it.  Meet the new boss: worse than the old one.

Unclear on the Concept, Dept.

With the PISA scores out and the US not doing as well as other countries, the finger-pointing begins. What are we doing wrong? What are other countries doing that we should be doing?

Well, of course, we must not be teaching math right, and Common Core happens to be handy so let’s blame that. But as someone points out in this article:

“You can certainly point the fingers at many things but Common Core wouldn’t be one of them,” added Erben. “Common Core hasn’t been around long enough to suggest in any way, shape or form that it is responsible for the declining U.S. PISA results that have been going on for many more years.”

Good point. Common Core is just the gasoline on the fire of math reform that’s been raging for the last twenty or so years. So let’s blame something else. Like not enough time spent on science:

“Those countries that had significantly higher science scores devoted more hours in school to science, giving kids much more time to do scientific research and experimentation in the classroom,” said Erben. “If you restrict the number of science and math hours then, of course, they’re not going to do as well.” 

Let me see if I can shed some more light on this. If you restrict the hours of science and math, it will have an effect; that’s true. But so will having students do hands-on, inquiry-based, student-centered learning in K-6. Math appreciation has never been a good substitute for actual math learning. And as long as they’re asking what other nations are doing, how about taking into account that higher-scoring countries tend to use the traditional methods held in disdain by math reformers here.  And before you jump up and say “But in Japan they teach by discovery, I saw it in a videotape…” consider this article that lays to rest some misconceptions about how math is taught in Japan. Consider also that in Japan and other Asian countries much outside tutoring goes on (via private tutors, and “jukus,” known as “cram schools”), which relies on memorization and other things considered anathema by reformists.

But we’re not likely to see an article like that any time soon. And the authors of articles that DO talk about what’s wrong get called “racists” (as the comments section of this recent article of mine demonstrates).

What’re ya gonna do?

Having it Both Ways, Dept.

Education Week, which bills itself as the newspaper of record on education, takes up the news about the results of the most recent PISA exam, and starts off by grabbing the bull by the horns:

Do the newly released results from the 2015 Programme for International Student Assessment (PISA) throw cold water on project-based learning? The results, issued this week, examined instructional practices in science and their correlations with student performance in the subject. The report used student questionnaires to determine how frequently the following techniques were used in science classrooms: teacher-directed instruction, perceived feedback, adaptive instruction, and inquiry-based instruction. The report then analyzed the relationship between these approaches and student scores on the assessment.

Apparently there was a negative correlation between scores achieved on PISA and inquiry-based instruction.  Students were asked nine questions, including how often they spend time in a laboratory doing experiments, how often they are asked to draw conclusions from an experiment they have conducted, and so on. As Ed Week reports:

The results were striking. Students in classrooms using teacher-directed instruction performed far better than those in classrooms using inquiry-based approaches.

Hold the presses! There must be some explanation!! And being the newspaper of record on education, Ed Week provides them:

First is the old reliable chestnut that every self-respecting denizen of the internet uses to win any argument imaginable: “Correlation does not mean causation.” (Audience applause of recognition).   Ed Week justifies further: “This was not an experiment in which two randomly selected groups of students were taught in different ways. Perhaps the results reflect a factor other than the method of instruction.”

True! So let’s just ignore all the other evidence that has been mounting over the years, like hundreds of years of evidence of the effectiveness of direct-teaching techniques, and the fact that outside tutors and learning centers rely on such techniques rather than the trendy fads du jour that pass for educational expertise amongst the ranks of the edu-literati.  And let’s continue to ignore that top-scoring nations tend to rely on traditional techniques (including the use of tutors/learning centers/jukus), resulting in high scores on a test that uses open-ended, ill-posed questions more than do recall- and procedure-based tests like TIMSS. One would think that countries reliant on PBL, inquiry-based, student-centered approaches, in which critical thinking and learning how to learn are center stage, would flourish on tests whose questions resemble what students have experienced in those classrooms.

And in fact, it looks like the folks at OECD thought of this already, and Ed Week trots it out: “[OECD] notes that teacher-directed instruction is used more frequently in more-advantaged schools, and that inquiry-based instruction is more common in schools serving disadvantaged students. “Teachers,” the report notes, “may be using hands-on activities to make science more attractive to disengaged students.” That is, the students who most often experienced project-based learning were low-performing to begin with.”

Beautiful.  More-advantaged students can handle traditional-type teaching, which goes to a theory of mine about how students not deemed “gifted” or “bright enough” are put on a protection-from-learning track: traditional teaching works only for those bright enough to handle it.  (I wrote about this in an article here; it brought in loads of hate mail, so feel free to add to the pile if you wish.)  And of course, no mention of how those advantaged students may have benefited from tutors or learning centers.

Finally, Ed Week points out that OECD’s analysis of the results “says nothing about the quality of instruction. Teacher-directed instruction can be done well or poorly.”  Yes, indeed.  And traditional education done poorly has been used as the definition of traditional education for years–with no end in sight for this particular practice.

I can think of some other excuses that neither OECD nor Ed Week thought of. Like, this was a science test, not math or English, so let’s not extend the results to other disciplines.  Or maybe they didn’t use a “balanced approach.”  Or the results were based on student questionnaires, which are unreliable.  There are MANY excuses to choose from.  But here’s something of interest that people may wish to think about. It comes from Zalman Usiskin, an education specialist who was prominent in the design of Everyday Math, a K-6 math program that received initial funding from an NSF grant in the early 90’s and adheres to inquiry-based, student-centered approaches, as well as a dizzying spiral approach to math topics.  He is remarking on the tendency to nay-say the results of tests deemed “inauthentic.”  His words have particular meaning with respect to the arguments of PISA vs. TIMSS, as well as the most recent interpretation of PISA results:

“Let us drop this overstated rhetoric about all the old tests being bad.  Those tests were used because they were quite effective in fitting a particular mathematical model of performance – a single number that has some value to predict future performance. Until it can be shown that the alternate assessment techniques do a better job of prediction, let us not knock what is there. The mathematics education community has forgotten that it is poor performance on the old tests that rallied the public behind our desire to change. We cannot pick up the banner but then say the tests are no measure of performance. We cannot have it both ways.”

(Zalman Usiskin, “What Changes Should Be Made for the Second Edition of NCTM Standards,” UCSMP Newsletter, no. 12, p. 10, Winter 1993.)

The “More than one way to teach math” gambit

A Dec. 3, 2016 op-ed by Christopher Phillips in the New York Times talks about how Americans have been bad at math since 1895.  This in itself is an interesting claim, given that from the 40’s through the mid-60’s, math scores on the Iowa Tests of Basic Skills made a steady climb in the states of Iowa, Indiana and Minnesota.  See here for further information. It’s also interesting because a glance at older textbooks from the 1890’s into the 1940’s and 50’s shows the types of math questions students were expected to master, which an alarming number of students today have difficulty answering. Furthermore, going back to the 1980’s, many students in first-year algebra classes had fairly good mastery of fractions, decimals, percents and general computation essentials. In today’s first-year algebra classes it is not unusual to see alarming deficits in such knowledge and skills.

Interestingly, the drop in scores around the mid-60’s coincided with the student cohort that bore the brunt of what is termed the 60’s new math, which Phillips’ op-ed also focuses on.  He states:

Though critics of the new math often used reports of declining test scores to justify their stance, studies routinely showed mixed test score trends. What had really changed were attitudes toward elite knowledge, as well as levels of trust in federal initiatives that reached into traditionally local domains. That is, the politics had changed.

Whereas many conservatives in 1958 felt that the sensible thing to do was to put elite academic mathematicians in charge of the school curriculum, by 1978 the conservative thing to do was to restore the math curriculum to local control and emphasize tradition — to go “back to basics.”

This is fairly accurate, though there are some things he neglects to say that are worth mentioning.  One is that the new-math era was one of the only times that mathematicians were given an opportunity to make proper math education available to the masses. The difficulty with the program (specifically in the lower grades) was due in large part to its formal approach, which was steeped in set theory and logic and placed less emphasis on basic computational skills and procedures. The general public, the education community, and even mathematicians themselves judged the new-math programs a failure.

The second and not insignificant issue is that mathematicians were assigned the blame, and the education establishment took back the reins. That establishment received an inadvertent boost in 1983 with the publication of A Nation at Risk, the shockingly pessimistic assessment of the nation’s schools by the National Commission on Excellence in Education. The report sounded another alarm about student math performance, and the National Council of Teachers of Mathematics (NCTM), an organization that became increasingly dominated by educationists during the 1970’s and 80’s, took advantage of this new education crisis to write revised math standards. The Curriculum and Evaluation Standards for School Mathematics, published in 1989, purported to put the country back on the math track. But because it was, in part, a reaction to the new math and those believed responsible for it, NCTM did not promote a lively public debate, as had the creators of the new math, but suppressed it.

Phillips concludes that “The fate of the new math suggests that much of today’s debate about the Common Core’s mathematics reforms may be misplaced. Both proponents and critics of the Common Core’s promise to promote “adaptive reasoning” alongside “procedural fluency” are engaged in this long tradition of disagreements about the math curriculum. These controversies are unlikely to be resolved, because there’s not one right approach to how we should train students to think.”

While the controversies he mentions have been going on for a long time, so too have there been people who learned math via the methods held in disdain by math reformers. Such methods include memorization, as if memorizing were tantamount to “rote learning.” To reformers, “fluently deriving” the math facts (such as 7 x 4 = 7 x 2 x 2) is superior to simply memorizing because it includes “understanding,” the holy grail of math reformers. In all of the polemical debates that have circulated in the press and in the literature over the years, there is a dearth of studies on how people who are successful at math have learned it. The methods used by tutors and learning centers tend to be the very methods said not to work, and the success of the students who use them is appropriated by the schools and school districts who claim that their reform-based programs have worked.
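
To make the contrast concrete, the “fluent derivation” in that example, written out step by step, is

$$7 \times 4 \;=\; 7 \times (2 \times 2) \;=\; (7 \times 2) \times 2 \;=\; 14 \times 2 \;=\; 28,$$

whereas straight memorization simply recalls 7 x 4 = 28 in one step.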

There may not be just one right approach, but I tend to disagree with his framing of the argument, in particular his claim that teaching math is the same as teaching students to think.  He also neglects to mention studies that follow established research practices and do show some practices to be effective and some to be ineffective. Shall we continue to rely on statements such as “research shows,” in which the research is flimsy, questionable, and unscientific, with newspapers taking so-called experts’ word for it? Or should we start relying on a growing body of verifiable, peer-reviewed research that draws increasingly on cognitive science? (See, for example, this report by Anna Stokke, a mathematics professor at the University of Winnipeg.)

Phillips is correct that the pathways selected have often been political. But leaving it at “there’s no right way” is not going to solve very much, when in fact there are ways that have proven to be more effective than others, and some ways that are not effective at all.