Don’t Get Too Excited, Dept.

From an article at “The 74” about the rating of textbooks for conformity/alignment with the Common Core, we have this:

“With funding from the Gates Foundation and others, EdReports initially released reports last year on elementary and middle school math materials, finding mixed results. Its latest review focuses on five series of high school math textbooks. EdReports enlists and trains educators to score instructional material, including textbooks. Scorers first rate “focus and coherence,” meaning the extent to which the material covers the standards and whether content is connected so that students can understand the relationship between different concepts. For example, coherent instructional materials frequently return to previously learned skills and explain their relationship to new content. The group uses what it calls a “gateway” approach, meaning that textbook series scoring poorly on the focus and coherence indicators aren’t judged on any of the other criteria. Such was the fate for publishers of three of the recently reviewed series: Houghton Mifflin Harcourt, Pearson, and the College Board.

“The one math series to make it to the usability gateway, receiving high marks on all three main criteria, was produced by the nonprofit publishing company CPM Educational Program.”

The thing to keep in mind is that alignment/conformity with Common Core math standards does not mean that the book is effective. Secondly, books judged NOT to meet the criteria are not necessarily bad. That said, having used CPM’s algebra textbook, I can say that it is confusing and based on discovery; when I used it as a student teacher, my supervising teacher often had to give a “supplementary lesson” for the students who didn’t pick up what the discovery lesson was supposed to impart. Which was most of the class.

The CPM middle and high school series has been adopted in my school district (San Luis Coastal USD) whose superintendent is of a constructivist “mindset” (to use a Jo Boaler term, please forgive me), and the principals reporting to him seem to be of similar ilk.


Update on Capt. Obvious

It seems I wasn’t the only one who read about OECD’s report. Others have been reporting on it, and comments have been coming in. Of interest was one in which the commenter interpreted “applied math” to mean “rote memorization”. How he made that stretch is beyond me, but in his view, it meant telling students the procedures without the reasons why, so that the students become “math zombies”, to use a term invoked by some–able to “do” math but not “know” math, and lacking the flexibility to apply concepts and procedures to problems outside of those whose solutions they have apparently memorized. He goes on to talk about how he links proportion (no cross multiplication allowed in his classroom) and ratios to direct variation, and then finally to the concept of slope. And of course he focuses on point-slope rather than slope-intercept.

Everything in its time and order, I say. I like point-slope too, and really, the concept of slope is not that hard. But unlike others who insist that students must KNOW and UNDERSTAND why a linear equation produces a straight line, and why a straight line has a constant slope, I believe that you start simple, start small, and address more complex ideas later when students are ready and possess what used to be called “mathematical maturity”. I was introduced to slope in algebra, and got the proof of constant slope and straight lines in geometry.
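For readers who want the two forms mentioned above side by side, here is a small sketch; the function name and the particular point and slope are mine, purely for illustration:

```python
# Point-slope form:     y - y1 = m * (x - x1)
# Slope-intercept form: y = m * x + b, where b = y1 - m * x1
def line_through(x1, y1, m):
    """Return f(x) for the line through (x1, y1) with slope m (point-slope form)."""
    return lambda x: y1 + m * (x - x1)

f = line_through(2, 3, 4)   # the line through (2, 3) with slope 4
print(f(2))                 # 3: the line passes through the given point
print(f(3) - f(2))          # 4: rise of m per unit run -- constant slope
print(f(0))                 # -5: the intercept b = y1 - m*x1 = 3 - 8
```

The point-slope form builds the line directly from a known point and slope; the intercept b falls out of it by evaluating at x = 0, which is one reason the form is a natural first step.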

Even in calculus, first-year students learn an intuitive approach to limits and then proceed to the application of limits–learning the power and usefulness of derivatives and integrals. In upper-level courses, if they stick with it, students learn the formal definitions and proofs of limits and continuity, after having a sufficient sense of what they are used for and having gained mathematical maturity.

But the nay-sayers to OECD’s report are interpreting it in the old refrain: applied math is rote memorization and pure math is “math with understanding”. So of course it only makes sense that OECD would come up with the conclusion it did. And a right good one it was, they say, drawing the circle of wagons tighter and doing a chorus-line kick and recitation of “Kumbaya”.

The issue of “rote understanding” doesn’t surface in their arguments, and somehow they sidestep the artificiality and limited usefulness of “real world” problems as well as “open-ended” and “ill-posed” problems. The value of initial worked examples, and of scaffolded problems that escalate in difficulty, is ignored. Apparently the new watchword is “get the students to ask the questions that make the concept relevant”. Which is an aberration of what Dan Willingham meant when he introduced this idea, and is nothing more than the same old constructivist approach that hasn’t been working for the past two and a half decades.

Thank you, Capt. Obvious

So now OECD comes to the earth-shattering conclusion that students who do well in math are those given complex multi-step problems, and those who tend to do poorly are given so-called “real world” problems. Those of us who’ve been bitching and moaning for years about the superficiality and uselessness of the “real-world” problems are now in a position to say “I told you so.” Yes, the real-world problems amount to a specific application that ignores the general underlying procedures of problem solving–like teaching someone how to get from point A to point B in a specific city without teaching them how to use a road map.

Secondly, the real-world applied math technique tends to be taught to students from lower-income families while the students from higher income families reap the benefits of a more “pure math” approach. Of course, it could be that the techniques are the same for both, but that the students from higher income families have more access to tutors and learning centers which tend to teach in a more “pure math” style, but the OECD study doesn’t seem to delve into that.

Diane Briars, a former president of NCTM, was asked about this trend; she touted Common Core as a solution to this problem:

“Briars added that the new Common Core standards are aimed at boosting conceptual understanding, and that’s one reason teachers are asking students to draw all those crazy pictures that are lampooned in the media.

“ “There’s been a lot of pushback in the media against these pictures and diagrams. The feedback in the media is, ‘Why don’t you just give them the rule?’ ” said Briars. “This report speaks to that. No, don’t just give them the rule. They need that conceptual understanding.” ”

Well, no, not exactly, Diane. The way CC has been interpreted is that the “rule” you seem not to like (aka the standard algorithm) is delayed. The standard algorithm for multi-digit addition and subtraction, for example, appears in the 4th grade, even though Zimba and McCallum, two lead writers of the CC math standards, say it can be taught earlier than 4th grade. What happens is that students are given “strategies” requiring the drawing of pictures and diagrams for years before being given the “rule”, in the belief that providing what they feel is “understanding” prior to the procedure supplies the sufficient background. Otherwise, they believe, the standard algorithm eclipses why and how the procedure works. Which is nonsense. Procedures and understanding work in tandem.

This came to light when researchers looked at the results of PISA–the exam given to students in various OECD nations:

“In the report, “Equations and Inequalities: Making Mathematics Accessible to All,” published on June 20, 2016, researchers looked at math instruction in 64 countries and regions around the world, and found that the difference between the math scores of 15-year-old students who were the most exposed to pure math tasks and those who were least exposed was the equivalent of almost two years of education. The research was based on how students answered survey questions that accompanied an international test, called the Programme for International Student Assessment, or PISA.

“The result was surprising for two reasons. First, the PISA exam itself is largely a test of applied math, not equation-solving. For example, one question asks students to calculate the length of a revolving door entrance that doesn’t let air get out. And yet the students with more pure math instruction were better able to handle this and other PISA questions.”

Well, I guess traditional math IS good for something. But don’t tell NCTM. They believe they are providing the conceptual understanding despite the fact that they aren’t.

Thanks, I Think, Dept.


A few days ago, I left a comment at Annie Murphy Paul’s blog, on a post in which she talks about how group work doesn’t have to be annoying. (See http://anniemurphypaul.com/…/why-students-dont-like-active…/ ) In her latest blog post, she quoted from my comment, which was:

“Quite true; students are novices. Particularly true in lower grades (k-8) where group work and student-centered, inquiry-based classes have become more common over the years. Thus, the comparison of “group work” to the collaboration that supposedly is going on in the working world and which they will need to do is flawed. In the real world, whatever collaboration occurs consists of people bringing their individual expertise to the table. In school, everyone is essentially a novice, so you have either the blind leading the blind, or the smart kids … take the lead and do the work that no one else does.”

She left out some words at the ellipses. It was the following parenthetical remark: “(who excel sometimes because they are getting traditionally based education at home, via tutor, or learning center)”. I guess she found that a bit too snarky.

But in her latest blog post she talks about how group work needn’t be the blind leading the blind, even though kids are in fact novices. She describes an approach called “jigsaw”, in which each student is given a portion of the assignment to do their homework on and must then brief the other members of the team on their findings, so that all can put together whatever it is they’re doing.

I have engaged in such “jigsaw” activities in a class I took in ed school. What happens is the same thing that happens in a relay race where you have to carry a bunch of bean bags and awkwardly sized objects, run around a track, and then dump said objects, plus one more, into the next person’s arms. The stuff is never dumped in an organized way so that the next person can carry it. As a result, the next person has to run a bit slower than they ordinarily would, with objects becoming more disarrayed, and then dumped in even more jumbled fashion into the next person’s arms. With “jigsaw”, each person does a mind dump, glad to get it out of the way, and the other members, nervous about their own areas of so-called “expertise”, can barely absorb the information the person is imparting.

I suppose jigsaw can be made to work well. But since I have my own confirmation bias against group work, I’m not disposed to saying “Wow, that sounds like a great idea”. Instead, I am disposed to say “Why not teach students how to organize thoughts and areas of inquiry and then assemble the facts into coherent written form, or whatever the assignment happens to be? Why must it always be assumed that we must work collaboratively now that we’re in the 21st century?”

It seems to me that students will learn more if they each write their own essay–or work on math problems by themselves. But the group-think of the edu-establishment would have us believe that these types of group activities provide “authentic” collaboration experiences that students will later find in the real world. In reality, there are other, more authentic opportunities to learn to work together: playgrounds, sports teams, music, theater, and more. The collaboration exercises students are forced to do in academic classes are more artificial than they are authentic. If students learn how to read, write, do math, and think for themselves, they’ll be able to collaborate better.

Pretending to be objective, Dept.


From the Common Core math standards website, in a discussion on the Standards for Mathematical Practice:

From SMP 1 (Make sense of problems and persevere in solving them), it states: “Mathematically proficient students check their answers to problems using a different method, and they continually ask themselves, ‘Does this make sense?’ ”

Yes, it would be nice if students asked themselves if the answer makes sense. But checking answers to problems using a different method? Sometimes I do, and sometimes I don’t. This appears to be an opinion that has been interpreted as a prescription for how to teach math, and many teachers now require students to do just that, or to find multiple ways of solving problems.

At the same time, in another part of the Common Core math standards website, it states: “These Standards do not dictate curriculum or teaching methods.”

So what’s it gonna be?

I ask this because Fordham just came out with the results of a national survey on Common Core math standards, and on p. 10, one of the statements is: “Overall, data show that teachers are changing their instructional practices in three key ways. … More teachers are teaching students multiple methods to solve problems. Consistent with the Common Core expectation that students be able to ‘access concepts from a number of perspectives,’ 65 percent of both K–2 and 3–5 teachers and 41 percent of 6–8 teachers report that they are ‘teaching multiple methods to solve a problem’ more often than they did before the CCSS-M were implemented; just 2–5 percent at all grade bands report doing this less frequently.”

For something that doesn’t dictate teaching methods, it seems to me that it is being interpreted in ways that dictate teaching methods. And of course Fordham doesn’t question any of it. They just ask their questions and amass their data.

This Year’s Finland, Dept.

Never mind that the PISA test is steeped in fuzziness and constructed based on the RME (Realistic Mathematics Education) system developed in the Netherlands. And never mind that despite the fuzziness of the test, nations that score high on PISA seem to use traditional teaching techniques.

Finland has fallen out of favor of late, and now Estonia flies the banner high: good scores on PISA, and most of the students are from low-income families. Therefore they must be doing something right. The Atlantic article doesn’t go into detail on what the math curriculum looks like, but it does bow to the “too much of a good thing–what good are test scores anyway?” gambit:

“But throughout the country, policymakers and educators are talking about the need to produce students who can do more than score well on a test, perhaps go on to become entrepreneurs and creative leaders. Educators are also concerned that focusing on the average student and bringing up low-achievers to that standard comes at the expense of pushing gifted students further.

“Estonian education philosophy needs to change and is changing, many educators said, to one that puts more focus on students as individuals and has them drive more of what happens in the classroom.”

Look, the entrepreneurial and creative aspects that seem to come so naturally to the US and are the envy of other countries are, I can assure you, not the results of our illustrious educational system. There are other factors at play. And let’s not pooh-pooh test scores. They must mean something or we wouldn’t be spending so much time with them. Estonia is doing something right; it obviously isn’t broken, so let’s not make Estonia into another US. They tried that in Japan, and Japan got wise and reverted to its previous system. Singapore is trying it now, but it has retained enough of its old system that the US portions of its curriculum haven’t spoiled things. Student-centered classrooms are really not the answer, no matter how much Marc Tucker and others seem to think so.


One-Way Arguments, Dept.


A math teacher who teaches in Rome and writes a blog with an impenetrable “I know a hell of a lot more about this than you” attitude has written about the inferiority of “direct instruction” approaches.

In doing so, she mischaracterizes direct instruction as “The formula is provided (Area= length x width), and countless problems follow – usually, after having been modeled by the teacher. Block practice again, and not much understanding – just computation with little relevance.”

There is such a thing as scaffolded problems that introduce some variation so that students must think beyond the initial worked example. Using her example of finding the perimeter of a rectangle with sides of 10 and 2, one could pose problems such as “A rectangle has a length of 8. If the perimeter is 24, what is the width?” This is a variation of the initial worked example. Subsequent problems can lead up to what she believes is the “holy grail” of teaching: “open-ended problems”. After a few such problems, students may then be ready for: “What possible lengths and widths can a rectangle have so that the perimeter equals 24?”

Instead, she likes to lead with that, and to direct students to many different types of questions and problems. She crows with delight at the insights her students find. But in the end, what is it that the 4th graders take away? For many if not most of them, it will be like going to a shopping mall and being confronted with all the displays and stores at once. Visitors to shopping malls are frequently overwhelmed and sometimes forget what they came to buy. The blog author (Christina) has the advantage of being an adult who has thought about the various things she knows about perimeter and area. She believes that her “investigations” and probing questions open their minds and create schemas of mathematical truths. But for these young students, it is more likely to be a hodgepodge of information they are left to sort through.

Lastly, she refers to a similar discussion by Robert Kaplinsky, who thinks that asking students to find the relationship between perimeter and area is a good exercise. This type of problem belongs in a calculus optimization unit–or, at best, in an advanced high school algebra course that deals with solutions to cubics.

This is nothing more than a pointless elementary desk-work exercise that addresses no outcomes more advanced than understanding a couple of formulas and doing a lot of low-level arithmetic, then sorting through options. Taken by itself, this exercise yields no interesting insights for students. And it prompts one to ask: what if instead of 20 square units it were 3072 square units? Or a million? Are students asked to believe that exhausting options and picking the largest number is a reasonable way to tackle this problem? What analytics will lead them to the key insight that will help them find an appropriate general attack on it?
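To make concrete what “exhausting the options” looks like, here is a minimal sketch of the brute-force approach such an exercise invites; the integer-sides restriction and the function name are my assumptions, for illustration only:

```python
# Brute-force enumeration of the classroom exercise: for a fixed area, list
# every integer-sided rectangle and its perimeter, then sort the options.
def rectangles_by_perimeter(area):
    """Return (length, width, perimeter) for each integer-sided rectangle
    with the given area, length >= width, sorted by perimeter."""
    options = []
    for width in range(1, int(area ** 0.5) + 1):
        if area % width == 0:
            length = area // width
            options.append((length, width, 2 * (length + width)))
    return sorted(options, key=lambda t: t[2])

# For area 20 there are only three options to sort through;
# for area 3072 (or a million) the list grows and enumeration
# stops being a reasonable strategy.
print(rectangles_by_perimeter(20))   # [(5, 4, 18), (10, 2, 24), (20, 1, 42)]
```

The general insight the enumeration never surfaces–that for a fixed area the dimensions closest to a square give the smallest perimeter–is exactly the kind of general attack on the problem that mere option-sorting does not lead students to.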

What is sad is that people like this blogger and others gain quite a following of believers and become thought leaders in a world where students need good, solid, scaffolded problems. Instead they are dumped in a shopping mall and told to make lists of their insights–which are soon forgotten.

Don’t miss it! ResearchED in Washington DC

For those of you who wish you could attend a ResearchED conference but can’t afford to travel overseas: there will be a ResearchED conference in Washington, DC on October 29. Good lineup of speakers so far, including Robert Craigen and Eric Kalenze, among others.

If the only conferences you’ve attended are the NCTM type which serve as an echo chamber for bad ideas and teaching practices, with tons of adoring fans clamoring for the likes of various dubious “thought leaders”, then you’re in for a surprise with ResearchED. These conferences are populated by like-minded people who are fed up with the group-think that pervades the education establishment and has been ruining education for many children.