Saturday, 30 January 2016

Response to Teaching Mathematics for Understanding: An analysis of Lessons Submitted by Teachers Seeking NBPTS Certification

In this study, the researchers analyze portfolios submitted by teacher candidates. They conclude that the lessons included many tasks involving hands-on activities, real-world contexts, technology, and multi-person collaboration, but rarely required students to provide explanations or demonstrate mathematical reasoning.

I wondered as I read this article how representative the submitted lessons would be. When I am in a job interview and asked to describe a lesson, I generally describe a very hands-on activity, such as building clinometers and using them to measure heights as a way to make trigonometry meaningful. While I’m not misleading anyone, as I do run this activity almost every year, it is the exception rather than the rule in my class; I generally follow a fairly traditional lesson structure. Indeed, if asked to provide a portfolio of my lessons, I would likely choose these ‘special’ lessons, which are not what the students in my classes experience most days. I would suggest many other teachers might follow similar patterns.

The authors note that the tasks, while hopefully engaging and meaningful, show a ‘low frequency of high demand tasks’ in exchange for a ‘higher incidence of innovative pedagogical features.’ This made me think of two things. First, I often feel pressure to be ‘performing’ for and ‘entertaining’ my classes, which I don’t think needs to be a teacher’s role. Second, particularly with new curricula coming into place, I hope math class remains challenging, as it is one of the last bastions of challenge (some) students have in schools. Some of my students tell me they are used to getting near-perfect marks in most other subjects simply for completing their work to an acceptable degree, with little regard for quality. While I don’t mean to torture students, I think part of what schools need to teach is how to work hard to achieve something difficult; when we settle for catering to the lowest common denominator, we may be robbing students of the opportunity to work hard for something.

Finally, in their conclusion, the authors raise the concern that many studies, including theirs, focus on classroom lessons rather than on assessment. For example, if teachers use methods of instruction that include group work, hands-on activities, and technology, but their assessment focuses on pencil-and-paper knowledge and problem solving, we are not being fair to students. Lessons should prepare students for, and resemble, assessment. Students get (rightfully) frustrated if they have done and understood the coursework yet are not able to be successful on assessments.


Question: How should policymakers determine appropriate levels of difficulty for math classes?

Saturday, 23 January 2016

Response to 'When Learning No Longer Matters: Standardized Testing and the Creation of Inequality' (Boaler)

In this article, Boaler tells the story of a school in an underprivileged area with a forward-thinking math department. The students demonstrate strong increases in achievement and strong results on independently developed forms of assessment, but still perform poorly on standardized assessments. Boaler outlines the damage done to students’ self-esteem as they work hard, feel they have learned and understood, yet are still told they are ‘below average’ on these high-stakes assessments. If I had to criticize the article, it would be that Boaler does not adequately describe the practices the math department uses to increase students’ mathematical understanding. She states, without much detail, that the teachers observe each other’s classes, meet regularly, and take part in professional development opportunities, but I would have been interested in more specifics about their teaching practices.

A stop for me occurred when Boaler suggests that students might perform poorly on standardized assessments because the questions are set in contexts that are confusing to linguistic-minority and low-income students. She gives the following question as an example:

A cable crew had 120 feet of cable left on a 1,000 foot spool after wiring 4 identical new homes. If the spool was full before the homes were wired, which equation could be used to find the length of cable (x) used in each home?
F. 4x + 120 = 1000
G. 4x - 120 = 1000
H. 4x = 1000
J. 4x - 1000 = 120
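Stripping away the context, the arithmetic itself is simple (this check is my own, not part of Boaler's article): the spool held 1,000 feet, 120 feet remain, so the four homes used 880 feet in total, or 220 feet each. Only option F is consistent with those numbers:

```python
# Cable question: 1000-foot spool, 120 feet left after wiring
# 4 identical homes; x = feet of cable used per home.
total, leftover, homes = 1000, 120, 4

x = (total - leftover) / homes  # cable actually used, split evenly
print(x)  # 220.0 feet per home

# Test each answer choice with x = 220:
print(homes * x + leftover == total)   # F: 4x + 120 = 1000 -> True
print(homes * x - leftover == total)   # G: 4x - 120 = 1000 -> False
print(homes * x == total)              # H: 4x = 1000      -> False
print(homes * x - total == leftover)   # J: 4x - 1000 = 120 -> False
```

The math is two steps of arithmetic; the difficulty Boaler points to lies entirely in parsing the scenario, not in solving it.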


This brought to mind the only high-stakes provincial testing we still have in BC – the Math 10 provincial exam. All Grade 10 students write a provincial math exam, but they are divided into one of two groups based on the course they are enrolled in: the Foundations and Pre-Calculus (FPC) course focuses more on algebraic skills, and its exam reflects this. In contrast, the Apprenticeship and Workplace (AW) exam, whose course usually attracts students who find math more challenging, consists mostly of extremely ‘wordy’ problems similar to the one above. In my experience, students in the FPC course generally score close to their term marks, but the majority of students fail the AW exam. I believe the exam writers’ intention is that the math will make more sense to students if it is put into context, but there is a disconnect: students find reading the long problems challenging and often don’t know how to apply the math to the problem, even when they understand the mathematical concept.

Question: How can we put math concepts into meaningful contexts without overwhelming students with language and ideas they might not be familiar with (like spools and cable in the question above)?

Saturday, 16 January 2016

Response to Hill, Ball and Schilling

In this article, the authors attempt to develop measures of teachers’ combined knowledge of content and students by writing, piloting, and analyzing results from multiple choice items.

To be honest, this article’s attempts to conceptualize and measure teacher knowledge did not resonate strongly with me. A 30-page paper and detailed questionnaires allowed the authors to reach conclusions I consider trite, such as “teachers have skills and insights and wisdom beyond that of other mathematically educated adults” (p. 395) and “Teachers know that students often make certain errors in particular areas or that some topics are likely to be difficult… but teachers often reason about students’ mathematics: They see student work, hear student statements, and see students solving problems. Teachers must puzzle about what students are doing or thinking, using their own knowledge of the topic and their insights about students.” (p. 396) While these conclusions are valid, they seem quite obvious, and the task of trying to measure and quantify something that would so clearly differ for every teacher seems misguided. Certainly, I’m a better math teacher now than I was in my first year – while my mathematical knowledge has not greatly increased, my knowledge of how students learn, understand, and make errors has, allowing me to teach more effectively.

The article also challenges the use of multiple-choice testing. This has been a regular struggle for me. When teaching in Ontario, my colleagues and I almost never used multiple choice. When I wrote my first test for my first job teaching in BC, I showed it to the department head for input, and he said, “It looks good, but where’s the multiple choice?” Having now taught and subbed at multiple schools in Vancouver, I notice that multiple-choice testing is the norm, and can even make up students’ entire grades in some classes. As the authors suggest, it’s important that we “think carefully about the multiple-choice format” (p. 396). This form of testing can lead or mislead students toward correct or incorrect answers, and it does not let us evaluate process and thinking the way a full solution does. I believe the long tradition of provincial exams, along with bigger classes and less prep time, encourages multiple-choice testing in BC classrooms, but the Ministry should be more involved in standardizing evaluation so that students are not primarily evaluated in a multiple-choice format.


Question: Do you think that multiple-choice testing accurately evaluates students? Do you think the Ministry of Education has a role in dictating the types of assessments used in BC classrooms?

Saturday, 9 January 2016

Response to 'On Proof and Progress in Mathematics' by William P. Thurston

Thurston wrote this article as a response to another (by Jaffe and Quinn). In it, he discusses some of the positive aspects of that article and criticizes some of the assumptions its authors made. In particular, he focuses on the questions: 1) What is it that mathematicians accomplish? 2) How do people understand mathematics? 3) How is mathematical understanding communicated? 4) What is a proof? and 5) What motivates people to do mathematics? After considering how mathematicians understand these important questions, he shares some personal experiences that highlight how he came to his perspectives.

In particular, I was interested in Thurston’s discussions about how the social interactions mathematicians partake in through their research motivate them to continue researching particular aspects of the discipline. He states: “most mathematicians don’t like to be lonely, and they have trouble staying excited about a subject, even if they are personally making progress, unless they have colleagues who share their excitement.” (Thurston, 48)

This made me consider how we teach math in schools, particularly secondary schools. As featured in the video we watched in class last week, math classrooms are often one of the last places where students sit in rows at individual desks and spend a great deal of time expected to be silent, either listening to the teacher explain concepts or practising them on their own. Group work and discussion about approaches to solving problems are often discouraged in math classes and rarely evaluated, and students often feel that sharing answers or approaches is cheating. Most, if not all, of students’ evaluation comes from work done individually, often under pressure. This is in contrast to how mathematicians work: in collaborative environments, recognizing that colleagues play a key role in seeing problems in different ways, maintaining motivation, and building understanding.

I try to address this in my class by encouraging group and partner work while working on concepts, and providing assessments in class where students are encouraged to work together and yet still receive marks for their work. Yet, I have been criticized by colleagues for this approach, as they feel it does not accurately reflect student abilities, and some students will simply copy the answers of their classmates.


Question: Do you think it’s appropriate for student evaluation in secondary school to be partly made up of work that is done in collaboration?