2B) Keynote - The Mindset Revolution
Jo Boaler presented the "growth mindset". MYTH: Being good at math is a "gift" or natural ability. This seems to be a very Western idea, and video clips (Disney/Hollywood) were shown to demonstrate how this "fixed" mindset towards math gets reinforced. The REALITY is that every child can excel, given a "growth" mindset. The brain can change, and do so within 3 weeks, but it's "use it or lose it". (London taxi drivers show some brain shrinkage after retirement.)
Growth mindset behaviours: persistence, determination, and a desire to learn from mistakes. To promote this, be careful how you give praise: "You're really smart" encourages a FIXED mindset (if I'm smart, then when I don't get something, it must not be important), while "It's great you've learned this" encourages GROWTH (if I learned that, I can learn more things). Those with the latter mindset outperform those with the former in math. There are also societal differences here.
One study found that among females, the higher they scored on an IQ test, the more difficulty they had with a challenge; this effect was REVERSED for males. Grouping students by ability (higher achievers together, same for lower) was also found to be damaging, because it promoted a FIXED mindset among the higher achievers. Big Question: "How do you maintain a growth mindset when math class is a set of closed questions that you get right or wrong?"
|Up? Out? Wibbly wobbly?|
"If you change a question, it really changes what people do with it." Ideally, find "low floor/high ceiling" tasks. Also, math should never be associated with speed (timed tests cause math anxiety, and anxiety blocks working memory). And math evaluations shouldn't include a grade/mark: as soon as students get a grade, they ignore diagnostic feedback. In another study, tests were returned with 'feedback only', with 'grade only', and with 'grade and feedback'; the 'feedback only' group scored significantly higher on comprehension than BOTH the others. (ASIDE: Ashli Black has also recently reflected about "putting grades on papers".)
Every (known) mistake grows a brain synapse (per Carol Dweck). You don't even need to correct the error; just being aware of it promotes growth, whereas when work is (perceived as) correct, no brain growth occurs. So mistakes are good - and incompatible with a PERFORMANCE culture! Messages may also be more important than the knowledge obtained; a group of researchers under Geoff Cohen saw significant achievement gains due to the addition of one sentence: "I am giving you this feedback because I believe in you."
Jo concluded with some references to "Stereotypes That Distort How Americans Teach and Learn Math", her "How to Learn Math" course last summer, an upcoming documentary from the makers of "Race to Nowhere", and the website www.youcubed.org
3B) Fake World Math
Dan Meyer's address spoke to the top 4 questions that have come up over the last ten years: 1) What is [math] modeling? 2) What isn't modeling? 3) How do we get students good at modeling? 4) How do we get students to like modeling?
He had two stories to share, a 'Happy' one and a 'Horrible' one, and asked which we wanted to see. Most people shouted 'Horrible' which prompted the following (perhaps inadvertent) teaching moment: "That was not a good question to ask." The order of the slides is baked in... we're doing happy first.
The happy story is UPS. There are 130 stops for a driver to make in a day on average - their "Orion" computer plots an optimal route. (Dan built this topic up by asking the number of routes with 3 stops, etc.) How does the computer do this? What are the factors involved in getting 9 seconds per stop? The audience discussed, and many of the answers were environmental (traffic) or on the driver (reaction time). Dan pointed out the vehicle as well: Fuel, tune-ups and the like.
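Dan's build-up (3 stops, then more) can be sketched numerically: with n stops there are n! possible orderings, which is why brute force fails and routing software like Orion is needed. A minimal Python illustration; the only figure taken from the talk is the 130-stop average:

```python
from math import factorial

# With n stops, a brute-force router would have to compare n! orderings.
for n in (3, 5, 10):
    print(f"{n} stops -> {factorial(n)} possible routes")
# 3 stops -> 6 possible routes
# 5 stops -> 120 possible routes
# 10 stops -> 3628800 possible routes

# For UPS's average of 130 stops, the count is astronomical:
print(f"130 stops -> a number with {len(str(factorial(130)))} digits")
# 130 stops -> a number with 220 digits
```

Orion obviously does far better than enumerating orderings; the point is just the scale of the search space that makes the optimization interesting.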
The unhappy story is Kenneth Feinberg. Known as "Death's Accountant", after a horrible tragedy (mass hospitalization, loss of life), it's his job to decide 'Who Gets What' of the aid money that pours in. How does he decide THAT? Again there was discussion, which considered factors like the deceased's family; afterwards, Dan revealed Feinberg's decision that the loss of two limbs was equivalent in money to a death.
|Which job would you rather have?|
Dan laid out the modeling cycle in five steps:
1) Identifying the problem;
2) Formulating a model;
3) Performing operations/analysis;
4) Interpreting results;
5) Validating conclusions.
Dan mentioned a quote by the statistician George Box: "Essentially, all models are wrong, but some are useful." By contrast, school textbooks assert, 'All models are correct, but you may have miscalculated.' They focus on step #3, whereas "We are drawn to 'Information Gaps' between what we know and what we want to know."
Step 1 is interesting. Step 2 is lucrative. Step 3 is trivial these days, because of technology. Yet in an analysis of 83 problems in a text, most involved Step 3. (Identify: 7; Formulate: 20; Perform: 71; Interpret: 67; Validate: 4 - those last involving probabilities. A single problem can involve several steps, which is why the counts sum to more than 83.)
Students dislike modeling. What ISN'T the answer? More textbook models. "We have so many commercials for math's power. We need students to experience math's power." Perhaps through actual modeling. Perhaps with the use of scaffolding questions (eg, if you don't know the answer, "Tell me a number that you think is too high"). Also, ask "What do you need to know?" more often.
Dan showed his "Super Stairs" video, where he goes up a flight of stairs, then down... then starts going up 1, down 1, up 2, down 2, up 3... and how long will that take? It's a nice example for a few reasons, including being a place where proportions break down. Also, the real answer may be a bit more (less?) than any mathematical model. Dan Meyer's cardio was questioned - not for the first time? (Tough audience!) He did resolve this question with an actual answer.
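The Super Stairs pattern lends itself to a quick naive model: for a flight of n steps, the routine traverses the initial up-and-down (2n steps) plus 2(1 + 2 + ... + n) more. A rough Python sketch, where the flight size and pace are made-up illustration values, not figures from the video:

```python
# Naive model for the "Super Stairs" routine, assuming a flight of
# n steps and a constant pace. (n=12 and 0.6 s/step are invented
# illustration values, not taken from Dan's video.)

def super_stair_steps(n):
    # Initial up n, down n; then up 1, down 1, up 2, down 2, ..., up n, down n.
    return 2 * n + 2 * sum(range(1, n + 1))   # = 2n + n(n+1)

def naive_time(n, secs_per_step):
    return super_stair_steps(n) * secs_per_step

print(super_stair_steps(12))   # 180 steps traversed for a 12-step flight
print(naive_time(12, 0.6))     # 108.0 seconds under the naive model
```

As the talk emphasized, the real answer will differ from any such linear model: turning around, pauses, and fatigue are exactly where the proportions break down.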
In conclusion, try to do more of the entire modeling cycle. From Actual Modeling (in the world) through Text Modeling (in class) and BACK. Pseudo-context necessitates shutting down parts of the brain. A shoutout was given to the Estimation180 website. OUR HOMEWORK: Exploit other people's curiosity. Capture your own curiosity.
|A picture I took on day 1. Curious?|
5/6B) OAME Ignite!
This was a double session. Ten speakers were each given five minutes. I blogged about this separately. You can go to that post for "speedy enlightenment", along with my thoughts on the experience.
7B) "AfterMath" II
I went to the OAME Annual General Meeting at 4:30pm, because I figure if you're going to be part of an organization, you can afford 30 minutes to see how it's being run. There were a couple dozen people there. Projected attendance for the conference was 1,560 people. A 2013 deficit had been anticipated, but did not materialize. There was talk of restructuring the AGM, perhaps holding it electronically, and after August, to align more sensibly with the budget (currently closed off Aug 31, 2013).
|Guess the table number|
All that said, the guitar player they had after the awards was great. With zingers like "If you believe in telekinesis, raise my hand" and a song about using a "guitar capo", he helped to redeem things. I chatted mostly with COMA folk, though briefly with someone from up north who had also attended CMEF the previous week.
After getting back to the residence, I did my AMVFriday thing, ending up with what I feel was one of my best selections for May (HoTD). Then sleep and stuff.
There were two really good questions asked of me today. The first was by Bruce McLaurin over breakfast, when he asked me about a comparison of the OAME conference to the CMEF conference the previous week. I'll come back to that one.
1C) How Educators Inspire Students
The second good question was asked of me by Mawi Asgedom after his keynote. I have also blogged about that whole experience separately.
Notably, and something I didn't mention in that post, money was donated to "Champions for Change" in place of getting gifts for presenters ($10,000, to be used to build a school). Mike "Pinball" Clemons also came out to say a few words, in particular: "When we give help, assistance is still needed later. When we give hope, people can help themselves."
2C) Using the Student Achievement Chart
I got to this session a bit late. Melissa Shields & Shahana Arain (two Grade 8 teachers) were presenting a method for evaluating strands. A strong link was made with the "Desire2Learn" provincial resource. At one point, one of the presenters mentioned how high school teachers were starting to use the site more, so the two of them needed to catch up - though they're already ahead of me.
Their basic "strand breakdown" went this way:
1- 25% was Knowledge/Understanding. Evaluated 1/4 of the way in, form of a 10 question multiple choice quiz.
2- 25% was Thinking/Inquiry. Evaluated 1/2 of the way in, form of a math folder with a group work component.
3- 25% was Communication. Evaluated 3/4 of the way in, form of tech journals and Desire2Learn discussion forums.
4- 25% was Application. Evaluated at end of strand with a group hands-on experimentation activity.
There is also a 'Survival Guide' notebook given to students at the beginning of the year; it's up to them to decide what goes in it. Ideally filling it will encourage love for math and provide a place to go to for answers. There was no unit testing.
A few specifics: The groups for the T/I component were formed by first getting volunteers willing to share their expertise about some aspect of the strand. With those core groups formed, other students joined where they liked, to participate in information finding. (For instance, one group looked into the weight of backpack handles for the Patterning & Algebra strand.) Each person records their learning in an independent math folder, with a focus on how to attack problems. It was noted that a goal like "have neater writing" may not help MATH skills, so focus students' efforts while still allowing a chance to improve. The result is students taking ownership, and more differentiated instruction.
The Desire2Learn aspect within the Communication component can be a one-stop-shop replacing the need for a course website. (Noted that students actually requested to do journaling with technology, not pen and paper. Also D2L has a 24 hr helpline.) D2L includes online manipulatives, it's updatable with the curriculum, and discussion can even take place between schools. Forum questions can be embedded for response in the form of: text, images, or audio clips. (Sample question: "If you know one angle in a triangle is 53 degrees, what else do you know about the triangle?") Date stamps let you see when students post, and they all have the same rubric here, which includes a spot for responding to peers.
The final strand conclusion can take the form of a video, a scrapbook - whatever the students deem gets the point across. (We were shown a video of students attempting to put someone inside a soap bubble.) It was noted that it took 3-5 years to establish the math culture at their school for these evaluation methods (regarding colleague/parent/student acceptance). Presently, the reception from students is positive.
3C) How Technology is Changing the Classroom
|Electricity, free of charge.|
The presenters (Christina Anjos aka @MissCasucch, and Ryan Perera) began with how they had "flipped", looking a bit at the pros and cons (including student feedback). For instance, you can use videos for review the following year. Then they moved into HOW to flip the classroom. In particular, you need a YouTube account, time, support, and video capturing software. For the latter, check maximum recording durations before deciding. (Jing is one free option.)
There is a benefit to making videos, rather than assigning ones already online - it's YOUR voice, and you can be consistent with how you use terms in class. (Though you can supplement with other materials when time doesn't permit video creation.) One useful tool is having a quick assessment to start the class, to see what students already understood... allowing for a focus elsewhere.
Ryan demoed "responsive software" for doing such an assessment. Everyone got a "smart clicker" and a paper quiz to complete. It was noted that numerical entry has some quirks (eg. negatives), but the software compiles everything into visual charts. Students can (hopefully) see that they're not alone. (It was recommended not to always show the pie chart to everyone.) There is an anonymous mode, so you don't know who sent what answer. Obvious con: a class set is $400.
Christina presented an alternative: Google Forms with a Flubaroo script. This flips the pros and cons in that it's free, easy, and images can have fancy math symbols... but requires students to provide the hardware. (We were advised to bring laptops or other devices.) It's possible that the quiz input could be done at home, after viewing the video. Responses get sent to Christina through email, and "Flubaroo" is what does the grading. She can then auto respond through email with results.
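Flubaroo itself runs as a Google Apps Script attached to the form's response spreadsheet, but the grading idea is simple to sketch. A hypothetical Python version of the same logic, with a made-up answer key and student (not Christina's actual setup):

```python
# Minimal sketch of Flubaroo-style auto-grading: compare each student's
# form responses to an answer key and compose an email-ready summary.
# (The questions, answers, and student name below are invented.)

ANSWER_KEY = {"Q1": "b", "Q2": "d", "Q3": "a"}

def grade(responses):
    """Return (score, total) for one student's {question: answer} dict."""
    score = sum(1 for q, ans in ANSWER_KEY.items() if responses.get(q) == ans)
    return score, len(ANSWER_KEY)

def summary(name, responses):
    """Build the line that would go into an auto-response email."""
    score, total = grade(responses)
    return f"{name}: {score}/{total} ({100 * score // total}%)"

print(summary("Student A", {"Q1": "b", "Q2": "c", "Q3": "a"}))
# Student A: 2/3 (66%)
```

The real workflow stays inside Google's ecosystem (Forms collects responses, the script grades the sheet and emails results), but the comparison-against-a-key core is the same.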
A few other technologies: A Q&A Forum (kind of like Desire2Learn forums last session) or Learning Management System (LMS) where the teacher is moderator and students post. Dropbox, for sharing files. A "Bamboo tablet", which is a cheaper alternative to iPads, and displays your writing onto any screen. Some hand-eye coordination is helpful here, but you can then teach (have writing appear) while you're at the back of the room. Perhaps while checking on a student. Also, QR codes - "beqrious.com" allows you to create them, and there are free QR reader apps.
4C) "AfterMath" III
That last session wrapped up at 12:45pm; I headed out (driving with Mike Lieff) a little after 1pm. But let's go back to Bruce's question, comparing this OAME conference to CMEF. At the time, I mused that OAME felt more "personal" and CMEF more "broad", alluding to the scope... provincial versus national. For instance, with OAME I went to a session specifically for the MDM 4U course. But there's a little more to it.
|My views may not be your views|
The sessions I attended at CMEF felt like they were "bigger" topics, questions along the lines of why we do things in certain ways. There were topics and ideas that I felt I couldn't incorporate right away, but took away with me to think about. At OAME, there was some why, but more how we do certain things. More specific, if you will, like there was some baseline understanding already. This might be because the CMEF was first (providing the baseline), or because OAME sessions were longer, creating more coverage. Or something else entirely. I'm not sure. Then again, I know teachers who implemented the "randomized groups/vertical surfaces" from CMEF right away, so your mileage may vary!
Either way, don't let the fact I have four posts for OAME versus one for CMEF sway you - quantity is not quality, and Dan Meyer certainly saw something in the CMEF recap (he tweeted out my summary). As usual, I would be interested in hearing any thoughts you have about the matter.
If you went to OAME, here's a link to the online OAME Evaluation.
If you didn't go to CMEF, here's a link to all their vignettes.