So I teach a course called Data Management, effectively Probability and Statistics at the Grade 12 University level. (Possibly AP Stats "Lite", but I don't know enough about that course, or the US curriculum in general, to say with any certainty. It's certainly "heavier" than our College level.) I've taught it for a while, perhaps because I'm fairly well suited to it... I have a background in both math and computing, I've been an editor (there's a big written project to do), and I'm personally removed just enough from the math so as not to visibly cringe when some students invariably use convenience sampling when they shouldn't.

This was the year I finally made the shift towards more level-based marking in that course. (Because with three different preps, one of them new, what else was I going to do? I'm also crazy.) There were hiccups along the way, notably how our textbook from 2002 doesn't really follow the strands of the 2007 curriculum in a coherent way, but aside from the extra million hours dealing with marking (because I suck at it!), it worked well enough that I'm shooting for more of the same this semester.

It's the exam I want to talk about here though. It was sort of a "Choose Your Own Adventure".

This is also the sort of thing that could be generalized to other courses.

#### THE EXAM ITSELF

The Data Management course has five strands, but the fifth is the big project, so only four are tested on the exam. Each has two or three Overall Expectations. Here's what I did: I headlined each individual expectation, then gave students a choice of a few questions for how they would demonstrate their knowledge. For instance:

***

__Probability and Counting Expectation__ - Solve problems involving the probability of an event or a combination of events for discrete sample spaces.

CHOOSE ANY TWO OF THE FOLLOWING THREE TO SOLVE:

1. (insert question involving Venn diagrams)

2. (insert question involving Tree diagrams)

3. (insert question involving Experimental Probability)

***

(I'm not inserting the actual questions because I'm hoping to just tweak them for June, and students find these things. Email me.) The only two questions where they had NO choice were the scatterplot, and the calculation of central tendency and spread. Because c'mon. Statistics.

To supplement what were then eleven open questions (most having options), there were also 25 multiple choice to checkpoint things like factorials and z-scores, which they might otherwise have been able to bypass. (Yay for scantrons.) They knew this was the setup going in. So, since most of the exams were different in terms of the questions students actually answered, how does one assign a mark?

I FLIPPED A COIN. KIDDING!

#### THE MARKING

Every strand had 6 or 7 multiple choice and 2 or 3 open response. If they nailed ONE of those (all the multiple choice, or any single open response), I deemed that to be a pass (50%). If they met all expectations, I deemed that provincial standard. If they got everything AND showed good form, that was 4+ (100%) for the strand itself. Then I tallied up all four strands and averaged them out, given all four should be of roughly equivalent weight.
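For anyone curious what that averaging looks like in practice, here's a minimal sketch. The level-to-percent mapping is my own illustrative guess (only the 50% pass and 100% top are stated above); the function names are mine, not anything official.

```python
# Hypothetical level-to-percent mapping: only pass (50%) and the top mark
# (100%) come from the scheme described above; the middle values are assumed.
LEVEL_TO_PERCENT = {1: 50, 2: 60, 3: 70, 4: 100}

def exam_mark(strand_levels):
    """Average four equally weighted strand levels into one exam percentage."""
    percents = [LEVEL_TO_PERCENT[lvl] for lvl in strand_levels]
    return sum(percents) / len(percents)

# e.g. a student at level 2 in one strand and level 3, 3, 4 in the others
print(exam_mark([2, 3, 3, 4]))  # 75.0
```

Since the four strands carry equal weight, this is just a plain mean; weighting the strands differently would only mean swapping in a weighted average.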

Of course, the reality is a bit more complicated (for instance, if they got the open questions wrong but were pretty close, what is that... what if they only messed up on a couple multiple choice... etc.). However, I'm reasonably convinced that I'd have obtained something comparable on "points".

Here's the good bit. When I finally had the time (i.e., on a weekend in February), I went back and tallied up which questions students answered (versus avoided), and which of those they were getting right. I can now use that to inform my instruction this semester.

#### THE RESULTS

In Probability, the students had tree diagrams nailed. They avoided experimental probability and pathways questions, so I need to solidify there... though there is the possibility they merely thought the permutation question was easier. (But apparently *not*; a particular case was consistently missed.)

Nothing was really avoided in the Probability Distributions expectation (all questions roughly equally attempted), but binomial distribution did not go well when selected (that was a surprise; they're given the formula). Expected value was pretty solid. Within the same strand, normal distribution was avoided in favour of hit-and-miss on standard deviation. So it seems I have a distribution problem.
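Since the binomial formula is the one they're handed and still fumbled, here's a quick sketch of the calculation involved (not an exam question, and the function names are mine):

```python
from math import comb

def binomial_pmf(n, k, p):
    """P(X = k): probability of exactly k successes in n trials,
    each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expected_value(n, p):
    """E[X] = n * p for a binomial random variable."""
    return n * p

# e.g. probability of exactly 3 heads in 5 fair coin flips
print(binomial_pmf(5, 3, 0.5))   # 0.3125
print(expected_value(5, 0.5))    # 2.5
```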

Data organization was good overall. The sampling question was avoided, but here most were getting the survey option correct, so that probably WAS a case of picking the "simpler" option. And I guess some didn't listen to me when I said there would be a mandatory scatterplot, so some people simply need to help themselves and study the stuff I say for sure they'll need to do.

Multiple Choice: The only two questions that more than half of them got wrong involved correlation coefficient (likely mixed up squaring/rooting) and normal approximation of binomial (the z-score question was also weak). No question was perfect across the class, but the five best all related to definitions (of bias, graph use and the like).
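To unpack those two weak spots: the likely correlation mix-up is between r and r², and the normal approximation question boils down to a z-score. A small sketch (illustrative numbers, not the exam's):

```python
from math import sqrt

# The squaring/rooting mix-up: software often reports r^2 (coefficient of
# determination), while the correlation coefficient r is its square root,
# with the sign taken from the direction of the trend.
r_squared = 0.64
r = sqrt(r_squared)   # 0.8 (positive here, assuming an upward trend)

# Normal approximation of a binomial: X ~ B(n, p) is roughly
# N(np, np(1-p)) for large enough n, so questions reduce to a z-score.
def z_score(x, mean, sd):
    return (x - mean) / sd

n, p = 100, 0.5
mean, sd = n * p, sqrt(n * p * (1 - p))   # mean 50, sd 5
print(z_score(60, mean, sd))   # 2.0 -- 60 successes is 2 SDs above expected
```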

So there we go. Any additional tips to suggest, or modifications you might make? I'm liable to do something similar in June.

In conclusion, based on what I found out, I've started with permutations and combinations in the new semester (building *towards* tree diagrams). I'll also need to angle the normal distribution a little differently, since that wasn't retained. Of course, the overall format is the sort of thing I might want to try with other courses, though I'm not sure how well it translates outside of statistics... and, of course, I'd need to have a lot of time to burn and the inclination to work on it (rather than take a weekend off here or there).

And of course, there are some times when I just want to work on my webcomic.

HETALIA. It needed more math.
