The Needle's Eye

"This story like a children's tune. It's grown familiar as the moon. So I ride my camel high. And I'm aiming for the needle's eye." - Caedmon's Call

Monday, June 16, 2008

Mouth Pieces

Despite my disdain for numbers, I grew up believing that they were as infallible as faith. And yes, I'm aware of how weird that sounds. In my mind, numbers (and, later on, statistics) told the truth. Clear-cut and concise. It's not hard to see the lure of quantitative research when it essentially cuts to the chase and cranks out answers [leaving the rest for us poor teachers to sort through. Ha ha].

I detailed this example in a blog post last year, but it applies here too. In the first couple of weeks of the school year, we teachers were given the grade weighting scale mandated by the district. Grade weights were split into two categories: major assessments and minor assessments. Each category counted for 50% of students' grades, which immediately raised one big question: what's the difference? I eventually set up sub-categories under each: homework/classwork, journals, and quizzes went under minor assessments, with tests and projects reserved for major assessments.

But here's the kicker. We were required (supposedly, because no one checked with me on this) to give students six tests per quarter. Right away, that's six major grades we had to prepare for students before we could even think about what to do in class. I was also told we should finish each quarter with at least 16 total assessments. Sixteen grades, six of which are major tests - and since half of 16 is eight, that's only two grades away from encompassing half of a quarter's work. If each quarter lasts nine weeks, we're talking scheduling tests within a week (or a week and a half) of each other. What about quizzes? Does this system still leave me enough time to schedule quizzes periodically to check progress on material that doesn't warrant a full test? What of homework? Does this system imply assigning homework practically every night just to push the number of assessments up to 16 (and then exceed it)? Even on nights when students are supposed to be studying for tests?
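For anyone who wants to check my math, here's a quick back-of-the-envelope sketch in Python. The inputs are the district's figures from above; the rest is simple division:

```python
# Back-of-the-envelope math on the district's assessment mandate.
# The inputs are the district's numbers as described above.
tests_per_quarter = 6    # required major tests
min_assessments = 16     # required total grades per quarter
weeks_per_quarter = 9

print(f"One major test every {weeks_per_quarter / tests_per_quarter:.1f} weeks")
print(f"{min_assessments - tests_per_quarter} minor grades left to fit in between")
print(f"Tests alone: {tests_per_quarter / min_assessments:.0%} of the grade count")
```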

Bracey's statement about variables on page 40 correlates with my newly enlightened view of numbers (mind you, this one incident didn't alter my view; it had been and continues to be in a state of flux as I work to figure out how to balance it with my tendency to rely on qualitative research). He says, "Variables [numbers] are not pure indications that reveal their meaning to us immediately. They must be interpreted."

Numbers are always in the hands of people. Since they cannot literally speak for themselves (at least not immediately), we, with our personal biases, platforms, and agendas, are their mouthpieces. There's nothing wrong with having these because, after all, we're human. We can't help what we are any more than numbers can help being what they are. But we should know how to recognize our biases so that we can monitor how they color the way we interpret numbers.

It doesn't make one person any more right or wrong than another, as Bracey showed in comparing George W. Bush and his critics; it just shows how each used different statistics (i.e., mean and median) and, I'm sure, vastly different interpretations of those statistics. The key is to be aware of what point you want to make (p. 44), make sure the data supports your point without twisting it out of context, know what populations your numbers are based on (p. 46), and monitor (and be prepared to adjust for) changes over time in the composition of your populations (p. 62). Otherwise, you come across as either a liar (more on that in a moment) or an ignoramus. I want to be trusted and to feel in touch with the world around me, so I sometimes have to resign myself to double-checking my views to make sure they haven't been manipulated by statistics, and to keeping the big picture in mind when news breaks of schools measuring record highs or lows in achievement test scores.
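Bracey's mean-versus-median point is easy to recreate. Here's a toy sketch showing how the two can tell different stories about the very same numbers (the dollar figures are invented for illustration, not Bracey's actual data):

```python
import statistics

# Invented tax-cut amounts, in dollars -- NOT Bracey's actual figures.
# A few very large cuts at the top drag the mean well above what the
# typical (median) person actually receives.
tax_cuts = [300, 350, 400, 450, 500, 600, 700, 900, 1200, 25000]

print(f"Mean cut:   ${statistics.mean(tax_cuts):,.0f}")    # $3,040
print(f"Median cut: ${statistics.median(tax_cuts):,.0f}")  # $550
```

Same ten numbers, and one spokesman can honestly report a figure more than five times larger than the other's. That's the mouthpiece problem in miniature.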

I did a study in the spring with one of my advanced Language Arts classes to compare boys' and girls' quiz scores on grammar concepts under two different teaching styles. On the surface, the class seemed ideal for a reliable population: no IEPs, no APs, and an even split between the genders.

But right away, I faced threats to the internal validity of my study: a few students dropped the class, we lost time to assemblies, I had to squeeze four grammar concepts into a four-week unit for time's sake, and so on. So I had to account for these threats as I presented my findings. The point I wanted to make (that dialogic discourse has a more positive impact on grades than prescribed instruction) was not borne out by the results. If I were dishonest (I mixed that word up with "smart" in class today - my bad. Nothing smart about lying with words or with numbers, kids), I probably could have skewed the data in such a manner as to prove my point - but that's unethical. And since I like my job, I wouldn't want to do that anyway.
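For the curious, the comparison itself is simple to run. A minimal sketch, with made-up scores standing in for my students' real quiz grades:

```python
import statistics

# Made-up quiz scores -- placeholders, not my students' actual data.
groups = {
    "dialogic discourse":     [78, 85, 92, 88, 74, 90, 81, 86],
    "prescribed instruction": [80, 83, 95, 79, 77, 91, 84, 88],
}

for style, scores in groups.items():
    print(f"{style}: mean={statistics.mean(scores):.1f}, "
          f"stdev={statistics.stdev(scores):.1f}, n={len(scores)}")

# Overlapping means like these are exactly the kind of result that
# didn't support my point -- and quietly dropping the students who
# left the class would have been one easy (and dishonest) way to
# nudge them apart.
```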

As a qualitative researcher, I am not looking for quick, tidy conclusions; I should want to raise more questions so that I can keep experimenting. It gets tedious on occasion, but if it keeps me consistently in touch, it's ultimately worth the effort.
