The Needle's Eye

"This story like a children's tune. It's grown familiar as the moon. So I ride my camel high. And I'm aiming for the needle's eye." - Caedmon's Call

Thursday, June 12, 2008

Lying Numbers Lie

I'm going on record as saying (or writing, in this instance) that I wish I'd had Gerald Bracey's Reading Educational Research last spring, when I took the second half of the Research & Inquiry class set. Without question, it was the most mind-numbing and least stimulating of my graduate classes thus far, and of most of my undergrad classes too. I won't get into too many of my reasons for thinking this way, but I will allude to them in a few short statements that sum up what I appreciate thus far about Bracey.

I am not a statistician. If I'm honest, I couldn't care less about the terminology required for understanding statistics because it's not my field of choice - but I do admire and respect those who excel in it. Even so, I do want to know enough about how statistics are used in my field (and they are) so that I don't get blindsided by the games they can play with educational practice (i.e., making the state of affairs seem bleaker than it actually is).

And just so I'm clear, if that requires my knowing how to compute standard deviation, variance, and correlation coefficients, interpret scatterplots, and run independent-samples t-tests and ANOVA, fine. But why stop there? Show me how it matters in the real world! Help me go beyond how these terms work - let me see their strengths and weaknesses. What do the majority of researchers tend to highlight? Why? Are educators more likely to focus upon quantitative or qualitative research? Why? Are "lay people" more likely to listen to quantitative or qualitative research? Why? Which audience matters more? How can I incorporate data to support my teaching practice in ways that show I am aware of the risks of trusting it so blindly that it damages my credibility? How can I balance my tendency toward descriptive, holistic research with today's necessity to interlace generalizable, experimental research (given that both types matter, but one happens to be the type that more people in power take seriously)?

I've never been a lover of numbers (as my less-than-stellar math grades will attest) and I will freely admit to having a bias against their value that more than likely correlates with my lack of success in using them for my own purposes. Descriptive research is a strength, or at least a mode that I understand well enough to put to competent practice. And I think that says something about the way that we approach teaching math and statistics. Do most students who struggle with math and numbers tend to shy away from using them in the "real world?" If there is anything to that, what does that tell us about the way we teach the subject if quantitative data is the "in-route" these days? Now there's an inquiry essay waiting to happen...

Anyway, back to Bracey. Despite my relief at his take on the games that statistics can play, I was a little bit miffed at his statements on "effective teachers" in the Tracking Growth section. He does a good job of identifying what society at large, or at least TVAAS, thinks effective teachers are: the ones who bring up test scores. True enough.

But he seems to skirt the big question on the table, one that he posits himself as an introduction. If the implication is that test scores are an inadequate definition of effective teachers (and I agree that they are), then what exactly is an effective teacher? I was waiting for him to take a stance on authentic pedagogy, backed by carefully phrased statistics from trustworthy growth models, but he never seemed to get around to it. It felt like he was showing only one side of the issue, the faulty one; if he wants to be believed, I think he needs to establish what research argues on the other side, particularly when he opens the section with a question that seemed designed to be answered at some point. If there is no simple answer (and there isn't), then why set it up that way?

What was the point of his making up three of the four hypotheses that he shared in the following section, I wonder? Why did he use only one that had been field-tested and held its own? His point that hypotheses are in constant need of tweaking to arrive at clear, testable terms is well taken, but to me, they aren't nearly as useful without the studies behind them. The studies would help show me how well researchers followed their hypotheses, what variables existed (and how they tried to work around them), and a brief sketch of the results that either verified or discredited the hypotheses. This would have given me more of an idea of how malleable hypotheses are: once the studies begin, researchers often find the need to adjust their terms to carry out the experiments adequately. Broad statements are more sharply defined, narrow statements are re-phrased to include more relevant points, and so on.
