Teach by Numbers
Yesterday, I went on about the infallibility of numbers and how it only extends as far as the reach of the people who manipulate them (which is to say, not far). We can make the numbers line up in favor of our views, but if we do so, we're dishonest, traitors to the very research designed to enlighten us. We can reference only the statistics that support the points we wish to make, but if we do so, we're ignorant, missing out on valuable data that we choose to dismiss.
But Bracey is right on page 71 when he states that statistically significant results (I've always struggled with that term) do not always yield practical applications. Suppose in the example I gave yesterday, about comparing teaching styles, that I got statistically significant results that supported my initial theory of dialogic discourse correlating with higher scores than prescriptive instruction. What then?
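Before I answer that, here's a rough sense of what "statistically significant but not practically useful" can look like. This is a toy sketch with invented numbers (nothing from my actual classroom), just to show how a trivial half-point gap turns "significant" once the sample gets big enough:

```python
# Toy illustration with made-up numbers: two teaching styles, half a point
# apart on a 100-point test, given an enormous number of student scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

dialogic = rng.normal(75.5, 10, size=20000)      # mean 75.5, spread 10
prescriptive = rng.normal(75.0, 10, size=20000)  # mean 75.0, spread 10

t_stat, p_value = stats.ttest_ind(dialogic, prescriptive)
cohens_d = (dialogic.mean() - prescriptive.mean()) / 10  # rough effect size

print(f"p-value: {p_value:.6f}")           # typically well below 0.05 here
print(f"effect size (d): {cohens_d:.2f}")  # around 0.05, negligible in practice
```

Half a point on a hundred-point test would never change how I plan a lesson, yet the p-value calls it a "significant" difference.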
Do I suddenly go on the talk-show circuit trumpeting my astounding findings to the public? Do I look up NCTE or ASCD and get my results published in some journal under the glamorous headline "DIALOGUE PROMOTES HIGH TEST SCORES!"? Sounds tempting, but no. I'd be thoroughly discredited and my findings would be shot to heck because they wouldn't hold up consistently.
I already mentioned at the end of yesterday's entry that conclusive data isn't what I'm after, or I'd be out of work. But the internal issues that plagued the validity of my study are precisely what others would pounce on, and why my findings, however intriguing, wouldn't hold water under repeated experimentation. I had to follow a very carefully coordinated structure, and my schedule had a profound impact on why my findings turned out the way they did. Some of that was intentional; some not so much. What happens when I try applying dialogic discourse to classrooms pushing 30 students, or 35, or 40? What about when I have more time, like a nine-week quarter, or less time, 2-3 weeks, to apply the method to the curriculum? Do I get the same results? I doubt it. And the questions don't end there.
What if I implemented this approach at the beginning of the year (as opposed to near the end of the third quarter)? Would students be more engaged if I caught them early, or would they have too much of a foundation in prescribed learning and rebel against me? What if I used it with ALL my classes (as opposed to one advanced, evenly split class)? Would the inevitability of academic plans, Section 504s, and a numerical dominance of one gender over the other yield different results? What if, heaven forbid, I came down with a debilitating illness a few weeks into the year and needed to hire a full-time substitute for my classes? Could I duplicate dialogic discourse onto the sub plans? Could I trust that the substitute would, in effect, "copy" the instruction style faithfully until I returned? More to the point, is it fair to expect that much?
Teaching by numbers is statistically possible but not practically plausible. And as Bracey notes on pages 74 and 75, even if we can find a correlation between two variables (which is rather easy; as he details in examples with skirt length and shirt sleeves, we have been doing this for a long time), it doesn't necessarily imply a perfect, or even strong, relationship.
This fact is what stops us from concluding that scoring well on the SAT causes high grades in the freshman year of college; it stops me from concluding that either prescriptive or dialogic instruction causes higher grades for all middle school students. For some, it probably will, but for others, it will have little to no effect at all. Additional research studies are required to rule out other possibilities before we can even come close to causation.
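To see how a correlation can show up without any causation behind it, here's one more invented sketch. I made up a lurking variable, call it study habits, and let it drive both numbers; the SAT and GPA figures below are pure fabrication for illustration:

```python
# Invented sketch: one unmeasured factor ("study habits") drives both
# SAT scores and freshman grades, so the two correlate even though
# neither one ever touches the other in the formulas below.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

study_habits = rng.normal(0, 1, n)                       # the lurking variable
sat = 1000 + 150 * study_habits + rng.normal(0, 100, n)  # SAT leans on habits
gpa = 2.8 + 0.4 * study_habits + rng.normal(0, 0.3, n)   # so does freshman GPA

r = np.corrcoef(sat, gpa)[0, 1]
print(f"correlation between SAT and GPA: {r:.2f}")  # roughly 0.6 to 0.7
```

A correlation that strong looks impressive on paper, but in this little fabrication the SAT has no effect on grades at all; the habits behind both numbers do the work.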