The Teacher Who Mistook Her Student for a Grammatical Error
In the title piece of his collection of case studies entitled The Man Who Mistook His Wife for a Hat, neurologist and author Oliver Sacks describes the strange case of Dr P., a distinguished music teacher with an implausible ailment. Impressed by Dr P.’s charm and intellect, Sacks nonetheless recognized something was seriously amiss when, at the end of the examination, the patient took hold of his wife’s head and attempted to place it upon his own, apparently mistaking her for his hat.
Dr P.’s erratic behavior, ranging from a total inability to distinguish between the faces of his pupils to random attempts at conversation with fire hydrants, parking meters and assorted pieces of furniture, did not prevent him from carrying on a routine if somewhat eccentric existence. Sacks discovered that Dr P. was capable of describing objects with great precision, but failed to recognize what those objects were. For example, he described one object as “a convoluted red form with a linear green attachment,” and another as “a continuous surface . . . infolded on itself” without recognizing the former as a rose and the latter as a glove.
Sacks describes how Dr P. was fully able to discern the discrete parts of an object without being able to fathom the nature of the aggregate of the parts. Dr P.’s eyes “would dart from one thing to another, picking up tiny features, individual features . . . A striking brightness, a colour, a shape would arrest his attention . . . but in no case did he get the scene-as-a-whole. He failed to see the whole, seeing only details, which he spotted like blips on a radar screen.” Dr P.’s inability to recognize faces and common objects was due to his failure to see the relationship of details to one another and to see how they formed a whole.
Reading about Dr P.’s peculiar malady called to mind something far from rare in my own professional experience as a teacher of English. Dr P.’s inability to see the forest for the trees, his focus on discrete details and lack of awareness of the object as a whole, bore a disquieting resemblance to the way teachers sometimes view their students’ writing. Caught up in the endeavor to find and correct errors and to make sure students are following the rules of composition imparted to them, teachers sometimes get lost in the details and fail to see the composition as a whole. In other words, in the same way that Dr P. could look at an object and describe minute aspects of it with precision without recognizing what it was, teachers could look at a piece of writing and accurately register syntactic and structural minutiae without grasping the essential character of what they had read.
It may seem an exaggeration to compare Dr P.’s extreme agnosia to the focused reading of teachers for the purpose of assessment, but the results are not dissimilar. When we are primed to look for specific elements in a piece of writing, be they the presence of topic sentences, proper citations and specific syntactic or lexical items, or evidence of such transgressions as misspellings, misplaced modifiers or split infinitives, we limit our ability to see and react to the composition as a whole.
Some time ago I participated in a calibration exercise intended to make sure faculty members were assessing placement essays in a consistent way. Copies of a number of essays were made, and each teacher read every essay, assigning a number grade to each and jotting down some brief comments explaining the rationale or criteria for the score given. The discrepancies in grades were interesting, but the reasons given for adding or subtracting points were truly enlightening. One teacher tended to grade all essays containing clearly identifiable topic sentences higher than those that lacked them. Some teachers placed a high value on organization while for others, mechanics trumped all. In one especially telling comment, a teacher noted that an essay lost points because the past tense was not used – even though the essay topic was the student’s plans for the future and did not really call for past tense usage. In short, each teacher went into the grading process all set to look for the presence or absence of certain elements and graded accordingly. To the extent that the features different teachers looked for varied, so did the grades they assigned, since it was the details and not the composition as a whole that were being evaluated. Perception of the whole was obscured, as it was for Dr P., by absorption in the details.
In every aspect of life we all have our likes and dislikes and our peculiar pet peeves. There are certain things that set us off whenever we encounter them, and there is, perhaps, no sphere where this is more the case than in language use. For me, such hyper-corrections as “between you and I” get under my skin and annoy me far more than such utterances as “He don’t know” or “She ain’t here.” I have to admit that my automatic reaction to hearing someone over-correct is to make silly assumptions about the person’s intelligence or education, even if what is being said is intelligent and insightful. Prejudices regarding how things are said can easily obscure the meaning behind what is said. Such prejudices are difficult to combat, even when one knows intellectually that they lead to misjudgment. The tendency for teachers sometimes to give undue attention to rather insignificant details in a piece of writing is quite understandable, but the unfortunate result is that errors in spelling or grammar may overshadow the real worth of the writing as a whole.
The proclivity to focus on details rather than the whole in assessing writing is reinforced by a widely used academic instrument known as a rubric. I use the term rubric here to refer to any apparatus used to mechanize assessment by relying on predetermined criteria or standards. Rubrics, devised to assure consistency in grading compositions, are presumed to make the assessment of writing more objective and transparent by specifying how much weight or how many points should be assigned to particular elements or qualities of the writing. Because rubrics are impressively effective in achieving consistency in scoring, they are deemed valid and valuable assessment tools. It is a mistake, however, to equate consistent scores with valid results if, in fact, all we have done is agree to limit our judgment in precisely the same way in order to arrive at similar grades. If we determine in advance what features to look for in a composition and encode them in a rubric, we may achieve consistency in grading, but in the process move farther away from rather than closer to a sensible form of evaluation. In a sense, rubrics validate prejudices and elevate the importance of details.
In a wonderful little volume entitled Rethinking Rubrics, Maja Wilson explains how she came to examine rubrics when the one she relied on required her to give a failing grade to a paper which had obvious merits. She searched for other rubrics with criteria that would better mesh with the kind of evaluation she instinctively felt would be fairer. But she could neither find nor design such a rubric because, as Alfie Kohn put it in his introduction to the book, “improving the design of rubrics, or inventing our own, won’t solve the problem because the problem is inherent to the very idea of rubrics and the goals they serve.”
Rubrics break down a piece of writing into discrete components that are viewed in isolation. This deconstruction is the fatal flaw of the rubric. By reducing writing to constituent elements and assigning values to them, rubrics attempt to replace the faculty of judgment with a more mechanical process, and it is that reductionism which disqualifies rubrics as valid assessment tools. For it is judgment, finally, that is necessary for assessment. The most rubrics can accomplish is to establish profiles that purport to indicate the quality of writing on the basis of some of its characteristics. This kind of rating system is as accurate as other forms of profiling and should satisfy us only if we believe we can deduce a person’s intelligence or honesty on the basis of his or her clothing or hair style. We naturally crave a formula or system for evaluation that is thoroughly objective and concrete, which we can point to when our decisions are questioned and thereby remain secure in our accountability. We want to avoid individual judgment and anything that smacks of subjectivity when we issue grades, but ironically judgment cannot be avoided because it is the very core of evaluation and the necessary ingredient for comprehending the whole and not just the details.
Oliver Sacks asserts that “our mental processes, which constitute our being and life, are not just abstract and mechanical, but personal as well and, as such, involve not just classifying and categorizing, but continual judging and feeling also. If this is missing, we become computer-like, as Dr P. was . . . [and] reduce our apprehension of the concrete and real.” In a sense, Dr P.’s pathology is what we embrace when we adopt mechanical means rather than employing judgment, subjective as it may be, in the evaluation of student work. Dr P. functioned, in Sacks’s words, “precisely as a machine functions . . . [and] construed the world as a computer construes it, by means of key features and schematic relationships. The scheme might be identified . . . without the reality being grasped at all.” To the extent that we strive for computer-like accuracy and consistency in grading, we move farther from grasping the reality before us.
A colleague recently asked me to help him develop a way to “quantify” portfolio assessment so that results would be more institutionally acceptable – for there is no surer way to establish a thing’s validity than to attach some numbers to it. A portfolio is a wonderful vehicle for open-ended evaluation because it supports teacher accountability without resorting to mechanized assessment; such evaluation is fundamentally at odds with quantification. One can, of course, quantify portfolio assessment through the use of rubrics, but that completely defeats the purpose of using portfolios in the first place. It is a bit like going to the animal shelter to find a canine companion, choosing the most active and exuberant dog on the premises and then killing it and taking it to a taxidermist for stuffing. Why get a live dog if a stuffed toy is all that is wanted, and why go to the trouble of developing a system of open-ended portfolio evaluation only to reduce it to a number-generating rubric?
Open-ended evaluation allows us to consider the quality of writing (and other kinds of work) from the broadest possible perspective without allowing narrow prejudices to unduly influence our decisions. Such evaluation is profoundly dependent upon judgment and relies on the teacher’s knowledge, experience and skill. Use of rubrics to evaluate student work, by contrast, is tantamount to putting on blinders that prevent full vision and keep attention focused only on certain predetermined elements. We might think of rubrics as prescribed templates within whose lines students must stay to receive a satisfactory grade. And just as keeping within the lines is not the only – or even necessarily a valid – criterion of artistic merit, neither is it a valid measure of writing skill. Determining if a piece of writing conforms to the rules and stays within the lines is simple and objective. However, such assessment tells us only how faithfully the rubric was followed and little or nothing about the actual quality of the work produced. That determination cannot be made without the use of judgment. There is no basis for equating the quality or potential of a piece with how well it conforms to predetermined standards, yet that is precisely what is done regularly in countless classrooms. Nor is there any basis for assuming that the quality of writing can be improved by applying rubric-driven rules, as Maja Wilson convincingly and entertainingly demonstrates in an essay entitled Apologies to Sandra Cisneros. In much the same way that standardized tests can reliably tell us only how well students perform on standardized tests, rubrics, despite the claim that they can assess writing quality, only measure conformity to rubric-generated rules.
Ultimately, renunciation of personal judgment in favor of a computational, mechanized system of assessment results in an anti-humanistic process of evaluation and education. Dr P. failed to recognize the individual faces of his pupils not only because he couldn’t make cognitive judgments but because he was unable to see the personal in their expressions. He could not discern what was unique in them and consequently could not differentiate between them. As assessment becomes less personalized and more abstract, we lose sight of reality as we become enmeshed in details and computations, and we dehumanize the educational process itself and run the risk of conflating the unique human beings in our classrooms with the mistakes they happen to make in their writing.
There is one big difference between Dr P.’s malady and that of the English teacher who loses sight of the human being behind the mangled syntax of an assigned essay, and that is that the agnosia of the former was the result of a brain tumor or other degenerative condition while the latter is self-inflicted and stems from an unwillingness to make personal judgments. If we wish to avoid the absurdities occasioned by Dr P.’s pathological inability to recognize what was in front of him, we must turn to a more humanistic, open-ended process of evaluation that does not spurn but rather embraces personal judgment and authentic reaction to the work we are attempting to assess.
Originally published in Global Study Magazine 5.3; Sept. 2009. © Mark Feder