Wednesday, March 2, 2011

Bob Broad's What We Really Value

What We Really Value: Beyond Rubrics in Teaching and Assessing Writing, by Bob Broad, 2003

Discussion by George Pullman

This post is part experiment, part blog entry. I read Broad's book about writing assessment today using the Kindle app on the iPad, highlighted what I thought was interesting, and then copied the highlights from kindle.amazon.com and pasted them below. For some reason there were no page numbers associated with these notes; perhaps that feature hasn't yet made it into the kindle.amazon website. At any rate, what we end up with is a somewhat unreadable gist of the text: handy as a refresher weeks or months later, but not really a useful alternative to reading Broad's book. Given more time I might stitch the more salient quotations together into a proper gist.

Broad makes a number of interesting points about the weaknesses of rubrics as writing assessment tools: their lack of context and their idealized assumptions about what we think we value in writing, among other things.

Interestingly, WAC-CTW did a similar ethnographic activity to generate one of our early sets of rubrics for critical thinking. We took a list of critical thinking traits and asked faculty to rank them in order of importance, and then to rank them by how well they believe their students perform them. You can see the results here.

At any rate, below is a list of direct, verbatim quotations from Broad's book, which I urge you to read.
A prime assumption of my work is that a teacher of writing cannot provide an adequate account of his rhetorical values just by sitting down and reflecting on them; neither can a WPA provide an adequate account of the values of her writing program by thinking about them or even by talking about them in general terms with her writing instructors. 
For all its achievements and successes over the past half century (see Yancey), the field of writing assessment has no adequate method for answering one of its most urgent and important questions: What do we value in our students’ writing? What we have instead are rubrics and scoring guides that “over-emphasize formal, format, or superficial-trait characteristics” of composition (Wiggins 132) and that present “generalized, synthetic representations of [rhetorical] performances … too generic for describing, analyzing, and explaining individual performances” (Delandshere and Petrosky 21).  
within the world of positivist psychometrics, the world in which ETS and other commercial testing corporations still operate, precise agreement among judges is taken as the preeminent measure of the validity of an assessment. 
understand and carefully map out the swampy, rocky, densely forested terrain of writing assessment that they found lying before them, they quickly moved to simplify and standardize it thus:
ETS researchers eventually derived from those seven main headings a list of five “factors” that seemed to capture the values of their readers:
Ideas: relevance, clarity, quantity, development, persuasiveness
Form: organization and analysis
Flavor: style, interest, sincerity
Mechanics: specific errors in punctuation, grammar, etc.
Wording: choice and arrangement of words
And thus was born what became the standard, traditional, five-point rubric
Confronted with an apparent wilderness of rhetorical values, they retreated to a simplified, ordered, well-controlled representation that would keep future writing assessment efforts clean of such disturbing features as dissent, diversity, context-sensitivity, and ambiguity. 
[In] the historical context of U.S. culture in 1961 and the following decades, rubrics may have done more good for writing assessment and the teaching of writing than any other concept or technology. During a time when educators were under constant pressure to judge “writing” ability using multiple-choice tests of grammar knowledge, the work of Diederich, French, and Carlton (and other researchers at ETS and elsewhere) legitimized direct assessment of writing (assessment that took actual writing as the object of judgment). 
Rubrics provide badly needed relief and enable faculty to assign and judge actual writing from large numbers of students with relative speed and ease. 
Scoring guides yielded yet another set of advantages: documentation of the process of evaluating writing. 
Students, instructors, and the general public could hold in their hands a clear framework for discussing, teaching, and assessing writing.
Assessments should improve performance (and insight into authentic performance), not just audit it. (129) 
For the assessment to be relevant, valid, and fair, however, it must judge students according to the same skills and values by which they have been taught. 
Very rarely do rubrics emerge from an open and systematic inquiry into a writing program’s values. 
By predetermining criteria for evaluation, such a process shuts down the open discussion and debate among professional teachers of writing that communal writing assessment should provide. 
a rigorous inquiry into what we really value and a detailed document recording the results of that inquiry. 
“Dynamic Criteria Mapping.” 
Huot foresees that the new generation of assessment programs will be
1. Site-based
2. Locally controlled
3. Context-sensitive
4. Rhetorically based
5. Accessible 
metamorphosing from the psychometric paradigm to a hermeneutic one 
The long-term outcome should be better learning for students of composition, enhanced professional development for writing instructors, and increased leverage with the public for writing programs that can publicize a complex and compelling portrait of 
Precisely because they lacked the teacher’s rich knowledge about a particular student, outside evaluators wielded their own distinctive authority in deciding which students passed 
participants volunteered to explain their pass/fail votes. Along with evaluative issues that bore directly upon the decision to pass or fail a particular text, related topics often arose that posed substantial and complex questions or problems for the FYE Program as a whole, such as “How do we define ‘competency’ in English 1?” “How important is it for a writer to ‘fulfill the assignment’?” 
First, I systematically, comprehensively, and recursively analyzed more than seven hundred pages of observational notes, transcripts of group discussions and interviews, and program documents to develop an emic map of City University’s terrain of rhetorical values. Working from my best understanding of their experiences, I then brought that conceptual map into dialogue with critiques of traditional writing assessment—and especially of rubrics and scoring guides—current in the literature of evaluation. Extending grounded theory in this way, I found participants’ complex criteria for evaluation cast in a new light, suggesting new possibilities for improving communal writing assessment, professional development, and student learning in the classroom. Dynamic Criteria Mapping seizes on those new possibilities. 
Using QSR Nvivo software for computer-assisted qualitative data analysis, I coded every passage mentioning criteria for evaluation, defined as “any factor that an instructor said shaped or influenced the pass/fail decision on a student’s text.” 

Without a method for placing side by side statements from program documents and candid statements from various norming and trio sessions—some privileging revision and others privileging unrevised prose—a writing program would lack the ability to identify a serious pedagogical and theoretical fissure 
Interesting criterion comes from Grant Wiggins, who complains in several of his publications that he encounters great difficulty trying to persuade groups of English instructors to place Interesting on their rubrics and scoring guides. 
Researching what we really value when teaching and reading rhetoric, as opposed to placing in a rubric only what we think we are supposed to value (typically “objective” and “formal” features) in large-scale assessment settings.
[That one reader’s] Cliché can be another’s Subtlety suggests that a writing program’s Dynamic Criteria Map might contain potentially hidden links among criteria depending on different readers’ literary or stylistic orientations. 
Dynamic Criteria Mapping can document and bring to light evaluative systems of which composition faculty might otherwise remain unaware (or about which they prefer to remain silent), 
A key observation at the outset of this section is that Constructing Writers is a widespread and perhaps inescapable feature of reading. We always construct an ethos behind a text as a means of interpreting and evaluating that text. What is new is our awareness that we need to document such evaluative dynamics so we can hold them up to critical scrutiny and make programmatic decisions about how to handle them. 
Ethos as a Textual Criterion consists of inferences drawn by readers on the basis of clues observable in the text. 
Constructing Writers is a Contextual Criterion precisely because the clues from which readers construct these portraits or narratives of authors come from outside of the student-authored text. 
Ted felt that the teacher’s wider knowledge of the student’s work gave the teacher the ability to make a better judgment than outside instructors’ “cold readings” alone could provide. 


 Imagined Details was the dominant mode of Constructing Writers in norming sessions, where (with one or two exceptions) the authors of sample texts were complete strangers to every reader present. 
They frequently inferred, imagined, or simply assumed “facts” about a student-author and her composition processes. 


Portfolios are, in and of themselves, powerful contexts for rhetorical judgment. The discourse of participants in this study lays out several specific ways in which evaluation of portfolios differs from evaluations of single texts. 
Which is not to say that a single essay might not tip the scales one way or another on a borderline portfolio, 
Instructors sometimes attempted to project imaginatively into the English 2 classroom, to imagine a particular student there and especially to imagine themselves or one of their colleagues teaching that student. 
Depending on their deeply felt sense of what were the core goals of English 1, instructors might pass or fail an essay 
Right or wrong, many instructors held an almost magical faith in the capacity of the Writing Center to cure student-authors of their rhetorical ills. 
The common evaluative implication of this faith in the Writing Center was that borderline texts failed because instructors believed authors could have gone to the Writing Center to get help with their problems. If texts that came before them for evaluation showed difficulties of various kinds (most often with Mechanics), students were assumed to have neglected to make use of the Writing Center as a resource. 
how the professional growth and awareness (or mood or level of exhaustion) of the instructor-evaluator shapes evaluation. 
DCM transforms the way we understand not only writing assessment but the nature of composition itself. 
Communal writing assessment, and especially Dynamic Criteria Mapping, require more of faculty than do teaching and grading in isolation. 
Instructors become more aware of their own evaluative landscapes; they learn how others often evaluate and interpret texts very differently; and they work together to forge pedagogical policy on such sticky issues as revision policies, how to value in-class timed writing in a portfolio, and plagiarism. 
The purpose of DCM is to discover, negotiate, and publish the truth about the evaluative topography of any given writing program, not to turn away from complexity and dissent 
Sample texts for DCM should be selected because they feature as many kinds of rhetorical successes and failures as possible. 
Tape recordings and transcriptions of norming sessions, trio meetings, and solo interviews. 
scribes should write down the specific criteria to which readers refer when they explain why they passed or failed a particular sample text. Scribes should also note the specific passage in the specific sample text to which a participant refers when invoking one or more criteria. 
Now that they know, perhaps for the first time, how they do value students’ writing, they need to undertake high-powered professional discussions regarding how they should value that writing. In other words, their focus shifts at this point from the descriptive to the normative. 

1 comment:

  1. This approach has potential, but as you suggest, the quotations need to be cut down a bit and given some context. I do something similar when I am researching for a project: I grab quotations, plop them into my blog, and then briefly write why each is important to what I am exploring. This has been very useful to me over the past few years; I can keyword search to organize material I've collected over a couple of years. The Kindle approach will be very useful for me since I won't have to write out each quotation. -b
