Saturday, June 4, 2011

BEYOND TESTS: ALTERNATIVES IN ASSESSMENT (Brown, 2004)

In the public eye, tests have acquired an aura of infallibility in our culture of mass producing everything, including the education of school children. Everyone wants a test for everything, especially if the test is cheap, quickly administered, and scored instantaneously. But we saw in Chapter 4 that while the standardized test industry has become a powerful juggernaut of influence on decisions about people's lives, it also has come under severe criticism from the public (Kohn, 2000). A more balanced viewpoint is offered by Bailey (1998, p. 204): "One of the disturbing things about tests is the extent to which many people accept the results uncritically, while others believe that all testing is invidious. But tests are simply measurement tools: It is the use to which we put their results that can be appropriate or inappropriate."
It is clear by now that tests are one of a number of possible types of assessment. In Chapter 1, an important distinction was made between testing and assessing. Tests are formal procedures, usually administered within strict time limitations, to sample the performance of a test-taker in a specified domain. Assessment connotes a much broader concept in that most of the time when teachers are teaching, they are also assessing. Assessment includes all occasions from informal impromptu observations and comments up to and including tests.
Early in the decade of the 1990s, in a culture of rebellion against the notion that all people and all skills could be measured by traditional tests, a novel concept emerged that began to be labeled "alternative" assessment. As teachers and students were becoming aware of the shortcomings of standardized tests, "an alternative to standardized testing and all the problems found with such testing" (Huerta-Macias, 1995, p. 8) was proposed. That proposal was to assemble additional measures of students—portfolios, journals, observations, self-assessments, peer-assessments, and the like—in an effort to triangulate data about students. For some, such alternatives held "ethical potential" (Lynch, 2001, p. 228) in their promotion of fairness and the balance of power relationships in the classroom.
Why, then, should we even refer to the notion of "alternative" when assessment already encompasses such a range of possibilities? This was the question to which Brown and Hudson (1998) responded in a TESOL Quarterly article. They noted that to speak of alternative assessments is counterproductive because the term implies something new and different that may be "exempt from the requirements of responsible test construction" (p. 657). So they proposed to refer to "alternatives" in assessment instead. Their term is a perfect fit within a model that considers tests as a subset of assessment. Throughout this book, you have been reminded that all tests are assessments but, more important, that not all assessments are tests.
The defining characteristics of the various alternatives in assessment that have been commonly used across the profession were aptly summed up by Brown and Hudson (1998, pp. 654-655). Alternatives in assessment
1.       require students to perform, create, produce, or do something;
2.       use real-world contexts or simulations;
3.       are nonintrusive in that they extend the day-to-day classroom activities;
4.       allow students to be assessed on what they normally do in class every day;
5.       use tasks that represent meaningful instructional activities;
6.       focus on processes as well as products;
7.       tap into higher-level thinking and problem-solving skills;
8.       provide information about both the strengths and weaknesses of students;
9.       are multiculturally sensitive when properly administered;
10.   ensure that people, not machines, do the scoring, using human judgment;
11.   encourage open disclosure of standards and rating criteria; and
12.   call upon teachers to perform new instructional and assessment roles.

THE DILEMMA OF MAXIMIZING BOTH PRACTICALITY AND WASHBACK
The principal purpose of this chapter is to examine some of the alternatives in assessment that are markedly different from formal tests. Tests, especially large-scale standardized tests, tend to be one-shot performances that are timed, multiple-choice, decontextualized, norm-referenced, and that foster extrinsic motivation. On the other hand, tasks like portfolios, journals, and self-assessment are
·           open-ended in their time orientation and format,
·           contextualized to a curriculum,
·           referenced to the criteria (objectives) of that curriculum, and
·           likely to build intrinsic motivation.
One way of looking at this contrast poses a challenge to you as a teacher and test designer. Formal standardized tests are almost by definition highly practical, reliable instruments. They are designed to minimize time and money on the part of test designer and test-taker, and to be painstakingly accurate in their scoring. Alternatives such as portfolios or conferencing with students on drafts of written work, or observations of learners over time all require considerable time and effort on the part of the teacher and the student. Even more time must be spent if the teacher hopes to offer a reliable evaluation within students across time, as well as across students (taking care not to favor one student or group of students). But the alternative techniques also offer markedly greater washback, are superior formative measures, and, because of their authenticity, usually carry greater face validity.
This relationship can be depicted in a hypothetical graph that shows practicality/reliability on one axis and washback/authenticity on the other, as shown in Figure 10.1. Notice the implied negative correlation: as a technique increases in its washback and authenticity, its practicality and reliability tend to be lower. Conversely, the greater the practicality and reliability, the less likely you are to achieve beneficial washback and authenticity. I have placed three types of assessment on the regression line to illustrate.
[Figure: assessment types, including in-class, short-answer essay tests, placed along a downward-sloping line; horizontal axis: Washback and Authenticity, from LOW to HIGH]
Figure 10.1. Relationship of practicality/reliability to washback/authenticity
The figure appears to imply the inevitability of the relationship: large-scale multiple-choice tests cannot offer much washback or authenticity, nor can portfolios and such alternatives achieve much practicality or reliability. This need not be the case! The challenge that faces conscientious teachers and assessors in our profession is to change the directionality of the line: to "flatten" that downward slope to some degree, or perhaps to push the various assessments on the chart leftward and upward. Surely we should not sit idly by, accepting the presumably inescapable conclusion that all standardized tests will be devoid of washback and authenticity. With some creativity and effort, we can transform otherwise inauthentic and negative-washback-producing tests into more pedagogically fulfilling learning experiences. A number of approaches to accomplishing this end are possible, many of which have already been implicitly presented in this book:
·           building as much authenticity as possible into multiple-choice task types and items
·           designing classroom tests that have both objective-scoring sections and open-ended response sections, varying the performance tasks
·           turning multiple-choice test results into diagnostic feedback on areas of needed improvement
·           maximizing the preparation period before a test to elicit performance relevant to the ultimate criteria of the test
·           teaching test-taking strategies
·           helping students to see beyond the test: don't "teach to the test"
·           triangulating information on a student before making a final assessment of competence.
The flip side of this challenge is to understand that the alternatives in assessment are not doomed to be impractical and unreliable. As we look at alternatives in assessment in this chapter, we must remember Brown and Hudson's (1998) admonition to scrutinize the practicality, reliability, and validity of those alternatives at the same time that we celebrate their face validity, washback potential, and authenticity. It is easy to fly out of the cage of traditional testing rubrics, but it is tempting in doing so to flap our wings aimlessly and to accept virtually any classroom activity as a viable alternative. Assessments proposed to serve as triangulating measures of competence imply a responsibility to be rigorous in determining objectives, response modes, and criteria for evaluation and interpretation.

PERFORMANCE-BASED ASSESSMENT
Before proceeding to a direct consideration of types of alternatives in assessment, a word about performance-based assessment is in order. There has been a great deal of press in recent years about performance-based assessment, sometimes merely called performance assessment (Shohamy, 1995; Norris et al., 1998). Is this different from what is being called "alternative assessment"?
The push toward more performance-based assessment is part of the same general educational reform movement that has raised strong objections to using standardized test scores as the only measures of student competencies (see, for example, Valdez-Pierce & O'Malley, 1992; Shepard & Bliem, 1993). The argument, as you can guess, was that standardized tests do not elicit actual performance on the part of test-takers. If a child were asked, for example, to write a description of the earth as seen from space, to work cooperatively with peers to design a three-dimensional model of the solar system, to explain the project to the rest of the class, and to take notes on a videotape about space travel, traditional standardized testing would be involved in none of those performances. Performance-based assessment, however, would require the performance of the above-named actions, or samples thereof, which would be systematically evaluated through direct observation by a teacher and/or possibly by self and peers.
Performance-based assessment implies productive, observable skills, such as speaking and writing, of content-valid tasks. Such performance usually, but not always, brings with it an air of authenticity—real-world tasks that students have had time to develop. It often implies an integration of language skills, perhaps all four skills in the case of project work. Because the tasks that students perform are consistent with course goals and curriculum, students and teachers are likely to be more motivated to perform them, as opposed to a set of multiple-choice questions about facts and figures regarding the solar system.
O'Malley and Valdez Pierce (1996) considered performance-based assessment to be a subset of authentic assessment. In other words, not all authentic assessment is performance-based. One could infer that reading, listening, and thinking have many authentic manifestations, but since they are not directly observable in and of themselves, they are not performance-based. According to O'Malley and Valdez Pierce (p. 5), the following are characteristics of performance assessment:
1.       Students make a constructed response.
2.       They engage in higher-order thinking, with open-ended tasks.
3.       Tasks are meaningful, engaging, and authentic.
4.       Tasks call for the integration of language skills.
5.       Both process and product are assessed.
6.       Depth of a student's mastery is emphasized over breadth.
Performance-based assessment needs to be approached with caution. It is tempting for teachers to assume that if a student is doing something, then the process has fulfilled its own goal and the evaluator needs only to make a mark in the grade book that says "accomplished" next to a particular competency. In reality, performances as assessment procedures need to be treated with the same rigor as traditional tests. This implies that teachers should
·           state the overall goal of the performance,
·           specify the objectives (criteria) of the performance in detail,
·           prepare students for performance in stepwise progressions,
·           use a reliable evaluation form, checklist, or rating sheet,
·           treat performances as opportunities for giving feedback and provide that feedback systematically, and
·           if possible, utilize self- and peer-assessments judiciously.
To sum up, performance assessment is not completely synonymous with the concept of alternative assessment. Rather, it is best understood as one of the primary traits of the many available alternatives in assessment.

PORTFOLIOS
One of the most popular alternatives in assessment, especially within a framework of communicative language teaching, is portfolio development. According to Genesee and Upshur (1996), a portfolio is "a purposeful collection of students' work that demonstrates ... their efforts, progress, and achievements in given areas" (p. 99). Portfolios include materials such as
·           essays and compositions in draft and final forms;
·           reports, project outlines;
·           poetry and creative prose;
·           artwork, photos, newspaper or magazine clippings;
·           audio and/or video recordings of presentations, demonstrations, etc.;
·           journals, diaries, and other personal reflections;
·           tests, test scores, and written homework exercises;
·           notes on lectures; and
·           self- and peer-assessments: comments, evaluations, and checklists.
Until recently, portfolios were thought to be applicable only to younger children who assemble a portfolio of artwork and written work for presentation to a teacher and/or a parent. Now learners of all ages and in all fields of study are benefiting from the tangible, hands-on nature of portfolio development.
Gottlieb (1995) suggested a developmental scheme for considering the nature and purpose of portfolios, using the acronym CRADLE to designate six possible attributes of a portfolio:
Collecting
Reflecting
Assessing
Documenting
Linking
Evaluating
As Collections, portfolios are an expression of students' lives and identities. The appropriate freedom of students to choose what to include should be respected, but at the same time the purposes of the portfolio need to be clearly specified. Reflective practice through journals and self-assessment checklists is an important ingredient of a successful portfolio. Teacher and student both need to take the role of Assessment seriously as they evaluate quality and development over time. We need to recognize that a portfolio is an important Document in demonstrating student achievement, and not just an insignificant adjunct to tests and grades and other more traditional evaluation. A portfolio can serve as an important Link between student and teacher, parent, community, and peers; it is a tangible product, created with pride, that identifies a student's uniqueness. Finally, Evaluation of portfolios requires a time-consuming but fulfilling process of generating accountability.
The advantages of engaging students in portfolio development have been extolled in a number of sources (Genesee & Upshur, 1996; O'Malley & Valdez Pierce, 1996; Brown & Hudson, 1998; Weigle, 2002). A synthesis of those characteristics gives us a number of potential benefits. Portfolios
·           foster intrinsic motivation, responsibility, and ownership,
·           promote student-teacher interaction with the teacher as facilitator,
·           individualize learning and celebrate the uniqueness of each student,
·           provide tangible evidence of a student's work,
·           facilitate critical thinking, self-assessment, and revision processes,
·           offer opportunities for collaborative work with peers, and
·           permit assessment of multiple dimensions of language learning.
At the same time, care must be taken lest portfolios become a haphazard pile of "junk" the purpose of which is a mystery to both teacher and student. Portfolios can fail if objectives are not clear, if guidelines are not given to students, if systematic periodic review and feedback are not present, and so on. Sometimes the thought of asking students to develop a portfolio is a daunting challenge, especially for new teachers and for those who have never created a portfolio on their own. Successful portfolio development will depend on following a number of steps and guidelines.
1.       State objectives clearly. Pick one or more of the CRADLE attributes named above and specify them as objectives of developing a portfolio. Show how those purposes are connected to, integrated with, and/or a reinforcement of your already stated curricular goals. A portfolio attains maximum authenticity and washback when it is an integral part of a curriculum, not just an optional box of materials. Show students how their portfolios will include materials from the course they are taking and how that collection will enhance curricular goals.
2.       Give guidelines on what materials to include. Once the objectives have been determined, name the types of work that should be included. There is some disagreement among "experts" about how much negotiation should take place between student and teacher over those materials. Hamp-Lyons and Condon (2000) suggested advantages for student control of portfolio contents, but teacher guidance will keep students on target with curricular objectives. It is helpful to give clear directions on how to get started since many students will never have compiled a portfolio and may be mystified about what to do. A sample portfolio from a previous student can help to stimulate some thoughts on what to include.
3.       Communicate assessment criteria to students. This is both the most important aspect of portfolio development and the most complex. Two sources—self-assessment and teacher assessment—must be incorporated in order for students to receive the maximum benefit. Self-assessment should be as clear and simple as possible. O'Malley and Valdez Pierce (1996) suggested the following half-page self-evaluation of a writing sample (with spaces for students to write) for elementary school English language students.
Portfolio self-assessment questions (O'Malley & Valdez Pierce, 1996, p. 42)
1. Look at your writing sample.
a.      What does the sample show that you can do?
b.      Write about what you did well.
2. Think about realistic goals. Write one thing you need to do better. Be specific.
Genesee and Upshur (1996) recommended using a questionnaire format for self-assessment, with questions like the following for a project:
Portfolio project self-assessment questionnaire
1.      What makes this a good or interesting project?
2.      What is the most interesting part of the project?
3.      What was the most difficult part of the project?
4.      What did you learn from the project?
5.      What skills did you practice when doing this project?
6.      What resources did you use to complete this project?
7.      What is the best part of the project? Why?
8.      How would you make the project better?
The teacher's assessment might mirror self-assessments, with similar questions designed to highlight the formative nature of the assessment. Conferences are important checkpoints for both student and teacher. In the case of requested written responses from students, help your students to process your feedback and show them how to respond to your responses. Above all, maintain reliability in assessing portfolios so that all students receive equal attention and are assessed by the same criteria.
An option that works for some contexts is to include peer-assessment or small group conferences to comment on one another's portfolios. Where the classroom community is relatively closely knit and supportive and where students are willing to expose themselves by revealing their portfolios, valuable feedback can be achieved from peer reviews. Such sessions should have clear objectives lest they erode into aimless chatter. Checklists and questions may serve to preclude such an eventuality.
4.       Designate time within the curriculum for portfolio development. If students feel rushed to gather materials and reflect on them, the effectiveness of the portfolio process is diminished. Make sure that students have time set aside for portfolio work (including in-class time) and that your own opportunities for conferencing are not compromised.
5.       Establish periodic schedules for review and conferencing. By doing so, you will prevent students from throwing everything together at the end of a term.
6.       Designate an accessible place to keep portfolios. It is inconvenient for students to carry collections of papers and artwork. If you have a self-contained classroom or a place in a reading room or library to keep the materials, that may provide a good option. At the university level, designating a storage place on the campus may involve impossible logistics. In that case, encourage students to create their own accessible location and to bring to class only the materials they need.
7.       Provide positive washback-giving final assessments. When a portfolio has been completed and the end of a term has arrived, a final summation is in order. Should portfolios be graded? be awarded specific numerical scores? Opinion is divided; every advantage is balanced by a disadvantage. For example, numerical scores serve as convenient data to compare performance across students, courses, and districts. For portfolios containing written work, Wolcott (1998) recommended a holistic scoring scale ranging from 1 to 6 based on such qualities as inclusion of out-of-class work, error-free work, depth of content, creativity, organization, writing style, and "engagement" of the student. Such scores are perhaps best viewed as numerical equivalents of letter grades.
One could argue that it is inappropriate to reduce the personalized and creative process of compiling a portfolio to a number or letter grade and that it is more appropriate to offer a qualitative evaluation for a work that is so open-ended. Such evaluations might include a final appraisal of the work by the student, with questions such as those listed above for self-assessment of a project, and a narrative evaluation of perceived strengths and weaknesses by the teacher. Those final evaluations should emphasize strengths but also point the way toward future learning challenges.
It is clear that portfolios get a relatively low practicality rating because of the time it takes for teachers to respond and conference with their students. Nevertheless, following the guidelines suggested above for specifying the criteria for evaluating portfolios can raise the reliability to a respectable level, and without question the washback effect, the authenticity, and the face validity of portfolios remain exceedingly high.
In the above discussion, I have tried to subject portfolios to the same specifications that apply to more formal tests: it should be made clear what the objectives are, what tasks are expected of the student, and how the learner's product will be evaluated. Strict attention to these demands is warranted for successful portfolio development to take place.

JOURNALS
Fifty years ago, journals had no place in the second language classroom. When language production was believed to be best taught under controlled conditions, the concept of "free" writing was confined almost exclusively to producing essays on assigned topics. Today, journals occupy a prominent role in a pedagogical model that stresses the importance of self-reflection in the process of students taking control of their own destiny.
A journal is a log (or "account") of one's thoughts, feelings, reactions, assessments, ideas, or progress toward goals, usually written with little attention to structure, form, or correctness. Learners can articulate their thoughts without the threat of those thoughts being judged later (usually by the teacher). Sometimes journals are rambling sets of verbiage that represent a stream of consciousness with no particular point, purpose, or audience. Fortunately, models of journal use in educational practice have sought to tighten up this style of journal in order to give them some focus (Staton et al., 1987). The result is the emergence of a number of overlapping categories or purposes in journal writing, such as the following:
·           language-learning logs
·           grammar journals
·           responses to readings
·           strategies-based learning logs
·           self-assessment reflections
·           diaries of attitudes, feelings, and other affective factors
·           acculturation logs
Most classroom-oriented journals are what have now come to be known as dialogue journals. They imply an interaction between a reader (the teacher) and the student through dialogues or responses. For the best results, those responses should be dispersed across a course at regular intervals, perhaps weekly or biweekly. One of the principal objectives in a student's dialogue journal is to carry on a conversation with the teacher. Through dialogue journals, teachers can become better acquainted with their students, in terms of both their learning progress and their affective states, and thus become better equipped to meet students' individual needs.
The following journal entry from an advanced student from China, and the teacher's response, is an illustration of the kind of dialogue that can take place.
Dialogue journal sample
Journal entry by Ming Ling, China:
Yesterday at about eight o'clock I was sitting in front of my table, holding a fork and eating tasteless noodles which I usually really like to eat but lost my taste yesterday because I didn't feel well. I had a headache and a fever. My head seemed to be broken. I sometimes felt cold, sometimes hot. I didn't feel comfortable standing up and I didn't feel comfortable sitting down. I hated everything around me. It seemed to me that I got a great pressure from the atmosphere and I could not breath. I was so sleepy since I had taken some medicine which functioned as an antibiotic.
The room was so quiet. I was there by myself and felt very solitary. The dinner reminded me of my mother. Whenever I was sick in China, my mother always took care of me and cooked rice gruel, which has to cook more than three hours and is very delicious, I think. I would be better very soon under the care of my mother. But yesterday, I had to cook by myself even though I was sick. The more I thought, the less I wanted to eat. Half an hour passed. The noodles were cold, but I was still sitting there and thinking about my mother. Finally I threw out the noodles and went to bed.

Teacher's response:
This is a powerful piece of writing because you really communicate what you were feeling. You used vivid details, like "eating tasteless noodles," "my head seemed to be broken" and "rice gruel, which has to cook more than three hours and is very delicious." These make it easy for the reader to picture exactly what you were going through. The other strong point about this piece is that you bring the reader full circle by beginning and ending with "the noodles."
Being alone when you are sick is difficult. Now, I know why you were so quiet in class.
If you want to do another entry related to this one, you could have a dialogue with your "sick" self. What would your "healthy" self say to the "sick" self? Is there some advice that could be exchanged about how to prevent illness or how to take care of yourself better when you do get sick? Start the dialogue with your "sick" self speaking first.
With the widespread availability of Internet communications, journals and other student-teacher dialogues have taken on a new dimension. With such innovations as "collaboratories" (where students in a class are regularly carrying on email discussions with each other and the teacher), on-line education, and distance learning, journals—out of several genres of possible writing—have gained additional prominence.
Journals obviously serve important pedagogical purposes: practice in the mechanics of writing, using writing as a "thinking" process, individualization, and communication with the teacher. At the same time, the assessment qualities of journal writing have assumed an important role in the teaching-learning process. Because most journals are—or should be—a dialogue between student and teacher, they afford a unique opportunity for a teacher to offer various kinds of feedback.
On the other side of the issue, it is argued that journals are too free a form to be assessed accurately. With so much potential variability, it is difficult to set up criteria for evaluation. For some English language learners, the concept of free and unfettered writing is anathema. Certain critics have expressed ethical concerns: students may be asked to reveal an inner self, which is virtually unheard of in their own culture. Without a doubt, the assessing of journal entries through responding is not an exact science.
It is important to turn the advantages and potential drawbacks of journals into positive general steps and guidelines for using journals as assessment instruments. The following steps are not coincidentally parallel to those cited above for portfolio development:
1.    Sensitively introduce students to the concept of journal writing. For many students, especially those from educational systems that play down the notion of teacher-student dialogue and collaboration, journal writing will be difficult at first. University-level students, who have passed through a dozen years of product writing, will have particular difficulty with the concept of writing without fear of a teacher's scrutinizing every grammatical or spelling error. With modeling, assurance, and purpose, however, students can make a remarkable transition into the potentially liberating process of journal writing. Students who are shown examples of journal entries and are given specific topics and schedules for writing will become comfortable with the process.
2.    State the objective(s) of the journal. Integrate journal writing into the objectives of the curriculum in some way, especially if journal entries become topics of class discussion. The list of types of journals at the beginning of this section may coincide with the following examples of some purposes of journals:
Language-learning logs. In English language teaching, learning logs have the advantage of sensitizing students to the importance of setting their own goals and then self-monitoring their achievement. McNamara (1998) suggested restricting the number of skills, strategies, or language categories that students comment on; otherwise students can become overwhelmed with the process. A weekly schedule of a limited number of strategies usually accomplishes the purpose of keeping students on task.
Grammar journals. Some journals are focused only on grammar acquisition. These types of journals are especially appropriate for courses and workshops that focus on grammar. "Error logs" can be instructive processes of consciousness raising for students: their successes in noticing and treating errors spur them to maintain the process of awareness of error.
Responses to readings. Journals may have the specified purpose of simple responses to readings (and/or to other material such as lectures, presentations, films, and videos). Entries may serve as precursors to freewrites and help learners to sort out thoughts and opinions on paper. Teacher responses aid in the further development of those ideas.
Strategies-based learning logs. Closely allied to language-learning logs are specialized journals that focus only on strategies that learners are seeking to become aware of and to use in their acquisition process. In H. D. Brown's (2002) Strategies for Success: A Practical Guide to Learning English, a systematic strategies-based journal-writing approach is taken where, in each of 12 chapters, learners become aware of a strategy, use it in their language performance, and reflect on that process in a journal.
Self-assessment reflections. Journals can be a stimulus for self-assessment in a more open-ended way than through using checklists and questionnaires. With the possibility of a few stimulus questions, students' journals can extend beyond the scope of simple one-word or one-sentence responses.
Diaries of attitudes, feelings, and other affective factors. The affective states of learners are an important element of self-understanding. By reading such diaries, teachers become better equipped to effectively facilitate learners' individual journeys toward their goals.
Acculturation logs. A variation on the above affectively based journals is one that focuses exclusively on the sometimes difficult and painful process of acculturation in a non-native country. Because culture and language are so strongly linked, awareness of the symptoms of acculturation stages can provide keys to eventual language success.
3.    Give guidelines on what kinds of topics to include. Once the purpose or type of journal is clear, students will benefit from models or suggestions on what kinds of topics to incorporate into their journals.
4.    Carefully specify the criteria for assessing or grading journals. Students need to understand the freewriting involved in journals, but at the same time, they need to know assessment criteria. Once you have clarified that journals will not be evaluated for grammatical correctness and rhetorical conventions, state how they will be evaluated. Usually the purpose of the journal will dictate the major assessment criterion. Effort as exhibited in the thoroughness of students' entries will no doubt be important. Also, the extent to which entries reflect the processing of course content might be considered. Maintain reliability by adhering conscientiously to the criteria that you have set up.
5.    Provide optimal feedback in your responses. McNamara (1998, p. 39) recommended three different kinds of feedback to journals:
1.       cheerleading feedback, in which you celebrate successes with the students or encourage them to persevere through difficulties,
2.       instructional feedback, in which you suggest strategies or materials, suggest ways to fine-tune strategy use, or instruct students in their writing, and
3.       reality-check feedback, in which you help the students set more realistic expectations for their language abilities.
The ultimate purpose of responding to student journal entries is well captured in McNamara's threefold classification of feedback. Responding to journals is a very personalized matter, but closely attending to the objectives for writing the journal and its specific directions for an entry will focus those responses appropriately.
Peer responses to journals may be appropriate if journal comments are relatively "cognitive," as opposed to very personal. Personal comments could make students feel threatened by other pairs of eyes on their inner thoughts and feelings.
6.    Designate appropriate time frames and schedules for review. Journals, like portfolios, need to be esteemed by students as integral parts of a course. Therefore, it is essential to budget enough time within a curriculum for both writing journals and for your written responses. Set schedules for submitting journal entries periodically; return them in short order.
7.    Provide formative, washback-giving final comments. Journals, perhaps even more than portfolios, are the most formative of all the alternatives in assessment. They are day-by-day (or at least weekly) chronicles of progress whose purpose is to provide a thread of continuous assessment and reassessment, to recognize mid-stream direction changes, and/or to refocus on goals. Should you reduce a final assessment of such a procedure to a grade or a score? Some say yes, some say no (Peyton & Reed, 1990), but it appears to be in keeping with the formative nature of journals not to do so. Credit might be given for the process of actually writing the journal, and possibly a distinction might be made among high, moderate, and low effort and/or quality. But to accomplish the goal of positive washback, narrative summary comments and suggestions are clearly in order.
In sum, how do journals score on principles of assessment? Practicality remains relatively low, although the appropriation of electronic communication increases practicality by offering teachers and students convenient, rapid (and legible!) means of responding. Reliability can be maintained by the journal entries adhering to stated purposes and objectives, but because of individual variations in writing and the accompanying variety of responses, reliability may reach only a moderate level. Content and face validity are very high if the journal entries are closely interwoven with curriculum goals (which in turn reflect real-world needs). In the category of washback, the potential in dialogue journals is off the charts!

CONFERENCES AND INTERVIEWS
For a number of years, conferences have been a routine part of language classrooms, especially of courses in writing. In Chapter 9, reference was made to conferencing as a standard part of the process approach to teaching writing, in which the teacher, in a conversation about a draft, facilitates the improvement of the written work. Such conferencing has the advantage of one-on-one interaction between teacher and student, with feedback directed toward the student's specific needs.
Conferences are not limited to drafts of written work. Including portfolios and journals discussed above, the list of possible functions and subject matter for conferencing is substantial:
·         commenting on drafts of essays and reports
·         reviewing portfolios
·         responding to journals
·         advising on a student's plan for an oral presentation
·         assessing a proposal for a project
·         giving feedback on the results of performance on a test
·         clarifying understanding of a reading
·         exploring strategies-based options for enhancement or compensation
·         focusing on aspects of oral production
·         checking a student's self-assessment of a performance
·         setting personal goals for the near future
·         assessing general progress in a course
Conferences must assume that the teacher plays the role of a facilitator and guide, not that of an administrator of a formal assessment. In this intrinsically motivating atmosphere, students need to understand that the teacher is an ally who is encouraging self-reflection and improvement. So that the student will be as candid as possible in self-assessing, the teacher should not treat a conference as something to be scored or graded. Conferences are by nature formative, not summative, and their primary purpose is to offer positive washback.
Genesee and Upshur (1996, p. 110) offered a number of generic kinds of questions that may be useful to pose in a conference:
·         What did you like about this work?
·         What do you think you did well?
·         How does it show improvement from previous work? Can you show me the improvement?
·         Are there things about this work you do not like? Are there things you would like to improve?
·         Did you have any difficulties with this piece of work? If so, where, and what did you do [will you do] to overcome them?
·         What strategies did you use to figure out the meaning of words you could not understand?
·         What did you do when you did not know a word that you wanted to write?
Discussions of alternatives in assessment usually encompass one specialized kind of conference: an interview. This term is intended to denote a context in which a teacher interviews a student for a designated assessment purpose. (We are not talking about a student conducting an interview of others in order to gather information on a topic.) Interviews may have one or more of several possible goals, in which the teacher
·         assesses the student's oral production,
·         ascertains a student's needs before designing a course or curriculum,
·         seeks to discover a student's learning styles and preferences,
·         asks a student to assess his or her own performance, and
·         requests an evaluation of a course.
One overriding principle of effective interviewing centers on the nature of the questions that will be asked. It is easy for teachers to assume that interviews are just informal conversations and that they need little or no preparation. To maintain the all-important reliability factor, interview questions should be constructed carefully to elicit as focused a response as possible. When interviewing for oral production assessment, for example, a highly specialized set of probes is necessary to accomplish predetermined objectives. (Look back at Chapter 7, where oral interviews were discussed.)
Because interviews have multiple objectives, as noted above, it is difficult to generalize principles for conducting them, but the following guidelines may help to frame the questions efficiently:
1.         Offer an initial atmosphere of warmth and anxiety-lowering (warm-up).
2.         Begin with relatively simple questions.
3.         Continue with level-check and probe questions, but adapt to the interviewee as needed.
4.         Frame questions simply and directly.
5.         Focus on only one factor for each question. Do not combine several objectives in the same question.
6.         Be prepared to repeat or reframe questions that are not understood.
7.         Wind down with friendly and reassuring closing comments.
How do conferences and interviews score in terms of principles of assessment? Their practicality, as is true for many of the alternatives in assessment, is low because they are time-consuming. Reliability will vary between conferences and interviews. In the case of conferences, it may not be important to have rater reliability because the whole purpose is to offer individualized attention, which will vary greatly from student to student. For interviews, a relatively high level of reliability should be maintained with careful attention to objectives and procedures. Face validity for both can be maintained at a high level due to their individualized nature. As long as the subject matter of the conference/interview is clearly focused on the course and course objectives, content validity should also be upheld. Washback potential and authenticity are high for conferences, but possibly only moderate for interviews unless the results of the interview are clearly folded into subsequent learning.

OBSERVATIONS
All teachers, whether they are aware of it or not, observe their students in the classroom almost constantly: virtually every question, every response, and almost every nonverbal behavior is, at some level of perception, noticed. All those intuitive perceptions are stored as little bits and pieces of information about students that can form a composite impression of a student's ability. Without ever administering a test or a quiz, teachers know a lot about their students. In fact, experienced teachers are so good at this almost subliminal process of assessment that their estimates of a student's competence are often highly correlated with actual independently administered test scores. (See Acton, 1979, for an example.)
How do all these chunks of information become stored in a teacher's brain cells? Usually not through rating sheets and checklists and carefully completed observation charts. Still, teachers' intuitions about students' performance are not infallible, and certainly both the reliability and face validity of their feedback to students can be increased with the help of empirical means of observing their language performance. The value of systematic observation of students has been extolled for decades (Flanders, 1970; Moskowitz, 1971; Spada & Fröhlich, 1995), and its utilization greatly enhances a teacher's intuitive impressions by offering tangible corroboration of conclusions. Occasionally, intuitive information is disconfirmed by observation data.
We will not be concerned in this section with the kind of observation that rates a formal presentation or any other prepared, prearranged performance in which the student is fully aware of some evaluative measure being applied, and in which the teacher scores or comments on the performance. We are talking about observation as a systematic, planned procedure for real-time, almost surreptitious recording of student verbal and nonverbal behavior. One of the objectives of such observation is to assess students without their awareness (and possible consequent anxiety) of the observation so that the naturalness of their linguistic performance is maximized.
What kinds of student performance can be usefully observed? Consider the fol­lowing possibilities:
Potential observation foci
·         sentence-level oral production skills (see microskills, Chapter 7)
—pronunciation of target sounds, intonation, etc.
—grammatical features (verb tenses, question formation, etc.)
·         discourse-level skills (conversation rules, turn-taking, and other macroskills)
·         interaction with classmates (cooperation, frequency of oral production)
·         reactions to particular students, optimal productive pairs and groups, which "zones" of the classroom are more vocal, etc.
·         frequency of student-initiated responses (whole class, group work)
·         quality of teacher-elicited responses
·         latencies, pauses, silent periods (number of seconds, minutes, etc.)
·         length of utterances
·         evidence of listening comprehension (questions, clarifications, attention-giving verbal and nonverbal behavior)
·         affective states (apparent self-esteem, extroversion, anxiety, motivation, etc.)
·         evidence of attention-span issues, learning style preferences, etc.
·         students' verbal or nonverbal response to materials, types of activities, teaching styles
·         use of strategic options in comprehension or production (use of communication strategies, avoidance, etc.)
·         culturally specific linguistic and nonverbal factors (kinesics; proxemics; use of humor, slang, metaphor, etc.)
The list could be even more specific to suit the characteristics of students, the focus of a lesson or module, the objectives of a curriculum, and other factors. The list might expand, as well, to include other possible observed performance. In order to carry out classroom observation, it is of course important to take the following steps:
1.         Determine the specific objectives of the observation.
2.         Decide how many students will be observed at one time.
3.         Set up the logistics for making unnoticed observations.
4.         Design a system for recording observed performances.
5.         Do not overestimate the number of different elements you can observe at one time—keep them very limited.
6.         Plan how many observations you will make.
7.         Determine specifically how you will use the results.
Designing a system for observing is no simple task. Recording your observations can take the form of anecdotal records, checklists, or rating scales. Anecdotal records should be as specific as possible in focusing on the objective of the observation, but they are so varied in form that to suggest formats here would be counterproductive. Their very purpose is more note-taking than record-keeping. The key is to devise a system that maintains the principle of reliability as closely as possible.
Checklists are a viable alternative for recording observation results. Some checklists of student classroom performance, such as the COLT observation scheme devised by Spada and Fröhlich (1995), are elaborate grids referring to such variables as
·         whole-class, group, and individual participation,
·         content of the topic,
·         linguistic competence (form, function, discourse, sociolinguistic),
·         materials being used, and
·         skill (listening, speaking, reading, writing),
with subcategories for each variable. The observer identifies an activity or episode, as well as the starting time for each, and checks appropriate boxes along the grid. Completing such a form in real time may present some difficulty with so many factors to attend to at once.
Checklists can also be quite simple, which is a better option for focusing on only a few factors within real time. On one occasion I assigned teachers the task of noting occurrences of student errors in third-person singular, plural, and -ing morphemes across a period of six weeks. Their records needed to specify only the number of occurrences of each and whether each occurrence of the error was ignored, treated by the teacher, or self-corrected. Believe it or not, this was not an easy task! Simply noticing errors is hard enough, but making entries on even a very simple checklist required careful attention. The checklist looked like this:
Observation checklist, student errors

Grammatical Feature        Ignored    Treated by the teacher    Self-corrected
Third person singular      III        I
Plural /s/                 II                                   I
-ing progressive           IIII       I                         II
Each of the 30-odd checklists that were eventually completed represented a two-hour class period and was filled in with "ticks" to show the occurrences and the follow-up in the appropriate cell.
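For teachers who prefer to keep such tallies electronically rather than on paper, the checklist above amounts to a simple count of (feature, outcome) events. A minimal sketch in Python, using the hypothetical event log below (the feature names follow the checklist; the data are illustrative only, not taken from the study described above):

```python
from collections import Counter

# Hypothetical digitized log of one class period: each observed error
# is recorded as a (grammatical feature, follow-up outcome) pair.
events = [
    ("third person singular", "ignored"),
    ("third person singular", "ignored"),
    ("third person singular", "ignored"),
    ("third person singular", "treated"),
    ("plural /s/", "ignored"),
    ("plural /s/", "ignored"),
    ("plural /s/", "self-corrected"),
    ("-ing progressive", "ignored"),
    ("-ing progressive", "ignored"),
    ("-ing progressive", "ignored"),
    ("-ing progressive", "ignored"),
    ("-ing progressive", "treated"),
    ("-ing progressive", "self-corrected"),
    ("-ing progressive", "self-corrected"),
]

# Counter maps each (feature, outcome) cell of the checklist to its tally.
tally = Counter(events)

print(tally[("third person singular", "ignored")])  # 3
print(tally[("-ing progressive", "self-corrected")])  # 2
```

Summing such tallies across the 30-odd class periods then becomes a matter of adding the Counter objects together, which preserves the cell-by-cell structure of the paper checklist.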
Rating scales have also been suggested for recording observations. One type of rating scale asks teachers to indicate the frequency of occurrence of target performance on a separate frequency scale (always = 5; never = 1). Another is a holistic assessment scale, like the TWE scale described in the previous chapter or the OPI scale discussed in Chapter 7, that requires an overall assessment within a number of categories (for example, vocabulary usage, grammatical correctness, fluency). Rating scales may be appropriate for recording observations after the fact—on the same day but after class, for example. Specific quantities of occurrences may be difficult to record while teaching a lesson and managing a classroom, but immediate subsequent evaluations can include some data on observations that would otherwise fade from memory in a day or so.
If you scrutinize observations under the microscope of principles of assessment, you will probably find moderate practicality and reliability in this type of procedure, especially if the objectives are kept very simple. Face validity and content validity are likely to get high marks since observations are likely to be integrated into the ongoing process of a course. Washback is only moderate if you do little follow-up on observing. Some observations for research purposes may yield no washback whatever if the researcher simply disappears with the information and never communicates anything back to the student. But a subsequent conference with a student can then yield very high washback as the student is made aware of empirical data on targeted performance. Authenticity is high because, if an observation goes relatively unnoticed by the student, then there is little likelihood of contrived contexts or playacting.


SELF- AND PEER-ASSESSMENTS
A conventional view of language assessment might consider the notion of self- and peer-assessment as an absurd reversal of politically correct power relationships. After all, how could learners who are still in the process of acquisition, especially its early stages, be capable of rendering an accurate assessment of their own performance? Nevertheless, a closer look at the acquisition of any skill reveals the importance, if not the necessity, of self-assessment and the benefit of peer-assessment. What successful learner has not developed the ability to monitor his or her own performance and to use the data gathered for adjustments and corrections? Most successful learners extend the learning process well beyond the classroom and the presence of a teacher or tutor, autonomously mastering the art of self-assessment. Where peers are available to render assessments, the advantage of such additional input is obvious.
Self-assessment derives its theoretical justification from a number of well-established principles of second language acquisition. The principle of autonomy stands out as one of the primary foundation stones of successful learning. The ability to set one's own goals both within and beyond the structure of a classroom curriculum, to pursue them without the presence of an external prod, and to independently monitor that pursuit are all keys to success. Developing intrinsic motivation that comes from a self-propelled desire to excel is at the top of the list of keys to successful acquisition of any set of skills.
Peer-assessment appeals to similar principles, the most obvious of which is cooperative learning. Many people go through a whole regimen of education from kindergarten up through a graduate degree and never come to appreciate the value of collaboration in learning: the benefit of a community of learners capable of teaching each other something. Peer-assessment is simply one arm of a plethora of tasks and procedures within the domain of learner-centered and collaborative education.
Researchers (such as Brown & Hudson, 1998) agree that the above theoretical underpinnings of self- and peer-assessment offer certain benefits: direct involvement of students in their own destiny, the encouragement of autonomy, and increased motivation because of their self-involvement. Of course, some noteworthy drawbacks must also be taken into account. Subjectivity is a primary obstacle to overcome. Students may be either too harsh on themselves or too self-flattering, or they may not have the necessary tools to make an accurate assessment. Also, especially in the case of direct assessments of performance (see below), they may not be able to discern their own errors. In contrast, Bailey (1998) conducted a study in which learners showed moderately high correlations (between .58 and .64) between self-rated oral production ability and scores on the OPI, which suggests that in the assessment of general competence, learners' self-assessments may be more accurate than one might suppose.

Types of Self- and Peer-Assessment
It is important to distinguish among several different types of self- and peer-assessment and to apply them accordingly. I have borrowed from widely accepted classifications of strategic options to create five categories of self- and peer-assessment: (1) direct assessment of performance, (2) indirect assessment of competence, (3) metacognitive assessment, (4) assessment of socioaffective factors, and (5) student self-generated tests.
1.    Direct assessment of [a specific] performance. In this category, a student typically monitors him- or herself—in either oral or written production—and renders some kind of evaluation of performance. The evaluation takes place immediately or very soon after the performance. Thus, having made an oral presentation, the student (or a peer) fills out a checklist that rates performance on a defined scale. Or perhaps the student views a video-recorded lecture and completes a self-corrected comprehension quiz. A journal may serve as a tool for such self-assessment. Peer editing is an excellent example of direct assessment of a specific performance.
Today, the availability of media opens up a number of possibilities for self- and peer-assessment beyond the classroom. Internet sites such as Dave's ESL Café (http://www.eslcafe.com/) offer many self-correcting quizzes and tests. On this and other similar sites, a learner may access a grammar or vocabulary quiz on the Internet and then self-score the result, which may be followed by comparing with a partner. Television and film media also offer convenient resources for self- and peer-assessment. Gardner (1996) recommended that students in non-English-speaking countries access bilingual news, films, and television programs and then self-assess their comprehension ability. He also noted that video versions of movies with subtitles can be viewed first without the subtitles, then with them, as another form of self- and/or peer-assessment.
2.    Indirect assessment of [general] competence. Indirect self- or peer-assessment targets larger slices of time with a view to rendering an evaluation of general ability, as opposed to one specific, relatively time-constrained performance. The distinction between direct and indirect assessments is the classic competence-performance distinction. Self- and peer-assessments of performance are limited in time and focus to a relatively short performance. Assessments of competence may encompass a lesson over several days, a module, or even a whole term of course work, and the objective is to ignore minor, nonrepeating performance flaws and thus to evaluate general ability. A list of attributes can offer a scaled rating, from "strongly agree" to "strongly disagree," on such items as these:
Indirect self-assessment rating scale

I demonstrate active listening in class.             5  4  3  2  1
I volunteer my comments in small-group work.         5  4  3  2  1
When I don't know a word, I guess from context.      5  4  3  2  1
My pronunciation is very clear.                      5  4  3  2  1
I make very few mistakes in verb tenses.             5  4  3  2  1
I use logical connectors in my writing.              5  4  3  2  1
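Scales of this kind are straightforward to score: each item contributes a value from 5 ("strongly agree") to 1 ("strongly disagree"), and a learner's responses can be summarized as an average. A minimal sketch, using hypothetical responses to the six items above (the item labels and scores are illustrative assumptions, not data from the text):

```python
# Hypothetical responses of one learner to the six-item rating scale
# above, each on the 5 (strongly agree) to 1 (strongly disagree) scale.
responses = {
    "active listening in class": 4,
    "volunteers comments in small-group work": 3,
    "guesses unknown words from context": 5,
    "very clear pronunciation": 2,
    "very few verb-tense mistakes": 3,
    "uses logical connectors in writing": 4,
}

# Average across items gives a single indirect self-assessment score.
average = sum(responses.values()) / len(responses)
print(round(average, 2))  # 3.5
```

Because the scale measures general competence rather than one performance, such an average is best read as a rough self-profile over a period of course work, not as a grade.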
In a successful experiment to introduce self-assessment in his advanced intermediate pre-university ESL class, Phillips (2000) created a questionnaire (Figure 10.2) through which his students evaluated themselves on their class participation. The items were simply formatted with just three options to check for each category, which made the process easy for students to perform. They completed the questionnaire at midterm, which was followed up immediately with a teacher-student conference during which students identified weaknesses and set goals for the remainder of the term.
Of course, indirect self- and peer-assessment is not confined to scored rating sheets and questionnaires. An ideal genre for self-assessment is the journal, where students engage in more open-ended assessment and/or make their own further comments on the results of completed checklists.
3.    Metacognitive assessment [for setting goals]. Some kinds of evaluation are more strategic in nature, with the purpose not just of reviewing past performance or competence but of setting goals and maintaining an eye on the process of their pursuit. Personal goal-setting has the advantage of fostering intrinsic motivation and of providing learners with that extra-special impetus from having set and accomplished one's own goals. Strategic planning and self-monitoring can take the form of journal entries, choices from a list of possibilities, questionnaires, or cooperative (oral) pair or group planning.
A simple illustration of goal-setting self-assessment was offered by Smolen, Newman, Wathen, and Lee (1995). In response to the assignment of making "goal cards," a middle-school student wrote:
1.   My goal for this week is to stop during reading and predict what is going to happen next in the story.
2.     My goal for this week is to finish writing my Superman story.

CLASS PARTICIPATION
Please fill out this questionnaire by checking the appropriate box:
Yes, Definitely          Sometimes          Not Yet
A.      I attend class.
I come to class.
I come to class on time.
Comments:________________________
B.      I usually ask questions in class.
I ask the teacher questions.
I ask my classmates questions.
Comments:________________________
C.      I usually answer questions in class.
I answer questions that the teacher asks.
I answer questions that my classmates ask.
Comments:________________________
D.      I participate in group-work.
I take equal turns in all three roles (C, W and R).
I offer my opinion.
I cooperate with my group members.
I use appropriate classroom language.
Comments:________________________
E.       I participate in pair-work.
I offer my opinion.
I cooperate with my partner.
I use appropriate classroom language.
Comments:___________________
F.        I participate in whole-class discussions.
I make comments.
I ask questions.
I answer questions.
I respond to things someone else says.
I clarify things someone else says.
I use the new vocabulary.
Comments:___________________
G.     I listen actively in class.
I listen actively to the teacher.
I listen actively to my classmates.
Comments:___________________
H.     I complete the peer-reviews.
I complete all of the peer-reviews.
I respond to every question.
I give specific examples.
I offer suggestions.
I use appropriate classroom language.
Comments:___________________
Figure 10.2. Self-assessment of class participation (Phillips, 2000)

On the back of this same card, which was filled out at the end of the week, was the student's self-assessment:
The first goal help me understand a lot when I'm reading.
I met my goal for this week.
Brown's (1999) New Vistas series offers end-of-chapter self-evaluation checklists that give students the opportunity to think about the extent to which they have reached a desirable competency level in the specific objectives of the unit. Figure 10.3 shows a sample of this "checkpoint" feature. Through this technique, students are reminded of the communication skills they have been focusing on and are given a chance to identify those that are essentially accomplished, those that are not yet fulfilled, and those that need more work. The teacher follow-up is to spend more time on items on which a number of students checked "sometimes" or "not yet," or possibly to individualize assistance to students working on their own points of challenge.
I can ...                                            Yes!    Sometimes    Not Yet
say the time in different ways.
describe an ongoing action.
ask about and describe what people are wearing.
offer help.
accept or decline an offer of help.
ask about and describe the weather and seasons.
write a letter.
Figure. 10.3. Self-assessment of lesson objectives (Brown, 1999, p. 59)
4.    Socioaffective assessment. Yet another type of self- and peer-assessment comes in the form of methods of examining affective factors in learning. Such assessment is quite different from looking at and planning linguistic aspects of acquisition. It requires looking at oneself through a psychological lens and may not differ greatly from self-assessment across a number of subject-matter areas or for any set of personal skills. When learners resolve to assess and improve motivation, to gauge and lower their own anxiety, to find mental or emotional obstacles to learning and then plan to overcome those barriers, an all-important socioaffective domain is invoked. A checklist form of such items may look like many of the questionnaire items in Brown (2002), in which test-takers must indicate preference for one statement over the one on the opposite side:
Self-assessment of styles (Brown, 2002, pp. 2, 13)

I don't mind if people laugh at me when I speak.   A B C D   I get embarrassed if people laugh at me when I speak.
I like rules and exact information.                A B C D   I like general guidelines and uncertain information.
In the same book, multiple intelligences are self-assessed on a scale of definite agreement (4) to definite disagreement (1):
Self-assessment of multiple intelligences (Brown, 2002, p. 37)

4  3  2  1    I like memorizing words.
4  3  2  1    I like the teacher to explain grammar to me.
4  3  2  1    I like making charts and diagrams.
4  3  2  1    I like drama and role plays.
4  3  2  1    I like singing songs in English.
4  3  2  1    I like group and pair interaction.
4  3  2  1    I like self-reflection and journal writing.
The New Vistas series (Brown, 1999) also presents an end-of-unit section on "Learning Preferences" that calls for self-assessment of an individual's learning preferences (Figure 10.4). This information is of value to both teacher and student in identifying preferred styles, especially through subsequent determination to capitalize on preferences and to compensate for styles that are less than preferred.
Learning Preferences

Think about the work you did in this unit. Put a check next to the items that helped you learn the lessons. Put two checks next to the ones that helped a lot.

___ Listening to the teacher
___ Working by myself
___ Working with a partner
___ Working with a group
___ Asking the teacher questions
___ Listening to the tapes and doing exercises
___ Reading
___ Writing paragraphs
___ Using the Internet
Figure 10.4. Self-assessment of learning preferences (Brown, 1999, p. 59)
5.    Student-generated tests. A final type of assessment that is not usually classified strictly as self- or peer-assessment is the technique of engaging students in the process of constructing tests themselves. The traditional view of what a test is would never allow students to engage in test construction, but student-generated tests can be productive, intrinsically motivating, autonomy-building processes.
Gorsuch (1998) found that student-generated quiz items transformed routine weekly quizzes into a collaborative and fulfilling experience. Students in small groups were directed to create content questions on their reading passages and to collectively choose six vocabulary items for inclusion on the quiz. The process of creating questions and choosing lexical items served as a more powerful reinforcement of the reading than any teacher-designed quiz could ever be. To add further interest, Gorsuch directed students to keep records of their own scores to plot their progress through the term.
Murphey (1995), another champion of self- and peer-generated tests, successfully employed the technique of directing students to generate their own lists of words, grammatical concepts, and content that they think are important over the course of a unit. The list is synthesized by Murphey into a list for review, and all items on the test come from the list. Students thereby have a voice in determining the content of tests. On other occasions, Murphey has used what he calls "interactive pair tests" in which students assess each other using a set of quiz items. One student's response aptly summarized the impact of this technique:
We had a test today. But it was not a test, because we could study for it beforehand. I gave some questions to my partner and my partner gave me some questions. And we students decided what grade we should get. I hate tests, but I like this kind of test. So please don't give us a surprise test. I think that kind of test that we did today is more useful for me than a surprise test because I study for it.
Many educators agree that one of the primary purposes in administering tests is to stimulate review and integration, which is exactly what student-generated testing does, but almost without awareness on the students' part that they are reviewing the material. I have seen a number of instances of teachers successfully facilitating students in the self-construction of tests. The process engenders intrinsic involvement in reviewing objectives and selecting and designing items for the final form of the test. The teacher of course needs to set certain parameters for such a project and be willing to assist learners in designing items.

Guidelines for Self- and Peer-Assessment
Self- and peer-assessment are among the best possible formative types of assessment and possibly the most rewarding, but they must be carefully designed and administered for them to reach their potential. Four guidelines will help teachers bring this intrinsically motivating task into the classroom successfully.
1.    Tell students the purpose of the assessment. Self-assessment is a process that many students—especially those in traditional educational systems—will initially find quite uncomfortable. They need to be sold on the concept. It is therefore essential that you carefully analyze the needs that will be met in offering both self- and peer-assessment opportunities, and then convey this information to students.
2.    Define the task(s) clearly. Make sure the students know exactly what they are supposed to do. If you are offering a rating sheet or questionnaire, the task is not complex, but an open-ended journal entry could leave students perplexed about what to write. Guidelines and models will be of great help in clarifying the procedures.
3.    Encourage impartial evaluation of performance or ability. One of the greatest drawbacks to self-assessment is the threat of subjectivity. By showing students the advantage of honest, objective opinions, you can maximize the beneficial washback of self-assessments. Peer-assessments, too, are vulnerable to unreliability as students apply varying standards to their peers. Clear assessment criteria can go a long way toward encouraging objectivity.
4.    Ensure beneficial washback through follow-up tasks. It is not enough to simply toss a self-checklist at students and then walk away. Systematic follow-up can be accomplished through further self-analysis, journal reflection, written feedback from the teacher, conferencing with the teacher, purposeful goal-setting by the student, or any combination of the above.


A Taxonomy of Self- and Peer-Assessment Tasks
To sum up the possibilities for self- and peer-assessment, it is helpful to consider a variety of tasks within each of the four skills.
Self- and peer-assessment tasks
Listening Tasks
listening to TV or radio broadcasts and checking comprehension with a partner
listening to bilingual versions of a broadcast and checking comprehension
asking when you don't understand something in pair or group work
listening to an academic lecture and checking yourself on a "quiz" of the content
setting goals for creating/increasing opportunities for listening

Speaking Tasks
filling out student self-checklists and questionnaires
using peer checklists and questionnaires
rating someone's oral presentation (holistically)
detecting pronunciation or grammar errors on a self-recording
asking others for confirmation checks in conversational settings
setting goals for creating/increasing opportunities for speaking

Reading Tasks
reading passages with self-check comprehension questions following
reading and checking comprehension with a partner
taking vocabulary quizzes
taking grammar and vocabulary quizzes on the Internet
conducting self-assessment of reading habits
setting goals for creating/increasing opportunities for reading

Writing Tasks
revising written work on your own
revising written work with a peer (peer editing)
proofreading
using journal writing for reflection, assessment, and goal-setting
setting goals for creating/increasing opportunities for writing
An evaluation of self- and peer-assessment according to our classic principles of assessment yields a pattern that is quite consistent with other alternatives to assessment that have been analyzed in this chapter. Practicality can achieve a moderate level with such procedures as checklists and questionnaires, while reliability risks remaining at a low level, given the variation within and across learners. Once students accept the notion that they can legitimately assess themselves, then face validity can be raised from what might otherwise be a low level. Adherence to course objectives will maintain a high degree of content validity. Authenticity and washback both have very high potential because students are centering on their own linguistic needs and are receiving useful feedback.
Table 10.1 is a summary of all six of the alternatives in assessment with regard to their fulfillment of the major assessment principles. The caveat that must accompany such a chart is that none of the evaluative "marks" should be considered permanent or unchangeable. In fact, the challenge that was presented at the beginning of the chapter is reiterated here: take the "low" factors in the chart and create assessment procedures that raise those marks.
-------
Perhaps it is now clear why "alternatives in assessment" is a more appropriate phrase than "alternative assessment." To set traditional testing and alternative assessment against each other is counterproductive. All kinds of assessment, from formal conventional procedures to informal and possibly unconventional tasks, are needed to assemble information on students. The alternatives covered in this chapter may not be markedly different from some of the tasks described in the preceding four chapters (assessing listening, speaking, reading, and writing). When we put all of this together, we have at our disposal an amazing array of possible assessment tasks for second language learners of English. The alternatives presented in this chapter simply expand that continuum of possibilities.
Table 10.1. Principled evaluation of alternatives to assessment
Principle          Portfolio   Journal   Conference   Interview   Observation   Self/peer
Practicality       low         low       low          mod         mod           mod
Reliability        mod         mod       low          mod         mod           low
Face validity      high        mod      high         high        high          mod
Content validity   high        high      high         high        high          high
Washback           high        high      high         mod         mod           high
Authenticity       high        high      high         mod         high         high
