Possible to Make Assessment Culturally Inclusive?

A dear friend and colleague of mine, Amy Puett, posed a very interesting question about culturally inclusive language assessments a while back. Unfortunately, I wasn’t very helpful in answering. What I could offer, however, was a space to share this question with the larger world of ELT. If you have any insight into what Amy offers below, we’d be very grateful.  

 

I’m currently working on an MA in TESOL, and I’m doing a research project on biases in spoken assessments. In my experience, little has been done to account for differences in culture, age, or purpose when assessing students’ level of spoken English. The standard guidelines for assessing one’s level of spoken English are generally the CEFR and ACTFL guidelines; however, these don’t account for shy, uncommunicative Korean students who might be under a lot of pressure with their studies, or a socially astute Pakistani student who knows when and where to pull out stock phrases to impress others with their English skills. There is also little room in these guidelines to accurately describe young learners. I’ve taught many Korean elementary students who are at a B1 level of reading, and I can’t see those students describing their own level by saying, “I can deal with most situations likely to arise whilst travelling in an area where the language is spoken. I can enter unprepared into conversation on topics that are familiar, of personal interest or pertinent to everyday life (e.g. family, hobbies, work, travel and current events).”

Even the guide to using the CEFR guidelines states in the introduction that ‘The Framework aims to be not only comprehensive, transparent and coherent, but also open, dynamic and non-dogmatic’ (Council of Europe, 2001a: 18).[1] It also mentions that it’s not expansive in describing young learners.[2] However, I have still witnessed many students being labeled with these terms, which I feel can affect how some teachers teach them. The ACTFL guidelines have similar issues. For example, the following is said about a student with ‘Intermediate Low’ speaking skills:

“At the Intermediate Low sublevel, speakers are primarily reactive and struggle to answer direct questions or requests for information. They are also able to ask a few appropriate questions. Intermediate Low speakers manage to sustain the functions of the Intermediate level, although just barely.”[3]

In some of the countries I’ve taught in, behaving in such a manner would go against many people’s sense of propriety, so they would learn to fake responses or gloss over any gaps in their understanding. I also know language students at higher speaking levels who would act in the same manner because of their naturally shy nature in dealing with foreigners.

I feel there must be more TESOL instructors can do to incorporate a more inclusive set of terminology, one that either allows more room for interpretation or includes additional sets of guidelines to adequately assess students and describe their levels of spoken English.

My question is: can anyone share their experiences with this issue or recommend relevant research? Any help would be appreciated.

 

6 thoughts on “Possible to Make Assessment Culturally Inclusive?”

    1. Thanks, Eily. That did clear up some of the grey areas in assessment for me, and your response brings into question *what* is actually being assessed, which is important.

      I think teaching communication and soft skills definitely aids in teaching spoken English, and addressing those components in assessment would take away a certain degree of the competition that comes with assessing language learners. I feel that would be a start in making assessments more inclusive for students across different cultures and ages.


  1. Hi to Josette and to Amy. I think, first of all, that the problem lies in the area of standardized testing. The minute we start jargonising responses and results outside of the individual, the day and time, the situation and the expected reason for the interaction/outcome, we end up with non-inclusive and non-applicable categories. Adding more jargon and more categories simply creates an unwieldy and confusing instrument.
    I would argue for a simpler, and dare I say, broader conceptualization of what we really want to measure. What, after all, constitutes a speaking test? Or a communication? And what do we want learners to work on when we do assess them?
    Personally, I feel we need to talk about communication when we talk about assessment. Something like ‘accurate understanding of questions’; ‘gist understanding of questions’; ‘questions need to be put with supporting scaffolding and supporting media to be understood’; ‘questions are not understood at all, or misunderstood’. And on the response side: ‘can frame a response appropriate to the lead’; ‘can frame a response to the lead but may not be able to state exactly the thought or idea’; ‘can make some response but incomplete and/or inaccurate’; ‘cannot make any response’.
    We also need to think of what communication among native speakers really is – read some transcripts of conversations to see how disjointed, ungrammatical and often inaccurate it is. We answer a question with a single word, or a whole paragraph in which we may stray from the main point several times.
    For me, a conversation in which students can take part, steering the topic occasionally, is a more accurate assessment of their capability.
    And of course, we have to consider the shy students, the ones who are distracted that day, the ones who find the topic boring, etc.
    I realise I’m only touching on the topic here, but I hope these thoughts can help.


    1. You bring up a good point about ‘jargonising’ responses, and with my question, I might be adding to that. Instead of putting a label on it, it might be better to describe how students interact with others, which would be quite appropriate for assessing speaking skills.

      Thanks for your response. You’ve given me a lot to think about.


  2. Hi, Josette. Thanks for this post. I agree that if we assess our students against some general guidelines, we will inevitably run into problems. I think we teachers should always judge a student’s performance at a given moment against his/her performance at another moment in the past. Nothing else is fair enough. Once the gauge is outside the learner, it will never give us correct answers about the learner’s progress. It’s easier said than done, though, especially if you have to officially grade your students. Thus, I have two gauges – an official guideline and a more personal one. Apart from grading them, I always tell my students how much they have improved, even if the final grade has not improved yet. My students usually accept this without objections. I admit that it’s not ideal, but that’s the way I’m grappling with the problem right now. Like you, I would also appreciate a more inclusive set of terminology.


    1. Progress is something that many teachers keep in mind when grading students. Maybe the speed at which students pick up concepts, or how much they’ve improved over a period of time, would be a beneficial addition to an assessment guideline.

      Thanks for your input!

