Volume 20, Issue 1 (April 2017)
IJAL 2017, 20(1): 113–150
Investigating the Effect of the Training Program on Raters’ Oral Performance Assessment: A Mixed-Methods Study on Raters’ Think-Aloud Verbal Protocols
Houman Bijani , Mona Khabiri
Islamic Azad University, Central Branch
Abstract:
Although the use of verbal protocols in oral assessment is growing, research on raters’ verbal protocols remains scarce, and the few existing studies have not employed a mixed-methods design. This study therefore investigated the impact of rater training on novice and experienced raters’ application of a specified set of standards in rating. To this end, the study analyzed, both qualitatively and quantitatively, verbal protocols produced by 20 raters who scored 300 test takers’ oral performances. The results demonstrated that, after the training program, the raters concentrated more on linguistic, discourse, and phonological features, and their agreement increased, particularly among the inexperienced raters. The analysis of the verbal protocols also revealed that training raters in how to apply a well-defined rating scale can foster its valid and reliable use. Different groups of raters approach the task of rating in different ways, which cannot be explored through purely statistical analysis; think-aloud verbal protocols can thus illuminate these otherwise opaque aspects of the process and add to the validity of oral language assessment. Moreover, since the results showed that inexperienced raters can produce protocols of higher quality and quantity in their use of macro- and micro-strategies to evaluate test takers’ performances, there is no evidence on the basis of which decision makers should exclude inexperienced raters solely for their lack of experience.
Keywords: Bias, Oral performance assessment, Rater training, Think-aloud verbal protocols
Full-Text [PDF 1064 kb]
Type of Study: Research | Subject: Special
Received: 2016/11/12 | Accepted: 2017/02/15 | Published: 2017/02/21
Bijani H, Khabiri M. Investigating the Effect of the Training Program on Raters’ Oral Performance Assessment: A Mixed-Methods Study on Raters’ Think-Aloud Verbal Protocols. IJAL. 2017; 20(1): 113–150.
URL: http://ijal.khu.ac.ir/article-1-2766-en.html

Iranian Journal of Applied Linguistics