Measuring the Communication Competency of EFL Learners

Introduction

The acronym EFL stands for English as a Foreign Language. A variety of tests is used to measure the communicative and language competency of individuals who use English as a second or foreign language.

The teaching and testing of EFL have grown considerably in recent years as more and more people seek to have their language skills evaluated in terms of communicative competency (Davies 2000).

Various tests are used to measure English as a foreign language; the most commonly used include TOEFL (Test of English as a Foreign Language), IELTS (International English Language Testing System), the TSE (Test of Spoken English) and the TWE (Test of Written English).

The earliest work on language assessment in the US dates back to the 1950s, following pioneering studies by researchers such as Robert Lado and David Harris (Taylor and Falvey 2007).

TOEFL, the first large-scale assessment of English, was developed by the Educational Testing Service in Princeton, New Jersey in 1961. The test was designed with the sole purpose of assessing the English language competency of students applying for admission to Canadian or US colleges and universities.

TOEFL is still the most widely used English language proficiency test in the world, and it is now also available in an Internet-based format known as the TOEFL iBT. Other tests such as IELTS were developed in later years to meet the growing need for English language proficiency testing among people applying for college admission, work opportunities and training activities.

The purpose of this study is to assess the effectiveness of direct and indirect tests in measuring the communicative competency of EFL learners, as well as their ability to speak English (Silly 2006).

Direct, Indirect and Semi-Direct EFL Tests

As mentioned in the introduction, the main focus of these tests is the assessment of English as a second or foreign language in school, college or university contexts. English is also assessed for use in the workplace and in cases where applicants take the tests for immigration, citizenship or asylum purposes.

The main areas assessed in EFL tests are reading, writing, speaking and listening, with the language user tested on their ability and competency in using English. Marks are usually allocated according to the user's understanding of the English language and their ability to use it in a practical way (Alderson and Banerjee 2002).

EFL language assessment tests are administered in a variety of ways, including direct, semi-direct and indirect testing. In direct testing, the learner actually performs the skills being assessed: actual samples of the examinee's writing, speaking, listening and reading are used to judge their proficiency in the language.

Direct tests use procedures in which the examinee engages in a face-to-face communicative exchange with an interviewer or a group of interlocutors. This approach to assessing English proficiency is associated mostly with productive rather than receptive skills.

The direct method of assessing a user's speaking ability involves the oral proficiency interview (OPI), a flexible, unstructured oral interview used to gauge the individual's speaking ability (Fulcher 2003).

Oral proficiency interviews are conducted by trained interviewers who assess the interviewee's speaking proficiency against a global band scale. Since its introduction in the 1970s, the OPI has become the most commonly used method of measuring speaking proficiency in English as a second or foreign language.
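To make the idea of a global band scale concrete, the sketch below shows how a holistic interviewer rating might be mapped onto a nine-band scale of the kind IELTS uses. This is a minimal Python illustration: the band labels are simplified paraphrases rather than official descriptors, and the rounding rule is an assumption.

```python
# Illustrative global band scale lookup, loosely modelled on nine-band
# scales such as the one IELTS uses. The labels below are simplified
# placeholders, not the official descriptors.

BAND_DESCRIPTORS = {
    9: "Expert user",
    8: "Very good user",
    7: "Good user",
    6: "Competent user",
    5: "Modest user",
    4: "Limited user",
    3: "Extremely limited user",
    2: "Intermittent user",
    1: "Non-user",
}

def describe_band(rating: float) -> str:
    """Map an interviewer's holistic rating to the nearest whole band."""
    band = max(1, min(9, round(rating)))
    return f"Band {band}: {BAND_DESCRIPTORS[band]}"

if __name__ == "__main__":
    print(describe_band(6.4))  # Band 6: Competent user
```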

Different models of the oral proficiency interview have emerged over the last decade in response to criticisms of the validity and reliability of OPI tests.

These newer models incorporate greater standardisation into their procedures: a range of specified tasks varies in characteristics such as interviewee and interviewer roles, functional demands and stimulus characteristics. The IELTS test has incorporated all of these aspects into its procedures to ensure that the speaking assessment is standardised (Fulcher 2010).

Direct EFL language assessments treat speaking as an active, generative socio-cognitive process in which the examinee orchestrates a variety of skills to make sense of conversation and to communicate with the interviewer during the speaking test.

As mentioned earlier, speaking tests are usually facilitated by trained interviewers. The general method of administering these tests is to collect single or multiple samples of speech generated under either controlled or uncontrolled conditions, supported by instruction and feedback (Bazerman 2008).

The speaking sub-tests take the form of structured interviews made up of five distinct sections, which measure the varying speaking and communication demands placed on the examinee.

These sections comprise: (1) an introduction, in which the interviewer and interviewee introduce themselves; (2) an extended discourse task, in which the interviewee speaks at length on a familiar topic; (3) an elicitation task, in which the interviewer elicits certain information from the interviewee; (4) a speculation and attitudes task, in which the candidate is encouraged to talk about their future plans; and (5) a conclusion, in which the interview is brought to a close (O'Loughlin 2001).
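Because this five-section structure is essentially a fixed interview protocol, it can be written down as simple data. The Python sketch below encodes the sections listed above; the purpose strings paraphrase this essay's description and are not drawn from any official test specification.

```python
# The five sections of the structured speaking interview, encoded as an
# ordered list. The wording paraphrases the description in this essay,
# not an official test specification.
INTERVIEW_SECTIONS = [
    ("Introduction", "interviewer and interviewee introduce themselves"),
    ("Extended discourse", "interviewee speaks at length on a familiar topic"),
    ("Elicitation", "interviewer elicits specific information"),
    ("Speculation and attitudes", "candidate talks about future plans"),
    ("Conclusion", "the interview is brought to a close"),
]

for number, (name, purpose) in enumerate(INTERVIEW_SECTIONS, start=1):
    print(f"Section {number}: {name} -- {purpose}")
```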

Indirect speaking tests, on the other hand, are procedures in which the examinee is not required to speak at all during the test. Indirect assessments estimate the examinee's probable speaking ability by observing the types of knowledge and skills that are associated with speaking ability.

Indirect speaking assessments rely on the passive recognition of errors and the selection of suitable examples rather than the active generation of speech.

The major features of these tests are that they measure the examinee's speaking ability and competency objectively, they have high statistical reliability, they allow the test to be standardised, and they measure the examinee's inferential judgement during the speaking exercise (Douglas 2010).

In these EFL assessments, the examinee performs various non-speaking tasks whose scores are related to scores on actual speaking tasks. A common type of indirect test is the conversational cloze test, a written passage based on the transcript of a conversation.

Some words in the transcript are deleted and replaced with blank spaces, which the examinee must fill with the correct words. Scores on conversational cloze tests are usually similar to those on direct speaking assessments, despite the fact that examinees do not speak at all during these tests.

All that examinees are required to do is provide suitable responses to the cloze items, and the examiner allocates the test scores at a later time (Comings et al. 2006).
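The cloze procedure is mechanical enough to sketch in code. The Python example below blanks out every nth word of a transcript and scores responses by exact match; it is a minimal illustration only, since real cloze tests use more careful deletion rules and often accept acceptable alternative answers.

```python
# Minimal sketch of a conversational cloze test: every nth word of a
# conversation transcript is blanked out, and responses are scored by
# exact match against the deleted words.

def make_cloze(transcript: str, n: int = 5):
    """Blank out every nth word; return the gapped text and the answer key."""
    words = transcript.split()
    gapped, answers = [], []
    for i, word in enumerate(words, start=1):
        if i % n == 0:
            answers.append(word)
            gapped.append("_____")
        else:
            gapped.append(word)
    return " ".join(gapped), answers

def score_cloze(responses, answers):
    """Exact-word scoring; many administrations also credit synonyms."""
    return sum(r.strip().lower() == a.strip().lower()
               for r, a in zip(responses, answers))

transcript = ("Well I was going to call you yesterday but the meeting "
              "ran late and I completely forgot about it until this morning")
gapped, key = make_cloze(transcript, n=5)
print(gapped)
responses = ["to", "a", "I", "until"]   # hypothetical examinee answers
print(f"Score: {score_cloze(responses, key)} / {len(key)}")  # 3 / 4
```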

Semi-direct tests, in turn, measure the examinee's speaking ability by tape-recording the speaking test, with printed test booklets presenting the tasks that assess the examinee's speaking ability and competency. Semi-direct speaking tests rarely involve the face-to-face conversation with an interviewer that characterises direct testing.

A recording of the examinee's performance is made and then rated at a later time by one or more trained assessors. Commonly used semi-direct tests include the Test of Spoken English (TSE), the Recorded Oral Proficiency Examination (ROPE) and the Simulated Oral Proficiency Interview (SOPI) (Fulcher and Davidson 2007).

Semi-direct tests were introduced in the 1970s and have also experienced considerable growth in recent years. They were the first English language tests to standardise speaking assessment while retaining the communicative aspect of the oral proficiency interview.

The growth in the use of these tests was mostly attributed to the fact that they were more cost-efficient than direct tests, especially when administered in a group setting. They also provided a practical solution in situations where a direct test could not be administered, for example when a trained interviewer was not available (McNamara 1996).

Effectiveness and Validity of the Direct Speaking Tests

Clark conducted a study in 1979 to determine the validity of the three approaches in measuring an individual's speaking capabilities. He argued that the most valid and preferable of the three is direct assessment, in which the individual's speaking capabilities are measured in a reliable and accurate way.

Direct assessment maintains a close relationship between the context of the speaking test and real life; in other words, direct tests present a more authentic reflection of the communication that takes place in the real world.

In his assessment of the validity of OPI tests, however, Clark noted that the procedures used in these tests failed to establish a relationship between the test context and the real world (Shaw and Weir 2007).

Clark noted that OPI tests had several deficiencies that made them invalid measures of spoken English. One problem was that the examinee was aware of talking to an interviewer or language assessor, which made it difficult to establish a connection with the real world.

Another problem with the OPI method was that the language elicited during the interview did not reflect the kind of conversation that takes place in the real world. Because the interviewer controlled the interview, it was difficult for the interviewee to ask questions or offer comments and opinions during the test (Lazaraton 2002).

Hughes and Van Lier (1989, cited in O'Loughlin 2001) conducted individual studies on the effectiveness of direct methods of assessing speaking, noting that the validity of oral tests hinged on the information asymmetry between the test facilitator and the examinee.

Hughes noted that in oral assessments the interviewee spoke to the facilitator as if addressing a superior and was unwilling to take the initiative in raising new discussion points, given that the interviewer controlled the session. As a result, only one style of speaking or communication was elicited, making the test one-sided (Cohen 1998).

Hughes recommended that these oral tests include activities such as role play and discussion so that the type of interaction between interviewer and interviewee would be varied.

Van Lier pursued a stronger version of this argument, questioning whether an interview could validly assess an examinee's oral proficiency at all, by contrasting interviews with the essential features of ordinary conversation (Ferrer et al. 2001).

Van Lier noted that a speaking-test interview has asymmetrical contingency: the trained facilitator has a plan for how the interview is meant to go and controls the direction of the interview according to that plan.

Van Lier contrasted such tests with normal conversation, which he described as characterised by face-to-face interaction, unplanned discussion points, unpredictable sequences and outcomes, and an equal distribution of talk among the participants.

According to this characterisation, the emphasis of an interview rests mostly on the elicitation of information rather than on the support of successful conversation (O'Loughlin 2001).

With regard to student-to-student interaction during the test, this approach has proved an effective and valid method of assessing the examinee's speaking competency, as it establishes a connection between real-world conversation and the context of the test.

Student-to-student interaction eliminates the need to elicit information from the interviewee, since the communication is neither planned nor scripted. The examinee is able to explore various aspects of their speaking skills without having to wait for an oral prompt from the test facilitator.

Such interaction during speaking tests ensures that all aspects of the examinee's speaking competency are measured and evaluated by the facilitator overseeing the test (Schumm 2006).

Effectiveness of Indirect and Semi-Direct Methods

Considerable research and debate have addressed the pros and cons of measuring the speaking proficiency of examinees through these methods, and the various approaches have attracted criticism, mostly as a result of reliability and validity problems with some of the tests.

The most criticised approach is the indirect method of assessment, which relies on standardised test items that are fragmented and decontextualised.

Of the three procedures, the indirect assessment test is viewed as the least valid measure of a person's ability to speak English because the examinee is not required to speak at all during the test. These tests mostly rely on the probability that the test taker can speak English proficiently (Luoma 2004).

Many English facilitators and assessors consider indirect speaking tests highly invalid and ineffective measures of speaking ability because they depend heavily on specific testing formats, such as mimicking heard phrases, describing pictures, sounds and visual objects, and reading printed text aloud.

This method also removes the face-to-face interaction between interviewer and interviewee, making it difficult for the examiner to determine the test taker's true proficiency. Many advantages and disadvantages have been put forward regarding the use of indirect testing to measure a learner's ability to speak and pronounce English (Comings et al. 2006).

A study conducted by Lado in 1961 on the effectiveness of indirect tests in measuring English competency revealed that indirect tests could not be used as substitutes for direct tests, mostly owing to their impracticality and their inability to measure the examinee's actual speaking abilities accurately.

In his 1987 study, Underhill also criticised indirect assessments in EFL speaking tests, arguing that the practicality of the tests did not hide the fact that they were highly invalid and ineffective in measuring examinees' speaking abilities.

Underhill argued that as long as testers and test takers were unhappy with a test, indirect assessment would more than likely yield poor results. He noted that the best way of determining a test's validity was to question the different people who used it to measure their speaking competency (Weigle 2002).

As for semi-direct methods of English assessment, their validity is somewhat similar to that of direct tests, as examinees are required to carry out practical speaking tasks, albeit less realistic ones than in direct tests. Such tasks include responding to tape-recorded questions, imitating recorded voice models and describing visual objects aloud for the examiner.

The validity of this method has also attracted criticism because it involves no interactive speaking. The examinee engages in artificial language use when responding to tape-recorded questions or imitating voices, which means they never experience real-life communication.

While similar to direct testing, this method has higher reliability because responses are elicited in a standardised procedure, producing uniform prompts for every examinee. Such uniformity is impossible in direct testing because of the variability in the plan used by each interviewer during the test (Davidson and Lynch 2002).
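Reliability claims of this kind are often checked empirically by having two assessors rate the same recorded performances independently and correlating their scores. The Python sketch below computes a plain Pearson correlation as a rough inter-rater reliability estimate; the scores are invented for illustration, and operational testing programmes use more sophisticated statistics (for example, many-facet Rasch analysis).

```python
# Rough sketch of an inter-rater reliability check for a semi-direct test:
# two trained assessors independently rate the same recorded performances,
# and we correlate their scores. The data below is invented.

from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rater_a = [6, 7, 5, 8, 6, 4, 7, 5]   # hypothetical band scores
rater_b = [6, 7, 6, 8, 5, 4, 7, 6]

print(f"Inter-rater correlation: {pearson(rater_a, rater_b):.2f}")
```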

Lazaraton (1996, cited in O'Loughlin 2001) supported the reliability of semi-direct tests, arguing that the potential for uneven interviewer performance during face-to-face interviews was one of the major reasons semi-direct tests were more appealing: they removed the variability introduced by the examiner during speaking tests.

Lazaraton's observations held true when unstandardised OPI tests were used to assess test takers' speaking abilities. The content and form of questions in OPIs varied considerably from one interview to another, making it difficult to regard the method as reliable (Lazaraton 2002).

According to Underhill, the lack of standardised questions adversely affected examinees' performance in the tests, making the direct method of assessing speaking skills unreliable.

Underhill pointed out that the lack of scripts in oral interviews gave direct tests more flexibility than semi-direct tests. He noted that semi-direct speaking tests produced more predictable candidate output, since candidates respond to a fixed sequence of test questions.

The scoring criteria for semi-direct tests are more easily constructed and more accurate than those for direct assessment. Underhill noted that this was likely to yield more reliable results that reflect the examinee's actual speaking abilities.

Apart from being more reliable than direct tests, semi-direct assessments offer more practicality in assessing a candidate's speaking proficiency: a group of candidates can be tested at once in a language laboratory, which makes semi-direct tests more economical and efficient (Lynch 2003).

Semi-direct tests are also more practical than direct and indirect tests because the candidate's performance is captured in an audio recording, so marking can be carried out whenever convenient. This creates convenience for both the test marker and the test taker, who can receive the results of the test promptly.

Because semi-direct tests follow a fixed structure, examiners can listen to the audio recordings by fast-forwarding the tapes to the important parts of the examinee's performance.

These tests are also less costly than direct tests, as they require no selection and training of interviewers. They may prove unreliable when the tape recordings are of poor quality, but this can be overcome with live assessment (Sundh 2003).

Conclusion

The findings of this discussion show that direct tests are more effective than semi-direct and indirect assessments of spoken English. They are, however, more costly and less reliable than semi-direct tests. Semi-direct tests are less costly and can be administered to many candidates at the same time through the use of language laboratories.

Language assessors who want to save time and money prefer semi-direct tests, which also allow test scores to be returned promptly. The study has also shown that, of the three approaches, indirect assessment is the least reliable, as it does not measure the candidate's speaking ability in any practical way.

References

Alderson, J.C. and Banerjee, J. (2002) Language testing and assessment. Language Teaching, Vol. 35, pp. 79-113

Bazerman, C. (2008) Handbook of Research on Writing: History, Society, School, Individual, Text. Oxford, UK: Taylor and Francis Group

Cohen, A.D. (1998) Strategies in Learning and Using a Second Language. New Jersey: Longman Publishers

Comings, J., Garner, B. and Smith, C. (2006) Review of Adult Learning and Literacy, Volume 6. New Jersey: Lawrence Erlbaum Associates

Davidson, F. and Lynch, B. (2002) Testcraft: A Guide to Writing and Using Language Test Specifications. New Haven: Yale University Press

Davies, A. (2000) Dictionary of Language Testing. Cambridge, UK: Cambridge University Press

Douglas, D. (2010) Understanding Language Testing. New York: Oxford University Press

Ferrer, H., Speck, B.P., Franch, P.B. and Signes, C.G. (2001) Teaching English in a Spanish Setting. Valencia: Universitat de Valencia

Fulcher, G. (2003) Testing Second Language Speaking. New Jersey: Pearson Education

Fulcher, G. (2010) Practical Language Testing. New York: Oxford University Press

Fulcher, G. and Davidson, F. (2007) Language Testing and Assessment: An Advanced Resource Book. Oxford, UK: Routledge

Lazaraton, A. (2002) A Qualitative Approach to the Validation of Oral Language Tests. Cambridge, UK: Cambridge University Press

Luoma, S. (2004) Assessing Speaking. Cambridge, UK: Cambridge University Press

Lynch, B.K. (2003) Language Assessment and Programme Evaluation. Edinburgh, UK: Edinburgh University Press

McNamara, T. (1996) Measuring Second Language Performance. New Jersey: Longman Publishers

O'Loughlin, K.J. (2001) The Equivalence of Direct and Semi-Direct Speaking Tests. Cambridge, UK: Cambridge University Press

Schumm, J.S. (2006) Reading Assessment and Instruction for All Learners. New York: Guilford Press

Shaw, S. and Weir, C.J. (2007) Examining Writing: Research and Practice in Assessing Second Language Writing. Cambridge, UK: Cambridge University Press

Silly, H. (2006) Total Preparation for the New TOEFL iBT. New York: Studyrama

Sundh, S. (2003) Swedish School Leavers' Oral Proficiency in English. Uppsala, Sweden: Uppsala Universitet

Taylor, L. and Falvey, P. (eds) (2007) IELTS Collected Papers: Research in Speaking and Writing Assessment. Cambridge, UK: Cambridge University Press

Weigle, S. (2002) Assessing Writing. Cambridge, UK: Cambridge University Press
