The Canadian Modern Language Review / La revue canadienne des langues vivantes 63.1 (2006) 1-12


Editorial/Éditorial :
Second Language Vocabulary Acquisition/Acquisition du vocabulaire d'une langue seconde
Marlise Horst
Tom Cobb

In the call for papers for this special issue on second language (L2) vocabulary research, we suggested several possible themes for submissions, including formulaic sequences and corpus-based approaches. We are pleased to report that high-quality submissions on these topics arrived – from researchers in Canada and as far away as New Zealand – and that we also received excellent papers on many other subjects. The result is a special issue that addresses a range of topics. The embarras de richesse makes it difficult to organize the issue into neat thematic sections, but it clearly bodes well for the future of an area that was once considered neglected (Meara, 1980).

What is in this issue?

The issue begins with two pieces on formulaic sequences, both with a focus on speech. In the first of these, David Wood looks closely at the sequences that learners of English use in narrative retell tasks, which are not always accurate but appear to become more so over time. He identifies five key functions of formulaic sequences and demonstrates how they contribute to fluent speech. Fluent performance in real time is also the theme in Tess Fitzpatrick and Alison Wray's investigation. In this highly original study, the researchers helped learners prepare formulaic utterances for specific conversations they were planning to have and examined their ability to employ them accurately in practice and real conversations in relation to individual variables such as proficiency and aptitude.

In the category of corpus-based research is Paul Nation's study of 14 new frequency lists from the recently completed British National Corpus (BNC) of 100 million words of mainly written, but also spoken, English. Corpus-based frequency lists are of great pedagogical importance, and their impact on the L2 vocabulary research of recent years can hardly be overstated. In addition to identifying the vocabulary that learners of English would do well to study at various stages of their development, frequency lists have also made useful new research instruments possible. One example is the set of tools now available for assessing learners' vocabulary size, such as the Vocabulary Levels Test (Nation, 1990; Schmitt, 2000) and the Eurocentres Vocabulary Size Test (Meara & Buxton, 1987); both of these widely used instruments rely on the principle of sampling words from frequency bands. Another example is lexical frequency profiling (Laufer & Nation, 1995), a technique for analyzing texts in terms of the proportions accounted for by frequent and less frequent words. Until now, however, profiling software has been able to make only four rather rough frequency distinctions: words in submitted texts were either on West's (1953) lists of the 1,000 and 2,000 most frequent words, on Coxhead's (2000) Academic Word List, or 'off-list' (on none of the three previous lists). By contrast, the new BNC lists (and the online profiler based on them, available at www.lextutor.ca/vp/bnc/) allow words in a submitted text to be categorized at 16 different levels of frequency. In his study in this issue, Nation tests the new lists against various corpora and determines more definitively than was previously possible the vocabulary sizes that learners of English need for unassisted comprehension of both spoken and written text.
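
For readers unfamiliar with how a frequency profile is computed, the following is a minimal sketch of the general idea: tokens in a text are matched against frequency-band word lists, and the profile reports the proportion of the text each band accounts for. The tiny band lists here are hypothetical stand-ins, not the actual West (1953) 1K/2K lists or Coxhead's Academic Word List, and unlike the online profilers mentioned above, this sketch matches surface forms only rather than word families.

```python
# Sketch of lexical frequency profiling (after Laufer & Nation, 1995).
# The band lists below are illustrative placeholders only.
import re
from collections import Counter

BANDS = {
    "1K":  {"the", "of", "and", "to", "a", "in", "is", "was", "for", "that"},
    "2K":  {"afford", "brave", "coast", "damp", "elect"},
    "AWL": {"analyze", "context", "data", "research", "theory"},
}

def frequency_profile(text: str) -> dict[str, float]:
    """Return the share of word tokens falling in each band (plus 'off-list')."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for band, words in BANDS.items():
            if token in words:
                counts[band] += 1
                break
        else:
            counts["off-list"] += 1   # token found on none of the band lists
    total = sum(counts.values()) or 1
    return {band: n / total for band, n in counts.items()}

if __name__ == "__main__":
    sample = "The research data was analyzed in the context of the theory."
    for band, share in frequency_profile(sample).items():
        print(f"{band:>8}: {share:.1%}")
```

A real profiler of the kind discussed in Nation's study would substitute the full BNC-derived band lists and group inflected and derived forms into word families before matching.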

Testifying to the value of frequency profiling as a tool for research are studies by Marlise Horst and Laura Collins and by Valentin Ovtcharov, Tom Cobb, and Randall Halter. Horst and Collins used the standard four-level scheme to examine the longitudinal development of lexical richness in a series of learner corpora consisting of narratives written by young francophone learners of English. The unexpected finding of little change inspired the authors to reexamine the data using finer measures, including a Greco-Latin cognate index and a types-per...
