Teaching linguistic argumentation through a writing-intensive approach
We present the results of a study on whether writing-intensive learning techniques can assist beginner students in learning linguistic argumentation. The analysis is based on student submissions (eighty submissions from twenty students, 22,328 words) from a typical Introduction to Linguistics course, which were analyzed with the Coh-Metrix tool (McNamara et al. 2014), a suite of tests that measures cohesion of the linguistic formulation of the text and coherence of the mental representation. The essays show improvement in descriptive measures (lexical diversity, use of content words) and greater simplicity in terms of readability, suggesting a growth in the sophistication of the students' argumentation and disciplinary knowledge.*
Keywords: linguistic argumentation, undergraduate education, writing studies, writing-intensive learning, cohesion, coherence, Coh-Metrix
Our study addresses the role of writing-intensive learning techniques and iterative writing support in an introductory linguistics course. Students in the course were unfamiliar with both the disciplinary content and the argumentation style typical of linguistic writing. We assess the improvement in argumentation strategies through coherence and cohesion measures commonly employed in psycholinguistics and education.
Within the writing-intensive framework, the instructor's goal is to teach students about the metacognitive, instantiated content knowledge, implicit and explicit modes of disciplinary research, and textual and verbal representations that are meaningful in their field (Bazerman 2005, Strachan 2008). One way is by engaging students in learning to read, write, and analyze the core genres that represent the subject area. Writing seen in this way is constitutive of the discipline, that is, representative of the kinds of critical thinking and methodologies that are relevant in the discourse of the discipline. Our study is to be understood within this framework, where teaching is focused on learning the disciplinary content through writing, rather than teaching about writing in general.
The study was conducted within a typical Introduction to Linguistics course (LING 220), which covered the topics of phonetics, phonology, morphology, syntax, and semantics, as well as topics in language acquisition, sociolinguistics, and diachronic change. Unlike other offerings of the course, this instance also placed emphasis on the value of written argumentation academically and professionally and offered specific instruction on how students can (i) use writing in the construction of a linguistic analysis and (ii) present a coherent, convincing argument supporting their conclusion.
The analysis is based on student submissions for four in-class writing assignments: an intake assessment essay completed in the first week, two midterm essays (weeks 5 and 11), and an outgoing assessment essay in the last week of the course. Scanned copies of these handwritten submissions were digitized, yielding twenty complete sets of students' work. Our hypothesis was that the students' written work would show improvement throughout the term in terms of readability, cohesion, and coherence as a result of the writing-intensive instruction.
In order to assess the progress in students' work, we used the Coh-Metrix tool (McNamara et al. 2014), a suite of tests that measures cohesion of the linguistic formulation of the text and coherence of the mental representation through an array of features such as reading level, lexical density and complexity, anaphors, conjunctions, syntactic complexity, and latent semantic analysis representations of semantic similarity (Landauer & Dumais 1997). Coh-Metrix indices have been validated in numerous previous studies as reliable indicators of textual complexity (Graesser et al. 2014, Graesser et al. 2011, Graesser et al. 2004, Kovanović et al. 2018, McNamara, Louwerse, et al. 2010, Polio & Yoon 2018, Xu & Liu 2016).
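To give a concrete sense of what such indices capture, the sketch below computes three rough stand-ins in plain Python: type-token ratio (a simple lexical-diversity measure), a content-word ratio, and the Flesch Reading Ease score. These are simplified illustrations only, not Coh-Metrix's actual algorithms; the small stopword list and the vowel-group syllable counter are crude assumptions made for the example.

```python
import re

def words(text):
    """Lowercase alphabetic tokens."""
    return re.findall(r"[a-z']+", text.lower())

def syllables(word):
    """Crude syllable estimate: count vowel groups (heuristic only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word)))

# A small stopword list stands in for a proper function-word tagger
# (an assumption for this sketch; Coh-Metrix uses part-of-speech tagging).
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "but",
                  "is", "are", "was", "were", "to", "that", "this", "it"}

def type_token_ratio(text):
    """Lexical diversity: distinct word types over total tokens."""
    toks = words(text)
    return len(set(toks)) / len(toks)

def content_word_ratio(text):
    """Share of tokens that are not on the function-word list."""
    toks = words(text)
    return sum(1 for t in toks if t not in FUNCTION_WORDS) / len(toks)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier text."""
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    toks = words(text)
    syl = sum(syllables(t) for t in toks)
    return 206.835 - 1.015 * (len(toks) / sents) - 84.6 * (syl / len(toks))
```

Coh-Metrix computes much richer variants of these measures (for instance, lexical-diversity indices that correct for text length), so the sketch should be read as a conceptual aid rather than a substitute for the tool.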
Analyses of the twenty students' assignments show that their essays improved in descriptive measures (lexical diversity, use of content words), while at the same time becoming simpler in terms of readability. We believe these results point to a growth in the sophistication of the students' argumentation and disciplinary knowledge.
Our proposal for linguists is that this type of teaching makes explicit the data analysis, hypothesis formulation, and hypothesis validation that are implicit in linguistic research articles and reports, as they are in many other data-analysis disciplines. There is extensive research, along with many guides, on how to teach the scientific method and how to teach scientific writing (e.g. Chalmers 2013, Coffin et al. 2005, Gauch 2012, Gimbel 2011, Paltridge et al. 2009); not much of that research, however, addresses the teaching of linguistics, which can be viewed as teaching the scientific method and general principles about how to draw generalizations from data. Anderson (2016) investigates how students develop thinking around phonology, from setting up the problem and distinguishing between observations and conclusions, to making categories explicit, using think-aloud protocols to probe students' thinking. Our research takes this one step further, by examining written products of such thinking, and, although we do not directly address whether students learn the content, we do evaluate the coherence of their writing as an indicator of higher-level argumentative writing.
A related body of work discusses the benefits of teaching linguistics for language teaching (in both first and second language contexts) and for the teaching of English for Specific Purposes (a few examples include Cook 2016, Denham 2015, Halliday et al. 1964, Hudson 2004, and Hyland 2006). In linguistics, however, the discussion about how metaknowledge and reflection benefit the teaching of linguistics itself is rather recent and still peripheral. Assessing the existing literature is not straightforward, because there are many studies on writing or English that are informed by linguistic theory (cf. Hyland 2015, 2016, Wingate 2012), but very few on explicitly teaching linguistics through core research in the discipline.
There are, of course, publications on linguistics pedagogy, such as in the Teaching Linguistics section of this journal, the education and pedagogy section of Language and Linguistics Compass, and the teaching issues of American Speech. These published studies tend to focus on different styles of teaching, teaching specific topics or theoretical frameworks, or ways of reaching broader audiences (e.g. Canada 2018, Culicover & Hume 2013, Curzan 2009, Hildebrandt 2018, Johnson & Palmer 2015, Lasnik 2013, Mahfouz 2017, Petray 2004). By contrast, Humphrey and Dreyfus (2012) directly address the topic of teaching writing in linguistics in a study of the Embedded Literacy Support program for master's students in Applied Linguistics at the University of Sydney. One of their findings is that, even at the master's level, students benefit from explicit instruction on how to write as a linguist, especially when it comes to using linguistic theories in order to explain how language choices create meaning. In a similar way, we believe we can adopt writing-intensive or writing-in-the-discipline frameworks for students in an introductory course, so that we can make explicit the types of thinking and argumentation that linguists apply when analyzing language.
We begin by discussing the writing-intensive framework (§2) and how we applied it in a linguistics course (§3). We review the literature on the writing-intensive framework because, to our knowledge, it has not been a prominent approach in the teaching of linguistics. We then outline the process of data collection and describe the data obtained (§4). In §5, we explain the analysis methodology, and discuss its results in §6 and the results of an online survey and exit interview in §7. The methodology we apply here can be a useful tool to evaluate the implementation of writing-intensive approaches. The conclusions in §8 summarize our view of the importance of writing-intensive teaching in linguistics, and future avenues for teaching and research in this area.
2. Framework: writing-intensive teaching and learning
One aim of this project is to investigate whether iterative writing support and a writing-intensive learning curriculum can assist students in learning unfamiliar disciplinary content knowledge. The question of the links between writing and learning has several pathways. In many ways one could say that this question has been one of the primary drive belts for the emergence of the field of Writing Studies, which encompasses composition studies (Bartholomae 1985, Elbow 1986, 1998, Elbow & Sorcinelli 2005, Russell 2002), new rhetorical genre theory (RGS; Artemeva & Freedman 2006, 2015, Coe et al. 2002, Freedman & Medway 1994, Miller 1984), writing across the curriculum (WAC; Bazerman 2005), writing in the disciplines (WID; Berkenkotter et al. 1995, Monroe 2002, Prior 1998), writing-intensive learning (WIL; Strachan 2008), transitions from writing in the university to the workplace (Dias et al. 2013, Dias & Paré 2000), writing for professional purposes (Spilka 1998, Winsor 1996), critical thinking (Bean 2011), and academic literacies for L2 learners (Hyland 2004, Johns 1997, Swales 1990).
We present a literature review from the field of Writing Studies that situates our question, namely how writing-intensive learning approaches can assist students in learning new content knowledge and disciplinary methodologies, within the historical rationale and context of Simon Fraser University's (SFU) development of writing-intensive learning (WIL) requirements for undergraduate students (2002–2007). We outline some basic concepts that inform questions about the roles that writing takes in learning new content knowledge in higher education. We also briefly link to studies from other research and technical disciplines that have explored the use of writing-intensive learning to teach scientific and disciplinary methodological content to newcomers. One interesting finding that seems to be emerging from these outside disciplinary studies is that WIL teaching approaches have a broadly positive impact on student learning outcomes that generally exceeds the initial goals of the studies (Armstrong 2010, Brownell et al. 2013, Cortes 2007, Leggette & Homeyer 2015).
2.1. Writing-intensive learning at SFU: implementation and rationale (2002–2007)
The first author, who initiated this project, had taught other writing-intensive (W) courses and loosely modeled the core assignments and pedagogical approaches on the five WIL criteria established in the curriculum guidelines set down by a curriculum task force and a writing support group, which were approved by the university's Senate in 2003. The relevant WIL criteria are as follows (the fifth criterion was not relevant to LING 220):
The genre and disciplinary approach of the WIL guidelines were informed by extensive research about writing and learning. In a nutshell, while acquiring the knowledge of a discipline is partly a matter of learning its vocabulary, conventions, organization structures, and modes of reasoning, it also requires learning substantive matters that Charles Bazerman (cited in Strachan 2008:51) lists as '[the] issues it addresses, the concrete objects it manipulates, the questions it has excluded or already answered to the satisfaction of the community … or the things that might be said to accomplish its objectives'. Seen this way, knowledge learning and writing cannot be easily uncoupled, and it is the disciplinary content experts who are best placed to teach students how to learn to write the target genres that are 'constitutive' of the disciplinary knowledge, 'even when the knowledge may also be communicated orally and with some graphic illustration' (Strachan 2008:51).
With respect to the WIL curriculum at SFU, the intention was that students would be taught about the distinctive features of the texts they were to write as they were learning the subject material; be provided the rationale and reasoning that shaped the texts from within the research and professional discourse; have opportunities for feedback on their developing understanding of the subject material through low-stakes writing; and receive situated expert feedback on synthesizing literature, critical thinking, analysis, and standard research methodologies on revised drafts of major assignments. The overarching goal of W courses was to develop students' explicit understanding of the specific disciplinary genres and concepts of their majors, as well as their ability to transfer this understanding to other learning and writing contexts beyond their university careers. For this reason, the LING 220 study is of interest in terms of investigating whether writing-intensive learning support does indeed enhance students' learning of unfamiliar disciplinary content and argumentative structures.
LING 220 is a typical introductory course in linguistics at a North American university, covering the core components of the field: phonetics, phonology, morphology, syntax, and semantics; it may also expand to a variety of topics related to language acquisition, sociolinguistics, and language change. The goal of the course is to provide students with instruction on core methods of linguistic analysis. At the time when this research began, we did not have a WIL course at the lower division (first- and second-year courses). This gap had created concerns for students in upper-division courses (third- and fourth-year courses), as they were not well prepared to construct linguistic arguments in brief essay form. Further detail on the course content is provided in §3.
2.2. Writing to learn in higher education
Writing Studies research understands writing in the disciplines as being constitutive of a discipline in that its modes of thinking, methods, and discourse are represented in its literate practices. Seen this way, undergraduate education can be understood as a gradual immersion in the literate, linguistic, cognitive, and communication practices that are meaningful for research, communication, and professional development in students' major disciplines (Bazerman 2005, Bean 2011, Monroe 2002, 2003). A professor's role is to teach students about the metacognitive, instantiated content knowledge, implicit and explicit modes of disciplinary research, and textual and verbal representation that are meaningful in their field by teaching the key genres that represent the methodology, research, and professional work that are embodied by those forms of writing.
Writing-intensive learning instruction seeks to develop students' awareness that not only does writing represent the knowledge of the area of study, but that each discipline (even nearby disciplines) also has distinctive textual modes for representing what counts as evidence, argumentation structures, and methods for analysis. That is, when a scholar writes in the respective distinctive genres of their discipline, they are 'doing' indigenous history, organic chemistry, systemic functional linguistics, ethnography, and so forth (Monroe 2002, 2003). Students do not come equipped to write in the specialized discourse of the disciplines required in higher education. As Russell (2002:313) expresses it:
Nor do they see the textual ways a discipline carries on its work and (re)produces its ideology … what counts as good writing in a course or a field is profoundly shaped by its questions, goals, methods, and epistemology.
Russell (2013:161) further describes this transition, from understanding writing as mainly showing that one knows facts or information (autonomous transcription of speech) to showing that one knows how to think as a participant in the intellectual activity of a discipline, as a move from 'traversal' to 'specialized' concepts about writing. This view also supports other research claims that students are more engaged and believe they learn more when they have meaningful writing projects, a finding corroborated in other large-scale survey studies on students' university experiences (Bean 2011, Light 2001).
2.3. Writing-intensive learning and argumentation skills
At this point, the question arises as to whether we are teaching disciplinary content, genres, or writing. The answer could be 'all of these'. What emerges as a key understanding is that students may not immediately gain writing proficiency in one course. However, they may gain a more complex understanding of the rationale for disciplinary writing tasks, especially if sustained over the course of a program of study. The literature on writing-intensive learning points to the importance of providing students with ongoing engagement with practice and with dialogue about what counts as knowledge, argument, and evidence in the disciplinary discourse. Most useful are faculty members' demonstrations of the reasons for particular methodological practices, opportunities for peer review and revision of major texts, and the provision of wider examples about the situated nature of the field's communicative practices (Dias & Paré 2000, Monroe 2002, 2003, Strachan 2008).
Recent studies have shown that writing-intensive learning does assist students in learning the argumentation and conceptual structures of their disciplines, although students may not recognize this as writing per se, usually because of their beliefs that writing is nothing more than using correct grammar (Armstrong 2010, Brownell et al. 2013, Cortes 2007, Leggette & Homeyer 2015, Russell 2002, 2013). In Armstrong 2010, in particular, the primary goal was to deepen students' explicit research skills and to scaffold critical analyses of texts for research projects:
The assignment sequence is designed to move students through the logical stages of the research and writing process and also to engage them in the dialectical relationship between research and critical thinking. Based on student feedback, most students indicate their appreciation of this staging of the research and writing process as enabling them to succeed in the completion of their papers.
Brownell and colleagues' 2013 research study within a neuroimmunology course at Stanford University investigated the links between students' comprehension of primary scientific literature and their abilities to communicate scientific content to both laypersons and scientific audiences. The authors ran the course for three consecutive years and evaluated its impact on students using a combination of pre- and postcourse survey questions and coded open-ended responses (Brownell et al. 2013:70). The goal of the course was to develop students' scientific literacy, comprehension, and communication of complex scientific material. The findings of the research suggested that the writing-intensive format did improve students' scientific writing, as well as content synthesis and communication skills.
2.4. Summing up
There is a large body of research and theory investigating and justifying the relationship between writing and learning across many disciplines in higher education. The issues covered in this literature review suggest multiple ways in which we can understand how writing-intensive learning may facilitate content learning and introduce novice students to the core disciplinary methods of research, argument, and analysis. Many of the studies challenge the separation of disciplinary writing and content learning. As we will see below, the results of our study suggest that LING 220 students believed they had learned more about how to structure a linguistic argument and were more confident about completing written assignments due to the iterative writing assignments, explicit rubrics, and feedback on written work. One participant in the exit interview stated that '[n]ow I know, the way how to write as a linguist, like the way of writing, I would say linguistic writing, it's a little bit different from like other formal essays, so now I kinda know the format of it' (participant 3). This student statement supports Russell's (2013:175) claim that 'attention to writing, then, is not a distraction from teaching content, but a means of teaching it more effectively … writing is rather a shared responsibility, a means of teaching and learning and critical thinking for both students and researchers'.
3. Course context
As noted above, LING 220 is a typical introductory course in linguistics and covers the core components of the field: phonetics, phonology, morphology, syntax, and semantics. Each of these topics is discussed over a two- or three-week period, and during the final weeks of the term individual instructors may introduce other topics such as language acquisition, sociolinguistics, or language change. The course typically enrolls between 100 and 150 students, with a high percentage of EAL (English as an Additional Language) students, typically between 40 and 50% (SFU's average ranges between 25 and 30%). LING 220 functions as a portal course in the Linguistics program. Not only must all students who wish to enroll in Linguistics take this course, but they must also achieve a minimum grade of C+.
For the past decade, at least, the weekly content has been delivered in a two-hour lecture, with an additional one-hour tutorial that focuses on the answers to short homework assignments. The tutorials have a class size of eighteen students and are mostly taught by teaching assistants (TAs), who are master's or Ph.D. students in the Department of Linguistics. The instructor teaches one tutorial as well. The course grade is based partly on these homework assignments, for which the students have to provide brief answers to very focused questions, but do not need to provide an explanation. There are also two or three exams, which either are entirely composed of multiple-choice questions or may include one or two short-answer responses based on a problem set. The textbook used is Contemporary linguistic analysis by O'Grady and Archibald (2012), in a variety of editions.
Our interest in teaching written argumentation in LING 220 grew out of our experience teaching WIL courses in the upper division of our program. LING 220 is not a formally accredited W course, and, as noted above, there was no lower-division WIL course when this research began. This had created concerns for students enrolling in upper-division courses, who found they were not well prepared for constructing linguistic arguments in brief essay form. Although students could take lower-division WIL courses in other departments, these did not address the challenges that writing in linguistics presents. Instructors in our program had also been noting that this gap was forcing them to curtail their writing expectations in upper-division course offerings. As a result, writing requirements even in fourth-year-level courses had been scaled down.
3.1. Initial study
In May 2014, the first author applied for and received a Teaching and Learning Development Grant from The Institute for the Study of Teaching and Learning in the Disciplines at SFU in order to test the feasibility of offering LING 220 as a writing-intensive course. During the summer and fall of 2014 he collected and created materials for teaching written argumentation in linguistics, compiling a digital database of phonology and morphology problem sets and developing efficient but substantive feedback methods. In order to test the appropriateness and efficacy of these materials, we conducted an initial study for a period of six weeks in the spring term of 2015 in a section of LING 220 taught by another colleague.
The course had ninety-seven students distributed across six tutorials, all of which participated in the study. Three of these tutorials were instructed in written argumentation in phonology and the other three in morphology. The tutorials that did not receive the instruction in written argumentation for a particular subject otherwise received all of the same materials as the ones that did. Each of these special tutorials lasted for two weeks and was followed by a midterm exam, as seen in Table 1.
Our main objective in this initial study was to determine whether instruction in written argumentation affects a student's understanding and retention of the content. This was assessed by comparing the performance of the two groups of students on specially created multiple-choice questions within the two midterm exams and the final. The only significant difference we observed was for the phonology question of intermediate difficulty on the first midterm. The group that received instruction in written argumentation had an average of 62%, while the other students had an average of 49%. This difference is significant on its own, but not when the performance of the two groups on the other exam questions is taken into account. Nonetheless, we see that at the very least instruction in written argumentation does not affect student performance negatively, and it appears to be somewhat connected to performance improvement on more difficult questions. The lack of significant differences between the two groups on the subsequent exams is most likely attributable to the fact that the problem-solving techniques applicable in phonology can be transferred to morphology problems in a straightforward manner.
At the end of the course we conducted a brief survey to gauge the effectiveness of our materials and feedback methods. Thirty-two of the ninety-seven students participated in the survey, which informed us that we needed to redesign the morphology material. Students commented that the take-home assignment was too challenging compared to the examples discussed in the classroom.
3.2. Main study
The results that we report in this article are based on a full offering of LING 220 as a writing-intensive course during the summer term of 2015. In the following sections, we discuss the assignments that constituted the core of the instruction in written argumentation, the type of feedback that we provided to students, and two indirect sources of data on the effectiveness of this pedagogy: an online survey, and exit interviews.
The writing-intensive offering of LING 220 in the summer term of 2015 was the only one available to students.1 The course was clearly promoted as a course with an emphasis on writing, but university policy prevented us from offering W credit to the seventy-two students who enrolled, because the course had not been preapproved as writing-intensive. In the second meeting of the class, the students were asked to compose a brief essay discussing a simple morphology problem set on word identification in Nepali (Genetti 2014:148). The purpose of this in-class writing assignment was for us to establish a baseline of each student's composition and argumentation ability before they received explicit instruction on how to accomplish this task. In order to ensure that students approached this assignment with the appropriate attention to detail, it was weighted as 8% of the course grade. The next week enrollment decreased dramatically to forty-three (most likely due to the emphasis on writing), and finally stabilized at forty.
The students were required to turn in six longer take-home essays on a biweekly basis. Before each essay was due, they received explicit instruction on how to discuss in writing a problem set on a specific component of language. The first assignment was a very basic description of the articulatory phonetics of English, the second a phonology problem set on Swampy Cree (O'Grady & Archibald 2012:98), the third a problem set on Swahili morphology (Stewart & Vaillette 2001:143), the fourth on the morphophonology of Chamorro (O'Grady & Archibald 2012:137), the fifth on syntactic constituency in English, and the sixth on first language acquisition (O'Grady & Archibald 2012:362).
The students also wrote two exams that included an essay question. The first exam was written in week six, after the lectures on phonetics and phonology. Students were asked to discuss a problem set on Greek phonology, which included both phonemic and allophonic alternations. The second exam was written in week eleven, after lectures on morphology, syntax, and first language acquisition, and students were asked to discuss a problem from Greek morphophonology.
In the last week of the course, the students wrote an in-class essay on an intermediate problem on the morphophonology of Daga (Genetti 2014:97). The purpose of this assignment was for us to assess the degree to which students' composition and written argumentation skills had improved over the course. In order to provide motivation, the students were told that the grade for this last assignment would replace the grade from the first assignment if it was higher.
3.4. Instruction and feedback
In the tutorials, we discussed the purpose of learning written argumentation and how this skill could be transferred to other academic tasks, such as writing a paper, and even how it would apply professionally outside academia. We then discussed an entry-level problem set for the particular language component that was the focus of the lecture, for example, phonology. First, we covered how we arrive at a correct answer, and then we discussed how the entire argument could be modeled and written. The students were provided both with a flowchart of the argumentation in general and with a detailed write-up of the particular problem set. Then the students had to work on a different problem set and turn in their write-up before the following lecture. Students were given permission to collaborate on figuring out an answer but had to compose their own essays.
The next tutorial in the two-week cycle was dedicated to providing feedback. First, all assignments were assessed by two TAs, who used a rubric but also provided brief comments on the submission itself. The TAs were encouraged to provide positive comments as well as negative ones, and to focus on two or three of the more serious issues, so as not to overwhelm the students. Appendix A provides an example of a corrected submission from the first in-class assignment. In the tutorial, we first discussed the correct solution to the problem set. Then students were given the rubric (Appendix B) and were asked to use it to evaluate a made-up submission created by the instructor. The mock submissions (see Appendix C for an example) typically included errors in argumentation, such as logical leaps or the misapplication of a criterion, for example arguing that complementary distribution between two phones demonstrated that they are allophones of different phonemes, whereas the correct conclusion is that they are allophones of the same phoneme. After the students completed their evaluation of the mock submission, we went through each aspect of the rubric as a class and discussed how the made-up answer should be assessed. As a final step, the students received their own write-ups together with the evaluation sheets prepared by the TAs. The purpose of this approach was to familiarize students with the rubric, to prepare them to receive constructive criticism, and to instill in them the understanding that drafting and revision are integral aspects of the writing process. Finally, each student was required to attend one session with their assigned TA, in which they discussed a problem set and collaborated in the creation of an argumentation outline for its solution.
3.5. Additional data
At the end of the term, we also asked for the students' opinions on their learning experience with the writing-intensive material. First, we asked students to complete an online survey (Appendix D, twenty-nine respondents), which we followed up with exit interviews (Appendix E). The latter were conducted by a research assistant who was not involved in the teaching of the course and were also completed by twenty-nine students.
4.1. Participant data
The results from the exit interviews give us a rough sketch of the forty students who took this course. Of the twenty-nine students who participated in the interviews, eleven were already or intended to become Linguistics majors.2 The other students came from a wide variety of programs such as Computer Science, Economics, English, and Psychology. The full list can be found in Table 2.
Furthermore, the students came from a variety of language and ethnic backgrounds. Only ten of them were native speakers of English; among the remaining nineteen students the most common language was Mandarin (ten), and we also see Cantonese (two), Punjabi (two), Korean, Nepali, Spanish, Tagalog, and Vietnamese. A few students identified as bilingual (English-Punjabi, Tagalog-English, Vietnamese-English). Thirteen students were born in Canada; among the other sixteen their time in Canada ranged between two and fifteen years, with an average of 5.125 years. Twelve students had only been at the university for a year or less, and the maximum was six years (two students). The average time of attendance was 2.2 years. Finally, most students (twenty-two) had had a writing course before taking LING 220.
4.2. Written assignment data
Of the forty students, only twenty completed all four summative written course components: a first assignment, essay questions from midterm 1 and midterm 2, and a last assignment. The take-home assignments were not considered, as we could not be certain that the students did not receive tutoring help with these. Thus, we collected and transcribed the submissions from this smaller set of students to create the data set for this study. The raw data counts for all students, in terms of number of words and sentences, are provided in Table 3. Note that these are aggregate data, for all twenty students. Average counts per student are also included.
5. Analysis methodology
Our goal was to test whether students' written work showed improvement over the course of the term. Improvement in writing is multifaceted and can be measured in different ways, but the fundamental characteristics of 'good' writing are cohesion and coherence, that is, writing that displays connectedness in terms of the entities being discussed (cohesion) and the arrangement of the propositions (coherence). This view is inspired by the work of Halliday and Hasan (1976), where 'cohesion' encompasses semantic relations such as reference, synonymy, and meronymy, while 'conjunction' refers to the types of relations among propositions that can be signaled by conjunctions (e.g. because, if, in other words). A large body of research in cohesion and conjunction/coherence has consistently demonstrated that the two are crucial to the perception of overall coherence in discourse and that a good command of the language (by either first or second language learners) involves sophisticated use of cohesion and conjunction devices (see e.g. Degand & Sanders 2002, Graesser et al. 2014, Halliday & Hasan 1976). Moreover, learning to write within a specific genre, what we here describe as writing in the discipline, leads to better outcomes in understanding and retaining content (see Brownell et al. 2013, Dias & Paré 2000, Russell 2013, Strachan 2008, and others cited in §2). Students in secondary and post-secondary education are learning the language of science and of specific scientific disciplines (Halliday & Martin 1993, Veel 1997).
We were also interested in more low-level measures of text complexity, such as reading level, lexical density, and length of words and phrases. We did not expect a marked improvement in these, and it is difficult to know how much of any possible change was the result of this course in particular or of students' general development throughout the term, as they progress through their university education. We expected, however, that students would pay special attention to those aspects because the course emphasized writing and learning to write in linguistics.
In order to measure cohesion and coherence on the one hand, and linguistic complexity on the other, we used Coh-Metrix, a computer tool that calculates different indices of cohesion automatically3 (Graesser et al. 2004, McNamara et al. 2014, McNamara, Louwerse, et al. 2010). We chose Coh-Metrix because it provides objective quantitative measures of text quality. It has been successfully deployed in assessing student essays and can be a useful tool in exploring different aspects of text quality (Graesser et al. 2014, Graesser et al. 2011, Graesser et al. 2004, Kovanović et al. 2018, McNamara, Louwerse, et al. 2010, Polio & Yoon 2018, Xu & Liu 2016).
Coh-Metrix processes texts making use of in-house lexicons, part-of-speech taggers, and syntactic analyzers. It also applies latent semantic analyses (Landauer & Dumais 1997) to capture word and sentence meaning. Coh-Metrix produces over 100 indices, some of which are not relevant to our study because of the nature or length of our texts. In the Coh-Metrix environment, four groups of measures specifically address our research objective of best capturing the coherence of short essays on data analysis: (a) measures of text readability, (b) measures of linguistic diversity and complexity, (c) measures of connectedness, and (d) the synthesizing measure of easability. We describe each of these groups below.
5.1. Measures of text readability
These are captured in standard measures of readability, the Flesch Reading Ease and the Flesch-Kincaid Grade Level Score (Flesch 1948, McNamara et al. 2014). We also included descriptive measures of sentence length (mean number of words per sentence) and word length (mean number of syllables per word). In general, we would expect the students' work to become more readable as the term progresses, and their sentences to become shorter and more descriptive. It is also possible for mean number of syllables to increase as new, complex terminology is deployed.
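Both indices are simple functions of the two descriptive measures just mentioned: mean sentence length (words per sentence) and mean word length (syllables per word). As a minimal sketch of the standard formulas (taking the word, sentence, and syllable counts as given, since automatic syllable counting is itself an approximation):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: higher scores indicate harder text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A hypothetical 100-word passage in 8 sentences with 140 syllables
# (12.5 words/sentence, 1.4 syllables/word):
ease = flesch_reading_ease(100, 8, 140)    # ≈ 75.7, in the 70–80 'easy' band
grade = flesch_kincaid_grade(100, 8, 140)  # ≈ 5.8
```

Because both formulas depend only on word and sentence length, shorter sentences raise the Reading Ease score and lower the Grade Level at the same time, which is why the two indices move in opposite numerical directions while pointing the same way.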
5.2. Measures of linguistic diversity and complexity
Linguistic complexity can be measured in several ways, but one of the most commonly used measures is the type-token ratio: the number of unique words in a text (types) relative to the total number of words (tokens). Type-token ratio is often adjusted for text length, because longer texts are more likely to have repeated words. Generally speaking, we expect students' type-token ratio to increase over the course of the term, for two reasons, having to do with writing instruction and with discipline-specific instruction. As a result of the writing instruction, we expect students to become more sophisticated writers, who use variety in their work. As a result of explicit linguistic instruction, we expect their technical vocabulary to increase, which will presumably result in more diverse vocabulary.
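The raw ratio can be sketched in a few lines. Note that this is only an illustration: Coh-Metrix reports length-adjusted variants rather than this plain ratio, and the regex-based tokenizer below is a simplification of real tokenization.

```python
import re

def type_token_ratio(text: str) -> float:
    """Raw TTR: unique word forms (types) over total word forms (tokens).
    Case-folded; a simple word regex stands in for real tokenization."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Repetition lowers the ratio: 'the' occurs twice in this nine-word sentence.
ttr = type_token_ratio("The quick brown fox jumps over the lazy dog")  # 8/9 ≈ 0.89
```

Because the raw ratio inevitably falls as texts grow longer, comparisons are only meaningful between texts of similar length, which is why length-adjusted measures are generally preferred.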
At the structural level, Coh-Metrix can also evaluate syntactic complexity, by examining the mean number of words before the main verb (as an indicator of subject complexity and focus constructions) and the average number of modifiers per noun phrase. Those two measures indicate embeddedness and structural complexity.
5.3. Measures of connectedness
The most local and syntactically embedded measure of connectedness is referential cohesion. It is local and syntactic because it occurs across sentences and clauses, relating entities (as opposed to relational coherence, which relates propositions). Coh-Metrix calculates several indices of referential cohesion, but all of them are based on the notion of overlap. For instance, local cohesion is measured in terms of the overlap between consecutive adjacent sentences, and whether that overlap is manifested as the overlap of nouns, nouns and pronouns, or word stems (regardless of class). Global cohesion assesses the overlap between all of the sentences in a paragraph or text. Because most of our analyses involved using the entirety of the students' output as data (i.e. all of the students' writing for essay 1), we did not consider measures of referential cohesion. The students were discussing similar topics and concepts, and thus overlap is likely to be high when all of their writing for one assignment is collated. That would not mean, however, that the writing is more or less cohesive. In addition, McNamara, Crossley, and McCarthy (2010), in an analysis of college essays, found that essay quality was unrelated to cohesion, and other studies reported in McNamara et al. 2014:107–12 suggest that, as writers' sophistication increases, explicit indicators of cohesion decrease. The question of how much cohesion leads to a good text is still quite open. Coh-Metrix is partly limited because it measures (referential) cohesion as overlap, whereas the view of cohesion in Halliday & Hasan 1976 is more general and includes semantic relations (synonyms, hypernyms, part-whole relations, etc.).
At the more global level of connection, we included indicators of linking between propositions. Propositions tend to be related through conjunctions that indicate an additive, temporal, or causal connective. Coh-Metrix provides indices for five different types of connectives: causal, logical, adversative/contrastive, temporal, and additive. Connectives facilitate text comprehension, in particular for readers with lower levels of fluency (Degand & Sanders 2002, Millis & Just 1994).
5.4. Measures of easability
McNamara et al. (2014:86ff.) report that, of all the measures of text complexity and readability, five are the most indicative, and they label these 'measures of text easability'. They are: narrativity, syntactic simplicity, word concreteness, referential cohesion, and deep cohesion. In general, there is a correlation between measures of easability and grade-level estimates, with texts at lower grade levels having a stronger narrative structure and simpler syntax. Word concreteness also decreases across grade levels. Cohesion (both referential and deep) does not, however, exhibit such a strong correlation with readability. This is because, McNamara et al. (2014) argue, cohesion is orthogonal to readability. Cohesion is nevertheless important because referential cohesion links sentences to each other through words and ideas that overlap the sentences. Deep cohesion measures the degree to which the text contains causal and intentional connectives to signal the causal and logical relations within the text.
Following the terminology suggested by McNamara et al. (2014), we use the word measure to describe theoretical constructs, as we did in our description in the previous section. In this section, we refer to indices to describe how Coh-Metrix assesses each measure. We report below results for those indices.
6.1. Indices of text readability
The four sources of data on students' writing showed slightly varying levels of readability, but in general they point to texts that are easier to read as the term progresses. Table 4 shows the results. The Flesch Reading Ease formula produces a number between 0 and 100, where a higher score indicates easier reading. The range 60–70 corresponds to an eighth- and ninth-grade reading level, and the scores on our assignments begin at the lower end (closer to grade 9) and increase slightly as the term goes on. On this measure, an increase means an easier-to-read text, so that toward the end of the semester the texts are closer to a seventh-grade level (70–80 range).
The Flesch-Kincaid Grade Level converts the Reading Ease into a grade-school level (roughly 0 to 12, though scores can exceed 12 for very complex text). An increase in this number means an increase in grade-level difficulty, making it the obverse of the Flesch Reading Ease metric. We can see that our results on this measure are slightly different from those of the Flesch Reading Ease, but point in the same direction, toward simpler texts: the texts are placed at roughly a grade 10 level for the first assignment and decrease over the course of the term, ending at about grade 8. This is not necessarily a negative result. Both the Flesch-Kincaid and the Flesch Reading Ease indices are a function of word and sentence length, and, though with some variation in the essays in the middle of the term, sentence length decreased slightly, with word length remaining more or less constant, as seen in Table 4.
To more clearly illustrate how simpler sentences may be better, let us examine two samples. These are passages from the same student (student #5), one from the first assignment (example 1) and one from the last (example 2). In both cases, the structure and the content are similar. Both are describing data and providing evidence for the analysis of the data. In the first case, the student included four dependent clauses in the sentence, which makes it complex and harder to follow. This sentence has a grade level in Coh-Metrix of 14. By contrast, the passage in 2 breaks down similar ideas across three sentences and a total of five clauses. The grade level for this passage is 4.5, indicating a clearer, easier-to-read text.
(1) The only Nepali word in sentence three that is different than sentence one is usko, which would mean usko = 'his', since that is the only word that is different between the two sentences.
(2) Next, we can look at 2, 8, 16, and 20. By comparing, 'mother' would mean [ina], as they all have that morpheme in common. Since we know what 'mother' is, by elimination, [ya] would mean 'your'.
6.2. Indices of linguistic diversity and complexity
Type-token ratio measures complement the reading ease indicators and provide a more nuanced picture. The type-token ratio for content words increased throughout the term (with a dip for essay 1), pointing to a higher use of technical terms (see Table 5). We compared the word lists of all four course components, and some of the words that appear in the last assignment, but not in any of the previous work, include gloss, alveolar, bound, and comparative. These words indicate a more sophisticated content knowledge. More importantly, we find for the first time in the last assignment a number of words used to describe the data-analysis process: anomalies, assumption, hypothesized, generilizations (sic).
In terms of syntactic complexity, the results show few changes over the course of the term, but a tendency toward less complexity, in terms of both left-embeddedness (number of words before the main verb) and mean number of modifiers per noun phrase. This is consistent with the results of the text-readability measures.
6.3. Indices of connectedness
We first concentrate on the incidence and distribution of connectives. The incidence score is the number of occurrences per 1,000 words, for all connective types. The picture is quite complex, and no significant change is observed over the course of the term. Before we examine the data in more detail, it is important to point out that, in terms of distribution, Coh-Metrix classifies connectives into the following five classes.
• Causal (because, so)
• Logical (and, or)
• Adversative/contrastive (although, whereas)
• Temporal (first, until)
• Additive (and, moreover)
Coh-Metrix also distinguishes between positive connectives (also, moreover) and negative connectives (however, but). Since we believe that both play an important role in argumentation, we did not include the positive/negative distinction in our analyses.
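An incidence score of this kind is simple to compute: count matches against a connective list and normalize per 1,000 words. The category lists below are short illustrative samples, not Coh-Metrix's actual (much larger) lexicons:

```python
import re

# Illustrative, deliberately incomplete connective lists, one per category.
# Note that 'and' is counted as both logical and additive, as in the
# category examples above.
CONNECTIVES = {
    "causal": {"because", "so", "therefore"},
    "logical": {"and", "or"},
    "adversative": {"although", "whereas", "but", "however"},
    "temporal": {"first", "then", "until"},
    "additive": {"and", "moreover", "also"},
}

def connective_incidence(text: str) -> dict:
    """Occurrences per 1,000 words for each connective category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = len(tokens) or 1  # guard against empty text
    return {
        category: 1000 * sum(tokens.count(word) for word in words) / n
        for category, words in CONNECTIVES.items()
    }
```

For example, a twenty-word answer containing one because and two ands would score a causal incidence of 50 and a logical incidence of 100.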
In general, Figure 1 shows that connective use decreased slightly from the beginning to the end of the course, with a peak at essay 1. In all types of connectives, the changes were minor, with a general tendency toward employing fewer connectives (a change from 108.1 per 1,000 words in the first assignment to 102.2 in the last assignment).
6.4. Indices of easability
Easability scores are provided as percentages across the four course components in Figure 2. Recall that easability is an aggregate of various measures that attempt to capture how easy a text is to read. The measures also vary across different disciplines. For instance, language arts texts tend to be higher in narrativity than social studies or science texts (McNamara et al. 2014). What we see in Fig. 2 is an increase in narrativity and syntactic simplicity (with a large decrease in narrativity for essay 1). As we pointed out earlier, simplicity in this case tended to indicate shorter, more to-the-point sentences, which is a desirable outcome. The word-concreteness levels are quite low, and there is no clear trend, but we see that referential cohesion seems to decrease. This is probably due to shorter sentences, with less overlap between them. Deep cohesion (the extent to which logical relations are signaled with a connective) also sees a trend toward lower scores.
In summary, over the course of the term, students learned to unpack complex sentences into simpler, shorter sentences. They learned more technical terminology, and also words relating to analyzing and describing data. Their use of connectives was not very sophisticated, however, as they did not employ many to signal connections between clauses and sentences. The easability scores show an increase in syntactic simplicity and a decrease in deep cohesion, indicating that students did not rely on connectives to mark logical relations. This should perhaps be the next step in their writing development: once they have broken their sentences down into easier-to-process chunks, they should signal the relations between those chunks more clearly.
7. Results of survey and exit interviews
During the last two weeks of the course, the students were invited to complete an anonymous online survey about their experience, and to follow it up with an exit interview. A total of twenty-nine students participated in these. Note that these were offered to all of the students, which is why we have more participants in the survey and interview than in our set of assignments for the study.
7.1. Online survey
The online survey (the questions are reproduced in Appendix D) comprised nine Likert-scale questions, which were scored from 1–5 (with 1 being the least favorable score), two 'yes/no' questions, and two open-ended questions. The results presented in Table 6 show that, overall, students had a positive perception of the impact of the writing-intensive learning approach.
On the first 'yes/no' question, 86% of the respondents (twenty-five of twenty-nine) thought that they learned something important in the tutorials. In the second, 79% of the respondents (twenty-three of twenty-nine) thought that they now felt more confident about completing written assignments. We chose yes/no questions for these two items instead of open-ended questions because of our plan to explore these issues in more detail in the exit interviews.
The first of the two open-ended questions, a follow-up to the question that asked how difficult it was to write the essay, asked: 'Please explain why it was easy or difficult and what exactly caused difficulties (e.g., following the structure of the linguistic argument, expressing the argument in written academic English, etc.)'. Many replies (twelve of twenty-nine) stated that the detailed explanation of how to structure the linguistic argument was helpful in terms of constructing the written assignment. Six students mentioned language or grammar as the main difficulty.
The second question asked: 'What was the most helpful aspect of the second tutorial?'. More than half of the replies (sixteen of twenty-nine) remarked that the combination of working on the rubric with a middle-of-the-road mock answer, receiving their individual feedback, and being given a model of a complete essay was very helpful to them. Four students mentioned that group work was also very helpful for understanding the process of written argumentation.
7.2. Exit interviews
There were also twenty-nine students who participated in the exit interviews, but we cannot know if they are the same ones who participated in the online survey since the survey was anonymous. In addition to the demographic data presented in §4.1, we also collected the students' impressions on the following issues (see Appendix E for the full questionnaire).
We asked students how confident they felt about their academic writing ability before they took the course. Fourteen reported that their confidence had been low, nine said that their confidence had been at a middle level, and six had had high confidence. It is important to note that even the last six added that, although their confidence had been high in terms of general academic writing, they felt that writing in linguistics was something quite different, and they had not been entirely certain how their skills would transfer.
We also asked students how confident they felt about their writing after the course. All but one reported significant improvement. Again, for the students with high confidence, the improvement was felt to be specifically about writing in linguistics. Here are two indicative statements, uncorrected:
• 'A little more confident, cause I know what it's expected in my writing, so it's a bit easier now.' (participant 8)
• 'Pretty confident like I think I can go through it straightforward it's pretty simple.' (participant 5)
The exit interviews also reveal that the students found the various components of the writing-intensive approach helpful. For the biweekly writing assignments, all students said that they were helpful, although two students said that they were not challenging enough. The value they identified was that the assignments provided a good way to prepare for the exams, not only in terms of writing the problem essay, but also in understanding the concepts taught in each module (phonology, morphology, and syntax). Of the twenty-nine students interviewed, twenty-four replied that the one-on-one session with a TA was also very helpful, especially for those who were able to have the meeting early in the term, as it helped them to identify aspects of the argumentation process that they were skipping over or to better understand concepts such as 'minimal pair' or 'complementary distribution'. Finally, twenty-seven of the students found the rubric method of providing feedback helpful, mostly for its specificity and its function as a blueprint for constructing their essays, and twenty-five of them said that they would recommend the course to other students.
7.3. Implications for the curriculum
The Department of Linguistics at SFU considered the reports on this study during a retreat on curriculum changes as it prepared for an external review in early 2016. Although there was broad support for introducing written argumentation at the lower division, the reduction in enrollment was too alarming to allow changing LING 220 to a WIL course. Instead, the department proposed, and the external reviewers supported, the creation of a separate lower-division course that focuses on written argumentation (LING 282W), which has been offered since 2017. The introduction of this course has had wider implications for the entire undergraduate curriculum. The first is that most of the 300- and 400-level courses now list a lower-division W course as a prerequisite. This has led to increased demand for writing support, so the department was able to successfully apply for funding to establish a Writing in Linguistics support center, where students can receive writing help on their course assignments. The department began assessing the effect of these changes on students' academic writing in fall 2018 as part of SFU's overall inclusion of educational goals in its curriculum.
We have presented the findings of an initial study on the feasibility of offering an Introduction to Linguistics course under the guidelines of a writing-intensive framework, as well as the effects of such an approach on both student retention of content and improvement in writing. Our analyses sought to discern whether students' writing improved over the course of a term as a result of a writing-intensive intervention. For that purpose, we used a tool that provides measures of coherence and cohesion in students' written work, Coh-Metrix. Our results show that students' writing became easier to read in terms of reading level and syntactic complexity, mostly because sentences tended to become shorter and better structured. There was also improvement regarding the understanding and use of discipline-specific terms. However, it is also clear that students still have far to go in terms of making connections between sentences clear through the use of connectives.
Our conclusion is that some of these interventions can be implemented in any introductory course, whether officially writing-intensive or not. The students were not explicitly taught about shorter sentences or connectives, and perhaps more direct instruction in those areas would lead to even better outcomes. Further aspects of the data could be analyzed, including argumentation structure and coherence relations. Additional work could include probing whether there was a difference in students' retention of content, and how they fared in upper-division courses. Our department has recently made changes so that a lower-division writing-intensive course is a prerequisite for most upper-division courses, and we hope to study students' performance through the program. The two courses with writing-intensive components discussed in this article (the initial and main study courses) helped redirect our department's attention to writing in the discipline, attention that had begun to slide a few years ago under enrollment pressures, as our classes had grown in size, and because of an increase in students with English as an Additional Language. We believe this renewed focus on writing will result in better retention of content and better communication skills.
In all, the study sheds light on the kinds of both disciplinary and general knowledge that linguistics courses provide and shows the usefulness of cohesion and coherence measures in the analysis of textual data.
8888 University Drive
Burnaby BC, Canada V5A 1S6
[revision invited 30 May 2018;
revision received 1 July 2018;
revision invited 14 October 2018;
revision received 25 November 2018;
accepted 20 December 2018]
Appendix A. Sample Assignment (corrected)
Appendix B. Grading Rubric
The rubric was based on the following sources:
| Criterion | No improvement needed | Minor improvement needed | Major improvement needed |
| --- | --- | --- | --- |
| Knowledge/understanding of the basic linguistic concepts | The essay demonstrates a thorough understanding of the basic linguistic concepts. | The essay demonstrates some understanding of the basic linguistic concepts. | Lack of understanding of the basic linguistic concepts is obvious. |
| Analysis | Thorough examination of the data. Patterns discovered and described in an efficient way. | Adequate examination of the data. Some degree of comparison, contrast & evaluation is present. | Little or no examination of the data. The essay lacks comparison, contrast, and evaluation. No patterns discovered. |
| Use of examples | Specific and relevant examples from the dataset are used to support all the claims. Insightful and applicable connections between examples are made. | Some examples from the dataset are used to support most of the claims. Sufficient connections between examples are made. | No examples from the dataset are used, or incomplete examples are used to only partially support the claims. Irrelevant examples are used. No or few connections between examples are made. |
| Introduction | Introduction contains detailed background information about the linguistic phenomena in question. The goal of the analysis is clearly stated. | Introduction adequately explains the background. The goal of the analysis is somewhat clear. | Background represents a random collection of facts that are unclear or not related to the topic. The goal of the analysis is vague or not stated. |
| Body: Arguments | Well-developed arguments are directly related to the problem. | Arguments are related to the problem, but some may have fragmentary evidence. | Arguments are not clear or not related to the problem. |
| Body: Evidence | Supporting examples are concrete and well described in the text. | Some examples are present, but they do not provide sufficient support for the arguments. | Examples are not relevant to the argument, or no supporting evidence is used. |
| Body: Development | The narrative is consistent, coherent, and logical. Logical progression of ideas with a clear structure that enhances the thesis. | Relatively logical progression of ideas. Organization is clear. | The narrative is undeveloped and/or illogical. No discernible organization. |
| Body: Transitions | Transitions are smooth and graceful. | Transitions are present, but some may be used incorrectly. | Transitions are not present. |
| Conclusion | Conclusion effectively answers the question asked in the problem. | Conclusion is recognizable and ties up most of the ideas. | Conclusion is absent or does not answer the question. |
| Style | Style is appropriate and consistent throughout the essay. | Style is inconsistent at times: too rigid or too conversational. | Mostly inappropriate style. |
| Language | Appropriate use of idiomatic phrases. Good word choice. | In some cases, word choice is inaccurate, inappropriate, or unidiomatic. | Writing is confusing and hard to follow. |
| Grammar | Sentences are strong, with varied structure. There are no errors in grammar. | Writing is mostly clear and sentences have relatively varied structure. There are some minor errors in grammar. | A high density of errors in the use of articles, plurals of nouns, form and tense of verbs, subject-verb agreement, etc. Contains fragments and/or run-on sentences. |
| Spelling, punctuation, capitalization | Punctuation, spelling, and capitalization are correct. No errors. | Some errors which are minor in nature and don't detract from the overall meaning of the paper. | Distracting errors in punctuation, spelling, and capitalization. |
Appendix C. Example of an intentionally incorrect response to a problem set for students to evaluate using the rubric
'Consider the following data from Swampy Cree (Native Canadian Language of the Algonquian family, from O'Grady and Archibald, 2012: 98) and provide a brief essay about the phonemic distribution of voiceless and voiced stops (e.g., [p] and [b], [tʃ] and [dʒ], etc.).'
[ko:go:s] 'pig'    [tahki] 'often'    [namwa:tʃ] 'not at all'
Swampy Cree mock answer. Not correct.
This exercise is about data from Swampy Cree and it asks us to determine the phonemic distribution between the following pairs
[p] and [b], [t] and [d], [k] and [g], [tʃ] and [dʒ]
[p] appears in many different environments but [b] only appears in the middle of a word. Therefore we can say that they are in contrastive distribution and allophones of the same phoneme. In other words, /p/ becomes b in the middle of a word.
The same pattern holds for the other pairs.
The allophone [d] appears only in the middle of a word.
The allophone [g] appears only in the middle of a word.
The allophone [dʒ] appears only in the middle of a word.
So the phonemes are /p/, /t/, /k/ and /tʃ/ and [b], [d], [g], and [dʒ] are the allophones.
The general rule would be that voiceless stops and affricates become voiced in the middle of the word.
Appendix D. Online survey questions
1. How easy was it to follow the instructor when he was explaining the structure of linguistic argumentation during the tutorial?
2. How well do you think you remember the structure of linguistic argumentation now?
3. How well do you think you understand the structure of linguistic argumentation now?
4. How clear was the model of the written essay provided by the instructor?
5. How helpful was the model of the written essay provided by the instructor?
6. How difficult was it to write an essay about the data set?
7. Please explain why it was easy or difficult and what exactly caused difficulties (e.g. following the structure of the linguistic argument, expressing the argument in written academic English, etc.).
8. How helpful was the second tutorial in terms of addressing some issues you may have encountered while writing?
9. What was the most helpful aspect of the second tutorial?
10. Overall, how prepared did you feel for the exam after attending the writing tutorials?
11. How prepared did you feel for the exam after completing the writing assignment?
12. Do you feel that you learned something important in the tutorials about written argumentation?
13. Do you now feel more confident in dealing with written assignments?
Appendix E. Exit interview questionnaire
1. How many years have you been at SFU?
2. What is your major?
a. If Linguistics: Why did you choose Linguistics?
b. If not Linguistics: Why did you choose this major?
3. Are you a native speaker of English?
a. If not: What is your native language?
b. How long have you been in Canada?
c. Have you attended the preparatory school on this campus?
4. Have you had a writing course before?
5. What was your confidence in your writing abilities before this course?
6. What is your confidence now?
7. Was your presentation with the TA helpful?
a. If yes, what was helpful?
b. If yes, what did you learn?
c. If no, what could be changed?
8. Were the writing assignments helpful?
a. If yes, what was most helpful?
b. If no, what was least helpful?
9. Was the rubric feedback method helpful?
a. If yes, what was the most helpful aspect?
b. If no, what was the least helpful aspect?
10. Would you recommend this course to others?
* This work was funded by a Simon Fraser University Teaching and Learning Development Grant to Pappas and an Insight Grant from the Social Sciences and Humanities Research Council of Canada to Taboada. We acknowledge that we live and work on unceded territory of the Coast Salish Nations. We would like to thank our students for their participation in the study, our teaching assistants for their enthusiasm in trying something new, and Dr. Anne Rimrott for allowing us to conduct the initial study in her course. We thank Irina Presnyakova for assembling the first draft of the rubric. Thanks also to our transcribers, Luca Cavasso, Emilie Francis, Mara Katz, and Soraya Mazhari, and in particular to Luca Cavasso for data analysis. The Coh-Metrix team at the University of Memphis helped us run the analyses on our larger data sets. Finally, we would like to thank the two anonymous referees for their helpful feedback and the editors for their patient guidance. All errors and omissions are our own.
1. We did not conduct a comparison between W and non-W approaches during the summer term. This would have required offering two parallel sessions with different instructors, an outlay of resources that the department could not afford.
2. Note that we interviewed twenty-nine students, but have full sets of written assignments from only twenty (see §4.2).