
Special Issue of MonTI-CTS SPRING-CLEANING: A CRITICAL REFLECTION
Editors: María Calzada Pérez and Sara Laviosa

This special issue is intended to be a self-reflexive research work that looks back and forward upon corpus-based translation studies (CTS). This is not the first time such an endeavour has been carried out. The reason is quite obvious. It is always healthy and productive to assess and re-assess the state of the art before we put forward new (un)desirable premonitions. And, in corpus-based studies, the past and the future merge at an incredibly fast speed.

Like other publications in the field (see, by way of example, Laviosa 1998; Laviosa 2002; Olohan 2004; Xiao 2010; Kruger et al. 2011), looking back takes us to at least 1993, when Mona Baker officially envisaged a turning point in the history of the discipline:

I would like to argue that this turning point will come as a direct consequence of access to large corpora of both original and translated texts, and the development of specific methods and tools for interrogating such corpora in ways which are appropriate to the needs of translation scholars. (Baker 1993: 235)


Baker was not the first person to undertake corpus-based research (see, for example, Gellerstam 1986; Lindquist 1989), but she was undoubtedly the scholar who most forcefully predicted what the future had in store. And her premonitions were realized in virtually no time. Already in 1998, there was enough corpus-based work for Sara Laviosa to put forward possibly the best-known compilation on the subject, in a special issue of Meta: Journal des traducteurs. By 2004, there was, in Baker’s (2004: 169) own words, “too much rather than too little to go on” in corpus-based studies.


Indeed, research has grown exponentially from 1993 onwards, as all monographs testify, in the very aspects Baker had anticipated. Corpora became larger and larger, and then smaller and smaller (but more specialized). They are, as Xiao & Yue (2009) show us, monolingual and multilingual; parallel, comparable and comparative; general and specialized; and they adopt simple or complex configurations, as Zanettin (2012) reminds us when he talks about star- or diamond-shaped corpora. They are built upon multiple layers of parameters (cf. Laviosa 2012).


Methods (and theoretical results) have also proliferated and have meant “new ways of looking at translation” (Kenny in Laviosa 2011: 13). Drawing on Partington, Duguid & Taylor (2013: 13), these new perspectives can be said to derive from different forms of comparison. Thus, simple comparisons entail the analysis of two different subcorpora (as when Moropa (2011) studies a set of texts in English vis-à-vis their Xhosa translations); serial comparisons involve the contrastive analysis of corpus A and corpus B, then corpus A and corpus C, and so on (as when, for instance, Bosseaux (2006) examines Virginia Woolf’s The Waves and two of its French translations). Multiple comparisons occur when corpus A is set against a pool of subcorpora at once. Partington et al. (2013: 13) explain that “those studies which employ the BNC [British National Corpus] or the Bank of English [BoE] as a background or reference corpus are of this multiple-comparison type” (for example, when Kenny (2001) double-checks her GEPCOLT results against the BNC, she is performing multiple comparisons). Diachronic comparisons involve the exploration of translation-related corpora over time and are still, admittedly, rare (Calzada Pérez (2017, 2018), however, does precisely this with her European Comparable and Parallel Corpus archive, ECPC). All these comparative methods have been put at the service of, most notably, descriptive and applied translation studies. The aim has been to unveil regularities of various kinds (as Zanettin (2012) most aptly exemplifies): of translation, of translators, of languages, of learning behaviour, of interpreting protocols.


Corpus tools have also been devised at a frantic speed. There are all kinds of programs for each of the stages of compilation: web crawlers (some of them specialized in corpus building, such as BootCaT); editing suites for a wide variety of formats (from raw .txt corpora to XML marked-up and annotated corpora); parsers, taggers and annotators (such as CLAWS, TreeTagger, FreeLing, USAS); corpus management systems of very different types (like IMS Open Corpus Workbench, MODNLP, CQPweb, Sketch Engine, Wmatrix); and concordancers (like AntConc, WordSmith Tools, TCA2, Glossa). There is also a wide variety of plug-ins generating all kinds of information for analysis: statistics, word lists, keyword lists, concordances, collocates, word clouds, word profiles, tree graphs.


With such exponential growth, some predictions have been fulfilled and others have been abandoned. Hence, we believe it is time we paused and reflected (critically) upon our research domain. And we want to do so in what we see as a relatively innovative way: by importing Taylor and Marchi's (2018) spirit from corpus-assisted discourse studies (CADS) into CTS. Like them, we want to place our emphasis precisely on the faulty areas within our studies. We believe that, rather than sweeping them under the carpet, we can learn valuable lessons from them. Thus, we aim to deal with the issues we have left undone or have neglected. In short, and drawing on Taylor and Marchi’s (2018) work, we propose to devote this volume to revisiting our own partiality and to cleaning some of our dustiest corners.


Regarding partiality, Taylor and Marchi (2018: 8) argue that 

Understandably, most people just get on with the task of doing their research rather than discussing what didn’t work and how they balanced it. However, this then means that any newcomers to the area, or colleagues starting out on a new project, have to reinvent those checks and balances anew each time.


Going back to our previous research, identifying some of its pitfalls, and having another go at what did not work is a second chance we believe we deserve. Looking at objects of study from various viewpoints (through new personal projects or joint efforts) may bring about a polyhedric multiplicity that, we think, will add to what we already know. Plunging into (relatively) new practices, such as triangulation (see Malamatidou 2017), from our CTS springboard may be one of the ways in which we can now go back to post-modernity and do things differently.

As to dusty corners (“both the neglected aspects of analysis and under-researched topics and text types”, Taylor and Marchi 2018: 9), we share many of those presented by Taylor and Marchi at the Corpus Assisted Discourse Studies Conference in 2018. Thus, we need further methods to identify (translated) absence; we could benefit from further protocols and tools to delve into similarities (as well as differences); and we would do well to concentrate on voices that are still silent, non-dominant languages, non-named languages and multimodal texts, amongst many other concerns.


The present CFP, then, invites theoretical, descriptive, applied and critical papers (from CTS and external fields) that contribute to tackling CTS partiality and dusty corners of any kind. We particularly (but not exclusively) welcome papers including:
• critical evaluation of one’s own work
• awareness of (old/new) research design issues
• use of new protocols and tools to examine corpora
• identification of areas where accountability is required and methods to guarantee accountability
• cases of triangulation of all kinds
• studies of absences in originals and/or translations
• studies of new voices, minoritised (and non-named) languages, multimodal texts, etc.
• proactive proposals to take CTS forward

Practical information and deadlines
Please submit abstracts (in Catalan, English, Italian, or Spanish) of approximately 500 words, including relevant references (not included in the word count), to both guest editors.
Abstract deadline: 1 November 2019
Acceptance of proposals: 1 January 2020
Submission of papers: 31 May 2020
Acceptance of papers: 15 September 2020
Submission of final versions of papers: 15 November 2020
Publication: December 2020
