Maartje De Meulder and Andy Carmichael
Interpreting from a signed into a spoken language can be the ‘elephant in the room’. Signed language interpreters often perceive that this is their weakest working language direction (Napier, Rohan & Slatyer 2005). For deaf people to trust that an interpreter produces an accurate and effective rendition is often a leap in the dark because we do not have a real-time mechanism to monitor how we are being interpreted. Most of the time the elephant just stays where it is: we go about our business and do not talk about our concerns or curiosity regarding signed to spoken interpretation, and just hope that it isn’t too bad.
This is no different for deaf academics, even though, in our profession, our signed utterances typically have to be interpreted into a spoken language which we may well be able to read and write to a high academic standard. Therefore, it is crucial that interpretations in this language direction are of high academic quality. How can we evaluate this, and is it worth doing so at all?
Deaf academics and interpreters use specific strategies to assess (the quality of) the spoken language output in situations of simultaneous interpreting, for example when giving a presentation.
Some of these strategies are proactive:
- giving interpreters a full written-out version of the presentation;
- giving interpreters a list of terms with phonetic descriptions of how to pronounce specific words;
- making sure interpreters read relevant publications so the register they use reflects that of the deaf academic;
- pre-meetings to discuss the deaf academic’s signs for specific/technical language;
- a full ‘dry run’ of the presentation together, in person or online.
Other strategies enable real-time control:
- asking for a team interpreter to relay the spoken interpretation through sign language, i.e. do a re-interpretation (this obviously has several drawbacks);
- using real-time captioning or CART services (which of course can have errors too).
Yet other strategies are post-situ:
- discussing with interpreters how they got on with interpreting into a spoken language (e.g. choice of specific words or phrases);
- checking with friends or colleagues who are familiar with the deaf academic’s discourse and comfortable giving feedback (in practice few people can do this, because they are either unaware of the deaf academic’s register, not really comfortable giving feedback, or have no idea what sign language interpreters are actually doing);
- audio-recording interpreters, and discussing the transcript with them.
The last strategy is the most time-consuming but also seems to be the most effective method to assess signed to spoken language interpretation (although not in real time), especially when working with designated interpreters.
This post focuses on one such strategy: discussing a verbatim transcript. The first time Maartje used this strategy was after her PhD defence in 2016. Her signed defence was video-recorded, and there were cameras on the interpreters as well so their performance (both visual and audio) was recorded too. The spoken interpretation was transcribed verbatim and compared against the signed version (see De Meulder, Napier and Stone 2018).
Working together for a keynote presentation
In July 2019, Maartje gave a one-hour keynote presentation at the World Congress of the World Federation of the Deaf (WFD) in Paris: “Sign language rights for all, and forever?”. The full presentation in International Sign is available here, as well as the slides. The keynote was based on a presentation she gave a few months before at a conference in Oslo, although the beginning and end of the presentation were different, and some of the arguments were fine-tuned. Andy interpreted this presentation.
Maartje specifically asked for Andy to interpret her keynote in Paris too, since they could build on their experience working together in Oslo and because he’s one of her designated interpreters for conferences where English is the main spoken language. They also see each other semi-regularly socially, which means they know each other outside professional contexts too, and are familiar with each other’s way of thinking, sense of humour, and use of different registers.
At the WFD conference, there were about 800 people in the room and the presentation was live-streamed to two other rooms containing several hundred more people.
The audience was a very mixed group of deaf people from around the globe, with varying abilities in International Sign. For the presentation Maartje signed International Sign and Andy worked to English. This was interpreted back into French Sign Language (LSF) on the stage and also on the main screen, since LSF was one of the official conference languages (see picture above). It was also interpreted back to a number of national signed languages, by (hearing and deaf) interpreters who were standing or seated at various locations in the audience in the three rooms, and who had headphones to hear Andy’s voice. Several other deaf interpreters in the audience worked with deafblind participants. There were also English captions on one of the side screens, provided by automated speech recognition technology. However, the quality of the captions was poor, and according to the translator who transcribed the interpretation for this post, the captions on the day only covered 60 to 70 percent of what Andy said.
Maartje sent Andy slides and notes beforehand. During the lunch break right before the presentation, we had some 15 minutes of prep time in the interpreters’ room, mainly to brief the LSF interpreters who would also be on the stage. Andy had an extremely high workload, not only on the day itself but also on the days before (and after) the talk, with the presentation happening on the third of five straight 12-hour working days.
After the conference, Andy realized that both the signed and the spoken version were formally recorded by the conference team and could therefore be compared against each other. We got permission from conference staff to receive the recordings (including the captions), and asked a translator to transcribe verbatim the voice recording of Andy’s target text rendition. This gave us a transcript of 9 pages (6,300 words).
Comments on the transcript
Maartje is happy with the quality of the signed to spoken rendition. She generally finds it to be an accurate reflection of her personal and academic register (i.e. how she would talk to people about the topic of this presentation, and how she would write about it academically). Actually, this should not come as a surprise: we were both well prepared, we had worked together on this presentation before, and we are well attuned to each other. This shows the value of a designated interpreter model. The transcript also makes clear that the rendered spoken output for a specific presentation is not an ad hoc rendition but the product of collaboration, which must be seen on a timeline with previous collaborations, preparation, debriefing, knowing each other in other contexts, and so on.
So rather than really using the transcript to assess the quality of the spoken language output, our discussion was more about interpreting practices and strategies: for example, nuances and word choices, audience design, and the benefits for deaf academics of seeing a verbatim transcript of a signed to spoken language interpretation.
Word choices, and the importance of audience design
[Time indications in the sections below refer to the YouTube video of the presentation.]
When reflecting on the transcript, in some instances Maartje would have made different word choices, such as when Andy said “But many of these languages still cease to exist. They become extinct because nobody uses them” – Maartje actually meant and also signed that speakers drop the languages they speak, cease to use them because there is nobody to use them with. Andy added some of this in his next sentence: “There’s no reason or rationale for using them, so they then switch to using other majority languages.”
In another instance Andy said “… so I’m interested in how language policies, uhh, connect to politics, political theory.” The brief hesitation and repetition here were also because Maartje signed “POLICY” in BSL, then “POLITICS” as she knows it from VGT, and after that “POLITICS” in ASL (2:42-2:44 and video clip 1).
A few renditions (maybe two over the whole presentation) were not exactly what Maartje meant, but all in all did not completely change the source message. For example, Andy said “So this language deprivation concept is very much from the United Nations, we see this less so in the European context”, while Maartje actually signed “United States”.
When reflecting on the transcript, we discussed that a lot depends on the context and on audience design. Languages “ceasing to exist” and “becoming extinct” might not be the proper lingo for an academic applied linguistics conference audience, but in the context of this specific WFD conference it might be appropriate, even if it was not Maartje’s exact intention. The context of the WFD conference (an 800-strong audience, time pressures, etc.) also did not warrant interruption. Interpreters often have to make split-second decisions, evaluating how worthwhile it is to interrupt someone. Also, we did not discuss specific word choices in advance (which would have been more necessary for an academic conference) but rather focused on the key message Maartje wanted to convey, and on the strengths and weaknesses of each argument Maartje presented, because these were the core of her presentation.
Andy, for his part, noticed there was some “interpreted text” in the rendition, and also that he used the word ‘so’ too much. An example of “interpreted text” (which shares features with “presenter speak”) is “There are seven main claims, you may well have more, please do uuh inform me of them” (6:00), where right at the end of the sentence the word “them” should have been “those” (Maartje asked the audience to let her know if they thought of other claims). Another example contains a false start and then an ungrammatical sentence: “The first argu…, the first thing that I the first argument that I propose: sign languages are among the languages that deaf people use…” (7:26).
Spoken language interpretation as a relay for other interpreters
For the presentation at the WFD congress, the interpretation into English was not only meant for the audience, but also for the national (hearing) sign language interpreters in the audience. Andy thus functioned as a relay for them. This influenced his target lexical and grammatical choices a great deal. Because his colleagues will often have English further down their repertoire than native or near-native fluency levels, he made sure his lexical choices were widely used in global English. For example, in one instance he initially used a word (“milieu”) that was too obscure, then repaired it with a more accessible version (“media”), and followed that with yet another option (“area”): “… we need to assure that our rights are conferred in that milieu, in that media, in that area too.” He would also simplify his sentence structure, as well as ‘doubling up’ by giving alternative phraseologies for key concepts (such as with “milieu”, “media” and “area” above). When Maartje signed “LOBBY” as a verb (5:29 to 5:31 and video clip 2), Andy provided three wordings for one concept in his target rendition: “… we’re making these stands, why we’re lobbying, why we’re canvassing …”.
Another example of doubling, or rather ‘hedging’, is where Andy transmitted the concept of ‘interconnectedness’ which Maartje clearly signed “INTERCONNECTED” (6:32 to 6:38 and video clip 3), but also offered an alternative: “… and overlaps and there’s interconnectedness between them but I will deal with them one by one.”
All this has to be balanced, though, against fidelity to the source message: interpreters should not be ‘dumbing down’ the sophistication of the original message. If the register choice in the source message is not appropriate for the audience, then that is primarily the presenter’s responsibility.
What do we learn from this?
Analysing transcripts in this way is time consuming and not feasible for every single presentation. The benefits, however, go beyond assessing the quality of the spoken language output and extend to improving working relationships with interpreters. Another potential benefit, in some cases, is that deaf professionals can leverage the lexical choices made by interpreters, who are in many cases native users of languages such as English, to inform their own lexical choices for those concepts in future writing and presenting. For example, Maartje read Andy’s use of “a thorny argument” and “a spectrum of strengths and weaknesses”, and took note of these for future writing.
Another benefit of seeing a verbatim transcript is that deaf professionals can see written down all the phatics, fillers and redundancies that are part of normal speech and therefore turn up in interpretations. Hearing people don’t like silences and actually need some of these breaks from pure content to process the information, especially during a one-hour talk. So, being aware of how many of these fall within the “normal range” for a hearing presenter speaking with authority is useful. Then, if you are looking at a transcript of a less successful interpretation, you may be able to identify specific issues from the placement, frequency and types of fillers. The transcript should also be checked against the signed source text (if available) to rule out lack of clarity in the source message as a reason for excessive fillers.
We want to highlight that this strategy is not suitable at every stage of the working relationship with interpreters. Recording a new interpreter the very first time you work together could well be perceived as a professional affront. The strategy can work if there is an existing working relationship, a basic level of trust to build on, and if you are both sure that you want to continue to invest in the working relationship. It also implies that interpreters working in these settings are confident enough to have their work stand up to professional (and in this case public) scrutiny, and that deaf professionals are prepared to scrutinize their own presentation skills and language use. This is not self-evident, and for some interpreters and deaf presenters it might be outright scary, but we believe it contributes to stronger working relationships and better outcomes.
Maartje De Meulder is lecturer/senior researcher at University of Applied Sciences Utrecht. She specializes in Deaf Studies and applied language studies. As a deaf academic, she is a ‘super user’ of sign language interpreting services, and interested in how deaf academics and interpreters work together. She’s on Twitter as @mdemeulder
Andy Carmichael grew up in the Scottish Deaf community, first trained and qualified as an interpreter in England, and has since worked across the globe in a multitude of settings, his working languages being English, BSL, Auslan, and IS. Andy now works full time as an interpreter at Heriot-Watt University, and is currently chair of the Association of Sign Language Interpreters UK. He’s on Twitter as @AndyCar70