© 2018 original year of publication

Distance interpreting has been around for longer than some of us have been practicing.[1] So why is everyone talking about it as if it were brand new? Easy: remote simultaneous interpreting, one of its modes, has been fitted with new clothes and it is, well, AWESOME.

So, what are we talking about?

Let’s just define what I am calling distance interpreting here: if speakers and interpreters are in different rooms, that is distance interpreting. That’s the simplest definition out there, but I invite you to read AIIC’s for more details[2]. Distance interpreting can be consecutive (over the phone, for example) or simultaneous (when your booth is in a different room, whether at the same venue as the event or not). The past few years have seen the emergence of cloud-enabled remote simultaneous interpreting (RSI, as the market calls it), which allows all parties to an event to be in different locations. And that is what I will be addressing.

Change is the constant

I firmly believe that change is the constant, and technology is here to prove me right. It therefore behooves us, the professionals, champions of our craft, to stay up to date with developments and to ensure the final product meets our requirements, improves our deliverables, and enhances our performance.

OK, I will grant you that what I described above is the ideal scenario. But that is what we should be striving to achieve.

I urge everyone who has questions about cloud-enabled RSI platforms to contact any, or all, of the companies offering the service and to participate in demonstrations – both those for interpreters and those for clients; they differ in content, not only in focus.

There’s no substitute for personal experience

In August 2017, I showcased two companies that offer the service at the first-ever, 100% online translation and interpreting conference, Congresso Virtual de Tradução e Interpretação (ConVTI). The providers were Headvox, from France, and KUDO, from the US. I was impressed with both on many fronts. It is important to understand that these two firms are headed by businesspeople with knowledge of the industry and by interpreters with intimate knowledge of our profession, its requirements, and its demands.

But I want to focus on my personal experience, which also extends to Interprefy, a third platform whose demo I attended. And no, I am not going to compare the companies. Sorry if I disappoint you, but I want to convey my professional experience to you. Why? Because you should play with these new toys (tools, I mean) yourself, fall in love, or not, at your own pace, and expand your own knowledge firsthand. I won’t rob you of that experience[3].

Skepticism 0 x RSI Platforms 3

My skepticism was running high when I attended my first demo. Surely the image would be poor, the sound would fail, the interpreter’s phone would ring or the dog would bark… Yes, I was ready for a bad experience.

Yup, they beat my skepticism to a pulp. First of all, each platform has special requirements and suggestions if you intend to work from your own specially set-up environment. And remember how I mentioned that businesspeople and interpreters got together to develop these platforms? Their collective input covered everything from hardware enhancement (great video and sound output) to working conditions (specific requirements for browser, hardware, work environment, etc.). And they are not done yet.

The Virtual Booth

I have experienced all three of the platforms I mentioned as a client, and two of them also as an interpreter. In each situation, I received a link to join the event and had to enter minimal information to create a profile. Depending on the platform, the sign-in page would also let me choose my role in the demo – attendee or interpreter.

The basics are the same across platforms.

Speakers act as if they were on a webcast platform: they talk to their computer cameras, share their screens as needed, and allow attendees to ask questions out loud. They also have access to an event-dedicated platform technician who will take care of any difficulties.

Attendees can choose what they want to see – material or speaker – and they have some control over this choice. They can also select which language they want to listen to, and they can switch languages easily with one click while watching the event. Some of the requirements for a good experience are a WebRTC[4]-enabled browser, good headphones, and a stable internet connection. The specifications are provided ahead of time, and I recommend following them for the best experience. Attendees can also take advantage of a chat area to communicate among themselves, with the speaker(s), and with the technicians in case of difficulties.

Interpreters should identify themselves in that role at login, then select the language(s) they will be working into and the language(s) they will be listening to – that allows for the use of relay if needed. And yes, there is a mute, or cough, button too. The basic requirements are a stable, hardwired internet connection, a WebRTC-enabled browser, a good set of headphones with a good directional mic, and a quiet environment. Again, there are specifications to be followed, which include details such as download speed, type of headphones, sound level, a definition of “quiet environment,” etc. The technicians are also available to the interpreters, and one of their jobs is to monitor all communications and functionalities.
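For the technically curious: the “WebRTC-enabled browser” requirement is something a platform can verify before letting you join. The sketch below is a minimal, hypothetical illustration of such a check (the function name and the stand-in browser objects are my own, not taken from any of the platforms mentioned); in a real page you would call it on `window`.

```javascript
// Hypothetical sketch: detect whether a browser exposes the WebRTC
// APIs an RSI platform relies on (peer connections and microphone access).
function supportsWebRTC(globalObj) {
  return Boolean(
    globalObj.RTCPeerConnection &&          // audio/video peer connections
    globalObj.navigator &&
    globalObj.navigator.mediaDevices &&
    globalObj.navigator.mediaDevices.getUserMedia  // mic/camera capture
  );
}

// In a browser: supportsWebRTC(window).
// Here, stand-in objects simulate a modern and a legacy browser.
const modernBrowser = {
  RTCPeerConnection: function () {},
  navigator: { mediaDevices: { getUserMedia: function () {} } },
};
const legacyBrowser = { navigator: {} };

console.log(supportsWebRTC(modernBrowser)); // true
console.log(supportsWebRTC(legacyBrowser)); // false
```

A platform would run a check like this at login and tell the user to switch browsers if it fails, which is why the specifications sent ahead of time name particular browsers.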

Visually speaking, the screen layouts are similar across the platforms: an area for viewing the presentation and the speaker, an area for interpreters to communicate among themselves, another for reaching the technician/operator in case of technical difficulties, and areas for interpreters to control their language input and output – similar to a regular console.

Cloud Nine

My first experience was as a prospective client (it was a demo), and I was mesmerized. I could hear clearly, the visuals were appealing and in focus, and there was no loss of quality. Just as in an in-person scenario, I could watch the speaker or the presentation and still hear the interpreter seamlessly. If I wanted to change the language I was listening to, all I had to do was click a little button; I did not have to find out which channel carried the language I wanted, then miss part of the event while I switched channels.

I also had the opportunity to interpret. Again, the experience was similar to being in a booth: no problem watching the speakers or seeing the presentation, clear sound in and out, the commands were at my fingertips. No extra stress whatsoever – that’s because I took advantage of every opportunity available to familiarize myself with the tools in advance. And before you ask, I did use my cell phone just to try it out and the outcome was the same: I was very happy with the results.

Some of the problems we had – during demos and actual events – were outside the platforms’ control: internet connectivity, microphone mishaps (low volume, mic not active), user error, etc. The interpreters who worked at ConVTI mentioned some difficulty with the handover; their input was taken to heart by the two platforms we showcased, and changes have already been implemented.

We are all subject to problems at in-person events, too. At the last event I worked, we had an issue with the converter: it started to overheat, and its cover was removed to prolong the equipment’s life, but the real problem was a difference in amperage, and the makeshift solution did not help. I have a plethora of other horror stories from the booth to tell as well.

Shaping our future

Now what? Where do we go from here? I can only answer for myself, and my path is already plotted: I am following all the developments on this front – the discussions on standards – attending demos, and inviting colleagues to do the same. I have also used the technology professionally, during the Congresso Virtual de Tradução e Interpretação (ConVTI 2017), and plan on using it again.

Will cloud-enabled RSI take over the world of conference interpreting? NO. Not in the very near future. That’s an easy answer. I share the vision of the companies developing these platforms: they are giving individuals who cannot make it to an event the opportunity to participate – as interpreters, attendees, or speakers. In doing so, they are also creating (and delivering) the possibility for us, as interpreters, to continue to earn a living even when we are unable to travel.

At one of the demos, we learned the story of a delegate to an international meeting who found herself stranded on the tarmac on her way there and was forced to join the session via cloud-enabled RSI over the plane’s wi-fi. During ConVTI, one of our interpreters used her cellphone to deliver her interpretation, flawlessly, while being driven to her quiet environment (timing issues). Not desirable situations, granted. But disasters were averted: the delegate was able to make her input heard, and ConVTI attendees in Ecuador, Spain, Colombia, Mexico, Uruguay, and Argentina who depended on the Spanish booth could still follow the event.

Can you also see the possibilities?


[1] See my 2012 article Who’s in Control? – A Look at Remote Interpreting

[2] AIIC’s Position on Distance Interpreting and Annexes

[3] Websites of companies mentioned above: headvox.com, kudoway.com and interprefy.com

[4] Web real-time communication: a collection of protocols and JavaScript APIs that enable real-time audio, video, and data communication directly in the browser. Learn more at WebRTC.org.