Lately, our teams have been working hard on an extremely large collection of content for a high-profile client. With so many people involved on the same large project, our biggest priority is a workflow that implements edits and ensures consistency across all files of the project from start to finish. At that volume of content, we make full use of every available tool and skill: glossary development, content databases, adherence to client style guides and requests, and a multi-step process for catching potential errors.
(1) Implementing a style guide
First in our arsenal of tools is a style guide – a set of agreed-upon guidelines for the project that we send out to our talented linguists. Not all projects have one, but for larger projects or more high-profile clients, the requester often provides instructions or a style guide to define tone, terminology, punctuation, capitalization, formatting, and other characteristics of the content. Multiple hands mean that variations in translation or writing style and word use are inevitable but manageable – as long as every person involved sticks to the same rules.
(2) Computer-assisted QA
After translation, we're able to apply a customizable QA process that helps a language specialist or linguist cross-check thousands (or tens of thousands) of words to look for various possible errors or inconsistencies. We run this initial QA after translation and editing are complete. It searches for the following error types, among others, and flags each for resolution by a human:
- whether non-translatable terms indeed stayed untranslated. We maintain lists of items that are not to be translated in certain contexts, or not to be translated at all.
- whether specialized terms were translated consistently throughout. Glossaries allow us to mutually settle on a particular translation for a particular term – and to enforce its use.
- whether forbidden terms have been used. Just as our tools check for the correct term, they can flag erroneous terms and present them to the user to fix.
- whether the proper tags and code have been replicated (in the correct places) in the target language content.
- whether character formatting has been replicated in the target language content.
- whether capitalization matches. This can vary by language, depending on that language's unique capitalization rules or its complete absence of capitalization (as in languages such as Chinese, which do not use an alphabetic script).
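To make the checks above concrete, here is a minimal sketch of how this kind of automated QA pass might work. This is an illustration only, not our production tooling; the function name, data structures, and sample terms (such as "ProductX") are hypothetical.

```python
import re

def qa_check(source, target, glossary, do_not_translate, forbidden):
    """Return a list of human-readable flags for one source/target segment pair.

    glossary:         {source term: agreed-upon target translation}
    do_not_translate: terms that must appear verbatim in the target
    forbidden:        terms that must never appear in the target
    """
    flags = []

    # Non-translatables must survive untranslated in the target.
    for term in do_not_translate:
        if term in source and term not in target:
            flags.append(f"non-translatable missing: {term!r}")

    # Glossary terms must use the agreed-upon translation.
    for src_term, tgt_term in glossary.items():
        if src_term in source and tgt_term not in target:
            flags.append(f"glossary term {src_term!r} not rendered as {tgt_term!r}")

    # Forbidden terms must never appear in the target.
    for term in forbidden:
        if term in target:
            flags.append(f"forbidden term used: {term!r}")

    # Markup tags must be replicated, in the same order, in the target.
    if re.findall(r"<[^>]+>", source) != re.findall(r"<[^>]+>", target):
        flags.append("tag mismatch between source and target")

    return flags
```

Each flag is then resolved by a human reviewer, not auto-corrected; the tool's job is only to surface candidates at a scale no linguist could scan by eye.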
*Client feedback: Some clients request that content be sent to them for their team's final review before it goes to the Glyph design/multimedia team for publishing and implementation.
(3) Final visual QA before delivery
After computer-assisted QA and, if applicable, after the client has had a look, content moves to the design/multimedia team, which handles the presentation of each project in the target language. This could mean layout and publishing, voiceover and video engineering, or other preparation in final file format.
After design, the final product gets one or two more sets of eyes for a visual QA. We look at the source and target files side by side, with content in its natural habitat, to catch any remaining issues. During this post-design QA, we can also see the entire forest instead of just the trees: aiming for consistency across the whole collection and, as necessary, making edits to our content databases for future use.