Fix Your Gender-Biased Machine Translation

For many years, the dominant paradigm in machine translation was phrase-based statistical machine translation, an approach in which corpora of textual data are segmented into phrases so that the elements of a sentence can be extracted and recombined. Modern systems have largely replaced it with neural machine translation, and accuracy at the level of grammar has steadily improved. Yet neither paradigm has mastered the intricacies of gender-related and referential systems in natural languages.

Gender Bias in Language Representation

The impact of machine translation on language and society is growing by the day. Services like Google Translate make it possible for users to communicate with each other in dozens of languages. However, the potential biases contained within the translated output make it particularly challenging to assess the quality of such translations. It is therefore increasingly clear that counteracting gender bias in machine translation requires modifying traditional translation frameworks.

In its most basic form, gender bias in translation affects how a sentence is translated and interpreted. It can appear at a micro level (how an individual word is rendered in the target language) and at a macro level (how the sentence or text as a whole is interpreted), and it can enter at various stages of the translation process.

When gender-related meaning meets a neural machine translation (NMT) framework that fails to consider the human context and intent behind the linguistic elements of the input text, biases are likely to creep in and degrade both accuracy and correctness. This kind of bias stems from a mismatch between the input data, the statistical internal representation the model builds from it, and the incorrect output it produces. In other words, gender bias undermines an NMT system's ability to grasp gender-related concepts and to provide accurate, non-problematic renderings of the input text.
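A simple way to see what a biased internal representation looks like is to probe pretrained word embeddings, the kind of learned representation many NLP systems build on. The sketch below uses the public GloVe vectors through the gensim library; it illustrates biased learned representations in general, not the internals of any particular translation system:

```python
# Probe pretrained word embeddings for occupation-gender associations.
# Assumes gensim is installed; the GloVe model downloads on first run.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # trained on Wikipedia + Gigaword

# Classic analogy probe: "man is to doctor as woman is to ...?"
# Biased training text tends to surface stereotyped completions such as "nurse".
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```

If the top completions skew toward stereotyped occupations, that skew came from the statistics of the training text, not from anything in the query itself.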

Gender in Translation

Different cultural and language communities may put an emphasis on certain phrases to make up for the dearth of common, generic phrasing for the concept in question. In many cases, this leads to unfortunate gender stereotyping and reinforces other anti-feminist cultural norms in an unintentionally sexist way. Turkish, for example, has a single gender-neutral third-person pronoun, "o". Google Translate has historically rendered the Turkish for "he/she is a doctor" in the masculine form when translating into English, while "he/she is a nurse" has consistently come out in the feminine form. The criticism Google Translate has drawn over gender-related phrases is therefore quite understandable.
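This behavior is easy to reproduce with an off-the-shelf open-source model. The sketch below uses the Helsinki-NLP Turkish-English model from the Hugging Face transformers library; the exact wording of the output depends on the model version, but gender-neutral Turkish sentences will typically come out with a gendered English pronoun:

```python
# Translate gender-neutral Turkish sentences and inspect the gender the model picks.
# Assumes the transformers library is installed; the model downloads on first run.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Turkish "o" is a gender-neutral third-person pronoun.
sentences = ["O bir doktor.", "O bir hemşire."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)

for src, out in zip(sentences, outputs):
    print(f"{src} -> {tokenizer.decode(out, skip_special_tokens=True)}")
```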

This preference for gender-specific language is a bias that has grown out of the historical tendency, in most languages, to use gendered words and sentences to convey general, default concepts. Such biases in language are an area of active research, and the lesson is simple: if we train our models on biased data, the models will likely be biased as well.
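That skew is straightforward to measure in raw text. The toy script below counts how often occupation words co-occur with gendered pronouns in a corpus file (corpus.txt stands in for whatever data you actually train on); a heavily lopsided ratio is exactly the statistical signal a model will internalize as a default:

```python
# Count how often occupation words co-occur with gendered pronouns in a corpus.
# A skewed ratio in the training data predicts a skewed default in the model.
import re
from collections import Counter

OCCUPATIONS = {"doctor", "nurse", "engineer", "teacher"}
MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

counts = Counter()
with open("corpus.txt", encoding="utf-8") as f:  # stand-in for your training data
    for line in f:
        tokens = set(re.findall(r"[a-z']+", line.lower()))
        for occ in OCCUPATIONS & tokens:
            if tokens & MALE:
                counts[(occ, "male")] += 1
            if tokens & FEMALE:
                counts[(occ, "female")] += 1

for (occ, gender), n in sorted(counts.items()):
    print(f"{occ:10s} {gender:6s} {n}")
```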

Complicating the problem is the fact that translations involving gender (such as choosing the correct pronoun and maintaining gender agreement) are particularly sensitive, because they can refer directly to people and how they self-identify. These problems may sound abstract, but they are not merely theoretical: many groups have raised concerns about Google Translate's accuracy, and published studies document how gender-biased NMT systems err when asked to produce gender-neutral content.

As part of its efforts to raise awareness of bias in machine translation and to address the problem, Google released the Translated Wikipedia Biographies dataset in 2021. Its objective is to provide a framework for long-term advances in machine translation by establishing a consistent benchmark for assessing and improving how learning systems handle gender in translation.
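A benchmark like this makes it possible to score a system on gender agreement automatically. Below is a minimal sketch of such a check; the file name and column names are illustrative assumptions, not the dataset's actual schema, which should be taken from its documentation:

```python
# Sketch of a gender-agreement check over a biography-style evaluation set.
# The file name and the columns (source, subject_gender) are illustrative
# assumptions; consult the dataset's documentation for the real schema.
import csv

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_gender(text):
    """Classify a translation by the gendered English pronouns it contains."""
    tokens = set(text.lower().replace(".", " ").split())
    has_m, has_f = bool(tokens & MALE), bool(tokens & FEMALE)
    if has_m and not has_f:
        return "male"
    if has_f and not has_m:
        return "female"
    return "mixed_or_none"

def gender_accuracy(rows, translate):
    """Fraction of rows whose translated pronouns match the subject's gender."""
    hits = [pronoun_gender(translate(r["source"])) == r["subject_gender"]
            for r in rows]
    return sum(hits) / len(hits) if hits else 0.0

with open("translated_wikipedia_biographies.tsv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))
# accuracy = gender_accuracy(rows, translate=my_mt_system)  # plug in your MT here
```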

Solutions

In a perfect world, a neural machine translation framework would carry the meaning of the input text, including its gender information, into the target language exactly, neither adding assumptions nor dropping distinctions.

Recent advances in machine translation (MT) and natural language processing (NLP) keep producing new state-of-the-art results, including near human-level performance on some language pairs. Yet for all these gains, high-quality machine translation has proven elusive for many reasons, one being that language systems tend to reflect the values of their creators. Training such systems on data that systematically depicts gender bias will inevitably produce biased outputs, because MT systems learn those biases and internalize them as part of processing the input.
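One widely studied mitigation is counterfactual data augmentation: for every training sentence, also train on a copy with gendered words swapped, so the model sees both forms equally often. A minimal sketch follows; the swap list is deliberately tiny, and real pipelines use curated lexicons and handle morphology and ambiguous words (English "her" can map to "him" or "his") with more care:

```python
# Counterfactual data augmentation: emit a gender-swapped copy of each sentence.
import re

# Tiny illustrative swap list; real systems use curated lexicons and resolve
# ambiguity (e.g. "her" as object vs. possessive) from context.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def gender_swap(sentence):
    def repl(match):
        word = match.group(0)
        swapped = SWAPS.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"[A-Za-z']+", repl, sentence)

def augment(corpus):
    """Yield each sentence plus its gender-swapped counterfactual copy."""
    for sentence in corpus:
        yield sentence
        yield gender_swap(sentence)

print(list(augment(["He is a doctor.", "She is a nurse."])))
# ['He is a doctor.', 'She is a doctor.', 'She is a nurse.', 'He is a nurse.']
```

The design intuition is simple: if the doctor/nurse ratio between genders is balanced in training, the model has no statistical reason to prefer one default over the other.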

Given the power of MT and NLP to shape cultural context, research in the next decade will likely highlight ways that language and translation influence society at large, its decisions and policies, and even cultures themselves. In the meantime, if you want to guarantee accurate, non-gender-biased translations, even when MT is involved, you can call on the services of professional language service providers. Ask about our "human touch" for spotting and correcting gender bias, an integral part of our machine post-editing services aimed at turning any MT output into the equivalent of a human translation.


Photo by Dainis Graveris on Unsplash