# Additive Interventions Yield Robust Multi-Domain Machine Translation Models

Published in Proceedings of the Seventh Conference on Machine Translation (WMT), 2022

Recommended citation: Additive Interventions Yield Robust Multi-Domain Machine Translation Models (Rippeth & Post, WMT 2022) https://aclanthology.org/2022.wmt-1.14/

Additive interventions are a recently proposed mechanism for controlling target-side attributes in neural machine translation by modulating the encoder’s representation of a source sequence, as opposed to manipulating the raw source sequence as in most previous tag-based approaches. In this work we examine the role of additive interventions in a large-scale multi-domain machine translation setting and compare their performance across inference scenarios. We find that while the performance difference between intervention-based and tag-based systems is small when the domain label matches the test domain, intervention-based systems are robust to label error, making them an attractive choice under label uncertainty. Further, we find that the superiority of single-domain fine-tuning comes under question when training data is scaled, contradicting previous findings.
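To make the contrast concrete, here is a rough sketch of the two mechanisms. All names, shapes, and the tag token are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

# Illustrative sketch (not the authors' code): tag-based domain control
# edits the raw source sequence, while an additive intervention edits the
# encoder's representation of the (unchanged) source sequence.

rng = np.random.default_rng(0)
d_model = 4  # toy hidden dimension

# Tag-based approach: prepend a domain tag token to the raw source.
source_tokens = ["the", "patient", "was", "discharged"]
tagged_source = ["<medical>"] + source_tokens  # "<medical>" is a made-up tag

# Intervention-based approach: leave the source untouched and add a
# learned per-domain vector to every encoder state instead.
encoder_states = rng.normal(size=(len(source_tokens), d_model))  # (seq_len, d)
domain_vector = rng.normal(size=(d_model,))  # one learned vector per domain

# The intervention is a simple broadcast addition over all positions.
intervened_states = encoder_states + domain_vector

assert intervened_states.shape == encoder_states.shape
```

Because the source text itself is never modified, swapping (or dropping) the domain vector at inference time does not perturb the tokenization or the attention over real source tokens, which is one intuition for the robustness to label error reported above.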

BibTeX:

```bibtex
@inproceedings{rippeth-post-2022-additive,
    title = "Additive Interventions Yield Robust Multi-Domain Machine Translation Models",
    author = "Rippeth, Elijah  and
      Post, Matt",
    booktitle = "Proceedings of the Seventh Conference on Machine Translation (WMT)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wmt-1.14",
    pages = "220--232",
    abstract = "Additive interventions are a recently-proposed mechanism for controlling target-side attributes in neural machine translation by modulating the encoder{'}s representation of a source sequence as opposed to manipulating the raw source sequence as seen in most previous tag-based approaches. In this work we examine the role of additive interventions in a large-scale multi-domain machine translation setting and compare its performance in various inference scenarios. We find that while the performance difference is small between intervention-based systems and tag-based systems when the domain label matches the test domain, intervention-based systems are robust to label error, making them an attractive choice under label uncertainty. Further, we find that the superiority of single-domain fine-tuning comes under question when training data is scaled, contradicting previous findings.",
}
```