Beyond the Hype of Neural Machine Translation, Diego Bartolome (tauyou) and Gema Ramirez (Prompsit Language Engineering)

Posted on 14-Apr-2017

TRANSCRIPT

<ul>
<li><p>Beyond the Hype of Neural Machine Translation</p>
<p>Tauyou &amp; Prompsit</p>
<p>(Diego) dbc@tauyou.com | (Gema) gramirez@prompsit.com</p></li>

<li><p>Why neural nets?</p>
<p>"artificial neural networks [...] are able to be trained from examples without the need for a thorough understanding of the task in hand, and able to show surprising generalization performance and predicting power"</p>
<p>Mikel L. Forcada (Neural Networks: Automata and Formal Models of Computation)</p></li>

<li><p>Why neural nets in MT now?</p>
<p>MT maturity:</p>
<ul>
<li>MT is widely used (with plans to use it everywhere)</li>
<li>MT for some languages is still not good enough (it is for others)</li>
<li>RBMT, SMT and hybrid MT approaches widely exploited</li>
</ul>
<p>Resource availability:</p>
<ul>
<li>Computational power available and cheap (GPUs)</li>
<li>Deep learning algorithms and frameworks available</li>
<li>Data to learn from also available (corpora)</li>
</ul></li>

<li><p>So, why not?</p>
<p>Promising results from the WMT16 competition: all the best systems are NMT ones.</p>
<table>
<tr><th rowspan="2"></th><th colspan="2">SMT</th><th colspan="2">NMT</th></tr>
<tr><th>BLEU</th><th>TER</th><th>BLEU</th><th>TER</th></tr>
<tr><td>en-fi*</td><td>14.8</td><td>0.76</td><td>17.8</td><td>0.72</td></tr>
<tr><td>en-ro</td><td>27.4</td><td>0.61</td><td>28.7</td><td>0.60</td></tr>
<tr><td>en-ru</td><td>24.0</td><td>0.68</td><td>26.0</td><td>0.65</td></tr>
<tr><td>en-de</td><td>31.4</td><td>0.58</td><td>34.8</td><td>0.54</td></tr>
<tr><td>en-cz</td><td>24.1</td><td>0.67</td><td>26.3</td><td>0.63</td></tr>
</table>
<p>* en-fi are Prompsit's + DCU's systems</p></li>

<li><p>Neural nets are...</p>
<ul>
<li>...computational models inspired by biology</li>
<li>...playing an increasingly key role in graphics and pattern recognition</li>
<li>...experiencing a resurgence thanks to hardware and deep learning</li>
<li>...made of encoding/decoding neurons</li>
<li>...applied to translation (= neural MT = NMT):
<ul>
<li>encode SL words as vectors that represent the relevant information</li>
<li>decode vectors into words, preserving syntactic and semantic information in the TL</li>
</ul></li>
</ul></li>

<li><p>NMT requires...</p>
<ul>
<li>Hardware: ~10x CPUs or a GPU (times get shorter with GPUs)</li>
<li>Software: a deep learning framework (Theano, Torch, etc.) + NMT libraries</li>
<li>Data: bilingual corpora (monolingual for the LM only)</li>
<li>Learning &amp; (early) stopping: translation models are created iteratively</li>
<li>Picking a model: evaluation and selection of the best model(s)</li>
<li>Translating: the model(s) are used to translate</li>
</ul></li>

<li><p>Down to the NMT business</p></li>

<li><p>Applying NMT to generic and in-domain use cases</p>
<p>Generic English&ndash;Swedish, SMT vs. NMT:</p>
<ul>
<li>Same generic corpus (8M segments), same training and test sets</li>
<li>SMT: Moses-based with no tuning, on CPU</li>
<li>NMT: Theano-based GroundHog NMT toolkit, on GPU</li>
</ul>
<p>Domain-specific English&ndash;Norwegian, SMT vs. NMT:</p>
<ul>
<li>Same in-domain corpus (800K segments), same training and test sets</li>
<li>SMT: Moses-based + tuning, on CPU</li>
<li>NMT: Theano-based GroundHog NMT toolkit, on GPU</li>
</ul></li>

<li><p>Comparison for generic English&ndash;Swedish</p>
<table>
<tr><th></th><th>SMT</th><th>NMT</th></tr>
<tr><td>Training time</td><td>48 hours (CPU)</td><td>2 weeks (GPU)</td></tr>
<tr><td>Translation time</td><td>00:12:35 (866 segments)</td><td>01:38:47 (866 segments)</td></tr>
<tr><td>CPU usage in translation</td><td>56%</td><td>100%</td></tr>
<tr><td>Space on disk</td><td>37.7 GB</td><td>9.1 GB</td></tr>
<tr><td>BLEU score</td><td>0.440</td><td>0.404</td></tr>
<tr><td>Identical matches</td><td>19.33% (161/866)</td><td>12% (104/866)</td></tr>
<tr><td>Edit distance similarity</td><td>0.78</td><td>0.746</td></tr>
</table></li>

<li><p>Comparison for in-domain English&ndash;Norwegian</p>
<table>
<tr><th></th><th>SMT</th><th>NMT</th></tr>
<tr><td>Training time</td><td>1.8 hours (3 CPUs)</td><td>7 days (1 GPU)</td></tr>
<tr><td>Translation time</td><td>00:01:22 (1,000 segments)</td><td>02:08:00 (1,000 segments)</td></tr>
<tr><td>CPU usage in translation</td><td>56%</td><td>100%</td></tr>
<tr><td>Space on disk</td><td>2.3 GB</td><td>6.5 GB</td></tr>
<tr><td>BLEU score</td><td>0.53</td><td>0.62</td></tr>
<tr><td>Identical matches</td><td>27.76% (276/1000)</td><td>30% (300/1000)</td></tr>
<tr><td>Edit distance similarity</td><td>0.77</td><td>0.83</td></tr>
</table></li>

<li><p>Conclusions SMT vs. NMT: technical insight</p>
<table>
<tr><th></th><th>SMT</th><th>NMT</th></tr>
<tr><td>Space on disk</td><td></td><td>Smaller</td></tr>
<tr><td>CPU during translation</td><td>Lower</td><td></td></tr>
<tr><td>RAM during translation</td><td></td><td>Lower</td></tr>
<tr><td>Training speed</td><td>Faster</td><td>Can be optimized by hardware</td></tr>
<tr><td>Translation speed</td><td>Faster</td><td>Can be optimized by hardware</td></tr>
</table></li>

<li><p>Conclusions SMT vs. NMT: qualitative insight</p>
<p>In domain:</p>
<table>
<tr><th></th><th>SMT</th><th>NMT</th></tr>
<tr><td>BLEU</td><td></td><td>Better</td></tr>
<tr><td>Identical matches</td><td></td><td>Better</td></tr>
<tr><td>Edit distance similarity</td><td></td><td>Better</td></tr>
<tr><td>Translators' feedback</td><td>Better</td><td></td></tr>
</table>
<p>Generic:</p>
<table>
<tr><th></th><th>SMT</th><th>NMT</th></tr>
<tr><td>BLEU</td><td>Better</td><td></td></tr>
<tr><td>Identical matches</td><td>Better</td><td></td></tr>
<tr><td>Edit distance similarity</td><td>Better</td><td></td></tr>
<tr><td>Translators' feedback</td><td>Better</td><td></td></tr>
</table></li>

<li><p>Final conclusions</p>
<ul>
<li>NMT is a big new player in MT:
<ul>
<li>Research is now focusing heavily on NMT: it already outperforms SMT in many cases</li>
<li>Use-case results: with little effort, it is on par with SMT</li>
</ul></li>
<li>Hardware requirements are more demanding for NMT: higher budget</li>
<li>Translators' feedback: SMT is still better</li>
</ul></li>

<li><p>Final conclusions</p>
<ul>
<li>SMT, and other approaches, remain more robust and alive:
<ul>
<li>Better quality and consistency in MT output</li>
<li>Better ROI, especially for real-time translation applications where speed is critical</li>
</ul></li>
<li>Deep learning for other NLP applications? Of course! Vivid in quality estimation, terminology, sentiment analysis, etc.</li>
</ul></li>

<li><p>Thanks! Go raibh maith agaibh! ("thank you" in Irish)</p>
<p>Tauyou &amp; Prompsit</p>
<p>(Diego) dbc@tauyou.com | (Gema) gramirez@prompsit.com</p></li>
</ul>
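The encode/decode idea on the "Neural nets are..." slide can be sketched in a few lines. This is a toy NumPy illustration, not the GroundHog implementation: all weights are random, the vocabulary sizes and the simplistic decoder state update are invented for the example, so the "translation" is meaningless; the point is only the data flow (source words become vectors, a summary vector is decoded back into target words).

```python
import numpy as np

np.random.seed(0)

# Assumed toy sizes: 10-word vocabularies, 8-dim embeddings, 16-dim hidden state
SRC_VOCAB, TGT_VOCAB, EMB, HID = 10, 10, 8, 16

def encode(src_ids, W_emb, W_h):
    """Encode source word ids into one summary vector (toy RNN encoder)."""
    h = np.zeros(HID)
    for i in src_ids:
        x = W_emb[i]                              # word id -> embedding vector
        h = np.tanh(W_h @ np.concatenate([x, h])) # fold word into the state
    return h

def decode(h, W_out, max_len=5):
    """Greedy toy decoder: emit the highest-scoring target word each step."""
    out = []
    for _ in range(max_len):
        scores = W_out @ h                # score every target word
        w = int(np.argmax(scores))
        out.append(w)
        h = np.tanh(h + 0.1 * w)          # invented state update, illustration only
    return out

W_emb = np.random.randn(SRC_VOCAB, EMB)
W_h = np.random.randn(HID, EMB + HID)
W_out = np.random.randn(TGT_VOCAB, HID)

vec = encode([1, 4, 2], W_emb, W_h)       # three source words -> one vector
print(vec.shape)
print(decode(vec, W_out))                 # list of five target word ids
```

A real NMT system replaces the random matrices with weights learned from the bilingual corpus and decodes with attention and beam search rather than this greedy loop.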
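The comparison tables report "identical matches" and "edit distance similarity" alongside BLEU. One common way to compute those two (assumed here, since the slides do not give formulas: similarity as 1 minus the Levenshtein distance normalized by the longer string) is:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def edit_distance_similarity(hyp, ref):
    """1 - edit distance normalized by the longer string (assumed definition)."""
    if not hyp and not ref:
        return 1.0
    return 1.0 - levenshtein(hyp, ref) / max(len(hyp), len(ref))

def identical_match_rate(hyps, refs):
    """Fraction of MT outputs that are exactly identical to the reference."""
    return sum(h == r for h, r in zip(hyps, refs)) / len(refs)

print(round(edit_distance_similarity("kitten", "sitting"), 3))  # 0.571
print(identical_match_rate(["hej", "tack"], ["hej", "tack tack"]))  # 0.5
```

Averaging `edit_distance_similarity` over a test set gives a score on the same 0-1 scale as the 0.78 vs. 0.746 figures above; a higher value means the post-editor has less to change.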
