Experiments with a PPM Compression-Based Method for English-Chinese Bilingual Sentence Alignment

Liu, W. and Chang, Z. and Teahan, W.J. (2014) Experiments with a PPM Compression-Based Method for English-Chinese Bilingual Sentence Alignment. In: Statistical Language and Speech Processing Volume 8791 of the series Lecture Notes in Computer Science. Springer, pp. 70-81. ISBN 9783319113968

Full-text not available from this repository.


Alignment of parallel corpora is a crucial step prior to training statistical language models for machine translation. This paper investigates compression-based methods for aligning sentences in an English-Chinese parallel corpus. Four metrics for matching sentences, required for measuring alignment at the sentence level, are compared: the standard sentence length ratio (SLR), and three new metrics: absolute sentence length difference (SLD), compression code length ratio (CR), and absolute compression code length difference (CD). Initial experiments with CR show that the Prediction by Partial Matching (PPM) compression scheme, a method that also performs well at many language modeling tasks, significantly outperforms the standard compression algorithms Gzip and Bzip2. The paper then shows that, for sentence alignment of a parallel corpus with ground-truth judgments, the compression code length ratio using PPM always performs better than the sentence length ratio, and the difference measurements also work better than the ratio measurements.
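The four matching metrics named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact definitions of sentence length (characters vs. words) and the PPM model used in the paper are not given here, so sentence length is taken as character count and Python's standard `bz2` compressor stands in for PPM, purely to show the shape of the computation.

```python
import bz2

def code_length(text: str) -> int:
    # Compressed size in bytes of the UTF-8 encoding of the sentence.
    # bz2 is a stand-in; the paper uses a PPM model for this quantity.
    return len(bz2.compress(text.encode("utf-8")))

def slr(src: str, tgt: str) -> float:
    # Sentence length ratio (character counts assumed).
    return len(src) / len(tgt)

def sld(src: str, tgt: str) -> int:
    # Absolute sentence length difference.
    return abs(len(src) - len(tgt))

def cr(src: str, tgt: str) -> float:
    # Compression code length ratio.
    return code_length(src) / code_length(tgt)

def cd(src: str, tgt: str) -> int:
    # Absolute compression code length difference.
    return abs(code_length(src) - code_length(tgt))
```

A sentence pair that is a true translation should yield an SLR/CR close to a language-pair-typical constant and a small SLD/CD, which is what makes these quantities usable as alignment scores.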

Item Type: Book Section
Subjects: Research Publications
Departments: College of Physical and Applied Sciences > School of Computer Science
Date Deposited: 09 Apr 2016 02:34
Last Modified: 09 Apr 2016 02:34
ISBN: 9783319113968
URI: http://e.bangor.ac.uk/id/eprint/6486
Identification Number: DOI: 10.1007/978-3-319-11397-5_5
Publisher: Springer
