How to tune a machine translation model with a huge language model?



Moses is a piece of software for building machine translation models, and KenLM is the language model software that Moses uses.

I have a 16GB text file which I use to build a language model like this:

bin/lmplz -o 5 <text > text.arpa

The resulting file (text.arpa) is 38GB. I then binarize the language model:

bin/build_binary text.arpa text.binary

The binarized language model (text.binary) grows to 71GB.
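
As an aside, I believe build_binary also supports the trie data structure with quantization, which should produce a considerably smaller file than the default probing structure; a sketch of what I think the invocation looks like (same file names as above):

# trie structure, with 8-bit quantization of probabilities (-q) and backoffs (-b)
bin/build_binary -q 8 -b 8 trie text.arpa text.binary

But even a smaller binary would presumably still be far too large for tuning, hence my questions below.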

In Moses, after training the translation model, the model weights should be tuned with the MERT algorithm. This can simply be done with https://github.com/moses-smt/mosesdecoder/blob/master/scripts/training/mert-moses.pl.

MERT works fine with a small language model, but with the big language model it takes quite a long time to finish.
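
For reference, I call the script roughly like this (a sketch; dev.src, dev.ref, and the model paths stand in for my actual tuning set and decoder config):

~/moses/scripts/training/mert-moses.pl dev.src dev.ref ~/moses/bin/moses model/moses.ini --mertdir ~/moses/bin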

I googled around and found KenLM's filter, which promises to filter the language model down to a smaller size: https://kheafield.com/code/kenlm/filter/

But I have no idea how to make it work. The command's help gives:

$ ~/moses/bin/filter
Usage: /home/alvas/moses/bin/filter mode [context] [phrase] [raw|arpa] [threads:m] [batch_size:m] (vocab|model):input_file output_file
copy mode just copies, but makes the format nicer for e.g. irstlm's broken
parser.
single mode treats the entire input as a single sentence.
multiple mode filters to multiple sentences in parallel.  Each sentence is on
a separate line.  A separate file is created for each sentence by appending
the 0-indexed line number to the output file name.
union mode produces one filtered model that is the union of models created by
multiple mode.
context means only the context (all but last word) has to pass the filter, but
the entire n-gram is output.
phrase means that the vocabulary is actually tab-delimited phrases and that the
phrases can generate the n-gram when assembled in arbitrary order and
clipped.  Currently works with multiple or union mode.
The file format is set by [raw|arpa] with default arpa:
raw means space-separated tokens, optionally followed by a tab and arbitrary
text.  This is useful for ngram count files.
arpa means the ARPA file format for n-gram language models.
threads:m sets m threads (default: conccurrency detected by boost)
batch_size:m sets the batch size for threading.  Expect memory usage from this
of 2*threads*batch_size n-grams.
There are two inputs: vocabulary and model.  Either may be given as a file
while the other is on stdin.  Specify the type given as a file using
vocab: or model: before the file name.  
For ARPA format, the output must be seekable.  For raw format, it can be a
stream i.e. /dev/stdout

But when I tried the following, it got stuck and did nothing:

$ ~/moses/bin/filter union lm.en.binary lm.filter.binary
Assuming that lm.en.binary is a model file
Reading lm.en.binary
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100

What should be done with the language model after binarization? Are there any other steps to manipulate large language models so as to reduce the computational load when tuning?

What is the usual way to tune with large LM files?

How do I use KenLM's filter?

(More details at https://www.mail-archive.com/moses-support@mit.edu/msg12089.html)

Answer: How to use KenLM's filter command

cat small_vocabulary_one_word_per_line.txt \
  | filter single \
      "model:LM_large_vocab.arpa" \
      output_LM_small_vocab

Note: single can be replaced with union or copy. For more information, see the help that is printed when you run the filter binary without arguments.
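
Putting it together, here is a sketch of the full workflow I would expect for the tuning use case (all file names are hypothetical): extract the vocabulary of the tuning set, filter the large ARPA model down to that vocabulary, and binarize the filtered model for MERT:

# one word per line from the tuning set (dev.en is a placeholder)
tr ' ' '\n' < dev.en | sort -u > dev.vocab
# filter the ARPA model to the tuning vocabulary (vocabulary arrives on stdin)
~/moses/bin/filter single model:LM_large_vocab.arpa LM_small_vocab.arpa < dev.vocab
# binarize the much smaller filtered model for tuning
~/moses/bin/build_binary LM_small_vocab.arpa LM_small_vocab.binary

Note that the command in the question hung because filter reads whichever input (vocabulary or model) is not given as a file from stdin; with only the model supplied, it sits waiting forever for a vocabulary on stdin. Also, as far as I can tell from the help text, filter operates on ARPA (or raw count) files rather than on binarized models, so filter the .arpa file first and binarize afterwards.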
