Available languages in NLTK for word tokenization and sentence tokenization



I need to know which languages can be tokenized with NLTK. I think I have to set the language like this:

import nltk.data

# Load the pre-trained Punkt sentence tokenizer for a given language
lang = "WHATEVER_LANGUAGE"
tokenizer = nltk.data.load('nltk:tokenizers/punkt/' + lang + '.pickle')
text = "something in some specified whatever language"
tokenizer.tokenize(text)

I need to know which languages I can use here, but I couldn't find anything in the NLTK documentation.

You can check the comments on a similar question here: https://stackoverflow.com/a/71069740/11551168

The list of the languages supported by the NLTK tokenizer is as follows:
'czech'
'danish'
'dutch'
'english'
'estonian'
'finnish'
'french'
'german'
'greek'
'italian'
'norwegian'
'polish'
'portuguese'
'russian'
'slovene'
'spanish'
'swedish'
'turkish'
These correspond to the pickles stored in C:\Users\XXX\AppData\Roaming\nltk_data\tokenizers\punkt (on Windows). This is the value you pass as the 'language' argument when tokenizing, e.g.
nltk.word_tokenize(text, language='italian')
