NLTK: finding a word's context of size 2k



I have a corpus and I have a word. For each occurrence of the word in the corpus, I want to get a list containing the k words before it and the k words after it. I can do this fine algorithmically (see below), but I'm wondering whether NLTK already provides some functionality for this that I have missed?

def sized_context(word_index, window_radius, corpus):
    """ Returns a list containing the window_size amount of words to the left
    and to the right of word_index, not including the word at word_index.
    """
    max_length = len(corpus)
    left_border = word_index - window_radius
    left_border = 0 if word_index - window_radius < 0 else left_border
    right_border = word_index + 1 + window_radius
    right_border = max_length if right_border > max_length else right_border
    return corpus[left_border:word_index] + corpus[word_index+1: right_border]
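For illustration, a minimal usage sketch, assuming the corpus is already a flat list of tokens (the example sentence is made up):

>>> from nltk.tokenize import wordpunct_tokenize
>>> corpus = wordpunct_tokenize("the quick brown fox jumps over the lazy dog")
>>> sized_context(corpus.index("jumps"), 2, corpus)
['brown', 'fox', 'over', 'the']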
If you want to use nltk's functionality, you can use nltk's ConcordanceIndex. To base the width of the display on the number of words rather than the number of characters (the latter being the default for ConcordanceIndex.print_concordance), you can simply create a subclass of ConcordanceIndex with something like this:

from nltk import ConcordanceIndex
class ConcordanceIndex2(ConcordanceIndex):
    def create_concordance(self, word, token_width=13):
        "Returns a list of contexts for @word with a context <= @token_width"
        half_width = token_width // 2
        contexts = []
        for i, token in enumerate(self._tokens):
            if token == word:
                # clamp the left edge so the slice never starts before index 0
                start = i - half_width if i >= half_width else 0
                # the slice keeps the matched word itself in the middle of the context
                context = self._tokens[start:i + half_width + 1]
                contexts.append(context)
        return contexts

Then you can get your results like this:

>>> from nltk.tokenize import wordpunct_tokenize
>>> my_corpus = 'The gerenuk fled frantically across the vast valley, whereas the giraffe merely turned indignantly and clumsily loped away from the valley into the nearby ravine.'  # my corpus
>>> tokens = wordpunct_tokenize(my_corpus)
>>> c = ConcordanceIndex2(tokens)
>>> c.create_concordance('valley')  # returns a list of lists, since words may occur more than once in a corpus
[['gerenuk', 'fled', 'frantically', 'across', 'the', 'vast', 'valley', ',', 'whereas', 'the', 'giraffe', 'merely', 'turned'], ['and', 'clumsily', 'loped', 'away', 'from', 'the', 'valley', 'into', 'the', 'nearby', 'ravine', '.']]

The create_concordance method I created above is based on nltk's ConcordanceIndex.print_concordance method, which works like this:

>>> c = ConcordanceIndex(tokens)
>>> c.print_concordance('valley')
Displaying 2 of 2 matches:
                                  valley , whereas the giraffe merely turn
 and clumsily loped away from the valley into the nearby ravine .
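If you need exactly k tokens on each side rather than an overall token width, one further option, sketched here with the same tokens and ConcordanceIndex as above, is ConcordanceIndex.offsets(), which returns the token positions of every occurrence of a word; the slicing is then essentially the approach from the question:

>>> k = 3
>>> [tokens[max(0, i - k):i] + tokens[i + 1:i + 1 + k] for i in c.offsets('valley')]
[['across', 'the', 'vast', ',', 'whereas', 'the'], ['away', 'from', 'the', 'into', 'the', 'nearby']]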

The simplest, most nltk-like way is to use nltk.ngrams():

import nltk

words = nltk.corpus.brown.words()
k = 5
for ngram in nltk.ngrams(words, 2 * k + 1, pad_left=True, pad_right=True,
                         left_pad_symbol=" ", right_pad_symbol=" "):
    if ngram[k].lower() == "settle":  # the target word sits at index k, the centre of the (2k+1)-gram
        print(" ".join(ngram))

pad_left and pad_right ensure that all words get looked at. This is important if you don't let your index span sentences (hence: lots of edge cases).
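To see concretely what the padding buys you (a small made-up example): without it, the first and last k words of the corpus can never occupy the centre position of any ngram, so occurrences of the target word near the corpus boundaries would be skipped. With n = 3:

>>> import nltk
>>> list(nltk.ngrams(["a", "b", "c"], 3))
[('a', 'b', 'c')]
>>> list(nltk.ngrams(["a", "b", "c"], 3, pad_left=True, pad_right=True,
...                  left_pad_symbol=" ", right_pad_symbol=" "))
[(' ', ' ', 'a'), (' ', 'a', 'b'), ('a', 'b', 'c'), ('b', 'c', ' '), ('c', ' ', ' ')]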

If you want punctuation to be ignored when counting the window size, you can strip it out before scanning:

import re
words = (w for w in nltk.corpus.brown.words() if re.search(r"\w", w))  # keep only tokens that contain a word character
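For completeness, a small sketch that combines the two snippets above to collect exactly what the question asked for, k words before and k words after each occurrence, with the target word itself excluded; "settle" and k = 5 are just the example values from above:

import re
import nltk

k = 5
target = "settle"
words = [w for w in nltk.corpus.brown.words() if re.search(r"\w", w)]  # drop pure-punctuation tokens
contexts = [
    list(ngram[:k]) + list(ngram[k + 1:])  # k words to the left + k words to the right
    for ngram in nltk.ngrams(words, 2 * k + 1, pad_left=True, pad_right=True,
                             left_pad_symbol=" ", right_pad_symbol=" ")
    if ngram[k].lower() == target
]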
