Match trigrams, bigrams and unigrams against a text; if a unigram or bigram is a substring of an already matched trigram, pass over it; Python



main_text is a list of lists of part-of-speech tagged sentences:

 main_text = [[('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN'), ('likes', 'VB'),
               ('tea', 'NN'), ('and', 'CC'), ('hats', 'NN')],
              [('the', 'DT'), ('red', 'JJ'), ('queen', 'NN'), ('hates', 'VB'), ('alice', 'NN')]]

ngrams_to_match is a list of lists of part-of-speech tagged trigrams:

 ngrams_to_match = [[('likes','VB'),('tea','NN'), ('and','CC')],
                    [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')],
                    [('hates', 'DT'), ('alice', 'JJ'), ('but', 'CC') ],
                    [('and', 'CC'), ('the', 'DT'), ('rabbit', 'NN')]]

(a) For each sentence in main_text, first check whether a full trigram from ngrams_to_match matches. If it does, return the matching trigram and the sentence.

(b) Then check whether the first tuple (a unigram) or the first two tuples (a bigram) of each trigram match in main_text.

(c) If a unigram or bigram is a substring of an already matched trigram, return nothing. Otherwise, return the bigram or unigram match and the sentence (a small sketch of how these shorter grams are derived follows below).
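To pin down what (b) and (c) imply, here is a tiny sketch of how the shorter grams are derived, under the assumption that "substring" means the leading tuples of a search trigram (the prefixes helper is only for illustration, it is not part of the problem):

 def prefixes(trigram):
     # the bigram and unigram a search trigram implies, longest first
     return [tuple(trigram[:2]), tuple(trigram[:1])]

 matched_trigrams = {(('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN'))}  # found by condition (a)

 for candidate in prefixes([('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')]):
     # condition (c): drop a candidate that starts an already matched trigram
     if any(candidate == trig[:len(candidate)] for trig in matched_trigrams):
         continue
     print candidate  # nothing prints here; both candidates are covered by the trigram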

Here is the desired output:

 trigram_match = [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')], sentence[0]
 trigram_match = [('likes','VB'),('tea','NN'), ('and','CC')], sentence[0]
 bigram_match = [('hates', 'DT'), ('alice', 'JJ')], sentence[1]

Condition (b) gives the bigram_match.

The wrong output would be:

 trigram_match = [('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN')], sentence[0]
 bigram_match =  [('the', 'DT'), ('mad', 'JJ')] #*bad by condition c
 unigram_match = [('the', 'DT')] #*bad by condition c
 trigram_match = [('likes','VB'),('tea','NN'), ('and','CC')], sentence[0]
 bigram_match = [('likes','VB'),('tea','NN')] #*bad by condition c
 unigram_match = [('likes', 'VB')] #*bad by condition c

And so on.

The very ugly code below works for this simple example, but I am wondering whether anyone has a leaner way to do it.

 matched_trigrams = []   # every trigram match so far, for the substring check in (c)
 matched_bigrams = []
 for ngram in ngrams_to_match:
     for sentence in main_text:
         for tup in sentence:
             # we can't be sure that our part-of-speech tagger will
             # tag an ngram word and a main_text word the same way, so
             # we match the word in the tuple, not the whole tuple
             if ngram[0][0] != tup[0]:                 # if the first ngram word doesn't match...
                 continue                              # ...this position can't start a match
             unigram_index = sentence.index(tup)       # index of the first matching word
             unigram = sentence[unigram_index][0]      # save it as a unigram
             try:
                 if (sentence[unigram_index + 1][0] == ngram[1][0] and
                         sentence[unigram_index + 2][0] == ngram[2][0]):    # match a trigram
                     trigram = (ngram[0][0], ngram[1][0], ngram[2][0])      # save the match
                     matched_trigrams.append(trigram)
                     print 'heres the trigram-->', sentence, '\n', 'trigram--->', trigram
                     continue                          # condition (c): don't report its bigram/unigram
             except IndexError:
                 pass
             try:
                 if sentence[unigram_index + 1][0] == ngram[1][0]:          # match a bigram
                     bigram = (ngram[0][0], ngram[1][0])                    # save the match
                     if not any(bigram[0] in tri and bigram[1] in tri for tri in matched_trigrams):
                         matched_bigrams.append(bigram)
                         print 'heres a sentence-->', sentence, '\n', 'bigram--->', bigram
                     continue                          # the unigram is covered either way
             except IndexError:
                 pass
             if not any(unigram in gram for gram in matched_trigrams + matched_bigrams):
                 print unigram                         # unigram match that stands on its own

I had a go at this with generators. I found some gaps in your description, so I made a few assumptions.

"If a unigram or bigram is a substring of an already matched trigram, return nothing." - It is a little ambiguous whether each gram here refers to the search element or to the matched element. It is starting to make me dislike the term n-gram (which I had never heard of before last week).

Which search elements get excluded is adjusted by whatever has been added to the found set.

# assumptions:
# - [('hates','DT'),('alice','JJ'),('but','CC')] is typoed and should be:
#   [('hates','VB'),('alice','NN'),('but','CC')]
# - matches can't overlap, matched elements are excluded from further checking
# - bigrams precede unigrams
main_text = [
  [('the','DT'),('mad','JJ'),('hatter','NN'),('likes','VB'),('tea','NN'),('and','CC'),('hats','NN')],
  [('the','DT'),('red','JJ'),('queen','NN'),('hates','VB'),('alice','NN')]
]
ngrams_to_match = [
  [('likes','VB'),('tea','NN'),('and','CC')],
  [('the','DT'),('mad','JJ'),('hatter','NN')],
  [('hates','VB'),('alice','NN'),('but','CC')],
  [('and','CC'),('the','DT'),('rabbit','NN')]
]
def slice_generator(sentence,size=3):
  """
  Generate slices through the sentence in decreasing sized windows. If True is sent to the
  generator, the elements from the previous window will be excluded from future slices.
  """
  sent = list(sentence)
  for c in range(size,0,-1):
    for i in range(len(sent)):
      window = tuple(sent[i:i+c])
      if all(x is not None for x in window) and len(window) == c:
        used = yield window
        if used:
          sent[i:i+c] = [None] * c  # blank out exactly the c elements that were matched
def gram_search(text,matches):
  # every search trigram plus its bigram and unigram prefixes
  tri_bi_uni = (set(tuple(x) for x in matches)
                | set(tuple(x[:2]) for x in matches)
                | set(tuple(x[:1]) for x in matches))
  found = set()
  for i, sentence in enumerate(text):
    gen = slice_generator(sentence)
    send = None
    try:
      while True:
        row = gen.send(send)
        if row in tri_bi_uni - found:
          send = True
          found |= set(tuple(row[:x]) for x in range(1,len(row)))
          print "%s_gram_match, sentence[%s] = %r" % (len(row),i,row)
        else:
          send = False
    except StopIteration:
      pass
gram_search(main_text,ngrams_to_match)
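As an aside, the send() handshake can be seen in isolation on a made-up three-word input (not from the data above): sending True back tells the generator to blank out the window it just yielded, so later, shorter windows cannot reuse those elements.

gen = slice_generator([('a', 'DT'), ('b', 'NN'), ('c', 'VB')], size=2)
print next(gen)       # (('a', 'DT'), ('b', 'NN'))  -- the first 2-window
print gen.send(True)  # (('c', 'VB'),)  -- 'a' and 'b' are excluded, only the final 1-window remains
# sending False instead would leave them in place, and the next slice would be (('b', 'NN'), ('c', 'VB'))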

Running gram_search(main_text, ngrams_to_match) yields:

3_gram_match, sentence[0] = (('the', 'DT'), ('mad', 'JJ'), ('hatter', 'NN'))
3_gram_match, sentence[0] = (('likes', 'VB'), ('tea', 'NN'), ('and', 'CC'))
2_gram_match, sentence[1] = (('hates', 'VB'), ('alice', 'NN'))
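Note that no 1_gram matches are printed: 'likes', 'and' and 'hates' were blanked out of their sentences when the windows containing them matched, and the ('the', 'DT') at the start of sentence[1] is skipped because its parent trigram already matched in sentence[0] and the found set is shared across sentences.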
