I want to modify the default tokenizer so that it splits URLs into separate words. Here is what I have so far:
import spacy
nlp = spacy.blank('en')
infixes = nlp.Defaults.infixes + [r'\.']
infix_regex = spacy.util.compile_infix_regex(infixes)
nlp.tokenizer.infix_finditer = infix_regex.finditer
print(list(nlp('www.internet.com')))
# ['www.internet.com']
# want it to be ['www', '.', 'internet', '.', 'com']
I have been looking through the tokenizer usage examples and the source code, but I can't work out this particular case.
You are not seeing the result you want because the URL is caught first by URL_MATCH (which has higher priority):
import spacy
nlp = spacy.blank('en')
txt = 'Check this out www.internet.com'
doc = nlp(txt)
nlp.tokenizer.explain(txt)
[('TOKEN', 'Check'),
('TOKEN', 'this'),
('TOKEN', 'out'),
('URL_MATCH', 'www.internet.com')]
One possible solution is to disable url_match and add an escaped period to the infix patterns:
nlp.tokenizer.url_match = None
infixes = nlp.Defaults.infixes + [r'\.']
infix_regex = spacy.util.compile_infix_regex(infixes)
nlp.tokenizer.infix_finditer = infix_regex.finditer
doc = nlp(txt)
list(doc)
[Check, this, out, www, ., internet, ., com]
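To see that it is now the infix rule firing rather than URL_MATCH, you can run tokenizer.explain again after applying the change. A minimal sketch (assuming spaCy v3.x, where url_match is a writable attribute of the tokenizer):

```python
import spacy

nlp = spacy.blank('en')

# Disable the URL matcher so the infix rules get a chance to run
nlp.tokenizer.url_match = None

# Add an escaped period to the default infix patterns
infixes = nlp.Defaults.infixes + [r'\.']
nlp.tokenizer.infix_finditer = spacy.util.compile_infix_regex(infixes).finditer

# The periods should now be reported as INFIX matches instead of
# the whole string being a single URL_MATCH
for rule, text in nlp.tokenizer.explain('www.internet.com'):
    print(rule, text)
```

Note that setting url_match to None disables URL detection globally, so every URL-like string in your texts will be split this way, not just this one.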