My question is about "module 'textacy' has no attribute 'Doc'"

I am getting the error "module 'textacy' has no attribute 'Doc'". I am trying to extract verb phrases with spaCy, but textacy does not seem to have this attribute. Please help me extract verb phrases or adjective phrases using spaCy. I want to do full shallow parsing.

def extract_named_nouns(row_series):
    """Combine nouns and non-numerical entities. 
    Keyword arguments:
    row_series -- a Pandas Series object
    """
    ents = set()
    idxs = set()
    # remove duplicates and merge two lists together
    for noun_tuple in row_series['nouns']:
        for named_ents_tuple in row_series['named_ents']:
            if noun_tuple[1] == named_ents_tuple[1]: 
                idxs.add(noun_tuple[1])
                ents.add(named_ents_tuple)
        if noun_tuple[1] not in idxs:
            ents.add(noun_tuple)
    return sorted(list(ents), key=lambda x: x[1])

def add_named_nouns(df):
    """Create new column in data frame with nouns and named ents.
    Keyword arguments:
    df -- a dataframe object
    """
    df['named_nouns'] = df.apply(extract_named_nouns, axis=1)
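Because `extract_named_nouns` only indexes `row_series` by key, the merge logic can be exercised without pandas by passing a plain dict. A minimal sketch; the sample `(text, index)` tuples below are hypothetical:

```python
def extract_named_nouns(row_series):
    """Combine nouns and non-numerical entities (same logic as above)."""
    ents = set()
    idxs = set()
    # remove duplicates and merge the two lists together
    for noun_tuple in row_series['nouns']:
        for named_ents_tuple in row_series['named_ents']:
            if noun_tuple[1] == named_ents_tuple[1]:
                idxs.add(noun_tuple[1])
                ents.add(named_ents_tuple)
        if noun_tuple[1] not in idxs:
            ents.add(noun_tuple)
    return sorted(ents, key=lambda x: x[1])

# Hypothetical row: lists of (text, token index) tuples
row = {
    'nouns': [('author', 1), ('book', 6)],
    'named_ents': [('Author', 1)],
}
# At index 1 the named entity replaces the plain noun;
# index 6 has no named entity, so the noun is kept.
print(extract_named_nouns(row))
```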

from __future__ import unicode_literals
import spacy
import en_core_web_sm
import textacy
from textacy import io

# use spaCy for NLP
nlp = en_core_web_sm.load()
sentence = 'The author is writing a new book.'
pattern = r'<VERB>?<ADV>*<VERB>+'
doc = textacy.Doc.load(sentence, metadata=metadata, lang='en_core_web_sm')  # raises AttributeError
# doc = textacy.corpus.Corpus(sentence, lang='en_core_web_sm')
lists = textacy.extract.pos_regex_matches(doc, pattern)
for list in lists:
    print(list.text)

模块"文本"没有属性"文档">

You are trying to follow an outdated example. See the current quickstart: https://chartbeat-labs.github.io/textacy/getting_started/quickstart.html#make-a-doc

It should be as simple as:

doc = textacy.make_spacy_doc("The author is writing a new book.", lang='en_core_web_sm')

You might also consider using spaCy directly (without textacy) and its built-in Matcher (https://spacy.io/usage/rule-based-matching).

spacy_lang = textacy.load_spacy_lang("en_core_web_sm")
docx_textacy = spacy_lang(sentence)
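The Matcher route suggested above can express the `<VERB>?<ADV>*<VERB>+` regex as token patterns. A minimal sketch assuming spaCy v3; the POS tags are hand-annotated here only so the example runs without a downloaded model — with `en_core_web_sm` installed you would simply call `nlp(sentence)` instead of building the `Doc` by hand:

```python
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Doc

nlp = spacy.blank("en")

# Hand-annotated tokens and POS tags (normally produced by a trained pipeline)
words = ["The", "author", "is", "writing", "a", "new", "book", "."]
pos = ["DET", "NOUN", "AUX", "VERB", "DET", "ADJ", "NOUN", "PUNCT"]
doc = Doc(nlp.vocab, words=words, pos=pos)

matcher = Matcher(nlp.vocab)
# <VERB>?<ADV>*<VERB>+ translated into Matcher token patterns
pattern = [
    {"POS": "VERB", "OP": "?"},
    {"POS": "ADV", "OP": "*"},
    {"POS": "VERB", "OP": "+"},
]
matcher.add("VERB_PHRASE", [pattern])

spans = [doc[start:end] for _, start, end in matcher(doc)]
for span in spans:
    print(span.text)
```

Note that modern models tag "is" as `AUX` rather than `VERB`, so this pattern matches only "writing"; add an `{"POS": "AUX", "OP": "*"}` element if you also want auxiliaries in the span.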
