Removing word suffixes in Python



I have a text with several words. I want to remove all derivational suffixes: for example, strip -ed and -ing and keep the base verb, so that validated or validating becomes validate. I found Python's strip method, which removes specific characters from the start or end of a string, but that is not what I want. Is there a library that can do something like this in Python?
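The closest thing I found is NLTK's PorterStemmer (a minimal sketch, assuming nltk is installed); as the output further below shows, it strips suffixes but not always back to a real word:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
print stemmer.stem('writing')    # write
print stemmer.stem('beautiful')  # beauti -- rule-based stripping, not a dictionary word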

I tried running the code from the proposed post, and I noticed strange trimming on several words. For example, I have the following text (emoji omitted):

 We goin all the way
 Think ive caught on to a really good song ! Im writing
 Lookin back on the stuff i did when i was lil makes me laughh
 I sneezed on the beat and the beat got sicka
 #nashnewvideo http://t.co/10cbUQswHR
 Homee
 So much respect for this man , truly amazing guy @edsheeran
 http://t.co/DGxvXpo1OM"
 What a day ..
 RT @edsheeran: Having some food with @ShawnMendes
 #VoiceSave christina
 Im gunna make the sign my signature pose
 You all are so beautiful .. soooo beautiful
 Thought that was a really awesome quote
 Beautiful things don't ask for attention"""

After running the following code (where I also removed non-Latin characters and URLs), the output is:

 we  goin  all  the  way 
 think  ive  caught  on  to  a  realli  good  song  im  write 
 lookin  back  on  the  stuff  i  did  when  i  wa  lil  make  me  laughh 
 i  sneez  on  the  beat  and  the  beat  got  sicka 
 nashnewvideo 
 home 
 so  much  respect  for  thi  man  truli  amaz  guy 
 what  a  day 
 rt  have  some  food  with 
 voicesav  christina 
 im  gunna  make  the  sign  my  signatur  pose 
 you  all  are  so  beauti  soooo  beauti 
 thought  that  wa  a  realli  awesom  quot 
 beauti  thing  dont  ask  for  attent 

For example, it trims beautiful to beauti and really to realli. My code is the following:

import csv
import re
import string
import nltk

stemmer = nltk.stem.porter.PorterStemmer()  # build the stemmer once, not per word
lines = []                                  # don't shadow the built-in name `list`

reader = csv.reader(f)
for row in reader:
    # strip @mentions and URLs (the backslash in \S+ was missing)
    text = re.sub(r"(?:@|https?://)\S+", "", row[2])
    # keep printable characters only; filter() returns a new string, so assign it
    text = filter(lambda x: x in string.printable, text)
    # drop punctuation, then replace non-word characters and digits with spaces
    out = text.translate(string.maketrans("", ""), string.punctuation)
    out = re.sub(r"[\W\d]", " ", out.strip())
    str1 = ""
    for verb in out.split():
        verb = stemmer.stem(verb.lower())
        str1 = str1 + " " + verb + " "
    lines.append(str1 + "\n")

You can use a lemmatizer instead of a stemmer. Here is an example using Python's NLTK:

from nltk.stem import WordNetLemmatizer
s = """
 You all are so beautiful soooo beautiful
 Thought that was a really awesome quote
 Beautiful things don't ask for attention
 """
wnl = WordNetLemmatizer()
print " ".join([wnl.lemmatize(i) for i in s.split()]) #You all are so beautiful soooo beautiful Thought that wa a really awesome quote Beautiful thing don't ask for attention

In some cases it may not do what you expect:

print wnl.lemmatize('going') #going
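
Supplying the part-of-speech tag usually helps here (a quick check; 'v' marks the word as a verb):

print wnl.lemmatize('going', 'v') #go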

You can then combine the two approaches: stemming and lemmatization.
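
A minimal sketch of one way to combine them (the fallback order here is an assumption, not something prescribed by NLTK): try the dictionary-based lemmatizer first, and only fall back to the rule-based stemmer when the word comes back unchanged.

from nltk.stem import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer

wnl = WordNetLemmatizer()
porter = PorterStemmer()

def normalize(word):
    # dictionary-based first: try the verb reading, then the default noun reading
    lemma = wnl.lemmatize(word, 'v')
    if lemma == word:
        lemma = wnl.lemmatize(word)
    # only if WordNet left the word untouched, apply the rule-based stemmer
    if lemma == word:
        lemma = porter.stem(word)
    return lemma

print normalize('going')  # go (from the lemmatizer, not the stemmer)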

Your question is a bit general, but if you have a static text that is already defined, the best way is to write your own stemmer, because the Porter and Lancaster stemmers follow their own affix-stripping rules, while the WordNet lemmatizer only removes an affix if the resulting word is in its dictionary.

You could write something like this:

import re

def stem(word):
    # strip the first suffix (in list order) that the word ends with
    for suffix in ['ing', 'ly', 'ed', 'ious', 'ies', 'ive', 'es', 's', 'ment']:
        if word.endswith(suffix):
            return word[:-len(suffix)]
    return word

def stemmer(phrase):
    # collect (stem, suffix) pairs for every word in the phrase;
    # the non-greedy (.*?) splits at the longest matching suffix
    pairs = []
    for word in phrase.split():
        pairs.extend(re.findall(r'^(.*?)(ing|ly|ed|ious|ies|ive|es|s|ment)$', word))
    return pairs

所以对于"processing processes",你将有:

>>> stemmer('processing processes')
[('process', 'ing'), ('process', 'es')]
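
Note that a suffix list this naive happily over-stems: stem('ties') returns 't', for example, so the list is worth tuning to the vocabulary of your text.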
