How to remove hashtags, @user mentions, and links from a tweet using regular expressions



I need to preprocess tweets in Python. Now I am wondering: what regular expressions will remove hashtags, @user mentions, and links from a tweet, respectively?

For example,

  1. Original tweet: @peter I really love that shirt at #Macy. http://bet.ly//WjdiW4
    • Processed tweet: I really love that shirt at Macy
  2. Original tweet: @shawn Titanic tragedy could have been prevented Economic Times: Telegraph.co.ukTitanic tragedy could have been preve... http://bet.ly/tuN2wx
    • Processed tweet: Titanic tragedy could have been prevented Economic Times Telegraph co ukTitanic tragedy could have been preve
  3. Original tweet: I am at Starbucks http://4sh.com/samqUI (7419 3rd ave, at 75th, Brooklyn)
    • Processed tweet: I am at Starbucks 7419 3rd ave at 75th Brooklyn

I only need the meaningful words from each tweet; I don't need usernames, links, or punctuation.

What follows is an approximation. Unfortunately, there is no fully correct way to do this with regular expressions alone. The regex below strips any URL (not just http), any punctuation, usernames, and any other non-alphanumeric characters. It also separates words with a single space. If you want to actually parse tweets, you need a more intelligent system; given that there is no standard tweet feed format, something like a predictive self-learning algorithm.

Here is my suggestion:

' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())

and here are the results for your examples:

>>> x="@peter I really love that shirt at #Macy. http://bit.ly//WjdiW4"
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'I really love that shirt at Macy'
>>> x="@shawn Titanic tragedy could have been prevented Economic Times: Telegraph.co.ukTitanic tragedy could have been preve... http://bit.ly/tuN2wx"
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'Titanic tragedy could have been prevented Economic Times Telegraph co ukTitanic tragedy could have been preve'
>>> x="I am at Starbucks http://4sq.com/samqUI (7419 3rd ave, at 75th, Brooklyn) "
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'I am at Starbucks 7419 3rd ave at 75th Brooklyn'
>>> 

Here are a few imperfect examples:

>>> x="I c RT @iamFink: @SamanthaSpice that's my excited face and my regular face. The expression never changes."
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'I c RT that s my excited face and my regular face The expression never changes'
>>> x="RT @AstrologyForYou: #Gemini recharges through regular contact with people of like mind, and social involvement that allows expression of their ideas"
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'RT Gemini recharges through regular contact with people of like mind and social involvement that allows expression of their ideas'
>>> # Though after you add # to the regex expression filter, results become a bit better
>>> ' '.join(re.sub(r"([@#][A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'RT recharges through regular contact with people of like mind and social involvement that allows expression of their ideas'
>>> x="New comment by diego.bosca: Re: Re: wrong regular expression? http://t.co/4KOb94ua"
>>> ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"," ",x).split())
'New comment by diego bosca Re Re wrong regular expression'
>>> #See how miserably it performed?
>>> 
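For reuse, the one-liner above can be wrapped in a small helper. A minimal sketch (the name `clean_tweet` is my own, not from the answer):

```python
import re

def clean_tweet(x):
    # Strip @mentions and URLs, replace every other non-alphanumeric
    # character with a space, then collapse runs of whitespace.
    return ' '.join(
        re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)", " ", x).split()
    )

print(clean_tweet("@peter I really love that shirt at #Macy. http://bit.ly//WjdiW4"))
# I really love that shirt at Macy
```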

A bit late, but this solution avoids punctuation mishaps such as #hashtag1,#hashtag2 (with no space in between), and the implementation is very simple.

import re, string

def strip_links(text):
    # Replace every http(s) link in the text with ', '
    link_regex = re.compile(r'((https?):((//)|(\\))+([\w\d:#@%/;$()~_?\+\-=\\\.&](#!)?)*)', re.DOTALL)
    links = re.findall(link_regex, text)
    for link in links:
        text = text.replace(link[0], ', ')
    return text

def strip_all_entities(text):
    entity_prefixes = ['@','#']
    for separator in  string.punctuation:
        if separator not in entity_prefixes :
            text = text.replace(separator,' ')
    words = []
    for word in text.split():
        word = word.strip()
        if word:
            if word[0] not in entity_prefixes:
                words.append(word)
    return ' '.join(words)

tests = [
    "@peter I really love that shirt at #Macy. http://bet.ly//WjdiW4",
    "@shawn Titanic tragedy could have been prevented Economic Times: Telegraph.co.ukTitanic tragedy could have been preve... http://bet.ly/tuN2wx",
    "I am at Starbucks http://4sh.com/samqUI (7419 3rd ave, at 75th, Brooklyn)",
]
for t in tests:
    print(strip_all_entities(strip_links(t)))

# I really love that shirt at
# Titanic tragedy could have been prevented Economic Times Telegraph co ukTitanic tragedy could have been preve
# I am at Starbucks 7419 3rd ave at 75th Brooklyn

This will work on your examples. If you have links inside your tweets, it will fail, miserably:

result = re.sub(r"(?:@\S*|#\S*|http(?=.*://)\S*)", "", subject)

EDIT:

Will also work on links inside the tweet, as long as they are separated by spaces.
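A quick sketch of that substitution in use (the sample `subject` string is taken from the question):

```python
import re

subject = "@peter I really love that shirt at #Macy. http://bit.ly//WjdiW4"
# Remove @mentions, #hashtags (including their text), and http(s) links in one pass.
result = re.sub(r"(?:@\S*|#\S*|http(?=.*://)\S*)", "", subject)
print(' '.join(result.split()))
# I really love that shirt at
```

Note that, unlike the first answer, this pattern removes the hashtag text itself ("Macy") rather than just the "#".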

Use the API. Why reinvent the wheel?

I know it's not a regex, but:

>>>
>>> from urllib.parse import urlparse  # `import urlparse` on Python 2
>>> string = '@peter I really love that shirt at #Macy. http://bit.ly//WjdiW#'
>>> new_string = ''
>>> for i in string.split():
...     s, n, p, pa, q, f = urlparse(i)
...     if s and n:
...         pass
...     elif i[:1] == '@':
...         pass
...     elif i[:1] == '#':
...         new_string = new_string.strip() + ' ' + i[1:]
...     else:
...         new_string = new_string.strip() + ' ' + i
...
>>> new_string
'I really love that shirt at Macy.'
>>>
