I'm building a web scraper for security news, but the links keep duplicating

I'm building a web scraper that scrapes several security-news sites so that I don't have to visit each site directly.

At the moment I'm stuck on duplicate URLs: the script does what I want, but the same links get printed more than once, which I don't want.

Here is my code:

import urllib2
from bs4 import BeautifulSoup

def HackerNews():
    hackerNews = ['https://www.darkreading.com/attacks-breaches.asp',
                  'https://www.darkreading.com/application-security.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp',
                  'https://www.darkreading.com/endpoint-security.asp',
                  'https://www.darkreading.com/IoT.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp']
    keywords = ["bitcoin", "bit", "BTC", "Bit", "Security", "Attack", "Breach",
                "Cyber", "Ransomware", "Botnet", "Worm", "Hacked", "Hack",
                "Hackers", "Flaw", "Risk", "Danger"]
    for link in hackerNews:
        request = urllib2.Request(link)
        request.add_header('User-Agent', 'Mozilla 5.0')
        websitecontent = urllib2.urlopen(request).read()
        soup = BeautifulSoup(websitecontent, 'html.parser')
        headers = soup.findAll('header', {'class': 'strong medium'})
        for h in headers:
            a = h.find("a")
            for keyword in keywords:
                if keyword in a["title"]:
                    print("Title: " + a["title"] + "\nLink: https://darkreading.com" + a["href"])

HackerNews()

Here is a sample of the output:

Title: Android Ransomware Kits on the Rise in the Dark Web
Link: https://darkreading.com/mobile/android-ransomware-kits-kit-on-the-rise-in-the-dark-web-/d/d-id/1330591

Title: Bitcoin Miner NiceHash Hacked, Possibly Losing $62 Million in Bitcoin
Link: https://darkreading.com/cloud/bitcoin-miner-nicehash-hacked-poss-lose-lose--------bitcoin/d/d-id/1330585

Title: Bitcoin Miner NiceHash Hacked, Possibly Losing $62 Million in Bitcoin
Link: https://darkreading.com/cloud/bitcoin-miner-nicehash-hacked-poss-lose-lose--------bitcoin/d/d-id/1330585

Title: Bitcoin Miner NiceHash Hacked, Possibly Losing $62 Million in Bitcoin
Link: https://darkreading.com/cloud/bitcoin-miner-nicehash-hacked-poss-lose-lose--------bitcoin/d/d-id/1330585

Title: Uber Used $100K Bug Bounty to Pay Off, Silence Florida Hacker: Report
Link: https://darkreading.com/attacks-breaches/uber-used-qus-------------------------------------------------------------------------------------------hacker-report/d/d-id/1330584


Well, instead of printing the links directly you could collect them all in a dict. Or, if you want to keep them as an ordered list, you can check whether an entry is already in the list before appending it:

import urllib2
from bs4 import BeautifulSoup

def HackerNews():
    hackerNews = ['https://www.darkreading.com/attacks-breaches.asp',
                  'https://www.darkreading.com/application-security.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp',
                  'https://www.darkreading.com/endpoint-security.asp',
                  'https://www.darkreading.com/IoT.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp']
    keywords = ["bitcoin", "bit", "BTC", "Bit", "Security", "Attack", "Breach",
                "Cyber", "Ransomware", "Botnet", "Worm", "Hacked", "Hack",
                "Hackers", "Flaw", "Risk", "Danger"]
    output = []
    for link in hackerNews:
        request = urllib2.Request(link)
        request.add_header('User-Agent', 'Mozilla 5.0')
        websitecontent = urllib2.urlopen(request).read()
        soup = BeautifulSoup(websitecontent, 'html.parser')
        headers = soup.findAll('header', {'class': 'strong medium'})
        for h in headers:
            a = h.find("a")
            for keyword in keywords:
                if keyword in a["title"]:
                    # Only keep a (title, href) pair the first time we see it.
                    if (a["title"], a["href"]) not in output:
                        output.append((a["title"], a["href"]))
    # Print once, after all pages have been scraped, so nothing is repeated.
    for title, href in output:
        print("Title: " + title + "\nLink: https://darkreading.com" + href)

HackerNews()

Untested, but this should get the point across :)
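
If the order of the results doesn't matter much, a set makes the "already seen?" check O(1) instead of a scan of the whole list on every append. Here is a minimal, self-contained sketch of that variant; the (title, href) sample pairs and d-id numbers below are made up for illustration, in the scraper they would come from the parsed pages:

# Minimal sketch: deduplicate (title, href) pairs while keeping first-seen order.
# The sample data is invented for illustration only.
articles = [
    ("NiceHash Hacked", "/cloud/nicehash-hacked/d/d-id/0000001"),
    ("NiceHash Hacked", "/cloud/nicehash-hacked/d/d-id/0000001"),   # duplicate
    ("Android Ransomware Kits", "/mobile/android-kits/d/d-id/0000002"),
]

seen = set()      # O(1) membership test
output = []       # preserves first-seen order
for entry in articles:
    if entry not in seen:
        seen.add(entry)
        output.append(entry)

for title, href in output:
    print("Title: " + title + "\nLink: https://darkreading.com" + href)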


EDIT: Working in Python 3:

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

def HackerNews():
    hackerNews = ['https://www.darkreading.com/attacks-breaches.asp',
                  'https://www.darkreading.com/application-security.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp',
                  'https://www.darkreading.com/endpoint-security.asp',
                  'https://www.darkreading.com/IoT.asp',
                  'https://www.darkreading.com/vulnerabilities-threats.asp']
    keywords = ["bitcoin", "bit", "BTC", "Bit", "Security", "Attack", "Breach",
                "Cyber", "Ransomware", "Botnet", "Worm", "Hacked", "Hack",
                "Hackers", "Flaw", "Risk", "Danger"]
    output = []
    for link in hackerNews:
        request = Request(link)
        request.add_header('User-Agent', 'Mozilla 5.0')
        websitecontent = urlopen(request).read()
        soup = BeautifulSoup(websitecontent, 'html.parser')
        headers = soup.findAll('header', {'class': 'strong medium'})
        for h in headers:
            a = h.find("a")
            for keyword in keywords:
                if keyword in a["title"]:
                    if (a["title"], a["href"]) not in output:
                        output.append((a["title"], a["href"]))
    # Print after the loop, so the accumulated list is not re-printed
    # once per scraped site.
    for title, href in output:
        print("Title: " + title + "\nLink: https://darkreading.com" + href)

HackerNews()
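
Two smaller things also contribute to the repetition. First, 'https://www.darkreading.com/vulnerabilities-threats.asp' appears twice in the hackerNews list, so that page is scraped twice. Second, the inner for keyword in keywords loop fires once per matching keyword, which is likely why the NiceHash headline is printed three times in the sample output: its title contains "Bit", "Hacked", and "Hack" as substrings. Wrapping the check in any() matches each article at most once. A minimal sketch reusing the same keywords list (the sample titles are made up for illustration):

# Minimal sketch: report each title at most once, no matter how many
# keywords it contains.
keywords = ["bitcoin", "bit", "BTC", "Bit", "Security", "Attack", "Breach",
            "Cyber", "Ransomware", "Botnet", "Worm", "Hacked", "Hack",
            "Hackers", "Flaw", "Risk", "Danger"]

titles = [
    "Bitcoin Miner NiceHash Hacked",   # matches "Bit", "Hacked", "Hack"
    "New Phishing Campaign Spotted",   # matches no keyword
]

for title in titles:
    # any() short-circuits at the first matching keyword,
    # so the title is reported once instead of once per keyword.
    if any(keyword in title for keyword in keywords):
        print("Matched: " + title)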
