How to extract Amazon product links using Python



I'm a Python beginner, and I just want to scrape product links from an Amazon page. For example, I want to scrape this page http://www.amazon.com/s/ref=sr_in_-2_p_4_18?me=A3MZ96G5C78IVQ&fst=as%3Aoff&rh=p_4%3AFunKo&ie=UTF8&qid=1477811368, and I'm using this code in Python:

from bs4 import BeautifulSoup
import requests
url = "http://www.amazon.com/s/ref=sr_in_-2_p_4_18?me=A3MZ96G5C78IVQ&fst=as%3Aoff&rh=p_4%3AFunKo&ie=UTF8&qid=1477811368"
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
file = open("parseddata.txt", "wb")
links = soup.find_all('a', {'class': 'a-link-normal s-access-detail-page a-text-normal'})
for link in links:
    print(link.get('href'))
    file.write(href + '\n')
file.close()

I only want the product title links as output. Can anyone tell me where I'm going wrong?

Add a user-agent to the request headers so you don't look like a bot.

from bs4 import BeautifulSoup
import requests
url = "http://www.amazon.com/s/ref=sr_in_-2_p_4_18?me=A3MZ96G5C78IVQ&fst=as%3Aoff&rh=p_4%3AFunKo&ie=UTF8&qid=1477811368"
# add header
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36'
}
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, "lxml")
file = open(r"parseddata.txt", "w")
links = soup.find_all('a', {'class': 'a-link-normal s-access-detail-page a-text-normal'})
for link in links:
    print(link.get('href'))
    file.write(link.get('href') + '\n')
file.close()
Results:
https://www.amazon.com/Funko-POP-Marvel-Dancing-Bobble/dp/B00N1EJXUU/ref=sr_1_1/160-5408618-6684940?m=A3MZ96G5C78IVQ&s=merchant-items&ie=UTF8&qid=1477822032&sr=1-1&refinements=p_4%3AFunKo
https://www.amazon.com/Funko-POP-Movies-Potter-Action/dp/B019JIA4IQ/ref=sr_1_2/160-5408618-6684940?m=A3MZ96G5C78IVQ&s=merchant-items&ie=UTF8&qid=1477822032&sr=1-2&refinements=p_4%3AFunKo
https://www.amazon.com/FunKo-2390-Funko-Darth-Maul/dp/B005F1QBMK/ref=sr_1_3/160-5408618-6684940?m=A3MZ96G5C78IVQ&s=merchant-items&ie=UTF8&qid=1477822032&sr=1-3&refinements=p_4%3AFunKo
........
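If you also want the product titles alongside the URLs, a small extension of the same approach should work. This is only a sketch: it assumes the same `a-link-normal s-access-detail-page a-text-normal` class is still used for the product title links, and the output file name `products.csv` is just an example; it writes title/URL pairs with the standard `csv` module instead of a plain text file.

from bs4 import BeautifulSoup
import requests
import csv

url = "http://www.amazon.com/s/ref=sr_in_-2_p_4_18?me=A3MZ96G5C78IVQ&fst=as%3Aoff&rh=p_4%3AFunKo&ie=UTF8&qid=1477811368"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36'
}

r = requests.get(url, headers=headers)
r.raise_for_status()  # fail fast if Amazon returns an error status

soup = BeautifulSoup(r.content, "lxml")
links = soup.find_all('a', {'class': 'a-link-normal s-access-detail-page a-text-normal'})

# write one "title,url" row per product link (file name is illustrative)
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "url"])
    for link in links:
        title = link.get_text(strip=True)  # visible text of the title link
        writer.writerow([title, link.get('href')])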
