Scraping the first 100 job results from Indeed using BeautifulSoup in Python



I'm new to web scraping with Python. I want to scrape the first 100 job results from Indeed, but I am only able to scrape the first page of results, i.e. the top 10. Here is my code; can anyone help me solve this problem?

import urllib2
from bs4 import BeautifulSoup
import json
URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka"
soup = BeautifulSoup(urllib2.urlopen(URL).read(), 'html.parser')
results = soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'})
for x in results:
    company = x.find('span', attrs={"class": "company"})
    print 'company:', company.text.strip()
    job = x.find('a', attrs={'data-tn-element': "jobTitle"})
    print 'job:', job.text.strip()

Change the start value in the URL in steps of 10. You can do this in a loop, incrementing a variable and adding it into the URL:

https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start=0

https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start=10

For example:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
results = []
url = 'https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru,+Karnataka&start={}'
with requests.Session() as s:
    for page in range(0, 100, 10):  # start = 0, 10, ..., 90 -> the first 100 results
        res = s.get(url.format(page))
        soup = bs(res.content, 'lxml')
        titles = [item.text.strip() for item in soup.select('[data-tn-element=jobTitle]')]
        companies = [item.text.strip() for item in soup.select('.company')]
        data = list(zip(titles, companies))
        results.append(data)
newList = [item for sublist in results for item in sublist]
df = pd.DataFrame(newList)
df.to_json(r'C:\Users\User\Desktop\data.json')
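
As a small follow-up to the snippet above (the column labels here are my own choice, not part of the original answer), naming the columns makes the JSON keys meaningful, and a quick preview helps confirm the scrape worked:

df = pd.DataFrame(newList, columns=['job_title', 'company'])
print(df.head())  # quick sanity check before writing to disk
df.to_json('data.json', orient='records')  # one JSON object per row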

If you put your code inside a range loop:

from bs4 import BeautifulSoup
import json
import urllib2
URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start="
for i in range(0, 100, 10):  # start = 0, 10, ..., 90 -> the first 100 results
    soup = BeautifulSoup(urllib2.urlopen(URL+str(i)).read(), 'html.parser')
    results = soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'})
    for x in results:
        company = x.find('span', attrs={"class":"company"})
        print 'company:', company.text.strip()
        job = x.find('a', attrs={'data-tn-element': "jobTitle"})
        print 'job:', job.text.strip()  
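
Note that urllib2 exists only on Python 2; on Python 3 the same loop would use urllib.request instead. A minimal sketch of that translation, assuming the page markup is unchanged:

from urllib.request import urlopen
from bs4 import BeautifulSoup

URL = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start="
for i in range(0, 100, 10):  # start = 0, 10, ..., 90
    soup = BeautifulSoup(urlopen(URL + str(i)).read(), 'html.parser')
    for card in soup.find_all('div', attrs={'class': 'jobsearch-SerpJobCard'}):
        company = card.find('span', attrs={'class': 'company'})
        job = card.find('a', attrs={'data-tn-element': 'jobTitle'})
        print('company:', company.text.strip())
        print('job:', job.text.strip())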

Try the code below. It follows the Next link from page to page. If you only want the first 100 records (10 pages), replace while True: with while page_num <= 100: in the loop.

import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
headers = {'User-Agent':
           'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
page = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka"
company_name = []
job_title = []
page_num = 10
session = requests.Session()
while True:
    pageTree = session.get(page, headers=headers)
    pageSoup = BeautifulSoup(pageTree.content, 'html.parser')
    jobs= pageSoup.find_all("a", {"data-tn-element": "jobTitle"})
    Companys = pageSoup.find_all("span", {"class": "company"})
    for Company, job in zip(Companys, jobs):
        companyname=Company.text
        company_name.append(companyname.replace("\n", ""))
        job_title.append(job.text)
    if pageSoup.find("span", text=re.compile("Next")):
        page = "https://www.indeed.co.in/jobs?q=software+developer&l=Bengaluru%2C+Karnataka&start={}".format(page_num)
        page_num +=10
    else:
        break
print(company_name)
print(job_title)
df = pd.DataFrame({"company_name":company_name,"job_title":job_title})
print(df.head(1000))
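
If you want the results on disk rather than only printed, a short follow-up along these lines would work (the file name and the de-duplication step are my own additions, not part of the answer):

df = df.drop_duplicates().reset_index(drop=True)  # sponsored posts can repeat across pages
df.to_csv('indeed_jobs.csv', index=False)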
