Common Crawl WARC request returns 403



I am trying to fetch some WARC files from the Common Crawl archive, but I can't seem to get a successful request through to the server. A minimal Python example that reproduces the error is below. I tried adding a User-Agent to the request headers, but that did not help. Any ideas on how to proceed?

import io
import time
import justext        # justext >= 2.2.0
import argparse
import requests       # requests >= 2.23.0
import pandas as pd   # pandas >= 1.0.3
from tqdm import tqdm
from warcio.archiveiterator import ArchiveIterator  # warcio >= 1.7.3

def debug():
    common_crawl_data = {
        "filename": "crawl-data/CC-MAIN-2016-07/segments/1454702018134.95/warc/CC-MAIN-20160205195338-00121-ip-10-236-182-209.ec2.internal.warc.gz",
        "offset": 244189209,
        "length": 989,
    }
    offset, length = int(common_crawl_data['offset']), int(common_crawl_data['length'])
    offset_end = offset + length - 1

    # Fetch just the bytes of this one WARC record via an HTTP Range request
    prefix = 'https://commoncrawl.s3.amazonaws.com/'
    resp = requests.get(prefix + common_crawl_data['filename'],
                        headers={'Range': 'bytes={}-{}'.format(offset, offset_end)})

    raw_data = io.BytesIO(resp.content)
    uri = None
    page = None

    # Parse the gzipped record and decode its payload
    for record in ArchiveIterator(raw_data, arc2warc=True):
        uri = record.rec_headers.get_header('WARC-Target-URI')
        R = record.content_stream().read()
        try:
            page = R.strip().decode('utf-8')
        except UnicodeDecodeError:
            page = R.strip().decode('latin1')

    print(uri, page)
    return uri, page

debug()
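
For reference, the failure shows up in the HTTP response itself before any WARC parsing happens; a quick sketch of that check, using the same byte-range request as above:

import requests

# Same range request as in debug(), just inspecting the raw response.
prefix = 'https://commoncrawl.s3.amazonaws.com/'
filename = 'crawl-data/CC-MAIN-2016-07/segments/1454702018134.95/warc/CC-MAIN-20160205195338-00121-ip-10-236-182-209.ec2.internal.warc.gz'
resp = requests.get(prefix + filename, headers={'Range': 'bytes=244189209-244190197'})
print(resp.status_code)   # 403 rather than the expected 206 (Partial Content)
print(resp.text[:300])    # S3 returns an XML "Access Denied" error document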

See this Common Crawl blog post about recent changes that cause some unauthenticated requests to return a 403.
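
Based on that post, plain requests to the commoncrawl.s3.amazonaws.com bucket URL are no longer served without authentication; unauthenticated HTTP(S) downloads go through https://data.commoncrawl.org/ instead. A minimal sketch of the change to the example above, assuming only the URL prefix needs to be swapped:

import io
import requests
from warcio.archiveiterator import ArchiveIterator

# Same record as in the question, fetched via the data.commoncrawl.org endpoint
prefix = 'https://data.commoncrawl.org/'
filename = 'crawl-data/CC-MAIN-2016-07/segments/1454702018134.95/warc/CC-MAIN-20160205195338-00121-ip-10-236-182-209.ec2.internal.warc.gz'
offset, length = 244189209, 989

resp = requests.get(prefix + filename,
                    headers={'Range': 'bytes={}-{}'.format(offset, offset + length - 1)})
resp.raise_for_status()  # should now be 206 Partial Content rather than 403

for record in ArchiveIterator(io.BytesIO(resp.content)):
    print(record.rec_headers.get_header('WARC-Target-URI'))

The byte-range logic is unchanged; only the host differs.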
