
Avoiding Duplicate Scraping with a Custom Scrapy Middleware in Python

Published 2024/2/25 16:57:34
This article walks through a worked example of a custom Scrapy spider middleware that avoids scraping the same item pages twice. It is shared here for your reference; the details are as follows.
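The core idea is simple: derive a stable fingerprint for each request and skip any request whose fingerprint has already been seen. Below is a minimal, standalone sketch of that idea in plain Python (not Scrapy itself — Scrapy's `request_fingerprint()` additionally canonicalizes the URL and hashes headers/body; this toy version hashes only method and URL):

```python
# Toy illustration of fingerprint-based request de-duplication.
# This is a simplified stand-in for Scrapy's request_fingerprint().
import hashlib

def fingerprint(method, url):
    """Return a stable hex digest identifying a (method, url) request."""
    return hashlib.sha1(("%s %s" % (method, url)).encode("utf-8")).hexdigest()

def dedupe(requests):
    """Yield only the first occurrence of each (method, url) pair."""
    seen = set()
    for method, url in requests:
        fp = fingerprint(method, url)
        if fp in seen:
            continue          # already visited: drop the duplicate
        seen.add(fp)
        yield method, url

reqs = [("GET", "http://example.com/item/1"),
        ("GET", "http://example.com/item/2"),
        ("GET", "http://example.com/item/1")]
print(list(dedupe(reqs)))  # the second request for item/1 is dropped
```

The middleware in the article applies the same pattern, but stores the seen fingerprints on the spider and lets you override the fingerprint with an explicit item id for robustness.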
```python
from scrapy import log
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.request import request_fingerprint

from myproject.items import MyItem


class IgnoreVisitedItems(object):
    """Middleware to ignore re-visiting item pages if they were already
    visited before.

    The requests to be filtered have a meta['filter_visited'] flag enabled,
    and optionally define an id to use for identifying them, which defaults
    to the request fingerprint, although you'd want to use the item id, if
    you already have it beforehand, to make it more robust.
    """

    FILTER_VISITED = 'filter_visited'
    VISITED_ID = 'visited_id'
    CONTEXT_KEY = 'visited_ids'

    def process_spider_output(self, response, result, spider):
        context = getattr(spider, 'context', {})
        visited_ids = context.setdefault(self.CONTEXT_KEY, {})
        ret = []
        for x in result:
            visited = False
            if isinstance(x, Request):
                if self.FILTER_VISITED in x.meta:
                    visit_id = self._visited_id(x)
                    if visit_id in visited_ids:
                        log.msg("Ignoring already visited: %s" % x.url,
                                level=log.INFO, spider=spider)
                        visited = True
            elif isinstance(x, BaseItem):
                visit_id = self._visited_id(response.request)
                if visit_id:
                    visited_ids[visit_id] = True
                    x['visit_id'] = visit_id
                    x['visit_status'] = 'new'
            if visited:
                ret.append(MyItem(visit_id=visit_id, visit_status='old'))
            else:
                ret.append(x)
        return ret

    def _visited_id(self, request):
        return request.meta.get(self.VISITED_ID) or request_fingerprint(request)
```
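To put a spider middleware like this to work, it would be registered in the project settings and the flag set on the requests that should be filtered. A sketch, assuming the class is saved in `myproject/middlewares.py` (the module path and the priority value 950 are illustrative, not taken from the article):

```python
# settings.py — module path and priority are assumptions for illustration
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.IgnoreVisitedItems': 950,
}

# In a spider callback, mark the requests that should be filtered on
# re-visits (and optionally pass an explicit 'visited_id'):
#
#     yield Request(url, callback=self.parse_item,
#                   meta={'filter_visited': True})
```

Note that `scrapy.log` and `BaseItem` come from older Scrapy releases; on modern Scrapy you would use the standard `logging` module and `scrapy.Item` instead.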
Hopefully this article is of some help for your Python programming.