scrapy-redis is a third-party, Redis-based distributed crawling framework. Used together with Scrapy, it lets multiple crawler instances share one request queue in Redis, so a single crawl can be distributed across machines.
Run the following commands to create a crawler project named scrapy03 (which will scrape novel chapters from the quanben.net site) and install scrapy-redis:

```
scrapy startproject scrapy03
scrapy genspider quanben quanben.net
pip3 install scrapy-redis
```
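For reference, these commands produce the standard Scrapy scaffolding, with the generated spider placed under the spiders/ package:

```
scrapy03/
├── scrapy.cfg
└── scrapy03/
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── quanben.py   # generated by "scrapy genspider"
```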
The key change is to base the spider on RedisCrawlSpider from scrapy-redis instead of Scrapy's CrawlSpider. A RedisCrawlSpider reads its start URLs from a Redis key, which is what allows several crawler processes to cooperate on the same crawl:
```python
# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider


class QuanbenSpider(RedisCrawlSpider):
    name = 'quanben'
    allowed_domains = ['quanben.net']
    # start_urls = ['https://www.quanben.net/8/8583/4296044.html']
    # Start URLs are read from this Redis key instead of start_urls
    redis_key = 'quanben:start_urls'

    rules = (
        Rule(LinkExtractor(allow=r'https://www.quanben.net/8/8583/\d+'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Chapter title
        title = response.xpath('//h1/text()').extract_first()
        # Chapter body text
        content = response.xpath('string(//div[@id="BookText"])').extract_first().strip()
        # yield passes title and content on to pipelines.py for further processing
        yield {
            'title': title,
            'content': content
        }
```

Next, configure the scrapy-redis components in settings.py, including the IP and port of the Redis server:
```python
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400,
}
REDIS_HOST = '192.168.1.100'
REDIS_PORT = 6379
LOG_LEVEL = 'DEBUG'
DOWNLOAD_DELAY = 1
```

1) Run start.py to launch the crawler. The spider starts up and then waits for a start URL to appear in Redis.
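The start.py launcher itself is not shown above; a minimal sketch, assuming it sits in the project root next to scrapy.cfg, could look like this:

```python
# start.py - convenience launcher, equivalent to running
# "scrapy crawl quanben" from the project root
from scrapy import cmdline

cmdline.execute(['scrapy', 'crawl', 'quanben'])
```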
```
...
2020-07-04 18:06:37 [scrapy.core.engine] INFO: Spider opened
2020-07-04 18:06:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-04 18:06:37 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
```

2) Push the start URL into Redis, and the crawler will begin running. In a Redis client, run lpush quanben:start_urls followed by the start URL:
```
D:\3.dev\soft\redis>redis-cli.exe -h 192.168.1.100 -p 6379
192.168.1.100:6379> lpush quanben:start_urls https://www.quanben.net/8/8583/4296044.html
(integer) 1
```

3) Checking the results: the scraped data has been successfully written into Redis.
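By default, scrapy-redis's RedisPipeline serializes each item to JSON and pushes it onto the list key "<spider name>:items", i.e. quanben:items here. A minimal sketch for inspecting the stored items with the redis-py client, assuming the same host and port as in settings.py:

```python
import json

import redis

# Connect to the same Redis instance configured in settings.py
r = redis.Redis(host='192.168.1.100', port=6379)

# RedisPipeline stores each scraped item as a JSON string in this list
print('items stored:', r.llen('quanben:items'))

raw = r.lpop('quanben:items')  # pop one scraped item
if raw:
    item = json.loads(raw)
    print(item['title'])
```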