Scrapy First Spider
A Spider is a class that defines the initial URLs to extract data from, how to follow pagination links, and how to extract and parse the fields defined in items.py. Scrapy provides different types of spiders, each of which serves a specific purpose.
Create a file named "first_spider.py" under the first_scrapy/spiders directory; this is where we tell Scrapy how to find the exact data we are looking for. To do that, a few attributes must be defined:
name: the unique name that identifies the spider;
allowed_domains: the base domains the spider is allowed to crawl;
start_urls: the list of URLs from which the spider starts crawling;
parse(): the method that extracts and parses the scraped data;
The following code shows what the spider looks like:
# -*- coding: utf-8 -*-
import scrapy


class firstSpider(scrapy.Spider):
    name = "first"
    allowed_domains = ["yiibai.com"]
    start_urls = [
        "http://www.yiibai.com/scrapy/scrapy_create_project.html",
        "http://www.yiibai.com/scrapy/scrapy_environment.html"
    ]

    def parse(self, response):
        # Save the body of each response to a local file named after the last
        # segment of the URL
        filename = response.url.split("/")[-1]
        print('Current URL => ' + filename)
        with open(filename, 'wb') as f:
            f.write(response.body)
The execution result is shown below -
D:\first_scrapy>scrapy crawl first
2016-10-03 10:40:10 [scrapy] INFO: Scrapy 1.1.2 started (bot: first_scrapy)
2016-10-03 10:40:10 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'first_scrapy.spiders', 'SPIDER_MODULES': ['first_scrapy.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'first_scrapy'}
2016-10-03 10:40:10 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-10-03 10:40:11 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-03 10:40:11 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-03 10:40:11 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-03 10:40:11 [scrapy] INFO: Spider opened
2016-10-03 10:40:11 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-03 10:40:11 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-03 10:40:11 [scrapy] DEBUG: Crawled (200) <GET http://www.yiibai.com/robots.txt> (referer: None)
2016-10-03 10:40:11 [scrapy] DEBUG: Crawled (200) <GET http://www.yiibai.com/scrapy/scrapy_create_project.html> (referer: None)
2016-10-03 10:40:11 [scrapy] DEBUG: Crawled (200) <GET http://www.yiibai.com/scrapy/scrapy_environment.html> (referer: None)
Current URL => scrapy_create_project.html
Current URL => scrapy_environment.html
2016-10-03 10:40:12 [scrapy] INFO: Closing spider (finished)
2016-10-03 10:40:12 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 709,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 15401,
'downloader/response_count': 3,
'downloader/response_status_count/200': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 10, 3, 2, 40, 12, 98000),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'response_received_count': 3,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2016, 10, 3, 2, 40, 11, 614000)}
2016-10-03 10:40:12 [scrapy] INFO: Spider closed (finished)
D:\first_scrapy>
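The spider above only saves each page's raw HTML to disk. The beginning of this chapter also mentioned extracting the fields defined in items.py and following links; the sketch below gives a rough idea of what that looks like. It is a minimal example, not part of the original tutorial: the FirstScrapyItem class and its title field are assumptions that must match whatever is actually defined in your items.py, and the CSS selectors and "next page" link are placeholders for the real page structure.

# -*- coding: utf-8 -*-
import scrapy

# Hypothetical item class; the real class and field names come from your own items.py.
from first_scrapy.items import FirstScrapyItem


class ItemSpider(scrapy.Spider):
    name = "first_items"
    allowed_domains = ["yiibai.com"]
    start_urls = ["http://www.yiibai.com/scrapy/scrapy_create_project.html"]

    def parse(self, response):
        # Fill an item from the page; the CSS selector is a placeholder
        # and depends on the actual HTML of the target page.
        item = FirstScrapyItem()
        item['title'] = response.css('title::text').extract_first()
        yield item

        # Follow a hypothetical "next page" link, if one exists,
        # and parse it with the same callback.
        next_page = response.css('a.next::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

Run with a command such as scrapy crawl first_items -o items.json to write the yielded items to a JSON file instead of printing file names.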