Crawling LinkedIn while authenticated with Scrapy
So I've read through "Crawling with an authenticated session" in Scrapy and I'm getting hung up. I'm 99% sure my parse code is correct, I just don't believe the login is redirecting and succeeding.
I'm also having an issue with check_login_response() — I'm not sure which page it is actually checking, though testing for "Sign Out" would make sense.
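One minimal way to see which page check_login_response() is actually inspecting (a debugging sketch only, using the same method names as the spider below) is to log response.url before doing the "Sign Out" test:

    def check_login_response(self, response):
        # Log the URL the login POST redirected to, so you can tell which
        # page is actually being checked for the "Sign Out" link.
        self.log("check_login_response() is inspecting: %s" % response.url)
        if "Sign Out" in response.body:
            self.log("Successfully logged in.")
            return self.initialized()
        else:
            self.log("Login failed.")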
====== UPDATED ======
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from linkedpy.items import LinkedPyItem


class LinkedPySpider(InitSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
    start_urls = ["http://www.linkedin.com/csearch/results?type=companies&keywords=&pplSearchOrigin=GLHD&pageKey=member-home&search=Search#facets=pplSearchOrigin%3DFCTD%26keywords%3D%26search%3DSubmit%26facet_CS%3DC%26facet_I%3D80%26openFacets%3DJO%252CN%252CCS%252CNFR%252CF%252CCCR%252CI"]

    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
                    formdata={'session_key': '[email protected]',
                              'session_password': 'somepassword'},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..
            return self.initialized()  # ****THIS LINE FIXED THE LAST PROBLEM*****
        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse(self, response):
        self.log("\n\n\nWe got data!\n\n\n")
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ol[@id=\'result-set\']/li')
        items = []
        for site in sites:
            item = LinkedPyItem()
            item['title'] = site.select('h2/a/text()').extract()
            item['link'] = site.select('h2/a/@href').extract()
            items.append(item)
        return items
Adding 'return' in front of self.initialized() solved the problem.
Thanks again!
-Mark
class LinkedPySpider(BaseSpider):
should be:
class LinkedPySpider(InitSpider):
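The base class matters because init_request() is only invoked when the spider inherits from InitSpider; with BaseSpider the login request is never made. A minimal sketch of the relevant part, reusing the names from the spider above (the shortened start_urls entry is illustrative only):

    from scrapy.contrib.spiders.init import InitSpider
    from scrapy.http import Request

    class LinkedPySpider(InitSpider):
        name = 'LinkedPy'
        allowed_domains = ['linkedin.com']
        login_page = 'https://www.linkedin.com/uas/login'
        start_urls = ['http://www.linkedin.com/csearch/results']

        def init_request(self):
            # InitSpider calls this before fetching start_urls;
            # a plain BaseSpider never does, so the login step is skipped.
            return Request(url=self.login_page, callback=self.login)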
You also shouldn't override the function I mentioned in my answer.
If you don't understand how to define the rules for extracting links, just read through the documentation carefully:
http://readthedocs.org/docs/scrapy/en/latest/topics/spiders.html#scrapy.contrib.spiders.Rule
http://readthedocs.org/docs/scrapy/en/latest/topics/link-extractors.html#topics-link-extractors
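As a rough illustration only (CompanySpider, the URL pattern, and parse_item are hypothetical names, not taken from the question), a crawling rule with a link extractor looks like this:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

    class CompanySpider(CrawlSpider):
        name = 'companies'
        allowed_domains = ['linkedin.com']
        start_urls = ['http://www.linkedin.com/csearch/results']

        rules = (
            # Follow links whose URL matches the pattern and hand each
            # matching page to parse_item().
            Rule(SgmlLinkExtractor(allow=(r'/companies/',)),
                 callback='parse_item', follow=True),
        )

        def parse_item(self, response):
            self.log("Crawled %s" % response.url)

With a CrawlSpider, the built-in parse() callback drives the rules, so extraction logic belongs in a separately named callback such as parse_item().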