Monday, August 10, 2015

Scrapy recursive website crawl after login

I have coded a spider to crawl a website after login:

import scrapy
from scrapy.http import FormRequest
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class LoginSpider(scrapy.Spider):
    name = "login"
    allowed_domains = ["mydomain.com"]
    start_urls = ['http://ift.tt/1J5SlcI']

    rules = [Rule(LinkExtractor(allow=('//a[contains(text(), "Next")]'), restrict_xpaths=('//a[contains(text(), "Previous")]',)), 'parse_info')]

    def parse(self, response):
        return [FormRequest.from_response(response,
            formdata={"username":"myemail","password":"mypassword"},
            callback=self.parse_info, dont_filter=True)]

    def parse_info(self, response):
        items = []
        for tr in range(1, 5):
            xpath = "/html/body/table/tbody/tr[%s]/td[1]/text()" % tr
            td1 = Selector(response=response).xpath(xpath).extract()
            item = MyItem()  # item class defined in the project's items.py
            item['col1'] = td1
            items.append(item)

        return items

And the HTML:

<html>
   <table>
       <tbody>
          <tr><td>Row 1</td></tr>
          <tr><td>Row 2</td></tr>
       </tbody>
   </table>
   <div><a href="?page=2">Next</a></div>
   <div><a href="?page=2">Previous</a></div>
</html>
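As an aside, an XPath such as `//a[contains(text(), "Next")]/@href` matches the pagination link in markup like this. Just to illustrate the extraction itself, here is a minimal standard-library sketch (no Scrapy required; the class name is made up for the example):

```python
from html.parser import HTMLParser

class NextLinkFinder(HTMLParser):
    """Records the href of the first <a> whose text is exactly 'Next'."""

    def __init__(self):
        super().__init__()
        self._href = None       # href of the <a> tag currently open, if any
        self.next_href = None   # result: href of the "Next" link

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None and data.strip() == "Next" and self.next_href is None:
            self.next_href = self._href

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

sample = '<div><a href="?page=2">Next</a></div><div><a href="?page=2">Previous</a></div>'
finder = NextLinkFinder()
finder.feed(sample)
print(finder.next_href)  # ?page=2
```

In the spider itself you would of course use the response's selector rather than a hand-rolled parser; this only shows which link the crawl needs to follow.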

So the spider automatically logs the user in from the login page and redirects to the home page, which contains the HTML above.

What I want to achieve now is to scrape the next page after the first one using the script above.

I have read the Scrapy documentation on implementing Rules, but I have had no success making them work. Please help me out; I have been stuck on this for over a day now. Thank you.



