Using Scrapy with authenticated (logged in) user session

In the code above, the FormRequest that is being used to authenticate has the after_login function set as its callback. This means that the after_login function will be called and passed the page that the login attempt got as a response.

The callback then checks whether you are successfully logged in by searching the page for a specific string, in this case "authentication failed". If it finds that string, the spider stops.

Now, once the spider has got this far, it knows it has successfully authenticated, and it can start spawning new requests and/or scraping data. So, in this case:

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector

# ...

def after_login(self, response):
    # Check that the login succeeded before going on
    if "authentication failed" in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    # We've successfully authenticated, let's have some fun!
    return Request(url="http://www.example.com/tastypage/",  # placeholder URL
                   callback=self.parse_tastypage)

def parse_tastypage(self, response):
    hxs = HtmlXPathSelector(response)
    yum ='//img')

    # etc.

If you look here, there’s an example of a spider that authenticates before scraping.

In this case, it handles things in the parse function (the default callback of any request).

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    # The exact XPath depends on the site's login form
    if'//form[@id="login_form"]'):
        return self.login(response)
    else:
        return self.get_section_links(response)

So, whenever a request is made, the response is checked for the presence of the login form. If it is there, we know we need to log in, so we call the relevant function; if it's not present, we call the function responsible for scraping the data from the response.

I hope this is clear, feel free to ask if you have any other questions!


Okay, so you want to do more than just spawn a single request and scrape it. You want to follow links.

To do that, all you need to do is scrape the relevant links from the page, and spawn requests using those URLs. For example:

def parse_page(self, response):
    """Scrape useful stuff from the page, and spawn new requests."""
    hxs = HtmlXPathSelector(response)
    images ='//img')
    # ... do something with them
    links ='//a/@href').extract()

    # Yield a new request for each link we found
    for link in links:
        yield Request(url=link, callback=self.parse_page)

As you can see, it spawns a new request for every URL on the page, and each of those requests will call this same function with its response, so we have some recursive scraping going on.

What I’ve written above is just an example. If you want to “crawl” pages, you should look into CrawlSpider rather than doing things manually.
