Scraping Python.org with Scrapy
Scrapy is a very popular open source Python scraping framework for extracting data. It was originally designed only for scraping websites, but it has also evolved into a powerful web crawling solution.
In our previous recipes, we used Requests and urllib2 to fetch data and Beautiful Soup to extract data. Scrapy offers all of this functionality, along with many other built-in modules and extensions. It is also our tool of choice when it comes to scraping with Python.
Scrapy offers a number of powerful features that are worth mentioning:
- Built-in extensions to make HTTP requests and handle compression, authentication, caching, user-agent manipulation, and HTTP headers
- Built-in support for selecting and extracting data with selector languages such as CSS and XPath, as well as support for using regular expressions to select content and links (see the short selector sketch after this list)
- Encoding support to deal with languages and non-standard encoding declarations
- Flexible APIs for reusing and writing custom middleware and pipelines, which provide a clean and easy way to implement tasks such as automatically downloading assets (for example, images or media) and storing data in file systems, S3, databases, and other destinations
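To give a feel for the selector support mentioned above, here is a minimal standalone sketch of Scrapy's Selector API using CSS, XPath, and a regular expression. The HTML fragment and values are made up purely for illustration:

from scrapy.selector import Selector

# A tiny, made-up HTML fragment used only to demonstrate the selector API
html = '<ul class="events"><li><a href="/pycon">PyCon</a> 2018</li></ul>'
sel = Selector(text=html)

# CSS selection of the link text
print(sel.css('ul.events li a::text').extract_first())               # PyCon
# XPath selection of the link's href attribute
print(sel.xpath('//ul[@class="events"]/li/a/@href').extract_first()) # /pycon
# Regular expression applied to the selected text nodes
print(sel.css('li::text').re(r'\d{4}'))                              # ['2018']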
Getting ready...
There are several ways to create a scraper with Scrapy. One is a programmatic pattern, where we create the crawler and spider in our own code. It is also possible to configure a Scrapy project from templates or generators and then run the scraper from the command line using the scrapy command. This book follows the programmatic pattern, as it keeps the code for each recipe in a single file. This will help as we put together specific, targeted recipes with Scrapy.
This isn't necessarily a better way of running a Scrapy scraper than using the command-line execution; it is simply a design decision for this book. Ultimately, this book is not about Scrapy (there are other books dedicated to it), but rather an exposition of the various things you may need to do when scraping, culminating in the creation of a functional scraper as a service in the cloud.
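For reference only, the template-driven workflow we are not using looks roughly like the following; the project and spider names here are placeholders, not part of this recipe:

~ $ scrapy startproject pythonevents
~ $ cd pythonevents
~ $ scrapy genspider pythoneventsspider www.python.org
~ $ scrapy crawl pythoneventsspider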
How to do it...
The script for this recipe is 01/03_events_with_scrapy.py. The following is the code:
import scrapy
from scrapy.crawler import CrawlerProcess


class PythonEventsSpider(scrapy.Spider):
    name = 'pythoneventsspider'

    start_urls = ['https://www.python.org/events/python-events/',]
    found_events = []

    def parse(self, response):
        for event in response.xpath('//ul[contains(@class, "list-recent-events")]/li'):
            event_details = dict()
            event_details['name'] = event.xpath('h3[@class="event-title"]/a/text()').extract_first()
            event_details['location'] = event.xpath('p/span[@class="event-location"]/text()').extract_first()
            event_details['time'] = event.xpath('p/time/text()').extract_first()
            self.found_events.append(event_details)


if __name__ == "__main__":
    process = CrawlerProcess({'LOG_LEVEL': 'ERROR'})
    process.crawl(PythonEventsSpider)
    spider = next(iter(process.crawlers)).spider
    process.start()
    for event in spider.found_events:
        print(event)
The following runs the script and shows the output:
~ $ python 03_events_with_scrapy.py
{'name': 'PyCascades 2018', 'location': 'Granville Island Stage, 1585 Johnston St, Vancouver, BC V6H 3R9, Canada', 'time': '22 Jan. – 24 Jan. '}
{'name': 'PyCon Cameroon 2018', 'location': 'Limbe, Cameroon', 'time': '24 Jan. – 29 Jan. '}
{'name': 'FOSDEM 2018', 'location': 'ULB Campus du Solbosch, Av. F. D. Roosevelt 50, 1050 Bruxelles, Belgium', 'time': '03 Feb. – 05 Feb. '}
{'name': 'PyCon Pune 2018', 'location': 'Pune, India', 'time': '08 Feb. – 12 Feb. '}
{'name': 'PyCon Colombia 2018', 'location': 'Medellin, Colombia', 'time': '09 Feb. – 12 Feb. '}
{'name': 'PyTennessee 2018', 'location': 'Nashville, TN, USA', 'time': '10 Feb. – 12 Feb. '}
{'name': 'PyCon Pakistan', 'location': 'Lahore, Pakistan', 'time': '16 Dec. – 17 Dec. '}
{'name': 'PyCon Indonesia 2017', 'location': 'Surabaya, Indonesia', 'time': '09 Dec. – 10 Dec. '}
The same result, but with another tool. Let's take a quick look at how this works.
How it works
We will get into more details about Scrapy in later chapters, but let's go through this code quickly to get a feel for how it accomplishes this scrape. Everything in Scrapy revolves around creating a spider. Spiders crawl through pages on the Internet based upon rules that we provide. This spider only processes one single page, so it's not really much of a spider, but it shows the pattern we will use throughout the later Scrapy examples.
The spider is created with a class definition that derives from one of the Scrapy spider classes. Ours derives from the scrapy.Spider class.
class PythonEventsSpider(scrapy.Spider):
    name = 'pythoneventsspider'

    start_urls = ['https://www.python.org/events/python-events/',]
Every spider is given a name, and also one or more start_urls, which tell it where to start crawling.
This spider has a field to store all the events that we find:
found_events = []
The spider then has a method named parse, which will be called for every page the spider collects.
    def parse(self, response):
        for event in response.xpath('//ul[contains(@class, "list-recent-events")]/li'):
            event_details = dict()
            event_details['name'] = event.xpath('h3[@class="event-title"]/a/text()').extract_first()
            event_details['location'] = event.xpath('p/span[@class="event-location"]/text()').extract_first()
            event_details['time'] = event.xpath('p/time/text()').extract_first()
            self.found_events.append(event_details)
The implementation of this method uses an XPath selection to get the events from the page (XPath is the built-in means of navigating HTML in Scrapy). It then builds the event_details dictionary object similarly to the other examples, and adds it to the found_events list.
The remaining code does the programmatic execution of the Scrapy crawler.
    process = CrawlerProcess({'LOG_LEVEL': 'ERROR'})
    process.crawl(PythonEventsSpider)
    spider = next(iter(process.crawlers)).spider
    process.start()
It starts with the creation of a CrawlerProcess, which does the actual crawling and a lot of other tasks. We pass it a LOG_LEVEL of ERROR to suppress the voluminous Scrapy output. Change this to DEBUG and re-run it to see the difference.
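The dictionary passed to CrawlerProcess can contain any Scrapy setting, not just LOG_LEVEL. As a quick sketch (the user-agent string and delay below are just illustrative values, not ones used in this recipe):

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'LOG_LEVEL': 'DEBUG',        # verbose logging while debugging
    'USER_AGENT': 'events-scraper (+http://example.com)',  # illustrative value
    'DOWNLOAD_DELAY': 1.0,       # wait a second between requests
})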
Next, we tell the crawler process to use our Spider implementation. We get the actual spider object from that crawler so that we can retrieve the items when the crawl is complete, and then we kick the whole thing off by calling process.start().
When the crawl is complete, we can iterate over and print out the items that were found.
    for event in spider.found_events:
        print(event)
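As a design note, an equivalent way to keep a handle on the spider is to create the crawler object explicitly before scheduling it. The following is a sketch of that variation, assuming the same PythonEventsSpider class as above:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({'LOG_LEVEL': 'ERROR'})
# Create the crawler explicitly so we hold a reference to it
crawler = process.create_crawler(PythonEventsSpider)
process.crawl(crawler)
process.start()

# After the crawl, the crawler exposes the spider it ran
for event in crawler.spider.found_events:
    print(event)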
Note
This example barely touches the power of Scrapy. We will look at more of its advanced features later in the book.