Chapter 6. Heavyweight Scraping with Scrapy

As your scraping goals get more ambitious, hacking together solutions with Beautiful Soup and requests can get very messy very fast. Managing the scraped data as requests spawn more requests gets tricky, and if your requests are being made synchronously, things start to slow down rapidly. A whole load of problems you probably hadn’t anticipated start to make themselves known. It’s at this point that you want to turn to a powerful, robust library that solves all these problems and more. And that’s where Scrapy comes in.

Where Beautiful Soup is a very handy little penknife for quick and dirty scraping, Scrapy is a Python library built for large-scale data scrapes. It has all the features you’d expect, like built-in caching (with expiration times), asynchronous requests via the Twisted networking framework, user-agent randomization, and a whole lot more. The price for all this power is a fairly steep learning curve, which this chapter is intended to smooth, using a simple example. I think Scrapy is a powerful addition to any dataviz toolkit and really opens up possibilities for web data collection.
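To give you a feel for the shape of a Scrapy scraper before we dive in, here is a minimal sketch of a spider. It is not the chapter’s Nobel Prize spider; the spider name, start URL, and selector are placeholders, and the per-spider settings simply switch on the built-in HTTP cache (with a one-day expiration) and set a user agent, two of the features just mentioned.

```python
import scrapy


class MinimalSpider(scrapy.Spider):
    """A bare-bones spider: placeholder name, URL, and selectors."""

    name = "minimal"
    start_urls = ["https://example.com"]

    # Per-spider settings: enable Scrapy's built-in HTTP cache and set a
    # user-agent string (both values here are illustrative, not prescriptive).
    custom_settings = {
        "HTTPCACHE_ENABLED": True,
        "HTTPCACHE_EXPIRATION_SECS": 60 * 60 * 24,  # cache pages for one day
        "USER_AGENT": "dataviz-scraper (example user agent)",
    }

    def parse(self, response):
        # CSS selectors feel much like Beautiful Soup's, but the requests
        # themselves are scheduled asynchronously by Scrapy's Twisted engine.
        for href in response.css("a::attr(href)").getall():
            yield {"link": response.urljoin(href)}
```

You would run a stand-alone spider like this with `scrapy runspider minimal_spider.py -o links.json`; in the rest of the chapter we’ll work within a full Scrapy project instead.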

In “Scraping Data”, we managed to scrape a dataset containing all the Nobel Prize winners by name, year, and category. We did a speculative scrape of the winners’ linked biography pages, which showed that extracting the country of nationality was going to be difficult. In this chapter, we’ll set the bar on our Nobel Prize data ...
