Table of Contents

Introduction to web scraping
Scrapy concepts
Reddit-less front page
Extracting Amazon price data
Considerations at scale

Introduction to web scraping

Web scraping is one of the tools at a developer’s disposal when looking to gather data from the internet. While consuming data via an API has become commonplace, most websites don’t have an API for delivering data to consumers. In order to access the data they’re looking for, web scrapers and crawlers read a website’s pages and feeds, analyzing the site’s structure and markup language for clues. Generally speaking, information collected from scraping is fed into other programs for validation, cleaning, and input into a datastore, or it’s fed into other processes such as natural language processing (NLP) toolchains or machine learning (ML) models. There are a few Python packages we could use for illustration, but we’ll focus on Scrapy for these examples. Scrapy makes it very easy for us to quickly prototype and develop web scrapers with Python. If you’re interested in getting into Python’s other packages for web scraping, see our comparison: Scrapy vs. Selenium and Beautiful Soup.

Scrapy concepts

Before we start looking at specific examples and use cases, let’s brush up a bit on Scrapy and how it works.

Spiders: Scrapy uses Spiders to define how a site (or a group of sites) should be scraped for information. Scrapy lets us determine how we want the spider to crawl, what information we want to extract, and how we can extract it. Specifically, Spiders are Python classes where we’ll put all of our custom logic and behavior.

import scrapy


class NewsSpider(scrapy.Spider):
    name = 'news'
    ...

Selectors: Selectors are Scrapy’s mechanisms for finding data within the website’s pages. They’re called selectors because they provide an interface for “selecting” certain parts of the HTML page, and they can be written as either CSS or XPath expressions.

Items: Items are the data extracted from selectors in a common data model. Since our goal is a structured result from unstructured inputs, Scrapy provides an Item class which we can use to define how our scraped data should be structured and what fields it should have.

import scrapy


class Article(scrapy.Item):
    headline = scrapy.Field()
    ...

Reddit-less front page

Suppose we love the images posted to Reddit, but don’t want any of the comments or self posts. We can use Scrapy to make a Reddit Spider that will fetch all the photos from the front page and put them on our own HTML page, which we can then browse instead of Reddit. To start, we’ll create a RedditSpider which we can use to traverse the front page and handle custom behavior.

import scrapy


class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = [
        'https://www.reddit.com'
    ]

Above, we’ve defined a RedditSpider , inheriting Scrapy’s Spider. We’ve named it reddit and have populated the class’ start_urls attribute with a URL to Reddit from which we’ll extract the images. At this point, we’ll need to begin defining our parsing logic. We need to figure out an expression that the RedditSpider can use to determine whether it’s found an image. If we look at Reddit’s robots.txt file, we can see that our spider can’t crawl any comment pages without being in violation of the robots.txt file, so we’ll need to grab our image URLs without following through to the comment pages. By looking at Reddit, we can see that external links are included on the homepage directly next to the post’s title. We’ll update RedditSpider to include a parser to grab this URL. Reddit includes the external URL as a link on the page, so we should be able to just loop through the links on the page and find URLs that are for images.
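We can check rules like these programmatically with Python’s standard-library urllib.robotparser. The rules below are a simplified, illustrative stand-in; Reddit’s real robots.txt is more extensive:

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules only -- not Reddit's actual robots.txt
rules = [
    'User-agent: *',
    'Disallow: /comments/',
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch('*', 'https://www.reddit.com/'))              # True
print(rp.can_fetch('*', 'https://www.reddit.com/comments/abc'))  # False
```

A check like this can be run before crawling a path; Scrapy can also enforce robots.txt automatically via its ROBOTSTXT_OBEY setting.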

class RedditSpider(scrapy.Spider):
    ...

    def parse(self, response):
        links = response.xpath('//a/@href')
        for link in links:
            ...

In a parse method on our RedditSpider class, I’ve started to define how we’ll be parsing our response for results. To start, we grab all of the href attributes from the page’s links using a basic XPath selector. Now that we’re enumerating the page’s links, we can start to analyze the links for images.

def parse(self, response):
    links = response.xpath('//a/@href')
    for link in links:
        # Extract the URL text from the element
        url = link.get()
        # Check if the URL contains an image extension
        if any(extension in url for extension in ['.jpg', '.gif', '.png']):
            ...

To actually access the text information from the link’s href attribute, we use Scrapy’s .get() function which will return the link destination as a string. Next, we check to see if the URL contains an image file extension. We use Python’s any() built-in function for this. This isn’t all-encompassing for all image file extensions, but it’s a start. From here we can push our images into a local HTML file for viewing.
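In isolation, that filtering step behaves like this; the URLs here are made up for demonstration:

```python
# Hypothetical hrefs of the kind we might find on the front page
urls = [
    'https://i.example.com/funny-cat.jpg',
    'https://i.example.com/loop.gif',
    'https://www.reddit.com/r/pics/',
    'https://example.com/article.html',
]

image_extensions = ['.jpg', '.gif', '.png']

# Keep only URLs containing one of the image extensions
image_urls = [url for url in urls
              if any(extension in url for extension in image_extensions)]

print(image_urls)  # ['https://i.example.com/funny-cat.jpg', 'https://i.example.com/loop.gif']
```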

def parse(self, response):
    links = response.xpath('//img/@src')
    html = ''

    for link in links:
        # Extract the URL text from the element
        url = link.get()
        # Check if the URL contains an image extension
        if any(extension in url for extension in ['.jpg', '.gif', '.png']):
            html += '''
            <a href="{url}" target="_blank">
                <img src="{url}" height="33%" width="33%" />
            </a>
            '''.format(url=url)

    # Open an HTML file, save the results
    with open('frontpage.html', 'w') as page:
        page.write(html)

To start, we collect the HTML file contents as a string which will be written to a file called frontpage.html at the end of the process. You’ll notice that instead of pulling the image location from '//a/@href', we’ve updated our links selector to use the image’s src attribute: '//img/@src'. This gives us more consistent results, and selects only images. As our RedditSpider’s parser finds images, it builds a link with a preview image and dumps the string to our html variable. Once we’ve collected all of the images and generated the HTML, we open the local HTML file (or create it) and overwrite it with our new HTML content; the with statement then closes the file for us.

If we run scrapy runspider reddit.py, we can see that this file is built properly and contains images from Reddit’s front page. But it looks like it contains all of the images from Reddit’s front page – not just user-posted content. Let’s update our parse command a bit to blacklist certain domains from our results. If we look at frontpage.html, we can see that most of Reddit’s assets come from redditstatic.com and redditmedia.com. We’ll just filter those results out and retain everything else. With these updates, our RedditSpider class now looks like the below:

import scrapy


class RedditSpider(scrapy.Spider):
    name = 'reddit'
    start_urls = [
        'https://www.reddit.com'
    ]

    def parse(self, response):
        links = response.xpath('//img/@src')
        html = ''

        for link in links:
            # Extract the URL text from the element
            url = link.get()
            # Check if the URL contains an image extension
            if any(extension in url for extension in ['.jpg', '.gif', '.png']) \
                    and not any(domain in url for domain in ['redditstatic.com', 'redditmedia.com']):
                html += '''
                <a href="{url}" target="_blank">
                    <img src="{url}" height="33%" width="33%" />
                </a>
                '''.format(url=url)

        # Open an HTML file, save the results
        with open('frontpage.html', 'w') as page:
            page.write(html)

We’re simply adding our domain blacklist to an exclusionary any() expression. These statements could be tweaked to read from a separate configuration file, local database, or cache, if need be.
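As a sketch of that idea, the excluded domains could live in a small JSON configuration file and be loaded at startup. The config shape and key name here are made up for illustration:

```python
import json

# Hypothetical config -- in practice this would live in a file
# such as scraper.json and be read with open() + json.load()
config_text = '{"excluded_domains": ["redditstatic.com", "redditmedia.com"]}'
excluded_domains = json.loads(config_text)['excluded_domains']

# The same exclusion check the spider uses, now driven by config
url = 'https://www.redditstatic.com/icon.png'
is_excluded = any(domain in url for domain in excluded_domains)
print(is_excluded)  # True -- this asset URL would be filtered out
```

Keeping the list in configuration means the spider code doesn’t need to change when the blacklist does.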


Extracting Amazon price data

If you’re running an ecommerce website, intelligence is key. With Scrapy we can easily automate the process of collecting information about our competitors, our market, or our listings. For this task, we’ll extract pricing data from search listings on Amazon and use the results to provide some basic insights. If we visit Amazon’s search results page and inspect it, we notice that Amazon stores the price in a series of divs, most notably using a class called .a-offscreen. We can formulate a CSS selector that extracts the price off the page:

prices = response.css('.a-price .a-offscreen::text').getall()

With this CSS selector in mind, let’s build our AmazonSpider .

import scrapy

from re import sub
from decimal import Decimal


def convert_money(money):
    return Decimal(sub(r'[^\d.]', '', money))


class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    start_urls = [
        'https://www.amazon.com/s?k=paint'
    ]

    def parse(self, response):
        # Find the Amazon price element
        prices = response.css('.a-price .a-offscreen::text').getall()

        # Initialize some counters and stats objects
        stats = dict()
        values = []

        for price in prices:
            value = convert_money(price)
            values.append(value)

        # Sort our values before calculating
        values.sort()

        # Calculate price statistics
        stats['average_price'] = round(sum(values) / len(values), 2)
        stats['lowest_price'] = values[0]
        stats['highest_price'] = values[-1]
        stats['total_prices'] = len(values)

        print(stats)

A few things to note about our AmazonSpider class:

convert_money(): This helper simply takes strings formatted like ‘$45.67’ and casts them to a Python Decimal type which can be used for computations; it sidesteps locale issues by not including a ‘$’ anywhere in the regular expression.

getall(): The .getall() function is a Scrapy function that works similarly to the .get() function we used before, but it returns all the extracted values as a list which we can work with.

Running the command scrapy runspider amazon.py in the project folder will dump output resembling the following:

{'average_price': Decimal('38.23'), 'lowest_price': Decimal('3.63'), 'highest_price': Decimal('689.95'), 'total_prices': 58}
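For instance, the convert_money() helper from the spider above behaves like this on its own (the price strings here are made up):

```python
from re import sub
from decimal import Decimal


def convert_money(money):
    # Strip everything except digits and the decimal point, then cast
    return Decimal(sub(r'[^\d.]', '', money))


print(convert_money('$45.67'))     # 45.67
print(convert_money('$1,234.50'))  # 1234.50
```

Because Decimal (unlike float) represents these values exactly, the summed and rounded statistics avoid binary floating-point rounding surprises.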