17. Web scraping¶
17.1. Motivation, prerequisites, plan¶
The web is full of information, and we usually browse it visually with a web browser. But when we collect a scientific data set from the web we do not want a “human in the loop”: we want an automatic program to collect that data, so that our results are reproducible and our procedure is fast and repeatable.
Although my focus here is mainly on scientific applications, web scraping can also be used to mirror a web site.
Prerequisites
The 10-hour “serious programming” course.
The “Data files and first plots” mini-course in Section 2.
You should install the program wget:
$ sudo apt install wget
Plan
Our plan is to find some interesting data sets on the web.
In our first approach, in Section 17.3, we will download them to our disk using the command line program wget and plot them with gnuplot. Then in Section 17.4 we will show how to retrieve data from within a Python program.
Finally in Section 17.5 we will scratch the surface of all the amazing scientific data sets that can be found on the web.
We will try to look at both time history and image data. Time histories are data sets where we look at an interesting quantity as it changes in time.
Examples of time histories include temperature as a function of time (in fact, all sorts of weather and climate data) and stock market prices as a function of time.
Examples of image data include telescope images of the sky and satellite imagery of the earth and of the sun.
17.2. What does a web page look like underneath? (HTML)¶
To introduce students to the basics of a web page, remember:
- Not everyone knows what HTML is.
- Even fewer people have seen raw HTML.
So we introduce HTML (hypertext markup language) by example first, and then point out what “hypertext” and “markup” mean.
So I type up a quick HTML page, and the students watch on the projector and type their own. The page I put up is a simple hello page at first; then I add a link.
<html>
<head>
<title>A simple web page</title>
</head>
<body>
<h1>Mark's web page</h1>
<p>This is Mark's web page</p>
<p>Now a paragraph with some <i>text in italics</i>
and some <b>text in boldface</b>
</p>
</body>
</html>
Save this to a file called, for example, myinfo.html in your home directory, and then view it by pointing a web browser to file:///home/MYLOGINNAME/myinfo.html (yes, there are three slashes in the file URL file:///...).
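For example, if you use Firefox, you can open it from the command line (substitute your own login name):
$ firefox file:///home/MYLOGINNAME/myinfo.html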
That simple web page lets me explain what I mean by markup: bits of text like <p> and <i> and <head> are not text in the document; they specify how the document should be rendered (for example <b> and <i> specify how the text should look, and <p> breaks the text into paragraphs). Some of the tags do not affect the appearance of the text at all, but tell us how the document should be understood (for example the metadata tags <html> and <title>).
Then let’s add a hyperlink: a link to the student’s school. My HTML page now looks like:
<html>
<head>
<title>A simple web page</title>
</head>
<body>
<h1>Mark's web page</h1>
<p>This is Mark's web page</p>
<p>Now a paragraph with some <i>text in italics</i>
and some <b>text in boldface</b>
</p>
<p>Mark went to high school at
<a href="http://liceoparini.gov.it/">Liceo Parini</a>
</p>
</body>
</html>
Then save and reload the page in your browser.
Here I have introduced the hyperlink. In HTML this is an element called <a> (anchor), whose href attribute holds the URL that the link points to.
Now that we know what web pages look like, we can write programs that pick them apart. If we want to find the links in a web page, we can use the Python string find() method to look for <a and then for </a>, and use the text in between the two.
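Here is a minimal sketch of that idea, assuming you saved the example page above as myinfo.html. A real HTML parser would be more robust, but find() is enough for our simple page:
#! /usr/bin/env python3
## a minimal sketch: extract the text of each link in a page using
## only the string find() method; assumes the example page from this
## section was saved as myinfo.html
def find_links(html):
    links = []
    pos = 0
    while True:
        start = html.find('<a', pos)      # start of the <a ...> tag
        if start == -1:
            break                         # no more links
        gt = html.find('>', start)        # end of the <a ...> tag
        end = html.find('</a>', gt)       # the closing tag
        if gt == -1 or end == -1:
            break
        links.append(html[gt + 1:end])    # the text between the tags
        pos = end + len('</a>')
    return links

with open('myinfo.html') as f:
    print(find_links(f.read()))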
17.3. Command line scraping with wget¶
In Section 2.7 we had our first glimpse of the command wget, a wonderful program which grabs a page from the web and puts the result into a file on your disk. This type of program is sometimes called a “web crawler” or “offline browser”.
wget can even follow links up to a certain depth and reproduce the web hierarchy on a local disk.
In areas with poor network connectivity people can use wget during a brief moment of good networking: they download everything they need in a hurry, then point their browser to the data on their local disk.
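For example, this sketch mirrors a site two levels deep and rewrites the links so the copy can be browsed offline (the URL is just a placeholder; --recursive, --level and --convert-links are standard wget options):
$ wget --recursive --level=2 --convert-links https://example.com/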
17.3.1. First download with wget¶
Let us make a directory in which to work and start getting data.
$ mkdir scraping
$ cd scraping
$ wget https://raw.githubusercontent.com/fivethirtyeight/data/master/alcohol-consumption/drinks.csv
We now have a file called drinks.csv. How do we explore it?
I would first use simple file tools. For example,
less drinks.csv
shows lines like this:
country,beer_servings,spirit_servings,wine_servings,total_litres_of_pure_alcohol
Afghanistan,0,0,0,0.0
Albania,89,132,54,4.9
Algeria,25,0,14,0.7
Andorra,245,138,312,12.4
Angola,217,57,45,5.9
## ...
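Other quick checks need no plotting program at all; for example, wc -l counts the lines (one per country, plus the header):
wc -l drinks.csv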
If you like to see data in a spreadsheet, you could open it with libreoffice or gnumeric:
libreoffice drinks.csv
17.3.2. Simple analysis of the drinks.csv file¶
Sometimes you can learn quite a bit about what’s in a file with simple shell tools, without using a plotting program or writing a data analysis program. I will show you some things you can do with one-line shell commands.
Looking at drinks.csv we see that the fourth column is the number of wine servings per capita drunk in that country. Let us use the command sort to order the file by wine consumption.
A quick look at the sort documentation with man sort shows us that the -t option can be used to use a comma instead of white space to separate fields. We also find that the -k option specifies which field to use as the sort key, and that -g sorts numerically (including floating point numbers). Put these together and run:
sort -t , -k 4 -g drinks.csv
This will show you all those countries in order of increasing wine consumption, rather than in alphabetical order. To see just the last 15 lines you can run:
sort -t , -k 4 -g drinks.csv | tail -15
This is a great opportunity to laugh at the confirmation of some stereotypes and the negation of others.
If you look at the last few lines you see that the French consume the most wine per capita, followed by the Portuguese.
If you sort by the 5th column you will see the overall consumption of alcohol, while the 3rd column shows the use of spirits (hard liquor) and the 2nd column shows the consumption of beer.
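For example, to see the top 15 countries by total alcohol consumption and by beer consumption:
sort -t , -k 5 -g drinks.csv | tail -15
sort -t , -k 2 -g drinks.csv | tail -15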
17.3.3. Looking at birth data¶
$ wget https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_2000-2014_SSA.csv
This file uses carriage return characters instead of newlines, so we first convert it with tr, and then plot column 5 (the number of births) with gnuplot:
$ tr '\r' '\n' < US_births_2000-2014_SSA.csv > US_births_2000-2014_SSA-newline.csv
$ gnuplot
gnuplot> plot 'US_births_2000-2014_SSA-newline.csv' using 5 with lines
17.4. Scraping from a Python program¶
Instead of downloading files by hand with wget, we can retrieve data directly from within a Python program using the urllib.request module. The following program downloads the births data set from Section 17.3.3 and histograms the births by day of the week:
#! /usr/bin/env python3

import urllib.request

## map the day-of-week field (1-7) to a human-readable name
day_map = {1: 'mon', 2: 'tue', 3: 'wed', 4: 'thu', 5: 'fri',
           6: 'sat', 7: 'sun'}

def main():
    f = urllib.request.urlopen('https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_2000-2014_SSA.csv')
    ## this file has carriage returns instead of newlines, so
    ## f.readlines() won't work in all cases.  I read the whole
    ## file in, and then split it into lines
    entire_file = f.read()
    f.close()
    lines = entire_file.split()
    print('lines:', lines[:3])
    dataset = []
    for line in lines[1:]:        # lines[0] is the header line
        line = line.decode('utf-8')
        words = line.split(',')
        values = [int(w) for w in words]
        dataset.append(values)
    day_of_week_hist = process_dataset(dataset)
    print_histogram(day_of_week_hist)

def process_dataset(dataset):
    ## NOTE: the fields are:
    ## year,month,date_of_month,day_of_week,births
    print('dataset has %d lines' % len(dataset))
    ## now we form a histogram of births according to the day of the
    ## week
    day_of_week_hist = {}
    for i in range(1, 8):
        day_of_week_hist[i] = 0
    for row in dataset:
        day_of_week = row[3]
        n_births = row[4]
        day_of_week_hist[day_of_week] += n_births
    return day_of_week_hist

def print_histogram(hist):
    print(hist)
    keys = list(hist.keys())
    keys.sort()
    print('keys:', keys)
    for day in keys:
        print(day, day_map[day], hist[day])

main()
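To try it, save the program to a file (the name scrape_births.py is just a suggestion) and run:
$ python3 scrape_births.py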
17.5. Finding neat scientific data sets¶
https://www.dataquest.io/blog/free-datasets-for-projects/ (they mention fivethirtyeight)
https://github.com/fivethirtyeight/data
https://api.nasa.gov/api.html#apod (NASA’s Astronomy Picture of the Day API)
https://api.nasa.gov/api.html#NeoWS (NASA’s Near Earth Object Web Service)
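As a taste, here is a minimal sketch that queries the Astronomy Picture of the Day API from Python. The endpoint URL and the DEMO_KEY api key are assumptions drawn from NASA’s public API documentation, so check https://api.nasa.gov/ for current details:
#! /usr/bin/env python3
## a minimal sketch: ask NASA's "Astronomy Picture of the Day" API
## for today's picture.  ASSUMPTIONS: the endpoint URL, the DEMO_KEY
## api key, and the 'title' and 'url' fields of the reply come from
## NASA's API documentation at https://api.nasa.gov/
import json
import urllib.request

url = 'https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY'
with urllib.request.urlopen(url) as f:
    info = json.loads(f.read().decode('utf-8'))
print(info.get('title'))    # title of today's picture
print(info.get('url'))      # URL of the image itself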
17.5.1. Time histories¶
Temperature
Births:
$ wget https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_2000-2014_SSA.csv