Intro to Web Scraping with Python
Vancouver PyLadies Meetup
March 7, 2016
Maris Lemba
Before we begin
● Do consider whether the site you are interested in allows scraping by examining its robots.txt file (a quick programmatic check is sketched below)
– This robots exclusion standard is used by web servers to communicate with web
crawlers and other robots
– Located in the top-level directory of web server
– Can specify which directories the site owner does not want robots to access
– Can have different access rules for different robots
● If you intend to publish the output from your scraping, consider whether it
is legal to do so
– It is generally ok to re-publish “factual data”
# Sample robots.txt file
User-agent: Googlebot
Disallow: /folder/
User-agent: *
Disallow: /photos/
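A quick programmatic check of robots.txt is possible with the standard library's urllib.robotparser; a minimal sketch (the URL is the first example site from later in the talk, and the paths are illustrative):

# Sketch: ask robots.txt whether a robot may fetch a given URL (Python 3)
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://pacificroadrunners.ca/robots.txt")
rp.read()  # fetch and parse the robots.txt file

# can_fetch(useragent, url) -> True if that robot is allowed to fetch the URL
print(rp.can_fetch("*", "http://pacificroadrunners.ca/firsthalf/"))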
Basic components of a web page
● HTML – a set of markup tags for describing web documents
● CSS – style sheet language for describing the presentation of an HTML document
● JavaScript – popular client-side programming language used for web applications
Sample HTML source, and what it would look like in the browser
If we want to scrape information from the “Day of the week” table, we can locate it in the HTML inside the <table> ... </table> element.
If there are multiple tables, we can use attributes such as id to identify the right one.
How the web works
[Diagram: two schematics, "Conventional model of web application" and "Ajax model of web application", adapted from https://en.wikipedia.org/wiki/Ajax_%28programming%29. In the conventional model, the user interface in the web browser (client) sends an HTTP request to the web server (backed by a database, data handling, etc.) and receives HTML + CSS data back. In the Ajax model, the user interface makes JavaScript calls to an Ajax "engine" (JavaScript) running in the browser, which exchanges HTTP requests and XML/JSON, HTML/CSS or JavaScript data with the server in the background and passes HTML + CSS data back to the user interface.]
Scraping ad hoc vs at scale
● Use case 1 (basic scraping): I need to scrape a few pages,
maybe crawl a small site. I can wait to have all data retrievals
done sequentially.
– This is what we will focus on in this talk
– Two scenarios:
● All the data you need is in the HTML files you get from the server
● The page is dynamically generated using Ajax
● Use case 2: I am crawling large sites periodically. I need task
scheduling, asynchronous retrieval, pipeline automation, and
lots of other tools to make this easier.
– You may want to look into a crawling/scraping framework like Scrapy
Agenda
● We'll walk through examples for the two
scenarios under “use case 1”
– We will use two web sites,
pacificroadrunners.ca/firsthalf/ and
ultrasignup.com, to scrape the results for all
years of two running races
● Pacific Road Runners First Half Marathon
● Chuckanut 50K
Basic scraping: static HTML
● Tasks:
– Inspect the HTML source to locate where in the Document Object Model (DOM) the data of
interest resides
● Right-click on the page in browser window and select “View Page Source”
– Retrieve the HTML file using built-in urllib library (note that the interface to this library has
changed between Python 2 and 3)
– Parse the HTML file and extract the data of interest, using either a built-in parser like html.parser,
or third-party libraries such as lxml, html5lib or BeautifulSoup
● Which library you choose for parsing depends on
– how sloppy your html is: if your html is “incorrect”, the parsers may handle it differently
– whether you want to maximize speed (choose lxml)
– what external dependencies you can live with (lxml is a binding to popular C libraries libxml2
and libxslt)
– how “easy” you want the interface to be (BeautifulSoup has an easy-to-get-started API and great
documentation with examples)
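To make the "parsers may handle it differently" point concrete, here is a small hedged sketch that hands the same sloppy markup to the three parsers named above (lxml and html5lib are third-party and may not be installed; the markup itself is made up):

# Sketch: the same "incorrect" HTML parsed by different parsers via BeautifulSoup
from bs4 import BeautifulSoup

sloppy = "<p>unclosed paragraph<li>stray list item"

for parser in ("html.parser", "lxml", "html5lib"):
    try:
        print(parser, "->", BeautifulSoup(sloppy, parser))
    except Exception as exc:  # raised if the named parser is not installed
        print(parser, "-> not available:", exc)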
urllib library
● High-level standard library for retrieving data across the Web (not just HTML files)
● The urllib library was split into several submodules in Python 3 for working with URLs:
– urllib.request, urllib.parse, urllib.error, urllib.robotparser
● The urllib.request module provides methods for opening URLs
● The urllib.request.urlopen() method is similar to the built-in function open() for opening files
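A minimal sketch of fetching a page this way (Python 3; the URL is the first example site from this talk):

# Sketch: retrieve an HTML page with urllib.request
from urllib.request import urlopen

with urlopen("http://pacificroadrunners.ca/firsthalf/") as response:
    html = response.read()      # raw bytes, analogous to file.read()
    print(response.getcode())   # HTTP status code, e.g. 200
    print(html[:200].decode("utf-8", errors="replace"))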
BeautifulSoup library
● Python third-party library for extracting data from HTML and XML files
● Works with html.parser, lxml, html5lib
● Provides ways to navigate, search and modify the parse tree, selecting elements by their position in the tree, tag name, tag attributes, or CSS classes, with filters that can be strings, regular expressions, user-defined functions, etc.
● Excellent tutorial with examples at
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
● Supports Python 2.7 and 3
Quick intro to BeautifulSoup
● The BeautifulSoup constructor creates an object that represents the original document as a nested data structure
● The simplest way to address a tag is to use its name
● The find() and find_all() methods let you search for an element; find() returns the first element found, and find_all() a list of elements
● You can access a tag's attributes by treating the tag like a dictionary
● You can extract text from the element
Parsing samplepage.html with BeautifulSoup
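The slide shows these steps run against samplepage.html as a screenshot; here is a minimal stand-alone sketch of the same operations on a made-up snippet of HTML (the markup and names are illustrative, not taken from samplepage.html):

# Sketch of the BeautifulSoup basics listed above
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>Sample page</title></head>
<body>
  <table id="days">
    <tr><td>Monday</td><td>work</td></tr>
    <tr><td>Saturday</td><td>long run</td></tr>
  </table>
</body></html>
"""

soup = BeautifulSoup(html_doc, "html.parser")  # nested data structure

print(soup.title)           # address a tag by its name: <title>Sample page</title>
print(soup.find("td"))      # first matching element
print(soup.find_all("td"))  # list of all matching elements

table = soup.find("table")
print(table["id"])          # attributes behave like a dictionary -> 'days'
print(table.get_text(" ", strip=True))  # text inside the element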
Example: Results tab of Vancouver First Half race
The main Results page
has links to yearly results
HTML source inspection: front page of results
Find the links in the HTML source file corresponding to the rows of yearly results you
see in the browser
Note that the links we are interested in are all inside <div class="entry-content"> ... </div>
The link to each year's page is inside an <a> tag, as the value of the href attribute
The text of the <a> ... </a> element is a string containing the year of the race
Extract all links to each individual year's results
Note that the collected links are relative; we can append them to the main URL to get absolute links
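The extraction itself is shown on the slide as a screenshot; a minimal sketch of the same idea (the entry-content class comes from the source inspection above; the path of the Results page and everything else is an assumption, and urljoin handles the relative-to-absolute step mentioned above):

# Sketch: collect the links to each year's results page
from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

base_url = "http://pacificroadrunners.ca/firsthalf/"
results_url = base_url + "results/"  # assumed path of the main Results page

with urlopen(results_url) as response:
    soup = BeautifulSoup(response.read(), "html.parser")

year_links = {}
content = soup.find("div", class_="entry-content")  # the links live inside this div
for a in content.find_all("a"):
    year = a.get_text(strip=True)  # e.g. "2016"
    href = a.get("href")           # relative link
    if year.isdigit() and href:
        year_links[year] = urljoin(results_url, href)  # make the link absolute

print(year_links)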
HTML source inspection: individual year
Find the table of results in the DOM. This is the table we want, with each runner's info in <tr> ... </tr>
Each participant's entry is a table row <tr> ... </tr>.
The text fields (participant's placement, name, finishing time, etc.) are inside <td> ... </td> elements nested in the table row.
Scrape results for all years
First five finishers of the 2016 race on the web page
Let's print out the first five entries for 2016 from our dictionary
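The loop over all years is again a screenshot on the slide; a sketch of the overall flow, reusing year_links from the previous sketch (the table layout follows the source inspection above, the rest is illustrative):

# Sketch: build a dictionary mapping year -> list of result rows
from urllib.request import urlopen
from bs4 import BeautifulSoup

results = {}
for year, url in year_links.items():
    with urlopen(url) as response:
        soup = BeautifulSoup(response.read(), "html.parser")
    table = soup.find("table")  # the results table for that year
    rows = []
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:               # header rows have no <td> cells
            rows.append(cells)  # e.g. [place, name, finishing time, ...]
    results[year] = rows

print(results["2016"][:5])  # first five finishers of the 2016 race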
What is Ajax
● The name comes from Asynchronous JavaScript and XML, even though XML is no longer the only format for data exchange
● Refers to a group of web technologies used to implement web
applications that communicate with the server in the background
without interfering with the display of the page or reloading the
entire page
● For web scraping, this means that the data you are interested in
may not be in the source HTML file you receive from the server
but is generated on the client side
● You can use browser developer tools (“inspect element”) to inspect the Ajax-generated HTML that your browser displays
Second example: ultrasignup.com
So far, the web site looks similar to the previous example. We are interested in scraping the results for all years of the race.
Results page for an individual year
This is a sample page for 2014 results. It looks like the results
are in a neat table.
Inspecting the DOM: the result table is not
filled out in the source HTML
However, the <table id="list"> element is empty when we view the source HTML file.
This means we can't simply extract the result table from the HTML file that the server sent to us, like we did in the previous example.
We see all the result rows in <table id="list"> ... </table> when inspecting the element in browser developer tools such as Mozilla Firebug (which show the generated HTML).
Again, the details of the results are within the <td> ... </td> elements inside each <tr> ... </tr> element.
Scraping dynamically generated HTML
● Your scraping system needs to be able to execute JavaScript and
either retrieve the results directly, or generate the HTML as seen in
the browser and scrape data from that
● A popular choice for the latter is to use Selenium, a suite of tools for
browser automation
– Python's binding to Selenium suite is the library selenium
– Selenium supports several browsers (Firefox, Chrome, etc), but for scraping
a headless non-graphical browser like PhantomJS is a good choice
– Once Selenium has generated the HTML after running JavaScript, you can
either access it as a string and use this as input to BeautifulSoup for data
extraction, or you can navigate the DOM and find elements within Selenium
Quick intro to selenium
● Selenium WebDriver accepts commands, sends them to the browser, and retrieves results
● You can navigate and search the DOM
– find_element_by_id('my_id')
– find_elements_by_tag_name('table')
– find_elements_by_class_name('my_class_name')
– find_elements_by_xpath()
● For example, find_elements_by_xpath("//table[@id='list']") will find tables with the id "list"
● Tutorial on XPath syntax: www.w3schools.com/xsl/xpath_intro.asp
– and other methods (using css selectors, link text, etc)
● You can click on links using WebDriver's click() method
● You can retrieve the generated HTML source by calling .page_source on the WebDriver instance and parse it outside of Selenium
● More on Selenium with Python:
http://selenium-python.readthedocs.org/index.html
Scraping ultrasignup race results using
Selenium and PhantomJS
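The actual script is shown on the slide as a screenshot; a minimal sketch of the approach it describes (the table id comes from the DOM inspection above, while the results URL and other details are placeholders; PhantomJS must be installed separately):

# Sketch: render the Ajax-generated page with PhantomJS, then extract the table
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.PhantomJS()  # headless browser
driver.get("http://ultrasignup.com/results_event.aspx?did=12345")  # placeholder URL

# Option 1: find elements from within Selenium
table = driver.find_element_by_id("list")
rows = table.find_elements_by_tag_name("tr")
print(len(rows), "rows in the rendered table")

# Option 2: hand the generated HTML to BeautifulSoup
soup = BeautifulSoup(driver.page_source, "html.parser")
cells = [td.get_text(strip=True)
         for td in soup.find("table", id="list").find_all("td")]
print(cells[:10])

driver.quit()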
And it works!
Note that the first two lists (rows in the web page table) under each year in our dictionary do not actually contain finisher results.
Let's ignore those, and print out the subsequent results for the first 12 finishers.
This table is from the 2014 results page for
comparison: the results match
Additional resources for scraping projects
Additional third-party libraries you may want to look at:
● requests library
– Filling out forms/login information before accessing data and managing cookies (a small sketch follows this list)
● mechanize library
– Navigating through forms, session and cookie handling
– Does not yet support Python 3
– Does not run JavaScript
● scrapy library
– Framework for web crawling and scraping: managing crawling, asynchronous
scheduling and processing of requests, cookie and session handling, built-in
support for data extraction etc
– Does not yet support Python 3
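For the requests use case above, a small hedged sketch of a form login followed by a data fetch (the URLs and form field names are made up; inspect the real login form of your target site first):

# Sketch: log in through a form and reuse the session cookies with requests
import requests

with requests.Session() as session:
    session.post("https://example.com/login",
                 data={"username": "me", "password": "secret"})
    # the session now carries whatever cookies the site set during login
    response = session.get("https://example.com/members/results.html")
    print(response.status_code)
    print(response.text[:200])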
References for further reading
● BeautifulSoup
– http://www.crummy.com/software/BeautifulSoup/bs4/doc/
● Selenium with Python
– http://selenium-python.readthedocs.org/
● General web scraping
– Web Scraping with Python by Ryan Mitchell,
O'Reilly 2015