rvest
The goal of this session is to learn how to get data from the World Wide Web using R. Although we are going to talk about a few concepts first, the core of this session will be spent on getting data from websites that do not offer any interface to automate information retrieval, such as Web services (REST, SOAP) or application programming interfaces (APIs). In those cases it is necessary to scrape the information embedded in the website itself.
When you want to extract information or download data from a website that is too large to download efficiently by hand, or that needs to be updated frequently, it is worth automating the task. As usual, a good place to start is the CRAN Task View on Web Technologies to get an idea of the R packages available: https://CRAN.R-project.org/view=WebTechnologies
Here are some of the key packages:
RCurl
: a low-level wrapper for libcurl that provides convenient functions to fetch URIs and to GET and POST forms; see the package's quick guide.
httr
: similar to RCurl; provides a user-friendly interface for executing HTTP methods and supports modern web authentication protocols (OAuth 1.0, OAuth 2.0). It is a wrapper around the curl package.
rvest
: a higher-level package mostly based on httr. It is simpler to use for basic tasks.
RSelenium
: can be used to automate interactions and extract page content from dynamically generated webpages (i.e., those requiring user interaction to display results, like clicking on a button).

There are also functions in the utils package, such as download.file(). Note that these functions do not handle https (a motivation behind the curl R package).
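For quick, one-off downloads, download.file() is often enough. A minimal sketch (the URL and file name below are placeholders, not taken from this lesson):

# Minimal sketch: fetch a single file with utils::download.file()
# (the URL and destination file are placeholders)
# Depending on your R build, an https URL may require method = "libcurl" (see the note above)
download.file(url = "https://example.com/some_data.csv",
              destfile = "some_data.csv")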
In this session we are going to use rvest: first for a simple tutorial, followed by a challenge in groups.
At the heart of web communications is the request message, which is sent via Uniform Resource Locators (URLs). A basic URL has the following structure:
protocol://hostname:port/resource-path?query
The protocol is typically http, or https for secure communications. The default port is 80, but a port can also be set explicitly, as in the structure above. The resource path is the local path to the resource on the server.
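As a small sketch of these components, the httr package listed above can split a URL into its parts (the URL here is a made-up example):

# Minimal sketch: decompose a URL into its components with httr::parse_url()
library(httr)
url_parts <- parse_url("https://www.example.com:80/path/to/page?key=value")
url_parts$scheme    # "https"
url_parts$hostname  # "www.example.com"
url_parts$port      # "80"
url_parts$path      # "path/to/page"
url_parts$query     # list(key = "value")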
The actions that should be performed on the host are specified via HTTP verbs. Today we are going to focus on two actions that are often used in web forms:
GET
: fetch an existing resource. The URL contains all the necessary information the server needs to locate and return the resource.
POST
: create a new resource. POST requests usually carry a payload that specifies the data for the new resource.

Status codes:

1xx
: Informational messages
2xx
: Success; the best known is 200: OK, the request was successfully processed
3xx
: Redirection
4xx
: Client error; the famous 404: resource not found
5xx
: Server error
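A minimal sketch of these two verbs with the httr package (httpbin.org is a public request-testing service, used here purely for illustration):

# GET: the URL (plus an optional query string) carries everything the server needs
library(httr)
resp <- GET("https://httpbin.org/get", query = list(q = "wine"))
status_code(resp)   # 200 if the request was processed successfully

# POST: the body carries the payload describing the new resource
resp <- POST("https://httpbin.org/post",
             body = list(name = "example"),
             encode = "form")
status_code(resp)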
The HyperText Markup Language (HTML) describes and defines the content of a webpage. Other technologies besides HTML are generally used to describe a webpage’s appearance/presentation (CSS) or functionality (JavaScript).
“Hyper Text” in HTML refers to links that connect webpages to one another, either within a single website or between websites. Links are a fundamental aspect of the Web.
HTML uses “markup” to annotate text, images, and other content for display in a Web browser. HTML markup includes special “elements” such as <head>, <title>, <body>, <header>, <footer>, <article>, <section>, <p>, <div>, <span>, <img>, and many others.
Using your web browser, you can inspect the HTML content of any webpage on the World Wide Web.
The eXtensible Markup Language (XML) provides a general approach for representing all types of information, such as data sets containing numerical and categorical variables. XML provides the basic, common, and quite simple structure and syntax for all “dialects” or vocabularies. For example, HTML, SVG, and EML are specific vocabularies of XML.
XPath is quite simple yet very powerful. With a syntax similar to a file system hierarchy, it allows you to identify nodes of interest by specifying paths through the tree, based on node names, node content, and a node’s relationship to other nodes in the hierarchy. We typically use XPath to locate nodes in a tree and then use R functions to extract data from those nodes and bring the data into R.
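A minimal sketch of what XPath expressions look like from R, using an invented snippet of HTML (the rvest functions used here are introduced below):

# Locate nodes with XPath (the HTML string is invented for illustration)
library(rvest)
doc <- read_html("<div><p class='intro'>Hello</p><p>World</p></div>")
html_nodes(doc, xpath = "//p")                  # all <p> nodes in the document
html_nodes(doc, xpath = "//p[@class='intro']")  # only <p> nodes whose class is 'intro'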
Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in HTML or XML. CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. In CSS, selectors are used to target the HTML elements on a web page that we want to style. There are a wide variety of CSS selectors available, allowing for fine-grained precision when selecting elements to style.
Want to practice using CSS to select elements? Here is an interactive site: http://flukeout.github.io/
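A minimal sketch of a few common selector patterns, again on an invented snippet of HTML:

# A few common CSS selector patterns (the HTML string is invented)
library(rvest)
doc <- read_html("<div id='menu'><a class='nav' href='/home'>Home</a><a class='nav' href='/about'>About</a></div>")
html_nodes(doc, "a")      # by element name
html_nodes(doc, ".nav")   # by class
html_nodes(doc, "#menu")  # by id
html_nodes(doc, "div a")  # descendant: <a> elements inside a <div>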
rvest
rvest is a set of wrapper functions around the xml2 and httr packages.
Main functions:
read_html
: read a webpage into R as XML (document and nodes)
html_nodes
: extract pieces out of HTML documents using XPath and/or CSS selectors
html_attr
: extract attributes from HTML, such as href
html_text
: extract text content

For more information on the package, see the rvest website: https://rvest.tidyverse.org
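A minimal sketch tying these functions together on an invented snippet of HTML:

# From HTML to data: read, select, then extract text and attributes
library(rvest)
page <- read_html('<ul>
  <li><a class="shop" href="https://example.com/a">Shop A</a></li>
  <li><a class="shop" href="https://example.com/b">Shop B</a></li>
</ul>')
links <- html_nodes(page, "a.shop")  # CSS selector: <a> elements with class "shop"
html_text(links)                     # "Shop A" "Shop B"
html_attr(links, "href")             # "https://example.com/a" "https://example.com/b"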
In this example we are going to extract information about Santa Barbara’s wine shops from https://santabarbaraca.com/. This website collects a lot of information on how to best visit our town. For example, they have information about The Funk Zone. Here is their listing of the local wine shops: https://santabarbaraca.com/plan-your-trip/wine/wine-shops/
We are going to scrape the names of the wine shops and their websites from this web page and compile the information into a CSV file to share with our friends!
#install.packages("rvest")
library("rvest")
URL <- "https://santabarbaraca.com/plan-your-trip/wine/wine-shops/"
# Read the webpage into R
webpage <- read_html(URL)
# Parse the webpage for the wine shop listings
wine_listing <- html_nodes(webpage, ".listing-title")
# Extract the names of the wine shops
wine_shops <- html_text(wine_listing)
wine_shops
## [1] "Jamie Slone Wines Tasting Room"
## [2] "Bien Nacido & Solomon Hills Tasting Room"
## [3] "Riverbench Vineyard & Winery"
## [4] "Grassini Family Vineyards Tasting Room"
## [5] "Santa Barbara Winery"
## [6] "Au Bon Climat"
## [7] "Jaffurs Wine Cellars"
# Parse the page for the nodes containing the website URLs
websites <- html_nodes(webpage, ".website-button.button")
# Extract the href attribute (the website URL)
websites_urls <- html_attr(websites, "href")
websites_urls
## [1] "https://www.jamieslonewines.com/"
## [2] "http://biennacidoestate.com/"
## [3] "https://riverbench.com"
## [4] "http://www.GrassiniFamilyVineyards.com"
## [5] "http://www.sbwinery.com"
## [6] "http://www.aubonclimat.com"
## [7] "http://www.jaffurswine.com"
# Create the data frame
df_wine_shops <- data.frame(wine_tasting_room = wine_shops,
website = websites_urls)
# Write it to a csv file (assumes a "data" folder exists in the working directory)
write.csv(df_wine_shops, "./data/places_you_will_go.csv", row.names = FALSE)
# Display the result as an interactive table (requires the DT package)
library(DT)
datatable(df_wine_shops)
Please always check that the data you are scraping are publicly available and that you are not gathering any personal or confidential information. Also, please do not overload the web server you are scraping: when retrieving a large amount of data, it is often recommended to insert pauses between the requests sent to the web server so that it can handle other requests.
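A minimal sketch of such a pause between requests (the URLs are placeholders and rvest is assumed to be loaded):

# Pause between successive requests so the server can handle other traffic
urls <- c("https://example.com/page1", "https://example.com/page2")  # placeholders
pages <- list()
for (u in urls) {
  pages[[u]] <- read_html(u)
  Sys.sleep(2)  # wait 2 seconds before sending the next request
}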
rvest website: https://rvest.tidyverse.org
rnoaa: http://bradleyboehmke.github.io/2016/01/scraping-via-apis.html