A common web scraping scenario is that the data we want is available in plain HTML, but scattered across different parts of a page (or across several pages) rather than stored in a table. In that case, we need to extract each element separately and then assemble everything into a data frame manually.
The motivating example here will be the website ipaidabribe.com, which contains a database of self-reported bribes in India. We want to learn how much people were asked to pay for different services, and by which departments.
url <- 'http://ipaidabribe.com/reports/paid'
We will also be using rvest, but in a slightly different way: prior to scraping, we need to identify the CSS selector of each element we want to extract.
A very useful tool for this purpose is SelectorGadget, an extension for the Google Chrome browser. Go to http://selectorgadget.com/ to install it. Then go back to the ipaidabribe website and open the extension. Click on an element you want to extract, and then click on any other highlighted elements that you do not want to extract. Once only the elements you are interested in are highlighted, copy the CSS selector that the extension shows and paste it into R.
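If you want to sanity-check a selector before writing the full scraper, you can count how many nodes it matches on the page. A minimal sketch (assuming the rvest and xml2 packages are installed; we load rvest properly in the next chunk):
nodes <- rvest::html_nodes(xml2::read_html('http://ipaidabribe.com/reports/paid'), ".paid-amount span")
length(nodes) # should equal the number of reports listed on the page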
Now we’re ready to scrape the website:
library(rvest, warn.conflicts=FALSE)
## Loading required package: xml2
bribes <- read_html(url) # reading the HTML code
amounts <- html_nodes(bribes, ".paid-amount span") # select all nodes matching the CSS selector
amounts # the matched nodes
## {xml_nodeset (10)}
## [1] <span>Paid INR 250\r\n </span>
## [2] <span>Paid INR 13,000\r\n </span>
## [3] <span>Paid INR 50\r\n </span>
## [4] <span>Paid INR 750\r\n </span>
## [5] <span>Paid INR 1,000\r\n </span>
## [6] <span>Paid INR 1,000\r\n </span>
## [7] <span>Paid INR 10,000\r\n </span>
## [8] <span>Paid INR 400\r\n </span>
## [9] <span>Paid INR 3,000\r\n </span>
## [10] <span>Paid INR 500\r\n </span>
html_text(amounts)
## [1] "Paid INR 250\r\n "
## [2] "Paid INR 13,000\r\n "
## [3] "Paid INR 50\r\n "
## [4] "Paid INR 750\r\n "
## [5] "Paid INR 1,000\r\n "
## [6] "Paid INR 1,000\r\n "
## [7] "Paid INR 10,000\r\n "
## [8] "Paid INR 400\r\n "
## [9] "Paid INR 3,000\r\n "
## [10] "Paid INR 500\r\n "
We still need to do some cleaning before the data is usable:
amounts <- html_text(amounts)
(amounts <- gsub("Paid INR | |\r|\n|,", "", amounts)) # remove text, white space, and commas
## [1] "250" "13000" "50" "750" "1000" "1000" "10000" "400"
## [9] "3000" "500"
(amounts <- as.numeric(amounts)) # convert to numeric
## [1] 250 13000 50 750 1000 1000 10000 400 3000 500
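As a side note, if you have the readr package installed (an assumption; it is not used elsewhere in this tutorial), parse_number() does the same cleaning in one step, dropping the surrounding text and the thousands separators:
# same result in a single step; requires the readr package
readr::parse_number(html_text(html_nodes(bribes, ".paid-amount span")))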
Let’s do another one: the transactions during which the bribes occurred.
transaction <- html_nodes(bribes, ".transaction a")
(transaction <- html_text(transaction))
## [1] "Police Harassment" "7/12 Extract"
## [3] "Updating Passport" "Reconnection of Electricity"
## [5] "Police Verification for Passport" "Police Verification for Passport"
## [7] "Transfer of Property" "Police Verification for Passport"
## [9] "Pension Processing" "Police Verification for Passport"
And one more: the department responsible for these transactions.
# and one more
dept <- html_nodes(bribes, ".name a")
(dept <- html_text(dept))
## [1] "Police" "Revenue"
## [3] "Passport" "Electricity and Power Supply"
## [5] "Police" "Passport"
## [7] "Stamps and Registration" "Police"
## [9] "Education" "Police"
This was just one page, but note that there are many more. How do we scrape the rest? Following good coding practice, we will first write a function that takes the URL of a page, scrapes it, and returns the information we want.
scrape_bribe <- function(url){
    bribes <- read_html(url)
    # variables that we're interested in
    amounts <- html_text(html_nodes(bribes, ".paid-amount span"))
    amounts <- as.numeric(gsub("Paid INR | |\r|\n|,", "", amounts))
    transaction <- html_text(html_nodes(bribes, ".transaction a"))
    dept <- html_text(html_nodes(bribes, ".name a"))
    # putting it all together into a data frame
    df <- data.frame(
        amounts = amounts,
        transaction = transaction,
        dept = dept,
        stringsAsFactors = FALSE)
    return(df)
}
We will then start a list of data frames, and put the data frame for the initial page in the first position of that list.
bribes <- list()
bribes[[1]] <- scrape_bribe(url)
str(bribes)
## List of 1
## $ :'data.frame': 10 obs. of 3 variables:
## ..$ amounts : num [1:10] 250 13000 50 750 1000 1000 10000 400 3000 500
## ..$ transaction: chr [1:10] "Police Harassment" "7/12 Extract" "Updating Passport" "Reconnection of Electricity" ...
## ..$ dept : chr [1:10] "Police" "Revenue" "Passport" "Electricity and Power Supply" ...
How should we go about the following pages? Note that their URLs contain page=XX, where XX is 10, 20, 30, and so on. We will therefore create a base URL and then append these numbers. (Note that for this exercise we will only scrape the first 5 pages.)
base_url <- "http://ipaidabribe.com/reports/paid?page="
pages <- seq(0, 40, by=10)
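Before launching the loop, it is worth checking the URLs this will generate:
paste(base_url, pages, sep="") # the URLs for pages 0, 10, 20, 30, and 40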
And now we just need to loop over the pages, use the function we created earlier to scrape each of them, and add the results to the list. Note that we wait a couple of seconds between HTTP requests to avoid overloading the site, and print a message that informs us of the progress of the loop.
for (i in 2:length(pages)){
    # informative message about progress of loop
    message(i, '/', length(pages))
    # prepare URL
    url <- paste(base_url, pages[i], sep="")
    # scrape website
    bribes[[i]] <- scrape_bribe(url)
    # wait a couple of seconds between URL calls
    Sys.sleep(2)
}
## 2/5
## 3/5
## 4/5
## 5/5
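As a side note, in longer scraping jobs some requests will occasionally fail (timeouts, removed pages), which would stop the loop and lose the progress made so far. A more defensive variant of the same loop wraps the scraping call in tryCatch() so that a failed page is simply skipped; this is just a sketch of the pattern, not something the run above required:
for (i in 2:length(pages)){
    message(i, '/', length(pages))
    url <- paste(base_url, pages[i], sep="")
    # if the request or parsing fails, store NULL and keep going
    bribes[[i]] <- tryCatch(scrape_bribe(url), error = function(e) NULL)
    Sys.sleep(2)
}
The do.call(rbind, ...) step below simply skips any NULL entries.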
The final step is to convert the list of data frames into a single data frame that we can work with, using the function do.call(rbind, LIST) (where LIST is a list of data frames).
bribes <- do.call(rbind, bribes)
head(bribes)
## amounts transaction dept
## 1 250 Police Harassment Police
## 2 13000 7/12 Extract Revenue
## 3 50 Updating Passport Passport
## 4 750 Reconnection of Electricity Electricity and Power Supply
## 5 1000 Police Verification for Passport Police
## 6 1000 Police Verification for Passport Passport
str(bribes)
## 'data.frame': 50 obs. of 3 variables:
## $ amounts : num 250 13000 50 750 1000 1000 10000 400 3000 500 ...
## $ transaction: chr "Police Harassment" "7/12 Extract" "Updating Passport" "Reconnection of Electricity" ...
## $ dept : chr "Police" "Revenue" "Passport" "Electricity and Power Supply" ...
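Scraping is slow and the contents of the website change over time, so it is usually a good idea to save a copy of the data to disk at this point (the file name below is just an example):
write.csv(bribes, file="bribes.csv", row.names=FALSE) # so we don't have to re-scrape later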
Let’s get some quick descriptive statistics to check everything worked. First, what is the most common transaction during which a bribe was paid?
tab <- table(bribes$transaction) # frequency table
tab <- sort(tab, decreasing=TRUE) # sorting the table from most to least common
head(tab)
##
## Police Verification for Passport Others
## 9 4
## Traffic Violations Police Harassment
## 4 3
## 7/12 Extract Registration of Property
## 2 2
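If proportions are easier to interpret than raw counts, the same table can be converted with prop.table():
head(round(prop.table(tab), 2)) # share of all reports per transaction type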
What was the average bribe payment?
summary(bribes$amounts)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 40 425 1000 26519 8000 800000
And what was the average payment for each department?
agg <- aggregate(bribes$amounts, by=list(dept=bribes$dept), FUN=mean)
agg[order(agg$x, decreasing = TRUE),] # ordering from highest to lowest
## dept x
## 2 Electricity and Power Supply 200487.500
## 6 Others 57020.000
## 10 Revenue 52766.667
## 13 Urban Development Authorities 20000.000
## 5 Municipal Services 9666.667
## 11 Stamps and Registration 3933.333
## 1 Education 3000.000
## 8 Police 2171.875
## 12 Transport 1666.667
## 7 Passport 390.000
## 4 Forest 250.000
## 9 Post Office 50.000
## 3 Food, Civil Supplies and Consumer Affairs 40.000
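Given the large gap between the mean (26,519) and the median (1,000) payment above, a handful of very large bribes clearly dominates these averages. Repeating the aggregation with the median gives a more robust picture of typical payments by department:
agg_med <- aggregate(bribes$amounts, by=list(dept=bribes$dept), FUN=median)
agg_med[order(agg_med$x, decreasing=TRUE),] # median payment by department, highest to lowest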