## 12.1 PDF files

We will cover how to scrape data from:

1. offline PDF files
2. online PDF files

### 12.1.1 Offline pdf file

We need to install and load the `pdftools` package to do the extraction.

```r
install.packages("pdftools")
library(pdftools)
```

To read a PDF as text, use `pdf_text()`. It returns a character vector with one element per page.

```r
txt <- pdf_text("path/file.pdf")
```

Then we can extract a particular page.

```r
test <- txt[49]  # page 49
```

The PDF file contains a table.

To split the page text into rows, we use the function `scan`.

```r
rows <- scan(textConnection(test),
             what = "character", sep = "\n")
```
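With `sep = "\n"`, `scan` breaks a multi-line string into one element per line. A quick offline check with a made-up two-line page (the text is hypothetical; real pages come from `pdf_text`):

```r
# A fake "page" of text, standing in for one element of pdf_text() output
page <- "Header line\n1    Ang Mo Kio    174,770"
rows <- scan(textConnection(page), what = "character", sep = "\n")
length(rows)  # 2
```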

Then we can split each row into cells, using runs of whitespace as the delimiter.

```r
row <- unlist(strsplit(rows[1], " \\s+ "))
```
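The pattern `" \\s+ "` only matches runs of at least three spaces, so multi-word names separated by single spaces stay in one cell. A check on a made-up table row (the row text is hypothetical):

```r
# Columns in the PDF table are separated by wide runs of spaces,
# while words inside a name are separated by single spaces.
sample_row <- "1    Ang Mo Kio      174,770"
cells <- unlist(strsplit(sample_row, " \\s+ "))
cells  # "1" "Ang Mo Kio" "174,770"
```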

### 12.1.2 Online pdf file

First we download a PDF file from the web using the function `download.file`.

Then we import the PDF file and extract page 49, which contains a table, and use `scan` to split the page into rows.

Then we loop over the rows (starting from row 7) and perform the following operations: (1) split each row on runs of whitespace (`\\s+`) using `strsplit`, (2) `unlist` the result to make it a vector, and (3) if the third cell is present, store the second cell (the name) and the third cell (the total, with commas removed).

```r
link <- paste0(
  "http://www.singstat.gov.sg/docs/",
  "default-source/default-document-library/",
  "publications/publications_and_papers/",
  "cop2010/census_2010_release3/",
  "cop2010sr3.pdf")
download.file(link, "census2010_3.pdf", mode = "wb")

txt <- pdf_text("census2010_3.pdf")
test <- txt[49]  # P.49
rows <- scan(textConnection(test), what = "character",
             sep = "\n")

name  <- c()
total <- c()

for (i in 7:length(rows)) {
  row <- unlist(strsplit(rows[i], " \\s+ "))
  if (!is.na(row[3])) {
    name  <- c(name, row[2])
    total <- c(total,
               as.numeric(gsub(",", "", row[3])))
  }
}
```
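The comma-stripping conversion used in the loop can be checked in isolation; the sample totals below are made up:

```r
# Totals in the table are printed with thousands separators;
# gsub removes the commas so as.numeric can parse them.
totals  <- c("174,770", "3,105", "27")
cleaned <- as.numeric(gsub(",", "", totals))
cleaned  # 174770 3105 27
```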

We will use the `RCurl` package to download a large number of CSV files. Very often we need to download many CSV files from a website. Luckily, such CSV files are usually stored with structured URL paths.

For example, suppose that we want to download all the historical weather data of Singapore's airport. We go to the website http://www.weather.gov.sg/climate-historical-daily/. At the bottom of the page, we can see that the download link for one CSV file is http://www.weather.gov.sg/files/dailydata/DAILYDATA_S24_201712.csv.

Hence, we will use `getURL` to fetch the file and then use `textConnection` to read the CSV directly from memory.

```r
install.packages("RCurl")
library(RCurl)

URL <- paste0("http://www.weather.gov.sg/files/",
              "dailydata/DAILYDATA_S24_201712.csv")
x  <- getURL(URL)
df <- read.csv(textConnection(x))
```
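`textConnection` treats an in-memory string as if it were a file, which is what lets `read.csv` parse the downloaded text without writing it to disk. A quick offline check with a made-up two-row CSV string:

```r
# A small CSV held in a string, standing in for the text returned by getURL
csv_text <- "station,rainfall\nS24,0.2\nS24,1.4"
df <- read.csv(textConnection(csv_text))
df  # two rows, columns: station, rainfall
```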

However, very often we want to download data for more months. Then we can use a loop. By guessing and checking, we know that S24 refers to Changi airport, 2017 is the year, and 12 is December. To download a whole year of data, we have to download all 12 monthly files; at each iteration the link changes dynamically and the new data is appended:

```r
site   <- "http://www.weather.gov.sg/files/dailydata/"
months <- c("01", "02", "03", "04", "05", "06",
            "07", "08", "09", "10", "11", "12")
df <- data.frame()
for (month in months) {
  filename <- paste0("DAILYDATA_S24_2017", month, ".csv")
  temp <- read.csv(textConnection(getURL(paste0(site, filename))))
  df <- rbind(df, temp)
}
```

Alternatively, we can download each month as a separate CSV file into a single folder and then combine all the CSV files at the end. This is particularly useful when the CSV files are huge.

The following code first downloads all the CSV files into a temp folder and then combines all CSV files in that folder. To combine them, we obtain the paths of all files using `list.files`, where the option `full.names` is set to `TRUE` so that the directory path is included. Then we build a list of data tables by applying the import function `fread` (from the `data.table` package) over the paths with `lapply`. Finally, we use `rbindlist` to combine all the data in the list.

```r
library(data.table)

site   <- "http://www.weather.gov.sg/files/dailydata/"
months <- c("01", "02", "03", "04", "05", "06",
            "07", "08", "09", "10", "11", "12")
dir.create("temp", showWarnings = FALSE)
for (month in months) {
  filename <- paste0("DAILYDATA_S24_2017", month, ".csv")
  download.file(paste0(site, filename),
                file.path("temp", filename), mode = "wb")
}
files <- list.files("temp", full.names = TRUE)
lst <- lapply(files, fread)
df  <- rbindlist(lst, fill = TRUE)
```