20.4 Lab (Skip!): Setting up GCP research credits

The following steps assume that you have completed the setup described in the Google Docs document.

First, we’ll install & load all relevant packages.

library(pacman)
pacman::p_load(googleLanguageR, readr, tidyverse, bigrquery)  # installs any missing packages, then loads them

Then we set the working directory. Fill in the quotation marks with the directory where the JSON key file you created is located and read in the JSON file with gl_auth().

setwd("")
# PB: setwd(".../2021_computational_social_science/data")




gl_auth("your_JSON_file.json")
# PB: gl_auth(".../keys/css-seminar-2021-a1e75382ae2c.json")
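Alternatively, googleLanguageR can authenticate automatically when it is loaded if the GL_AUTH environment variable points to the key file. A minimal sketch, assuming the same key file as above:

Sys.setenv(GL_AUTH = "your_JSON_file.json")  # set before loading the package
library(googleLanguageR)                     # picks up GL_AUTH and authenticates

Putting GL_AUTH in your .Renviron has the same effect and avoids hard-coding the path in scripts.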

As an example, we’ll use a sample of New York Times headlines as the basis for using the Google Language products that we import below.

nytimes <- read.csv("nytimes_headlines.csv")
titles <- as.character(nytimes$Title[1:25])  # keep the first 25 headlines
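A quick sanity check that the import worked:

head(titles, 3)  # first three headlines
length(titles)   # should be 25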

20.4.1 Google Translation API

We use the Cloud Translation API to translate the headlines from English to German (other languages can of course be chosen as well; check the language codes under the following link and replace the string “de” in the target argument): https://developers.google.com/admin-sdk/directory/v1/languages.

tr_data <- gl_translate(titles, target = "de")
View(tr_data)
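gl_translate() returns a tibble; a short sketch of how you might inspect it, and how to list valid language codes directly from the API (column and function names as documented in googleLanguageR):

tr_data$translatedText          # the German translations
tr_data$detectedSourceLanguage  # should be "en" for these headlines

gl_translate_languages(target = "en")  # supported language codes with English names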

20.4.2 Google Natural Language API

Next, we use the Cloud Natural Language API to analyze the sentiment of each headline. We choose “analyzeSentiment” to analyze sentiment at the sentence level. The chunk returns a list object in which the variable “score” indicates the sentiment level, from -1 (negative) to +1 (positive).

sent_data <- gl_nlp(titles, nlp_type = "analyzeSentiment")
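To pair each headline with its overall score, a minimal sketch (assuming the documentSentiment element returned by gl_nlp() contains one score and magnitude per input, as described in the googleLanguageR documentation):

# combine headlines with their document-level sentiment
sent_scores <- data.frame(
  title     = titles,
  score     = sent_data$documentSentiment$score,
  magnitude = sent_data$documentSentiment$magnitude
)
head(sent_scores[order(sent_scores$score), ])  # most negative headlines first

Sentence-level results are available in sent_data$sentences if you want finer granularity than one score per headline.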