## 20.4 Lab (Skip!): Setting up GCP research credits

These steps assume that you have already completed the setup described in the accompanying Google Docs document.

First, we install and load all relevant packages.

```r
library(pacman)
pacman::p_load(googleLanguageR, readr, tidyverse, bigrquery)
```

Then we set up the working directory. Fill in the quotation marks with the directory where the downloaded JSON key file is located and authenticate with `gl_auth()`.

```r
setwd("")
# PB: setwd(".../2021_computational_social_science/data")

gl_auth("your_JSON_file.json")
# PB: gl_auth(".../keys/css-seminar-2021-a1e75382ae2c.json")
```
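As an alternative to calling `gl_auth()` in every session, the `googleLanguageR` documentation describes reading the key path from the `GL_AUTH` environment variable, which the package picks up when it is loaded. A minimal sketch (the file path is a placeholder for your own key):

```r
# Point GL_AUTH at the service-account key for the current session;
# for a permanent setup, add GL_AUTH=... to your .Renviron instead
Sys.setenv(GL_AUTH = "your_JSON_file.json")

# The package authenticates automatically on load when GL_AUTH is set
library(googleLanguageR)
```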

As an example, we’ll use a sample of New York Times headlines as the basis for the Google Language products that we import below.

```r
nytimes <- read.csv("nytimes_headlines.csv")
titles <- as.character(nytimes$Title[1:25])
```

We then translate the headlines into German and run a sentiment analysis on them.

```r
# Translate the first 25 headlines into German
tr_data <- gl_translate(titles, target = "de")
View(tr_data)

# Run sentiment analysis via the Natural Language API
sent_data <- gl_nlp(titles, nlp_type = "analyzeSentiment")
```
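A minimal sketch of how the results can be inspected. The column and element names below follow the `googleLanguageR` documentation (`gl_translate()` returns a tibble, `gl_nlp()` a list); verify them against your own output:

```r
# Translated headlines: the tibble includes the columns
# translatedText and detectedSourceLanguage
head(tr_data$translatedText)

# Document-level sentiment: the documentSentiment element holds a
# score (negative to positive) and a magnitude for each headline
head(sent_data$documentSentiment)
```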