We’ll now turn to a different type of Twitter data – static data, either recent tweets or user-level information. This type of data can be retrieved with Twitter’s REST API. We will use the tweetscores package here, which I created to facilitate the collection and analysis of Twitter data.

Searching recent tweets

It is possible to download recent tweets, but only those less than 7 days old, and even then the search is not guaranteed to return all of them.
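The "hits left" counts in the output below come from Twitter's rate limiting. As a rough back-of-the-envelope sketch (assuming, hypothetically, the standard Search API limits of roughly 100 tweets per request and 180 requests per 15-minute window, which Twitter may change):

```r
# Back-of-the-envelope search throughput (limits here are assumptions,
# not guaranteed by the API)
tweets_per_request <- 100
requests_per_window <- 180
window_minutes <- 15

# maximum tweets collectable in one rate-limit window
max_tweets_per_window <- tweets_per_request * requests_per_window
max_tweets_per_window   # 18000

# minutes needed to collect 100,000 tweets, ignoring pauses within a window
n <- 100000
ceiling(n / max_tweets_per_window) * window_minutes   # 90
```

This is why functions like searchTweets() report how many "hits" remain: once the window is exhausted, collection has to pause until the limit resets.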

load("~/my_oauth")
library(tweetscores)
## Loading required package: R2WinBUGS
## Loading required package: coda
## Loading required package: boot
## ##
## ## tweetscores: tools for the analysis of Twitter data
## ## Pablo Barbera (LSE)
## ## www.tweetscores.com
## ##
library(streamR)
## Loading required package: RCurl
## Loading required package: bitops
## Loading required package: rjson
## Warning: package 'rjson' was built under R version 3.4.4
## Loading required package: ndjson
## Warning: package 'ndjson' was built under R version 3.4.4
searchTweets(q=c("kennedy", "supreme court"), 
  filename="../data/kennedy-tweets.json",
  n=1000, until="2018-07-01", 
  oauth=my_oauth)
## 100 tweets. Max id: 1013210573645938688
## 148 hits left
## 200 tweets. Max id: 1013210313162883072
## 147 hits left
## 300 tweets. Max id: 1013210021675577344
## 146 hits left
## 400 tweets. Max id: 1013209744461348864
## 145 hits left
## 500 tweets. Max id: 1013209510113107968
## 144 hits left
## 600 tweets. Max id: 1013209268152135680
## 143 hits left
## 700 tweets. Max id: 1013209045967212544
## 142 hits left
## 800 tweets. Max id: 1013208803217567744
## 141 hits left
## 900 tweets. Max id: 1013208558740140032
## 140 hits left
## 1000 tweets. Max id: 1013208361247100928
tweets <- parseTweets("../data/kennedy-tweets.json")
## 1000 tweets have been parsed.

What are the most popular hashtags?

library(stringr)
ht <- str_extract_all(tweets$text, "#(\\d|\\w)+")
ht <- unlist(ht)
head(sort(table(ht), decreasing = TRUE))
## ht
##         #MN08 #Election2018    #Minnesota        #Trump    #MNprimary 
##            21            19            19            14            13 
##        #MNpol 
##            10
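The same regex approach works for other tweet entities, such as @-mentions. A minimal sketch on toy texts, using base R's regmatches() so it runs without stringr (the pattern mirrors the hashtag one above):

```r
# Toy tweet texts standing in for tweets$text
texts <- c("RT @nytimes: Kennedy retires #SCOTUS",
           "Thoughts? @maddow @brianstelter",
           "no mentions here")

# extract all @-mentions (base R equivalent of str_extract_all)
m <- regmatches(texts, gregexpr("@(\\d|\\w)+", texts, perl = TRUE))
mentions <- unlist(m)
sort(table(mentions), decreasing = TRUE)
```

Swapping the leading `@` for `#` recovers the hashtag extraction used above.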

You can check the stringr documentation for the full set of string-matching options.

Extracting users’ profile information

This is how you would extract information from user profiles:

wh <- c("realDonaldTrump", "POTUS", "VP", "FLOTUS")
users <- getUsersBatch(screen_names=wh,
                       oauth=my_oauth)
## 1--4 users left
str(users)
## 'data.frame':    4 obs. of  9 variables:
##  $ id_str         : chr  "818910970567344128" "25073877" "822215679726100480" "818876014390603776"
##  $ screen_name    : chr  "VP" "realDonaldTrump" "POTUS" "FLOTUS"
##  $ name           : chr  "Vice President Mike Pence" "Donald J. Trump" "President Trump" "Melania Trump"
##  $ description    : chr  "Vice President Mike Pence. Husband, father, & honored to serve as the 48th Vice President of the United States."| __truncated__ "45th President of the United States of America\U0001f1fa\U0001f1f8" "45th President of the United States of America, @realDonaldTrump. Tweets archived: https://t.co/eVVzoBb3Zr" "This account is run by the Office of First Lady Melania Trump. Tweets may be archived. More at https://t.co/eVVzoBb3Zr"
##  $ followers_count: int  6384634 53219516 23538114 10679230
##  $ statuses_count : int  4380 38100 3349 327
##  $ friends_count  : int  11 47 39 6
##  $ created_at     : chr  "Tue Jan 10 20:02:44 +0000 2017" "Wed Mar 18 13:46:38 +0000 2009" "Thu Jan 19 22:54:28 +0000 2017" "Tue Jan 10 17:43:50 +0000 2017"
##  $ location       : chr  "Washington, D.C." "Washington, DC" "Washington, D.C." "Washington, D.C."

Which of these has the most followers?

users[which.max(users$followers_count),]
##     id_str     screen_name            name
## 2 25073877 realDonaldTrump Donald J. Trump
##                                                          description
## 2 45th President of the United States of America\U0001f1fa\U0001f1f8
##   followers_count statuses_count friends_count
## 2        53219516          38100            47
##                       created_at       location
## 2 Wed Mar 18 13:46:38 +0000 2009 Washington, DC
users$screen_name[which.max(users$followers_count)]
## [1] "realDonaldTrump"
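Beyond which.max(), you can rank all accounts at once with order(). A small self-contained sketch on a toy data frame shaped like the one above (values copied from the output):

```r
# Toy data frame with the columns used above
users <- data.frame(
  screen_name = c("VP", "realDonaldTrump", "POTUS", "FLOTUS"),
  followers_count = c(6384634, 53219516, 23538114, 10679230),
  stringsAsFactors = FALSE
)

# sort accounts from most to least followed
users[order(users$followers_count, decreasing = TRUE), "screen_name"]
```

The same idiom generalizes to any numeric column, e.g. statuses_count or friends_count.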

Download up to 3,200 recent tweets from a Twitter account:

getTimeline(filename="../data/realDonaldTrump.json", 
  screen_name="realDonaldTrump", 
  n=1000, oauth=my_oauth)
## 200 tweets. Max id: 1008725438972211200
## 883 hits left
## 400 tweets. Max id: 1002510522032541701
## 882 hits left
## 600 tweets. Max id: 994182263960162304
## 881 hits left
## 800 tweets. Max id: 985504808646971392
## 880 hits left
## 1000 tweets. Max id: 973187513731944448

What are the most common hashtags?

tweets <- parseTweets("../data/realDonaldTrump.json")
## 1000 tweets have been parsed.
ht <- str_extract_all(tweets$text, "#(\\d|\\w)+")
ht <- unlist(ht)
head(sort(table(ht), decreasing = TRUE))
## ht
##          #MAGA    #RightToTry        #TaxDay      #G7Summit #DrainTheSwamp 
##             22              4              4              3              2 
##   #MemorialDay 
##              2

Building friend and follower networks

Download friends and followers:

followers <- getFollowers("MethodologyLSE", 
    oauth=my_oauth)
## 12 API calls left
## 1389 followers. Next cursor: 0
## 11 API calls left
friends <- getFriends("MethodologyLSE", 
    oauth=my_oauth)
## 8 API calls left
## 121 friends. Next cursor: 0
## 7 API calls left
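Since getFollowers() and getFriends() both return vectors of user IDs, simple set operations are enough to find reciprocal ties (friends who also follow the account back). A sketch on made-up IDs:

```r
# Hypothetical ID vectors of the kind returned by getFollowers()/getFriends()
followers <- c("111", "222", "333", "444")
friends   <- c("222", "444", "555")

# accounts the user follows that also follow back
mutual <- intersect(friends, followers)
mutual   # "222" "444"

# share of friends that reciprocate
length(mutual) / length(friends)
```

setdiff(friends, followers) would give the non-reciprocated ties instead.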

What are the most common words that friends of the LSE Methodology Twitter account use to describe themselves on Twitter?

# extract profile descriptions
users <- getUsersBatch(ids=friends, oauth=my_oauth)
## 1--121 users left
## 2--21 users left
# create table with frequency of word use
library(quanteda)
## Warning: package 'quanteda' was built under R version 3.4.4
## Package version: 1.3.0
## Parallel computing: 2 of 4 threads used.
## See https://quanteda.io for tutorials and examples.
## 
## Attaching package: 'quanteda'
## The following object is masked from 'package:utils':
## 
##     View
tw <- corpus(users$description[users$description!=""])
dfm <- dfm(tw, remove=c(stopwords("english"), stopwords("spanish"),
                                 "t.co", "https", "rt", "rts", "http"),
           remove_punct=TRUE)
topfeatures(dfm, n = 30)
##        lse   research    science     london     school  economics 
##         48         39         30         27         25         25 
##  political     social department     centre     public     policy 
##         24         21         14         11         11         11 
##       news   teaching     events      lse's university         uk 
##         11         10         10         10          9          8 
##    twitter    account         us    society   official  programme 
##          8          8          8          8          8          7 
##  institute       data   politics   analysis   academic   european 
##          7          7          7          6          6          6
# create wordcloud
par(mar=c(0,0,0,0))
textplot_wordcloud(dfm, rotation=0, min_size=1, max_size=5, max_words=100)

Estimating ideology based on Twitter networks

The tweetscores package also includes functions to replicate the method developed in the Political Analysis paper Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data. For an application of this method, see also this Monkey Cage blog post.

# download list of friends for an account
user <- "p_barbera"
friends <- getFriends(user, oauth=my_oauth)
## 7 API calls left
## 1330 friends. Next cursor: 0
## 6 API calls left
# estimating ideology with MCMC methods
results <- estimateIdeology(user, friends, verbose=FALSE)
## p_barbera follows 15 elites: BarackObama, nytimes, maddow, RepKarenBass, MaxineWaters, brianstelter, carr2n, chucktodd, fivethirtyeight, NickKristof, nytgraphics, nytimesbits, NYTimeskrugman, nytlabs, thecaucus
# trace plot to monitor convergence
tracePlot(results, "theta")

# comparing with other ideology estimates
plot(results)
## Warning: Ignoring unknown parameters: width

Other types of data

The REST API also offers a long list of other endpoints that may be useful at some point, depending on your research interests.

  1. You can search users related to specific keywords:
users <- searchUsers(q="london school of economics", count=100, oauth=my_oauth)
users$screen_name[1:10]
##  [1] "LSEnews"        "LSEGovernment"  "LSEManagement"  "StudyLSE"      
##  [5] "LSEIRDept"      "kenbenoit"      "Thomgua"        "SJRickard"     
##  [9] "MethodologyLSE" "LSEMaths"
  2. If you know the IDs of specific tweets, you can download them directly from the API. This is useful because tweets cannot be redistributed as part of the replication materials of a published paper, but the list of tweet IDs can be shared:
# Downloading tweets when you know the ID
getStatuses(ids=c("474134260149157888", "266038556504494082"),
            filename="../data/old-tweets.json",
            oauth=my_oauth)
## 897 API calls left
## 896 API calls left
parseTweets("../data/old-tweets.json")
## 2 tweets have been parsed.
##                                                             text
## 1 Are you allowed to impeach a president for gross incompetence?
## 2           The electoral college is a disaster for a democracy.
##   retweet_count favorite_count favorited truncated             id_str
## 1            NA             NA     FALSE     FALSE 474134260149157888
## 2            NA             NA     FALSE     FALSE 266038556504494082
##   in_reply_to_screen_name
## 1                      NA
## 2                      NA
##                                                                                 source
## 1 <a href="http://twitter.com/download/android" rel="nofollow">Twitter for Android</a>
## 2                   <a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>
##   retweeted                     created_at in_reply_to_status_id_str
## 1     FALSE Wed Jun 04 10:23:11 +0000 2014                        NA
## 2     FALSE Wed Nov 07 04:45:09 +0000 2012                        NA
##   in_reply_to_user_id_str lang listed_count verified       location
## 1                      NA   en        90053     TRUE Washington, DC
## 2                      NA   en        90053     TRUE Washington, DC
##   user_id_str
## 1    25073877
## 2    25073877
##                                                          description
## 1 45th President of the United States of America\U0001f1fa\U0001f1f8
## 2 45th President of the United States of America\U0001f1fa\U0001f1f8
##   geo_enabled                user_created_at statuses_count
## 1        TRUE Wed Mar 18 13:46:38 +0000 2009          38100
## 2        TRUE Wed Mar 18 13:46:38 +0000 2009          38100
##   followers_count favourites_count protected                user_url
## 1        53219524               25     FALSE https://t.co/OMxB0x7xC5
## 2        53219524               25     FALSE https://t.co/OMxB0x7xC5
##              name time_zone user_lang utc_offset friends_count
## 1 Donald J. Trump        NA        en         NA            47
## 2 Donald J. Trump        NA        en         NA            47
##       screen_name country_code country place_type full_name place_name
## 1 realDonaldTrump           NA      NA         NA        NA         NA
## 2 realDonaldTrump           NA      NA         NA        NA         NA
##   place_id place_lat place_lon lat lon expanded_url url
## 1       NA       NaN       NaN  NA  NA           NA  NA
## 2       NA       NaN       NaN  NA  NA           NA  NA
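For replication materials, you would typically share just the ID list as a plain-text file, one ID per line. A minimal sketch (the file name is illustrative):

```r
# IDs to share alongside a published paper (one per line)
ids <- c("474134260149157888", "266038556504494082")
writeLines(ids, "tweet-ids.txt")

# a replicator reads the IDs back and re-downloads the tweets
# with getStatuses(), as shown above
ids_in <- readLines("tweet-ids.txt")
identical(ids_in, ids)   # TRUE
```

Keeping IDs as character strings matters: they exceed R's integer range, and storing them as numbers can silently lose precision.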
  3. Lists of Twitter users, compiled by other users, are also accessible through the API.
# download user information from a list
MCs <- getList(list_name="new-members-of-congress", 
               screen_name="cspan", oauth=my_oauth)
## 897 API calls left
## 20 users in list. Next cursor: 5427698142933319684
## 896 API calls left
## 40 users in list. Next cursor: 4611686021745187729
## 895 API calls left
## 60 users in list. Next cursor: 0
## 894 API calls left
head(MCs)
##             id             id_str                 name     screen_name
## 1 8.272798e+17 827279765287559171    Rep. Mike Johnson  RepMikeJohnson
## 2 8.235530e+17 823552974253342721     Anthony G. Brown RepAnthonyBrown
## 3 8.171385e+17 817138492614524928             Ted Budd      RepTedBudd
## 4 8.170763e+17 817076257770835968    Adriano Espaillat    RepEspaillat
## 5 8.170502e+17 817050219007328258 Rep. Blunt Rochester   RepBRochester
## 6 8.168339e+17 816833925456789505  Nanette D. Barragán     RepBarragan
##                         location
## 1                 Washington, DC
## 2                 Washington, DC
## 3   Davie County, North Carolina
## 4 https://www.facebook.com/Congr
## 5 Delaware, USA - Washington, DC
## 6                  San Pedro, CA
##                                                                                                                                          description
## 1                                                 Proudly serving Louisiana's 4th Congressional District. Member on @HouseJudiciary & @NatResources.
## 2  Congressman proudly representing Maryland's Fourth District. Member of @HASCDemocrats & @NRDems. Father, husband & retired @USArmyReserve Colonel
## 3                                                                                         Proudly serving the 13th district of North Carolina. #NC13
## 4                                U. S. Representative proudly serving New York’s 13th Congressional District. Follow my work in Washington and #NY13
## 5                           Official Twitter page for U.S. Representative Lisa Blunt Rochester (D-DE). Tweets from Rep. Blunt Rochester signed -LBR.
## 6 Official account. Honored to represent California's 44th Congressional District. #CA44 Member of the @HispanicCaucus @USProgressives @Dodgers fan.
##                       url followers_count friends_count
## 1 https://t.co/qLAyhrFbRl            3266           401
## 2 https://t.co/2u5X332ICM            9562           832
## 3 https://t.co/VTsvWe0pia            5298           195
## 4 https://t.co/lcRqmQFAbz           10660          1248
## 5 https://t.co/Fe3XCG51wO            7057           315
## 6 https://t.co/Mt3nPi7hSH            8065           603
##                       created_at time_zone lang
## 1 Thu Feb 02 22:17:20 +0000 2017        NA   en
## 2 Mon Jan 23 15:28:24 +0000 2017        NA   en
## 3 Thu Jan 05 22:39:33 +0000 2017        NA   en
## 4 Thu Jan 05 18:32:15 +0000 2017        NA   en
## 5 Thu Jan 05 16:48:47 +0000 2017        NA   en
## 6 Thu Jan 05 02:29:18 +0000 2017        NA   en

This is also useful if, for example, you’re interested in compiling lists of journalists, since many media outlets link to such lists from their profiles.

  4. You can also get the list of users who retweeted a particular tweet – unfortunately, it is limited to the 100 most recent retweets.
# Download list of users who retweeted a tweet (unfortunately, only up to 100)
rts <- getRetweets(id='942123433873281024', oauth=my_oauth)
## 74 API calls left
## 75 retweeters. Next cursor: 0
## 73 API calls left
# https://twitter.com/realDonaldTrump/status/942123433873281024
users <- getUsersBatch(ids=rts, oauth=my_oauth)
## 1--75 users left
# create table with frequency of word use
library(quanteda)
tw <- corpus(users$description[users$description!=""])
dfm <- dfm(tw, remove=c(stopwords("english"), stopwords("spanish"),
                                 "t.co", "https", "rt", "rts", "http"),
           remove_punct = TRUE)
# create wordcloud
par(mar=c(0,0,0,0))
textplot_wordcloud(dfm, rotation=0, min_size=1, max_size=5, max_words=100)

  5. And one final function, to convert dates from Twitter’s internal format into one that is easier to work with in R:
# format Twitter dates to facilitate analysis
tweets <- parseTweets("../data/realDonaldTrump.json")
## 1000 tweets have been parsed.
tweets$date <- formatTwDate(tweets$created_at, format="date")
hist(tweets$date, breaks="month")
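Under the hood, Twitter's created_at strings can be parsed with base R alone; a sketch of what a function like formatTwDate presumably does, assuming the standard "%a %b %d %H:%M:%S %z %Y" layout seen in the output above:

```r
# A created_at string in Twitter's format, taken from the output above
created_at <- "Wed Mar 18 13:46:38 +0000 2009"

# parse English day/month names regardless of the system locale
old <- Sys.setlocale("LC_TIME", "C")
dt <- as.POSIXct(created_at, format = "%a %b %d %H:%M:%S %z %Y", tz = "UTC")
Sys.setlocale("LC_TIME", old)

as.Date(dt)   # "2009-03-18"
```

Once the dates are POSIXct or Date objects, functions like hist(), cut(), and seq() can aggregate tweets by day, week, or month.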

Now time for another challenge!