Monday, January 30, 2012

Updated Sentiment Analysis and a Word Cloud for Netflix - The R Way!

The Netflix investors must be happy and cheerful as the stock is up more than 78% since the beginning of the year (YES, 78%, Source: Yahoo Finance!).  I am not going to talk about what turned the stock around after the much talked/hyped about Netflix debacle of late 2011 that earned Reed Hastings quite a few UNWANTED titles and had everyone demanding his resignation from the top post.  Not so fast, Mr. Bear!  Reed Hastings must be smiling!  After a stellar performance this year, including carefully released stats on viewership and streaming hours as well as a solid Q4'11 earnings report, Netflix is back and, most importantly, viewers are back!

Well, it is not coincidental that the sentiment for Netflix is also improving: 68% of the tweets now have positive sentiment.  See the table below:

[Table: Total | Positive | Negative | Average | Total Sentiment]

*Make sure you understand and interpret this analysis correctly. This analysis is not based on NLP. 

I updated the sentiment analysis that I did last year (I was then just beginning to play with the Twitter and text-mining packages in R) and used more advanced packages like tm and wordcloud.  The new analysis is based on more than 6,800 words that are most commonly recommended in various sentiment-analysis blogs/books (check out the Hu and Liu opinion lexicon).

I came across this excellent blog by Jeffrey Bean, @JeffreyBean, and his tutorial. Thank you, Mr. Bean!  Please follow the instructions from Bean's slides and the R code listed there, as well as the R code here:

Here are the updated R code snippets -

# Load the required packages
library(twitteR)   # searchTwitter()
library(plyr)      # laply(), ddply()
library(stringr)   # str_split()

# Populate the lists of sentiment words from Hu and Liu's opinion lexicon

huliu.pwords <- scan('opinion-lexicon/positive-words.txt', what='character', comment.char=';')
huliu.nwords <- scan('opinion-lexicon/negative-words.txt', what='character', comment.char=';')

# Add some words
huliu.nwords <- c(huliu.nwords,'wtf','wait','waiting','epicfail', 'crash', 'bug', 'buggy', 'bugs', 'slow', 'lie')
#Remove some words
huliu.nwords <- huliu.nwords[!huliu.nwords=='sap']
huliu.nwords <- huliu.nwords[!huliu.nwords=='cloud']
# sanity check: 'sap' %in% huliu.nwords should now be FALSE
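Before pointing this at live tweets, the matching logic can be sanity-checked on a toy lexicon - the short word lists below are stand-ins for the Hu and Liu files, not the real ones:

```r
pwords <- c('good', 'great', 'love')   # stand-in for huliu.pwords
nwords <- c('bad', 'slow', 'crash')    # stand-in for huliu.nwords

sentence <- tolower('Netflix streaming is great, but the app crash was bad')
words <- unlist(strsplit(gsub('[[:punct:]]', '', sentence), '\\s+'))

# match() gives each word's position in the lexicon, or NA if absent;
# !is.na() turns that into TRUE/FALSE, and sum() counts the TRUEs
score <- sum(!is.na(match(words, pwords))) - sum(!is.na(match(words, nwords)))
score  # 1 positive ('great') - 2 negatives ('crash', 'bad') = -1
```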

twitterTag <- "@Netflix"
# Get 1500 tweets - an individual is only allowed to get 1500 tweets per search
tweets <- searchTwitter(twitterTag, n=1500)
tweets.text <- laply(tweets, function(t) t$getText())
sentimentScoreDF <- getSentimentScore(tweets.text)
sentimentScoreDF$TwitterTag <- twitterTag

# Get rid of tweets that have zero score and separate +ve from -ve tweets
sentimentScoreDF$posTweets <- as.numeric(sentimentScoreDF$SentimentScore >=1)
sentimentScoreDF$negTweets <- as.numeric(sentimentScoreDF$SentimentScore <=-1)

#Summarize findings
summaryDF <- ddply(sentimentScoreDF, "TwitterTag", summarise,
                   PositiveTweets=sum(posTweets), NegativeTweets=sum(negTweets))

summaryDF$TotalTweets <- summaryDF$PositiveTweets + summaryDF$NegativeTweets

#Get Sentiment Score
summaryDF$Sentiment  <- round(summaryDF$PositiveTweets/summaryDF$TotalTweets, 2)
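With made-up numbers, the arithmetic behind that sentiment figure looks like this (the six scores below are illustrative, not the real @Netflix results):

```r
# Hypothetical sentiment scores for six tweets (illustrative numbers only)
df <- data.frame(TwitterTag = '@Netflix',
                 SentimentScore = c(2, 1, -1, 3, 0, -2))
df$posTweets <- as.numeric(df$SentimentScore >= 1)
df$negTweets <- as.numeric(df$SentimentScore <= -1)

positive <- sum(df$posTweets)     # 3 tweets score >= 1
negative <- sum(df$negTweets)     # 2 tweets score <= -1
total    <- positive + negative   # the zero-score tweet drops out
sentiment <- round(positive / total, 2)
sentiment  # 3 / 5 = 0.6
```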

Saving the best for last, here is a word cloud (also called a tag cloud) for Netflix, built in R -

I will be putting the R code up here for building a word cloud after scrubbing it.
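In the meantime, here is a minimal sketch of the word-frequency step that feeds a word cloud. The two tweets below are stand-ins so the snippet runs on its own, and the plotting call from the wordcloud package is shown only as a comment:

```r
# Stand-in tweets keep this self-contained; in the post, tweets.text
# comes from searchTwitter() above
tweets.text <- c('Loving the new Netflix shows', 'Netflix streaming is great great')

words <- unlist(strsplit(tolower(gsub('[[:punct:]]', '', tweets.text)), '\\s+'))
freq  <- sort(table(words), decreasing = TRUE)
freq[1:2]  # 'great' and 'netflix' top the counts

# with the wordcloud package loaded, plotting is one call, e.g.:
# wordcloud(names(freq), freq, min.freq = 2, colors = brewer.pal(8, 'Dark2'))
```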

Happy Analyzing!


  1. Hi,

    May I ask how to get this function "getSentimentScore"?



  2. Here you go Powell.

    getSentimentScore <- function(tweets) {
      scores <- laply(tweets, function(singleTweet) {
        # clean up tweets with R's regex-driven global substitute, gsub()
        singleTweet <- gsub('[[:punct:]]', '', singleTweet)
        singleTweet <- gsub('[[:cntrl:]]', '', singleTweet)
        singleTweet <- gsub('\\d+', '', singleTweet)
        # convert to lower case for comparison, split the tweet into single words and flatten the list
        tweetWords <- unlist(str_split(tolower(singleTweet), '\\s+'))
        # compare our words to the dictionaries of positive & negative terms;
        # match() returns the position of the matched term or NA, is.na() converts that to a boolean
        pos.matches <- !is.na(match(tweetWords, huliu.pwords))
        neg.matches <- !is.na(match(tweetWords, huliu.nwords))
        # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
        sum(pos.matches) - sum(neg.matches)
      })
      return(data.frame(SentimentScore=scores, Tweet=tweets))
    }


  4. Thanks for sharing this topic, Jitender. Nice work. My graduation paper was on the same line.
    I did some feature extraction and product sentiment analysis. I didn't get to the summarization part, though.
