Sentiment analysis of tweets using Neural Networks
Adrian Palacios
Universidad Politécnica de Valencia
June 6th, 2013
Introduction
The objective of this work is:
• To use Neural Networks (via the April toolkit) for the polarity classification of tweets.
• To check how NNs behave when applying different techniques for preprocessing the data.
• We are not looking for good results; we are just experimenting with these techniques.
Preprocessing of tweets
Prior to the training of NNs, we need to obtain a feature vector representation for the samples (tweets):
Preprocessing techniques
To achieve this, we create a bag of words after applying one of the following preprocessing techniques:
1. Unigrams.
2. Bigrams.
3. Stemming.
4. Lemmatization.
5. Part-of-Speech tagging.
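As a minimal sketch of the feature-extraction step, a bag of words can be built by counting token features, with adjacent pairs as the features in the bigram case. The function name and the whitespace tokenization are our own illustration, not the toolkit's API:

```python
from collections import Counter

def bag_of_words(tokens, use_bigrams=False):
    """Count-based bag-of-words representation of a token list.

    With use_bigrams=True, adjacent token pairs become the features
    instead of single tokens.
    """
    if use_bigrams:
        features = list(zip(tokens, tokens[1:]))
    else:
        features = tokens
    return Counter(features)

tweet = "no me gusta este movil no".split()
print(bag_of_words(tweet))                    # unigram counts ("no" appears twice)
print(bag_of_words(tweet, use_bigrams=True))  # bigram counts
```

In practice each tweet's counts would be mapped onto a fixed vocabulary to obtain equal-length input vectors for the NN.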
Stemming
Stemming: A process that chops off the suffixes of a given word following some predefined rules.
Examples:
• Stem(run): run.
• Stem(ran): ran.
• Stem(running): run.
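The suffix-chopping idea can be sketched with a toy stemmer. This is an illustration only, not the stemmer used in the experiments (a real setup would use e.g. the NLTK's Porter stemmer):

```python
def naive_stem(word):
    """Toy suffix-stripping stemmer: chop a few common English suffixes
    and collapse a doubled final consonant ("runn" -> "run")."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            word = word[: -len(suffix)]
            if len(word) >= 2 and word[-1] == word[-2] and word[-1] not in "aeiou":
                word = word[:-1]
            break
    return word

print(naive_stem("running"))  # run
print(naive_stem("ran"))      # ran (no suffix rule applies)
print(naive_stem("run"))      # run
```

Note that "ran" stays as "ran": stemming has no knowledge of irregular forms, which is exactly where lemmatization differs.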
Lemmatization
Lemmatization: A process that determines the lemma (canonical form of the lexeme) of a given word.
Examples:
• Lemma(run): run.
• Lemma(ran): run.
• Lemma(running): run.
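A toy dictionary-backed lemmatizer shows the contrast with stemming. The lookup table is invented for illustration; real systems (e.g. the NLTK's WordNet lemmatizer or Freeling) rely on full morphological lexicons:

```python
# Hypothetical miniature lemma table (illustration only).
LEMMA_TABLE = {"ran": "run", "running": "run", "better": "good", "mice": "mouse"}

def lemmatize(word):
    """Return the canonical form of the lexeme, falling back to the word itself."""
    return LEMMA_TABLE.get(word, word)

print(lemmatize("ran"))      # run  (a stemmer would leave "ran" untouched)
print(lemmatize("running"))  # run
```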
PoS tagging
PoS tagging: The assignment of Part-of-Speech tags to the words of a given sentence.
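A toy lexicon-based unigram tagger sketches the idea. The lexicon and tagset here are invented for illustration; the actual experiments used real taggers from the NLTK and Freeling:

```python
# Hypothetical miniature tagging lexicon (illustration only).
LEXICON = {"i": "PRON", "love": "VERB", "this": "DET", "phone": "NOUN"}

def toy_pos_tag(tokens):
    """Assign each token its tag from the lexicon, with 'NOUN' as a fallback."""
    return [(tok, LEXICON.get(tok.lower(), "NOUN")) for tok in tokens]

print(toy_pos_tag("i love this phone".split()))
# [('i', 'PRON'), ('love', 'VERB'), ('this', 'DET'), ('phone', 'NOUN')]
```

The resulting word/tag pairs (or the tags alone) can then feed the bag-of-words step instead of the raw tokens.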
Learning techniques
The polarity classification will be made:
• using a Multilayer Perceptron with a single hidden layer,
• applying the 5-fold cross-validation technique,
• and building an ensemble of the resulting MLPs.
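The 5-fold split behind this setup can be sketched as follows; this is our own helper for illustration, not April toolkit code:

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into 5 folds; each fold serves
    once as the validation set while the other 4 train one MLP."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]

splits = five_fold_indices(10)
print(len(splits))   # 5 (train, validation) index pairs
print(splits[0])
```

Training one MLP per split is what later yields the 5 classifiers merged in the ensemble step.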
Hyper-parameter search
We will perform a random search for hyper-parameter optimization instead of a grid search.
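Random search simply samples each hyper-parameter independently per trial instead of enumerating a grid. The parameter names and ranges below are hypothetical, since the slides do not list the actual ones:

```python
import random

def sample_hyperparams(rng):
    """Draw one random hyper-parameter configuration (assumed ranges)."""
    return {
        "hidden_units": rng.choice([32, 64, 128, 256]),
        "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform in [1e-4, 1e-1]
        "momentum": rng.uniform(0.0, 0.9),
    }

rng = random.Random(42)
trials = [sample_hyperparams(rng) for _ in range(20)]  # 20 random trials
print(trials[0])
```

Sampling the learning rate log-uniformly is a common choice because its useful values span orders of magnitude.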
Ensemble methods
After training is done, since we use 5-fold cross-validation, we get 5 MLPs for each set of parameters.
To be consistent, we merge these 5 classifiers into a single one using the bootstrap aggregating method (votes have equal weight) for the ensemble.
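Equal-weight voting over the 5 MLPs can be sketched as below; the polarity labels P/N/NEU are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Equal-weight voting: each classifier casts one vote per sample;
    the most common label wins."""
    return [Counter(sample_votes).most_common(1)[0][0]
            for sample_votes in zip(*predictions)]

# Five classifiers' predicted labels for three tweets:
preds = [["P", "N", "NEU"],
         ["P", "N", "N"],
         ["P", "P", "NEU"],
         ["N", "N", "NEU"],
         ["P", "N", "NEU"]]
print(majority_vote(preds))  # ['P', 'N', 'NEU']
```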
Corpus
We will work with the corpus provided at the 2012 edition of the Workshop on Sentiment Analysis at SEPLN.
          Training    Test
Samples      7219    60798
Training results
Accuracy of the validation set classification:
               3 levels   5 levels
Unigrams          54.44      45.62
Bigrams           54.09      39.99
Stemming          62.34      47.49
Lemmatization     61.60      46.75
PoS-tagging       52.58      38.40
Test results
Accuracy of the test set classification (average and ensemble):
Average:
               3 levels   5 levels
Unigrams          32.13      26.12
Bigrams           32.39      28.21
Stem.             32.34      26.81
Lemma.            31.84      26.18
PoS-tag.          35.22      35.22

Ensemble:
               3 levels   5 levels
Unigrams          32.16      26.52
Bigrams           32.32      29.32
Stem.             32.23      27.16
Lemma.            31.80      26.49
PoS-tag.          35.22      35.22
Conclusions
Results are poor, but we could improve them by:
• Using more complex techniques for preprocessing.
• Using more complex models for learning.
• Exploring more values for random hyper-parameter search.
• Learning from PoS tagged tweets in a different way.
Questions?
The tools used for the experiments can be found at:
• The NLTK: nltk.org
• Freeling: nlp.lsi.upc.edu/freeling
• The April toolkit: github.com/pakozm/april-ann