Video Tutorial: Sentiment Analysis

Sentiment analysis is the interpretation and classification of emotions (positive, negative, and neutral) within text data using text analysis techniques.
This function is available for the following languages:
- Simplified Chinese

Sentiment analysis can, for example, be used for:
- identifying and classifying a piece of text according to the tone it conveys.
- understanding the social sentiment of a brand, product, or service.
- identifying respondents' sentiment toward a subject discussed in online conversations and feedback.
- analysing student evaluations of lectures, seminars, or study programs.
Sentiment analysis works best on data such as responses to open-ended survey questions, evaluations, online conversations, and similar material.
To open the tool, select Code > Search & Code > Sentiment Analysis from the main menu.
Select documents or document groups that you want to search and click Continue.
Select the base unit for the search and the coding: sentences or paragraphs.
Select the type of sentiment you want to code:
ATLAS.ti proposes subcode labels for each sentiment: Positive / Neutral / Negative. If you want to use different names, you can change them here.
Manage Models: If you want to improve your results, you can download and install a more comprehensive model. Click Manage Models to install or uninstall an extended model.
Click Continue to begin searching the selected documents. On the next screen, the search results are presented, and you can review them.
The results page opens a Quotation Reader that shows the quotations to be coded with the proposed codes. If codes already exist at a quotation, they are shown as well.
By clicking on the eye icon, you can switch between small, medium, and large previews.
You can code all results with one of the proposed codes, or with all proposed codes at once; or you can review each data segment individually and code it by clicking on the plus next to the code name.
Depending on the base unit you selected at the beginning, either the sentence or the paragraph is coded.
The regular Coding Dialogue is also available to add or remove codes.
We are using spaCy as our natural language processing engine. More detailed information can be found on the spaCy website.
Input data is processed in a pipeline: one step after the other, with each step building on the knowledge derived in the previous one.
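The pipeline idea can be sketched as a chain of functions, each enriching the previous step's output. This is an illustrative toy, not ATLAS.ti's or spaCy's actual code; the function names and rules are made up for the example.

```python
import re

def split_tokens(text):
    # naive split into words and punctuation, for illustration only
    return re.findall(r"\w+|[^\w\s]", text)

def tag(tokens):
    # placeholder second stage: marks punctuation vs. words,
    # standing in for a real part-of-speech tagger
    return [(t, "PUNCT" if not t.isalnum() else "WORD") for t in tokens]

def pipeline(text):
    # each stage consumes the previous stage's output
    return tag(split_tokens(text))

print(pipeline("I should have known."))
```

A real pipeline works the same way structurally, but each stage (tokenizer, tagger, and so on) is a trained component rather than a hand-written rule.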
The first step is a tokenizer that chunks a given text into meaningful parts and expands contractions, ellipses, etc. For example, the sentence:
“I should’ve known(didn't back then).” will be tokenized to: “I should have known ( did not back then ).“
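A simplified tokenizer along these lines can be sketched in a few lines. The contraction table and regular expression below are illustrative assumptions, not the engine's actual rules:

```python
import re

# toy contraction table; a real tokenizer has far more extensive rules
CONTRACTIONS = {"should've": "should have", "didn't": "did not"}

def tokenize(text):
    # separate parentheses and periods from the words they touch
    text = re.sub(r"([().])", r" \1 ", text)
    tokens = []
    for tok in text.split():
        # expand known contractions into their full forms
        tokens.extend(CONTRACTIONS.get(tok.lower(), tok).split())
    return tokens

print(tokenize("I should've known(didn't back then)."))
# ['I', 'should', 'have', 'known', '(', 'did', 'not', 'back', 'then', ')', '.']
```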
The tokenizer uses a vocabulary for each language to assign a vector to each word. These vectors were pre-learned from a corpus and represent a kind of similarity of usage within that corpus.
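"Similarity of usage" becomes geometric closeness between vectors. The sketch below uses made-up 3-dimensional vectors to show the idea; real models learn vectors with hundreds of dimensions from large corpora.

```python
import math

# toy vectors, invented for illustration: words used in similar contexts
# get vectors that point in similar directions
VECTORS = {
    "good":  [0.9, 0.1, 0.2],
    "great": [0.8, 0.2, 0.3],
    "awful": [-0.7, 0.8, 0.1],
}

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: 1 = same direction, -1 = opposite
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(VECTORS["good"], VECTORS["great"]))  # close to 1
print(cosine_similarity(VECTORS["good"], VECTORS["awful"]))  # much lower
```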
The next component is a tagger that assigns part-of-speech tags to every token, and lexemes if the token is a word. The character sequence “mine”, for instance, has quite different meanings depending on whether it is a noun or a pronoun.
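The "mine" example shows why tagging must look at context. The toy heuristic below is an assumption for illustration only (a real tagger is a trained statistical model), but it captures the principle that the same character sequence gets different tags depending on its neighbours:

```python
def tag_mine(tokens):
    # toy context rule: after an article ("a"/"the"), "mine" is likely a
    # noun (a coal mine); otherwise treat it as a possessive pronoun
    tags = []
    for i, tok in enumerate(tokens):
        if tok == "mine":
            prev = tokens[i - 1] if i > 0 else ""
            tags.append("NOUN" if prev in {"a", "the"} else "PRON")
        else:
            tags.append("OTHER")
    return tags

print(tag_mine(["the", "mine", "collapsed"]))    # "mine" tagged NOUN
print(tag_mine(["that", "book", "is", "mine"]))  # "mine" tagged PRON
```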
Sentiment is thus not determined by a simple word list used as a benchmark. Therefore, there is no option to add your own words to a list, or to view the list of words that is used.
The sentiment analysis pipeline is trained on a variety of texts, ranging from social media discussions to people's opinions on different subjects and products. Depending on the language, we use either modified pre-trained models or models built from the ground up.
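To make "trained on labelled texts" concrete, here is a deliberately tiny sketch: a perceptron that learns word weights from four made-up labelled examples. This is not the pipeline's actual model (real models use vectors, context, and far more data); it only illustrates that the classifier's knowledge is learned from examples rather than read off a hand-curated word list.

```python
# toy training data, invented for this example
TRAIN = [
    ("i love this product", 1),
    ("great service and friendly staff", 1),
    ("terrible experience would not recommend", -1),
    ("i hate the new update", -1),
]

def featurize(text, vocab):
    # bag-of-words count vector
    vec = [0] * len(vocab)
    for word in text.split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

def train(data, epochs=10):
    vocab = {w: i for i, w in enumerate(sorted({w for t, _ in data for w in t.split()}))}
    weights = [0.0] * len(vocab)
    for _ in range(epochs):
        for text, label in data:
            x = featurize(text, vocab)
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else -1
            if pred != label:  # perceptron rule: update only on mistakes
                weights = [w + label * xi for w, xi in zip(weights, x)]
    return vocab, weights

vocab, weights = train(TRAIN)

def predict(text):
    x = featurize(text, vocab)
    return "Positive" if sum(w * xi for w, xi in zip(weights, x)) >= 0 else "Negative"

print(predict("i love the staff"))
print(predict("i hate the update"))
```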