Currently supported languages are English, German, Spanish, and Portuguese.
Sentiment analysis is the interpretation and classification of emotions (positive, negative and neutral) within text data using text analysis techniques.
Typical applications include:
- identifying and cataloguing a piece of text according to the tone it conveys.
- understanding the social sentiment of a brand, product or service.
- identifying respondent sentiment toward the subject matter that is discussed in online conversations and feedback.
- analysing student evaluations of lectures, seminars, or study programs.
Sentiment analysis works best on unstructured text data such as answers to open-ended survey questions, evaluations, online conversations, etc.
To open the tool, select the Search & Code tab and from there Sentiment Analysis.
Select documents or document groups that you want to search and click Continue.
Select whether the base unit for the search, and the later coding, should be paragraphs or sentences, and which sentiment you want to search for.
ATLAS.ti proposes a code label for each sentiment: Positive / Neutral / Negative. If you want to use different code names, you can change them here.
Manage Models: If you want to improve your results, you can download and install a more comprehensive model. Extended models are currently available for German and English texts; more languages will be added in the future. The German model is about 230 MB, the English model about 110 MB.
Click on Manage Models if you want to install or uninstall an extended model.
Click Continue to begin searching the selected documents. On the next screen, the search results are presented, and you can review them.
The result page shows a Quotation Reader that indicates where the quotations will be when the data is coded with the proposed codes. If a coding already exists at a quotation, it is also shown.
By clicking on the eye icon, you can switch between small, medium, and large previews.
You can code all results with one of the proposed codes, or with all proposed codes at once; or you can go through and review each data segment individually and code it by clicking on the plus next to the code name.
To code all results at once, click Apply Proposed Codes, and from there either select Apply All Codes or select one of the codes from the list.
Depending on the base unit you selected at the beginning, either the sentence or the paragraph is coded.
The regular Coding Dialogue is also available to add or remove codes.
We are using spaCy as our natural language processing engine. More detailed information can be found here.
Input data is processed in a pipeline: each step builds on the output of the previous one. Click here for further details.
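The pipeline idea can be sketched in a few lines of Python. This is a conceptual illustration only; the function names and placeholder components are invented for this example and do not reflect spaCy's actual API:

```python
# Conceptual sketch of a processing pipeline: each component consumes
# the output of the previous one. All components here are placeholders.
def tokenize(text):
    # Naive whitespace tokenization (a real tokenizer does far more).
    return text.split()

def tag(tokens):
    # Placeholder tagger: marks every token with the dummy tag "X".
    return [(token, "X") for token in tokens]

def analyze_sentiment(tagged_tokens):
    # Placeholder sentiment step: always returns "neutral".
    return {"tokens": tagged_tokens, "sentiment": "neutral"}

PIPELINE = [tokenize, tag, analyze_sentiment]

def run(text):
    data = text
    for step in PIPELINE:
        data = step(data)  # each step refines the prior step's output
    return data

print(run("A short example"))
```

The point is only the data flow: raw text enters at one end, and each stage enriches the representation before the final sentiment decision.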
The first step is a tokenizer that chunks a given text into meaningful parts, expanding contractions, separating punctuation, handling ellipses, and so on. For example, the sentence:
“I should’ve known(didn’t back then).” will be tokenized to: “I should have known ( did not back then ).”
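A toy tokenizer can reproduce this example. Note that this is a deliberately simplified sketch, not spaCy's implementation; the contraction table is a small illustrative subset:

```python
import re

# Toy tokenizer sketch (NOT spaCy's implementation): expands a few
# common English contractions and splits off punctuation.
CONTRACTIONS = {
    "should've": "should have",
    "didn't": "did not",
}

def tokenize(text):
    # Expand contractions first (exact, case-sensitive matches for brevity).
    for short, expanded in CONTRACTIONS.items():
        text = text.replace(short, expanded)
    # Emit words and the punctuation marks ( ) . as separate tokens.
    return re.findall(r"\w+|[().]", text)

print(tokenize("I should've known(didn't back then)."))
# ['I', 'should', 'have', 'known', '(', 'did', 'not', 'back', 'then', ')', '.']
```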
The tokenizer uses a per-language vocabulary to assign a vector to each word. These vectors were pre-learned from a corpus and encode a kind of similarity of usage within that corpus. Click here for more information.
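The "similarity in usage" can be illustrated with cosine similarity between word vectors. The three-dimensional vectors below are made up for the example; real pre-learned vectors have hundreds of dimensions and come from a large corpus:

```python
import math

# Invented toy vectors; real word vectors are learned from a corpus.
VECTORS = {
    "good":  [0.9, 0.1, 0.2],
    "great": [0.8, 0.2, 0.3],
    "bad":   [-0.7, 0.1, 0.1],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words used in similar contexts end up with similar vectors:
print(cosine(VECTORS["good"], VECTORS["great"]))  # close to 1
print(cosine(VECTORS["good"], VECTORS["bad"]))    # negative
```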
The next component is a tagger that assigns a part-of-speech tag to every token, and a lexeme if the token is a word. The character sequence “mine”, for instance, has quite different meanings depending on whether it is a noun or a pronoun.
Thus, the analysis does not rely on a simple word list as a benchmark. There is therefore no option to add your own words to a list, or to inspect the list of words that is used.
The sentiment analysis pipeline is trained on a variety of texts, ranging from social media discussions to people’s opinions on different subjects and products. Depending on the language, we use modified pre-trained models or models built from the ground up.