Methodology
To construct our analysis, we applied a set of rhetorical tags to the candidates' discourse files. These tags capture the rhetorical techniques that the candidates use in their interviews and speeches. Taken together with the topics the candidates address, these techniques shed light on how the candidates appeal to their respective voter bases and attempt to attract those who might not normally favor their party. Where foreign countries are concerned, we can learn how particular parties profess to approach foreign relations. This is of particular interest given the tense relationship that Russia currently has with the United States and many European countries.
Rhetorical devices
To best understand how each politician speaks to the general public, we developed a system of rhetorical annotation tags, adapted from van Dijk's framework for ideological discourse analysis in “Politics, Ideology, and Discourse”. His classification system for critical discourse analysis (CDA) is based on the “ideological square”, which describes the general aims of ideological discourse:
- emphasize our positives
- emphasize their negatives
- de-emphasize our negatives
- de-emphasize their positives
To better understand the rhetorical tags and how they have been applied in other research, we looked at Rashidi and Souzandehfar's critical discourse analysis of the debates around the Iraq War. Although their analysis was applied to a specific topic, war, it gave us ideas on how to extend van Dijk's theory by tracking the targets of certain devices, the truth value of arguments, and the sentiment of certain descriptions.
Some of these rhetorical devices were not applicable to the project. For example, the candidates rarely used statistics in their arguments, so we merged the statistics device into the “evidence” rhetorical device. We also added two rhetorical device tags to better describe how Putin and Zhirinovskii talk. In the process of reading, we discovered that both candidates tell stories or anecdotes to make their points. The candidates also occasionally make explicit assumptions in their speeches, and we wanted to capture this as well.
| Rhetorical device | Source | XML tag | Description | Example |
|---|---|---|---|---|
| Actor Description | van Dijk | `<ru:actorDesc>` | Marks when the speaker describes an individual or group. We marked whether the sentiment was positive, negative, or neutral. | Соединенные Штаты везде по всему миру активно вмешиваются в выборные кампании других стран. (target: United States; sentiment: negative) |
| Anecdote | homebrewed | `<ru:anecdote>` | Marks when the speaker uses a story as part of their argument. | Мы с Вами вчера встречались вечером, мы с Вами сегодня целый день вместе работали, сейчас опять встречаемся. Когда я пришёл на мероприятие нашей компании Russia Today и сел за стол, рядом сидел какой-то господин с одной стороны, с другой стороны кто-то ещё сидел, потом я выступил, ещё о чём-то поговорили, я встал и ушёл. И потом мне сказали: «Вы знаете, это американский гражданин, он тем-то занимался, он раньше в спецслужбах работал, а теперь он занимается тем-то и тем-то». Я сказал: «Ну и хорошо, вы с ним как-то сотрудничаете?» – «Нет, мы его просто пригласили как одного из гостей». Я говорю: «Ну молодцы, слава богу». Всё! Я с ним даже практически не разговаривал. Я с ним только поздоровался, сел рядом, потом попрощался, встал и ушел. Всё моё знакомство с господином Флинном. |
| Assumption | homebrewed | `<ru:assumption>` | Marks when the speaker explicitly makes an assumption in their argument. | Если бы было что-нибудь выдающееся, из ряда вон выходящее, он бы, конечно, доложил министру, министр – мне. |
| Authority | van Dijk | `<ru:authority>` | Marks when the speaker references an authority. We also annotated what sources the politicians referenced. | Мы говорили об этом и с бывшим Президентом Обамой, мы говорили с некоторыми другими официальными лицами – никто никогда не предъявил мне никаких прямых доказательств. Когда мы говорили с Президентом Обамой на этот счёт, это Вам лучше у него спросить, думаю, что он Вам скажет, что он тоже уверен в этом, но, когда мы с ним говорили, я увидел, что и он засомневался. (authority: President Obama) |
| Categorization | van Dijk | `<ru:cat>` | Marks when the speaker assigns an individual to a group. We identified both the target of categorization and the group to which they were assigned. | Разрушит. Разрушитель по натуре. Как Ельцин. (target: Aleksei Navalnyi; group: revolutionaries in general) |
| Consensus Building | van Dijk | `<ru:consensus>` | Marks when the speaker identifies with a group, usually to curry political favor. | Обязательно. У нас все возрастные… |
| Disclaimer | van Dijk | `<ru:disclaimer>` | Marks when the speaker presents an idea as positive, only to reject it later. | No examples |
| Evidence | van Dijk | `<ru:evidence>` | Marks when the speaker uses supposed evidence to prove a point. We marked whether the evidence was true, false, or unknown. | В Европе взрывают, в Париже взрывают, в России взрывают, в Бельгии взрывают, война идёт на Ближнем Востоке – вот о чём надо думать (truth: true) |
| Hyperbole | van Dijk | `<ru:hyperbole>` | Marks when the speaker exaggerates meaning by making questionable or absurd assumptions. | Вы думаете, что у меня есть время каждый день общаться с нашими послами во всём мире, что ли? |
| Implication | van Dijk | `<ru:implication>` | Marks when the speaker relies on implied, unstated information in making his/her argument. | И у этого кандидата тоже были свои политические оппоненты и противники. |
| Irony | van Dijk | `<ru:irony>` | Marks when the speaker says one thing but means something else, usually using sarcasm. | Вы просто, знаете что: вы такие изобретательные люди там, такие молодцы. Вам скучно жить, видимо. |
| National Self-Glorification | van Dijk | `<ru:glory>` | Marks when the speaker tries to represent him/herself in a positive light by glorifying Russia. | Я чувствую прямую живую связь с этой землёй, с историей, со страной. |
| Vagueness | van Dijk | `<ru:vague>` | Marks when the speaker blurs the lines between fact and fiction. | Кто это нам говорит, кто это обижается на нас за то, что мы вмешиваемся? Вы сами постоянно вмешиваетесь. |
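As an illustration of the annotation scheme, the sketch below builds a hypothetical annotated snippet and tallies the device tags with Python's standard library. The namespace URI and the encoding of annotations as XML attributes (`target`, `sentiment`, `truth`) are assumptions for illustration, not our exact schema.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A hypothetical annotated snippet; the namespace URI and the attribute
# names are illustrative assumptions, not the project's exact schema.
snippet = """
<speech xmlns:ru="http://example.org/ru-rhetoric">
  <ru:actorDesc target="United States" sentiment="negative">
    Соединенные Штаты везде по всему миру активно вмешиваются
    в выборные кампании других стран.
  </ru:actorDesc>
  <ru:evidence truth="true">В Европе взрывают, в Париже взрывают…</ru:evidence>
</speech>
"""

root = ET.fromstring(snippet)
# Count each rhetorical-device tag, stripping the XML namespace prefix.
counts = Counter(el.tag.split("}")[-1] for el in root.iter() if "}" in el.tag)
print(counts)
```

Keeping the annotations as attributes makes the secondary judgments (target, sentiment, truth) machine-readable alongside the tag itself.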
The occurrence of these tactics throughout political discourse indicates the types of strategies that politicians employ to marshal support. We aggregated occurrences of these devices in each candidate's speech to characterize the sample of interviews and speeches we chose. Since the sample was small, constrained by a short deadline and the convenience of marking up existing official transcripts, we do not claim that it is characteristic of the candidates' rhetoric in general.
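The aggregation step amounts to summing per-document tag counts for each candidate. A minimal sketch, with invented counts and a hypothetical data layout rather than our actual pipeline code:

```python
from collections import Counter

# Hypothetical per-document tag counts extracted from annotated transcripts;
# the numbers here are invented for illustration.
doc_counts = {
    "putin": [Counter({"evidence": 4, "authority": 2}),
              Counter({"anecdote": 1, "evidence": 3})],
    "zhirinovskii": [Counter({"hyperbole": 5, "cat": 2})],
}

# Sum device counts over each candidate's documents.
totals = {cand: sum(docs, Counter()) for cand, docs in doc_counts.items()}
print(totals["putin"])  # total counts for one candidate
```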
Topic Modelling
As a secondary tool in this project, we constructed a topic model using Mallet's LDA module to help us quantify what Putin and Zhirinovskii talked about most in our sample. To remove topically meaningless words, we borrowed a stop word list from a research project that analysed international diplomacy between Russia, Abkhazia, and South Ossetia. In general, we found this list less than satisfactory: it did not stop on possessive pronouns, and it did stop on words that are very uncommon in spoken Russian, although we left those in because they held little meaning anyway. We decided to stop on all possessive pronouns except наш (“our”), because it carries some meaning in terms of patriotism and otherness.
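The filtering rule can be sketched as follows. The miniature stop list below is invented; only the special-casing of наш reflects our actual decision.

```python
# Illustrative stop list: a few function words plus possessive pronouns,
# with "наш" deliberately excluded from the stop set.
possessives = {"мой", "твой", "свой", "ваш", "наш"}
stopwords = {"и", "в", "не", "что"} | (possessives - {"наш"})

tokens = ["наш", "мой", "страна", "и", "народ"]
kept = [t for t in tokens if t not in stopwords]
print(kept)  # ['наш', 'страна', 'народ']
```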
In addition to removing stop words, we lemmatized the texts using Yandex's Mystem. We chose Mystem over other options for lemmatization, such as NLTK's stem module, because Mystem uses context clues to determine the role of a word in the sentence, providing better results. In some cases, Mystem returned multiple candidate lemmas for a word, each with a weight, or estimated probability that the lemma was correctly chosen. After verifying that Mystem gave good results in cases of ambiguity, we decided to take the lemma with the greatest weight.
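The disambiguation rule can be sketched as follows. We assume the JSON shape Mystem emits with its `--format json` and `--weight` options (an `analysis` list whose entries carry `lex` and `wt` fields); the sample record below is invented.

```python
# One token's analyses, as parsed from Mystem's JSON output (invented
# example; real output uses the same "lex"/"wt" fields under --weight).
token = {
    "text": "стали",
    "analysis": [
        {"lex": "стать", "wt": 0.76},   # verb reading: "became"
        {"lex": "сталь", "wt": 0.24},   # noun reading: "steel"
    ],
}

def best_lemma(token):
    """Pick the lemma with the greatest weight; fall back to the surface form."""
    analyses = token.get("analysis") or []
    if not analyses:
        return token["text"]
    return max(analyses, key=lambda a: a.get("wt", 0))["lex"]

print(best_lemma(token))  # стать
```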
After these pre-processing steps, we ran Mallet on our documents, experimenting with topic counts between 10 and 60. We found that 30 topics yielded the best results.
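The Mallet runs followed its standard two-step command-line workflow; the file names below are placeholders and the token regex is an illustrative choice, not necessarily the one we used.

```shell
# Import the lemmatized, stop-filtered documents (one document per line).
mallet import-file --input speeches.txt --output speeches.mallet \
    --keep-sequence --token-regex '[\p{L}]+'

# Train LDA; we swept --num-topics from 10 to 60 and settled on 30.
mallet train-topics --input speeches.mallet --num-topics 30 \
    --output-topic-keys topic-keys.txt --output-doc-topics doc-topics.txt
```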
For those who haven't used topic modelling before in their own research, it is important to note that topic models only approximate topics as a human reader would understand them. For example, a topic might include words like "конфликт" (conflict), "вооруженный" (armed), "сирия" (Syria), and "ракеты" (missiles), but it might also include neutral words like "говорить" (to speak) and "думать" (to think). The model does not understand what a military topic looks like; because "говорить" often co-occurs with these military terms, it may be grouped into the topic.
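Concretely, Mallet's `--output-topic-keys` file is tab-separated (topic id, alpha, top words), so a topic like the one above can be inspected with a few lines of Python. The sample line is fabricated to mirror the military-flavored topic discussed, including the neutral co-occurring word "говорить".

```python
# Parse one line of Mallet's topic-keys output: id <TAB> alpha <TAB> top words.
line = "7\t0.04\tконфликт вооруженный сирия ракеты говорить думать"
topic_id, alpha, words = line.split("\t")
top_words = words.split()
print(top_words[:4])  # ['конфликт', 'вооруженный', 'сирия', 'ракеты']
```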