Content Analysis, Process of

Content analysis is a common data analysis process whereby researchers investigate the content of a message or text. The process is often described as a replicable, systematic, objective, and quantitative description of communication content features within a specific context. It provides consistent, structured techniques for analyzing communication content that is typically open-ended and relatively unstructured. The purpose of a content analysis may vary, from describing the characteristics or features of the content to drawing inferences about the causes and/or effects of the content. Content analysis techniques can be applied to a wide range of public and private written content, such as letters, newspaper articles, open-ended survey data, and transcribed interviews, as well as to content in oral (e.g., live speeches or lectures) and visual (e.g., videotaped interactions, pictures, film) form.

Content analysis allows researchers to examine and describe both the manifest and the latent meaning in a message. Manifest content refers to the surface or visible features of the message that need little interpretation by the reader. Manifest content analysis commonly includes features that are physically present and countable within a message. For example, a researcher may count the number of negative words or phrases used during a couple’s discussion about a current disagreement to better understand the couple’s conflict communication. Latent content refers to the underlying features or meaning within the manifest content. Latent content is the deeper structural meaning conveyed in the message and requires more inference to interpret. Building on the couple’s conflict example, a researcher may examine the communication content for features of power or dominance displayed by each individual during the conflict. Both manifest and latent content analysis require some interpretation, depending on the depth and level of abstraction.

The remainder of this entry discusses the process for conducting a content analysis, specifically sampling and data types, coding units, coding scheme, and code book. It also discusses coding, reliability, and finalizing data. Finally, this entry also provides a brief summary of some of the benefits and drawbacks in conducting a content analysis.

Content Analysis Process
Sampling and Data Types

Based on the study’s objectives and research questions, the researcher needs to determine the sampling framework or data type relevant to his or her research study. Sampling involves identifying and selecting the communication content the researcher intends to analyze. The sampling and data type largely depend on the nature of the communication content—whether it is open-ended survey responses, videotaped interactions, speeches, art, letters, or a television series, the sampling or data collection differs. For example, if a researcher is interested in advertisements in an online magazine, he or she needs to select a specific magazine and decide how many issues, and from which years of publication, to include. In a different study, a researcher may want to know how, if at all, individuals talk about infertility and use survey-based data to collect open-ended narratives on how individuals discuss infertility. Following this sampling or data collection stage, the researcher then needs to decide on the unit of communication content or text he or she will focus on during the coding process.

Coding Units, Coding Scheme, and Code Book

A coding unit refers to a specific portion of content or text to be coded. The researcher selects the coding unit based on the previously established objectives and research questions guiding the analysis. Broadly, coding units may include words, phrases, sentences, paragraphs, images, or an entire document or interaction.

Klaus Krippendorff proposed five different types of coding units for content analysis research. First, a researcher may code for physical units, which means counting the quantity of, or space devoted to, content. For example, this may include counting the number of articles published on children (e.g., 18 years old or younger) in communication journals in the past decade. A second type of coding unit involves counting references to people, objects, or issues within the content, commonly referred to as the referential unit. For example, in a referential content analysis a researcher may watch the television series Friday Night Lights and count the various issues that emerge, such as drugs, pregnancy, abuse, and bullying. A third form of coding unit is the syntactical unit. This type of coding unit involves examining words, sentences, paragraphs, or complete documents to count the number of times certain words or phrases are used within the content. For example, a researcher may want to examine how often men and women use the phrase “I’m sorry” or “I apologize” during a voice-recorded conversation of a couple’s disagreement. The fourth coding unit is the propositional unit (i.e., thought unit). This unit focuses on coding each time a person expresses or asserts his or her thoughts about a specific topic and may range from a few sentences to multiple paragraphs. For example, in a study of parents’ and adult children’s reasons for estrangement, each reason may span one to several sentences yet count as a single unit. The final coding unit is the thematic unit, which commonly involves larger sections of communication content or text. This unit of analysis might include asking participants to share a detailed story about a traumatic event or experience in their lives; the content would then be analyzed based on the overall features or categories that emerge from the narratives.

Once the researcher has decided on the coding unit, the next step is to develop a coding scheme. A coding scheme involves developing the specific categories that will be used to analyze the content. In this process, the researcher may use inductive or deductive methods to derive the coding categories. Inductive methods allow categories to emerge freely from the data, whereas deductive methods use established theory to guide the development of the categories. In parallel with the inductive or deductive process, the researcher must also make sure the categories are mutually exclusive (each coding unit fits in one and only one category) and exhaustive (every coding unit examined fits into a category). This is often a time-consuming step in the content analysis process, as the researcher examines the content multiple times until there are clearly defined categories that are mutually exclusive and exhaustive. Researchers normally pilot-test the categories on a small sample of the data before beginning the full-scale content analysis. Piloting is important for identifying problems in the coding scheme and determining whether categories need to be added or collapsed.

The final categories are detailed in a code book in which each category is assigned a numeric code and described (see Table 1). The code book helps to ensure clear, replicable, and systematic coding of the data. Once the code book is finalized, coding and reliability checks begin.

Table 1 Code Book Example
1 = Abuse: Includes emotional, psychological, sexual, and physical forms of abuse.
2 = Beliefs: Includes differences in religious, spiritual, sexual, and/or moral belief systems.
3 = Deception: Includes lying and manipulation.
4 = Control: Includes absence of privacy or intrusiveness.
5 = Drug/alcohol use or abuse
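
For illustration, a code book such as the one in Table 1 maps directly onto a simple data-entry and tallying structure. The sketch below uses hypothetical coded data and Python purely to make the idea concrete; in practice a spreadsheet or statistical package serves the same purpose.

```python
from collections import Counter

# Code book from Table 1: numeric code -> category label
code_book = {
    1: "Abuse",
    2: "Beliefs",
    3: "Deception",
    4: "Control",
    5: "Drug/alcohol use or abuse",
}

# Hypothetical codes assigned to ten units of content by a coder
coded_units = [1, 3, 3, 2, 5, 1, 4, 3, 2, 5]

# Tally how often each category was assigned and report simple frequencies
counts = Counter(coded_units)
for code, label in code_book.items():
    n = counts.get(code, 0)
    print(f"{code} ({label}): {n} units, {n / len(coded_units):.0%}")
```
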
Coding, Reliability, and Finalizing Data

Assigning a unit of content (e.g., letters, speeches, pictures) to a category is referred to as coding, and the individuals conducting the coding are called coders. Content analysis often involves a minimum of two coders so that the researcher can establish intercoder reliability. The researcher may serve as one of the coders, or the researcher may choose two coders, preferably individuals who are blind to the study’s research questions so that they are not motivated by any bias to code in favor of a particular outcome. At this stage of the analysis, the researcher carefully trains the coders to use the coding categories established in the code book by having them code a small section of the data. This is often a lengthy process that can take multiple training sessions to ensure that the coders are prepared before they independently code the entire dataset or an overlapping percentage of the data. To reach reliability, the two coders must establish consistency between their codes, often referred to as intercoder reliability. In communication research, an acceptable intercoder reliability score is equal to or greater than .70. One way to calculate intercoder reliability is the reliability coefficient, which divides twice the number of units on which the coders agree by the total number of units each coder identified, as given by the following formula:

RC = 2A / (U1 + U2)

where RC = reliability coefficient, A = the number of units agreed upon by the two independent coders, U1 = the number of units identified by coder 1, and U2 = the number of units identified by coder 2.
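
Suppose, for example, that coder 1 identifies 50 units, coder 2 identifies 48 units, and the two coders agree on 45 of those units. Then RC = 2(45) / (50 + 48) = 90 / 98 ≈ .92, comfortably above the .70 threshold noted above.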

However, most communication researchers prefer to use a more robust measure to calculate intercoder reliability called Cohen’s kappa. Cohen’s kappa takes into account the coder agreement that would be expected based on chance alone. Here is the formula:

K = (Po − Pe) / (1 − Pe)

In this formula, K = kappa, Po = the observed agreement among coders, and Pe = the agreement expected by chance alone. Although these formulas are helpful for calculating reliability by hand, a statistical software program, such as SPSS, allows a researcher to enter both coders’ category scores into the program and quickly compute coder agreement and Cohen’s kappa.
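
The same hand calculation can also be reproduced in a few lines of code. The sketch below uses hypothetical category assignments and plain Python (rather than SPSS, which the entry mentions) simply to make the arithmetic concrete.

```python
from collections import Counter

# Hypothetical category codes (1-5, per the code book) assigned by two coders
coder1 = [1, 2, 2, 3, 4, 5, 1, 1, 3, 2]
coder2 = [1, 2, 3, 3, 4, 5, 1, 2, 3, 2]
n = len(coder1)

# Observed agreement (Po): proportion of units both coders placed in the same category
po = sum(a == b for a, b in zip(coder1, coder2)) / n

# Expected agreement by chance (Pe): sum, across categories, of the product of
# each coder's marginal proportion for that category
c1, c2 = Counter(coder1), Counter(coder2)
pe = sum((c1[k] / n) * (c2[k] / n) for k in set(coder1) | set(coder2))

kappa = (po - pe) / (1 - pe)
print(f"Po = {po:.2f}, Pe = {pe:.2f}, kappa = {kappa:.2f}")  # kappa ≈ .74 for these data
```
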

After all data have been coded and intercoder agreement has been established, the final step of the content analysis is for the coders to meet and resolve any differences. This process involves the coders going through the content together and coming to agreement on any codes that differ, so that only a single code is assigned to each unit of data. The coders then supply their results to the researcher. The researcher examines the results, reporting descriptive statistics (e.g., total numbers and percentages for each category), providing qualitative exemplars, and running further statistical tests, if necessary, to answer the study’s research questions.

Benefits and Drawbacks of Content Analysis

Content analysis is a useful research approach that can be applied to a wide variety of small and large bodies of content or text. This technique allows a researcher to collect in-depth and rich data in a systematic, replicable, and objective way. In addition, a researcher can ask more complex questions about how communication content (i.e., what people actually say or write) relates to attitudinal and behavioral variables in a study. For example, a study could use a content analysis to evaluate memorable parent–teenager sex-talk messages and then run statistical tests to see whether there is a relationship between the messages and the teenagers’ self-reported sexual attitudes and behaviors.

Furthermore, content analysis is often an unobtrusive approach because researchers can examine communication content in written or oral form in more naturally occurring contexts compared with experimental contexts. For example, a researcher may collect individuals’ holiday letters to be analyzed or use previously recorded interactions of couples telling the story of how they fell in love. All of these features add depth, rigor, and creativity to a study.

Content analysis also has drawbacks. Depending on the availability of the data type or the sampling procedure, it can introduce sampling biases. For instance, the data may be collected or chosen in a way that makes some data less likely to be used than others. In addition, the process of developing the coding categories, as well as coding the content, involves some level of interpretation that may produce researcher or coder biases. This can happen when an individual asserts his or her own opinions or knowledge of the subject when explaining the data. Researchers also need to be aware that when examining a specific coding unit (e.g., word, phrase, or paragraph) in isolation from the larger content or context, they run the risk of losing or altering the meaning of the content.


Content Analysis: Advantages and Disadvantages

Content analysis is a systematic, quantitative process of analyzing communication messages by determining the frequency of message characteristics. As a research method, content analysis has both advantages and disadvantages. It is useful for describing communicative messages, the research process is relatively unobtrusive, and it provides a relatively safe way to examine communicative messages; however, it can be time-consuming and presents several methodological challenges. This entry identifies several advantages and disadvantages of content analysis related to the scope, data, and process of content analysis.


The scope of content analysis, and its chief advantage, is as a descriptive tool. Content analysis can be used to describe communication messages, focusing on the specific communication message and the message creator. It is often said that an advantage of content analysis is that the message is “close to” the communicator; that is, content analysis examines communicative messages either created by or recorded from the communicator. Researchers can examine the manifest (the actual communicative message characteristics) and latent (what can be inferred from the message) content of a message. Researchers can also use content analysis to study communication processes over time. For example, a communication scholar might be interested in the metaphors presidential candidates have used in speeches during wartime.

While content analysis is used to describe communicative messages, content analysis cannot be used to draw cause-and-effect conclusions. Identifying and describing the characteristics of a message is not enough to make claims of what caused or was caused by a message. Content analysis can, however, be combined with other methods to make causal claims, or the description developed through content analysis can be used as a starting point for future causal research. Describing the messages a mother uses to deny a young child’s request is not sufficient to determine the child’s behavior, but combined with other methods (e.g., experimental methods), the child’s behavior could be predicted.


Content analysis is a beneficial research method because of its advantages in collecting and analyzing quality data. Content analysis can be applied to various types of text (e.g., advertisements, books, newspaper articles, electronic mail, personal communication), and therefore is useful for studying communication in a variety of different contexts. Many times, content analysis can be conducted on existing texts, and therefore the work of collecting data may be minimal (though searching through decades of newspaper articles is a time-consuming process as well). Because content analysis can be used to study communication processes over time, it is useful for studying historical contexts: describing messages across time helps researchers identify trends and subsequently explore the historical context in which the messages changed.

Content analysis also benefits from the data, or communicative messages, coming directly from the source, or communicator. Data straight from the source avoids several methodological issues (described in greater detail later in this entry). Additionally, data are often readily available for content analysis. For example, print resources are already in an analyzable format, as is written correspondence. Transcripts of videos and radio shows, music lyrics, and the like are readily accessible on the Internet. Also, many texts (e.g., newspapers, books) are available for public consumption, so access to texts is easier, making research using content analysis relatively unobtrusive. Once the message has been shared, the researcher only needs the data, and not the source, to conduct the analysis. This is an important benefit of content analysis, as many analyses can bypass human subjects boards because the research neither involves nor affects actual participants; however, some content analyses require data collection from human sources and must therefore receive appropriate approval before the data are collected and the research is conducted. Content analysis also affords researchers richer data; that is, because actual communicative messages are collected and analyzed, researchers are exposed to more detailed data than they could obtain through survey research, for example.

Analyzing recorded (e.g., audio, video, print) communicative messages helps to prevent two disadvantages characteristic of other research processes: poor participant recall and recall bias. Some research methods (e.g., interviews, focus groups, the diary method) ask participants to recall a situation and what was said and either share the story verbally or write out the account. Research shows that individuals’ ability to recall information accurately, even a short time after the communicative exchange, is very low. Content analysis uses recorded data and therefore avoids the issue of misremembering. For example, analyzing the actual discussion will be more accurate than asking a participant to recall what was said and analyzing that response. Additionally, content analysis avoids the issue of recall bias. Oftentimes, participants in the same situation will recall the situation and the communication messages differently. This occurs in many communicative situations but is particularly common in conflict communication. Because content analysis uses recorded texts, discrepancies between participant accounts are avoided and the data are arguably objective. However, content analysis does not avoid interpretive discrepancies between the researcher and the participant. One advantage of content analysis is that it removes human participants from the process; however, when a text alone is analyzed without feedback, input, or reflection from the participant, a researcher may misinterpret the latent content of a message; that is, the researcher may misinterpret the intention of the message or infer a different meaning from it. This is particularly troublesome when analyzing content between close individuals who may frequently use personal idioms in conversation. Without the participants, the researcher is left to analyze the manifest content and may misinterpret the message without considering the latent content embedded in the idioms.

Analyzing recorded messages has specific advantages, but two major issues arise. First, content analysis cannot study what is not recorded; therefore, if a speech, conversation, or other communicative message is not recorded in some way, the message cannot be analyzed. This could mean that an entire population of potential texts (e.g., a series of speeches never recorded, conversations not recorded) or parts of the population (e.g., a missing volume of a newspaper, or a film in a series) are excluded from analysis, leaving the population of texts incomplete. Second, content analysis can miss key “real-time” features of the communicative exchange. Because content analysis focuses on the specific communicative message and the characteristics of the analyzed texts, aspects important to understanding the message can be excluded. This is particularly true of nonverbal communication, including body language, eye contact, inflection, and the like, which often cannot be considered in the analysis. This is troublesome because, just as the communicator’s words provide insight, the accompanying nonverbal communication provides insight as well. For example, a researcher may interpret the manifest content in one way but miss that the message was delivered sarcastically and therefore should be interpreted differently.


Content analysis also has benefits as a process. Specifically, content analysis is a relatively “safe” process. In many research processes, if an error is made along the way, a project may have to be terminated or the researchers may have to start over with a new sample. Because content analysis examines texts and is removed from the original communicators and their potential to bias the process, errors are fixed more easily and entire projects are not lost. Suppose, for example, that two researchers are coding messages and realize they are coding them differently. The researchers can go back to the text and recode based on the specific error. However, if a survey has a fundamental error and is distributed to a sample of 400, the survey has to be fixed and distributed to a new sample of 400, which can be arduous, time-consuming, and sometimes quite expensive. Repeating part of the process in content analysis tends to be easier than in other projects, and relatively less costly and time-consuming.

Though content analysis is a relatively safe process, the process has its disadvantages. First, because content analysis analyzes texts, finding a representative sample may be difficult. Researchers identify several issues in finding representative samples: searching through newspaper articles and other data sources is time-intensive, transcripts may not be perfectly accurate, researchers might select convenience samples and miss key pieces of data, and access to particular texts may be restricted, to name a few.

Second, coding issues in content analysis make it difficult to generalize across content analyses. Researchers studying the same variable may operationalize the variable differently and therefore code the results differently, making it difficult to draw inferences across studies. For example, researchers studying compliance-gaining or influence messages could use different typologies of messages to analyze conversations. When different coding categories are used in different studies, the dissimilar codes can make it difficult to generalize results across studies.

Third, content analysis can also be time-consuming, complex, and labor-intensive. For example, audio recorded messages often need to be transcribed before analysis is conducted. In other cases, a population may be every newspaper article written on a specific presidential election, and while those articles would not need to be transcribed, the sample would be quite large. Collecting a population and/or sample of such an extensive collection would take a great deal of time, particularly if permission was needed to access the material. Coding and analysis of a large volume of material would also be time-consuming.

Finally, other major issues related to coding emerge when conducting content analysis. First, researchers may code messages too narrowly or too broadly. Coding categories should be exhaustive, and all coded units should fit into a category; however, sometimes coding units are too narrow and important nuances of the message may be missed. It is important for researchers to remain attentive to their research questions and hypotheses to avoid coding too narrowly or too broadly. Using coding units that are specific words, rather than phrases, could affect the interpretation of the message, depending on the purpose of the study.

Other issues are related to coding reliability and validity. Content analysis utilizes multiple coders, and intercoder reliability, or the amount of agreement between coders on coding decisions, is important for the results of the analysis. In content analysis, intercoder reliability is calculated for two different types of coding decisions: unitizing reliability and categorizing reliability. Unitizing reliability refers to the amount of agreement between coders on what is to be coded. Unitizing reliability is typically fairly high when units have natural beginning and ending points; for example, a sentence has a clear beginning and end. Coding is more difficult when there are no clear sentences or beginning and ending points; coding units such as phrases in a conversation, themes, and stories is more difficult. After coders have identified units, each coder separately decides in which category to place each unit. The more often the coders independently place units into the same categories, the higher the second type of intercoder reliability, categorizing reliability.

Intercoder reliability can be measured using a number of different statistics, and the particular statistic should be chosen based on the nature of the coding. Percent agreement is the easiest, and therefore most popular, measurement. Imagine two coders, A and B. Coder A codes a unit “1” and coder B codes the same unit “1.” The two coders assign the same code, and therefore there is 100% agreement. If coder B had coded the unit “2,” the coders would not assign the same code, and agreement would be 0%. In a scenario where three coders, A, B, and C, code a unit 1, 2, and 2, respectively, percent agreement would be 33.33%. One of the three pairs (A/B, B/C, and A/C) agrees on the code, but the other two pairs do not, and therefore there is one-third agreement. Percent agreement is useful for diagnostics during coding but is not sufficient alone for publishing results, and therefore other statistics are widely used. Scott’s pi and Cohen’s kappa are two statistics used specifically when there are only two coders. Both improve upon percent agreement by including a calculation for chance in their equations, comparing observed agreement with expected agreement. Fleiss’s kappa is similar to these statistics but is recommended for projects with three or more coders. The final statistic, Krippendorff’s alpha, is the most general but most complicated statistic. It is recommended for three or more coders, but differs from Scott’s pi, Cohen’s kappa, and Fleiss’s kappa in that, rather than measuring observed and expected agreement, it compares observed and expected disagreement.
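
As a small illustration of the three-coder scenario above, the sketch below computes pairwise percent agreement in Python using the hypothetical codes from that example; any statistics package offers equivalent routines for the chance-corrected statistics.

```python
from itertools import combinations

# Hypothetical codes assigned to one unit by three coders, as in the example above
codes = {"A": 1, "B": 2, "C": 2}

# Pairwise percent agreement: proportion of coder pairs assigning the same code
pairs = list(combinations(codes, 2))             # (A, B), (A, C), (B, C)
agreeing = sum(codes[x] == codes[y] for x, y in pairs)
print(f"{agreeing / len(pairs):.2%} agreement")  # 33.33%
```
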

In order to improve intercoder reliability, researchers must have clear operational definitions, code carefully, and carefully train coders on how to categorize message characteristics into coding units. Although reliability is a concern, content analysis is one of the most replicable research methods. Validity is another concern: whether the coding scheme fits the desired message analysis (i.e., measures what it is supposed to measure) and whether the coding scheme is parsimonious, or simple enough to explain the communicative phenomenon. One way to ensure parsimony is to examine the “other” category to determine whether it is too broad and contains important data. Using the “other” category as a catch-all may mean valuable categories or units are neglected. Content analysis is criticized by some scholars who argue that the process of coding and counting frequencies of messages is too simplistic and therefore does not provide a thorough analysis of communicative phenomena.

In summary, content analysis is useful as a descriptive tool, has broad application, is relatively unobtrusive, and is a fairly “safe” research method. Content analysis can be time-consuming, labor-intensive, limited by available texts, and can present challenges to study reliability and validity, but ultimately is a useful heuristic tool for future research and as a method for describing communicative messages.

Content Analysis, Definition of

Content analysis is a widely used method in communication research and is particularly popular in media and popular culture studies. Content analysis is a systematic, quantitative approach to analyzing the content or meaning of communicative messages. It is a descriptive approach to communication research and, as such, is used to describe communicative phenomena. This entry provides an overview of content analysis, including the definition, uses, process, and limitations of content analysis.

Definition and Uses of Content Analysis

Content analysis is a quantitative approach to analyzing communicative messages that follows a specific process. In many communication studies, scholars determine the frequency of specific ideas, concepts, terms, and other message characteristics and make comparisons in order to describe or explain communicative behavior. Content analysis can be used to examine the manifest or latent content of communication, depending on the research question. Manifest content is the specific characteristics of the message itself, or what the communication literally says. For example, when a husband tells his wife, “You look fine, honey,” the manifest content of the message expresses that the wife looks adequate or appropriate. Latent content is the underlying message, or what the content implies beyond its literal meaning. When the wife hears, “You look fine, honey,” she might interpret it to mean that she does not look good but her husband is tired of waiting for her to get ready. Content analysis can study both types of content.

Scholars use content analysis to describe or explain communication; however, content analysis cannot be used to establish cause-and-effect relationships. Although it is a descriptive approach, content analysis can be used in conjunction with other methods and is useful as a starting point for understanding the effects of particular messages through other research methodologies, particularly in situations where understanding the content of communication is pivotal to examining the effect. Content analysis can be used in conjunction with experimental research when the dependent variable is message-related behavior. For example, researchers who study online civility on social media and message boards use content analysis to analyze posts. A researcher could design an experiment in which each participant was exposed to a series of comments written in a specific tone (civil or uncivil) and then added a comment of his or her own. The participant’s comment could vary based on the messages to which he or she was exposed. The researcher would conduct a content analysis on the participants’ comments and compare them to the original comments to which each participant was exposed to determine whether the tone of the original comments affected how the participant responded.

Content analysis, as a method, has several uses. First, content analysis is a flexible method used by scholars and practitioners; that is, it can be used in a wide variety of contexts. Content analysis can be used to characterize communication and make comparisons, such as the types of persuasive messages used in beauty ads. Content analysis is also useful for studying communication in nontraditional settings. While mass media communication is an obvious application of the method, content analysis can be used in a variety of settings, including digital communication, speech therapy, work groups, and the like.

Researchers agree that content analysis should meet three key criteria: objectivity, systematic procedure, and generality. First, content analysis must be objective. In order for the findings of a content analysis to have value, the method must be objective and free from bias. Different techniques can be employed to ensure objectivity (e.g., using multiple coders and measuring intercoder reliability, using objective codes and procedures). For example, a researcher may hope to find something specific in his or her analysis, and that hope could affect how he or she interprets the data. One way to prevent researcher bias from affecting the results is to use a second or third coder in the analysis.

Second, content analysis should be systematic. In identifying and interpreting content, using a particular system to determine what will and will not be included in the dataset and in the conclusions helps avoid researcher bias. Without a systematic approach, researchers could elect to include only the data that support the research question or hypothesis, thereby influencing the results, which in turn affects objectivity. Carefully defining the codes used to analyze the data and carefully training coders are important steps in this process.

Finally, content analysis should meet the criterion of generality; that is, the results of the content analysis should have theoretical relevance. Researchers agree that content analysis, as a method, should not be applied to a text simply because it can be; rather, the application of content analysis should culminate in results that can answer a research question or hypothesis. Studying the curse words that contestants on a matchmaking show use to refer to one another might be racy and interesting, but ultimately knowing that information should serve a greater purpose.

The Process of Content Analysis

As previously stated, it is critical that content analysis is conducted systematically. As such, scholars outline various step-by-step processes for utilizing the method. While the number of steps in the process differs by scholar, most agree on several key steps to conducting a content analysis.

Define the Population

First, researchers must define the population, or what is going to be studied. Carefully defining the population is an important step in the process. The population should be consistent with the research question, and should be narrow enough to be manageable. For example, a population for the research question, “What words do protagonists in romantic comedies use to describe their love interests to their social network?” would be romantic comedy films. However, the number of existing romantic comedies might be too vast. It could be more useful to focus on films within a specific time frame (e.g., 2005–2015), films with consistent protagonists (e.g., single 30-somethings in New York City), or films featuring female protagonists. The population should be clearly defined.

Select Coding Units

Once the population is defined, coding units, or units of analysis, are selected. Coding units are what is coded and counted from the population. Coding units are observable and measurable and are a consistent way of categorizing the text. Coding units can be words, phrases, amount of time or space utilized, paragraphs, full articles, speakers, characters, photographs, advertisements, television programs, and the like. Coding units should meet three criteria: exhaustive, mutually exclusive, and rule-based. Coding units should be exhaustive, and cover all possibilities; that is, all coded items should fit into a category. For this reason, content analysts will often include an “other” category. Not only should all coded items fit into a category, but they should be mutually exclusive; that is, coded items should only fit into one single category. If a coded item can fit into multiple categories, the categories are not defined narrowly enough and should be refined. Finally, coding units should be rule-based. Before coding begins, rules should be established for what items will be coded and into which category an item will fit.

Select Sample of Messages

Once the population is defined and coding units have been selected, messages are sampled. Sampling is done for a variety of reasons, most commonly because the full population is too large to analyze in its entirety. The sample should be large enough to permit meaningful analysis and to support the claim that it is representative of the larger population.

Researchers identify several options for sampling, including random, stratified, interval, and cluster sampling. In random sampling, every text in the population has the same chance of being selected for analysis. Stratified sampling identifies strata (e.g., time slot, geographical region, type of ad) and proportionately selects a sample within each stratum. Interval sampling involves drawing a sample at regular intervals (e.g., every third broadcast, each Monday edition of a daily newspaper, every nth episode). Finally, in cluster sampling, groups fitting the specified population are sampled, and the elements within each group are coded. For example, a cluster sample could include all prime-time, network television shows airing Thursday evening from 7–10 p.m.
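
Two of these strategies are easy to sketch in code. The example below uses a hypothetical list of episode identifiers and Python to illustrate random and interval sampling; stratified and cluster sampling follow the same logic but first group the population.

```python
import random

# Hypothetical population: 120 episodes of a television series
population = [f"episode_{i:03d}" for i in range(1, 121)]

# Random sampling: every episode has the same chance of selection
random_sample = random.sample(population, k=12)

# Interval sampling: every nth episode (here, every 10th, starting from the first)
interval_sample = population[::10]

print(random_sample)
print(interval_sample)
```
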

Coding, Analysis, and Interpretation

Once the population is sampled, messages within the sample are coded, analyzed, and interpreted. Messages are coded based on the coding units, and the frequencies of codes are calculated. Coding is an important part of the process, and to address concerns about reliability, multiple coders code the same messages. If coding inconsistency is high, the inconsistency should be reported and explained, and the unreliable data should be removed from the final dataset. Once codes are tabulated, the data are analyzed and reported, most often through descriptive statistics, including tables. Finally, the results are interpreted to answer the research question. Results should be interpreted in terms of how they contribute to theory and to practical knowledge.


While content analysis is useful as a descriptive tool, it has limitations. First, while content analysis can describe communicative messages and trends, it cannot be used to infer cause–effect relationships. Content analysis also faces challenges of generalizability; that is, sampling can be difficult for a variety of reasons, and it is often difficult to compile a representative sample. As a result, researchers cannot generalize the results of a study beyond the texts sampled. Content analysis is also a complex, time-consuming, and meticulous process.

While content analysis as a method has limitations, it ultimately serves as a useful heuristic tool. Content analysis is useful for describing communication phenomena and can serve as a starting point for future causal research. Content analysis can be used in a variety of different contexts for a variety of purposes, and therefore the communicative messages that can be studied using content analysis are virtually unlimited, provided that they are recorded and accessible. Content analysis provides a systematic, quantitative examination of communicative messages from which descriptive inferences can be drawn.


Media effects

Media effects is a research paradigm concerned with the consequences of media use or exposure. As it is generally used in the study of communication, media effects refers to the consequences of media use experienced by the individual message recipient or audience member. These effects can be cognitive, affective, attitudinal, or behavioral; they can occur in domains as diverse as health, politics, aggression, sexuality, education, child development, and persuasion. Media effects can be driven by characteristics of media messages (e.g., the effects of violent or sexual content), by characteristics of the medium of communication by which those messages are conveyed (e.g., the effects of screen size), or by the unique intersection of message and medium. This entry provides a brief overview of the most common research methods used to examine the consequences of media exposure and identifies a number of key considerations in applying those methods specifically to media effects research.

Media effects research is characterized by two key elements. The first element is media. Here, “real life” media and their messages are emphasized. Specifically, media effects research tends to examine the effects of actual messages or message patterns or, if dictated by the research question, the effects of messages that, though constructed strictly for the sake of research, are typical of common media messages. The question, then, tends to be not what effects media might elicit but what effects media actually or probably elicit. The second key element is that of effects. Here, media effects research is concerned with the effects or consequences of media exposure. As such, determining causality between media and effects is a central concern for media effects researchers.

Research Methods Employed in Media Effects Research

To examine the effects of media use and exposure, researchers use a variety of methodologies, including content analyses, controlled experiments, longitudinal surveys, naturalistic observations, meta-analyses, and more recently, neurocognitive analyses using tools such as functional magnetic resonance imaging (fMRI) and event-related brain potentials. The research methodology selected depends on the type of questions media researchers are interested in studying. These methods are briefly discussed herein.

Content analysis is a well-established research method involving the systematic examination and subsequent quantitative description of specified elements of various media content. This type of analysis allows researchers to describe and make inferences about media messages and, when interpreted in light of theories of media influence, to predict effects of media content on audiences. Examples of content analyses include the analysis of television commercials during children’s programming, the depiction of female characters on video game cover inserts, and the presence and frequency of acts of aggression in film previews.

Controlled experiments are commonly employed in media effects research to determine causal relationships between media exposure or use and related outcomes and to identify mediating or explanatory factors. In controlled experiments, researchers manipulate elements of media content and compare the reactions, attitudes, beliefs, or behaviors of audiences exposed to messages with those elements to those of audiences exposed to messages without them. Examples include comparing the effects of health messages containing fear appeals with comparable messages without fear appeals on audiences’ self-efficacy and information-seeking behavior, or comparing viewers’ perceptions of political candidates and voting intentions after exposure to different political advertisements or news stories.

Surveys, both cross-sectional and longitudinal, are also used to analyze relationships between media consumption and audience beliefs, attitudes, and behaviors, as well as to examine moderators of these relationships. Cross-sectional surveys are used to identify relationships between media use and its anticipated consequences. In longitudinal surveys, by collecting data from the same survey respondents across time, researchers are able to assess individual-level changes, the media antecedents of these changes, and the conditions under which these changes are more or less likely to occur. Examples include examining the influence of childhood exposure to violent television content on aggression in adulthood and exploring the effects of exposure to sexual media content on the sexual beliefs, attitudes, and behaviors of adolescents.

Neurocognitive analyses involve the observation of media influence through the measurement of brain activity with tools such as fMRI or electroencephalography (EEG). By examining real-time neurophysiological reactions to different types of media content, researchers better understand how neural and psychological processes (e.g., regulation of emotion, storage and retrieval of memories, and motor functioning) occur during media exposure, as well as how neural function changes in response to consistent or prolonged exposure to a single type of media content. Examples include observing brain activity during exposure to violent images among individuals with varying experience playing violent video games, or observing neural activity among children exposed to emotional or nonemotional stimuli.

Naturalistic observation is another category of research method used by media effects researchers to examine the effects of exposure to media or their messages. Naturalistic observation involves identifying naturally occurring differences in media availability or use across different groups or populations and observing differences in behavior across those groups. This allows researchers to observe and record behaviors in a natural field setting either through direct observation or by comparing population-level statistics. This avoids potential problems with self-report biases. Examples have included comparing arrest rates for violent crimes between competitor cities after televised sporting contests and observing differences in reading skills among children in towns with limited or unlimited access to television programming.

Meta-analysis is a research method conducted by media effects researchers to synthesize results across a set of studies investigating the same underlying phenomenon, identify general patterns across findings, and determine the strength of media effects. This methodology allows researchers to conduct a rigorous, statistical comparison of published and unpublished studies that examine the same topic but vary in a number of ways (e.g., location, sample size, environment, social and economic conditions, etc.). Examples include analyses of studies on violent video games and their effects on aggression and prosocial behavior and of studies examining the role of media images on women’s body image concerns.

These methods are employed in a wide range of research areas across the social sciences and are not specific to media effects research. However, in applying these methods to questions of media influence, a number of special issues take on relatively greater importance.

Special Considerations in Media Effects Research
Demonstrating Causality: Internal Validity

The central consideration of media effects research is the demonstration of effects. In order to conclusively demonstrate causal relationships, media effects researchers rely heavily on controlled experiments with random assignment to condition. Experimental control, with regard to media content as causal factors, requires that stimuli across conditions be essentially identical except with regard to the causal variable of interest. Experimental stimuli vary in the degree to which this control is achieved and, therefore, the degree to which causality can be conclusively attributed to the construct under investigation. Some experiments employ entirely different television programs, films, or advertisements as experimental stimuli; others selectively edit existing content in order to maintain greater control; and yet others employ stimuli created expressly for the purpose of the experiment.

Demonstrating Relevance: External and Ecological Validity

Media effects research is concerned with documenting effects of actual or typical media messages. To hold implications for real-world contexts, media effects research must employ laboratory analogues and survey measures that allow for generalization beyond the study findings. Important factors with regard to external and ecological validity in media effects research include the nature of the media stimuli (or media use measurement), the amount of exposure, and the nature of the exposure experience.

Media Content as Experimental Stimuli

Because of the limitations of the laboratory setting and concern for experimental control, the type of stimuli used in treatment groups and the duration of exposure to the treatment are sometimes less reflective of media exposure in natural settings. In laboratory experiments, real-world media content may be altered to control for potential extraneous variables. For example, a script or narrative text may be used in place of audiovisual media in order to manipulate the variable of interest more easily and limit the influence of other, unrelated factors such as lighting, camera angle, and voice. In addition, the duration of exposure to treatment stimuli may be significantly reduced due to sample constraints and/or the need to collect data within a reasonable amount of time (e.g., limited means to conduct repeated-measures experiments). For example, an experiment examining the effects of particular film content on viewers may use abbreviated film previews in place of entire films. Adapting media content for the purpose of gaining greater control, however, may compromise the extent to which the nature of and exposure to the treatment mimics media exposure in, and therefore generalizes to, natural settings.

The selection of media stimuli must be informed by an awareness of the complexity of media messages. Media selected for one characteristic will also necessarily include many others that may have bearing on study outcomes. For example, a researcher examining the effects of violent content may choose a violent film and a nonviolent film, but a myriad of other characteristics of the violent film may contribute to its influence, including attractiveness of the aggressor, justification of the aggression, the presence or absence of weapons, and so on. For this reason, media effects researchers must be particularly cognizant of the diversity and complexity of media messages. Many researchers address this by including multiple, diverse stimuli.

Another challenge in laboratory studies of media effects lies in attempts to observe the process by which these effects occur by measuring online thoughts, emotions, and/or evaluations during media use. In order to assess real-time cognitions or affective reactions to media content, participants are frequently asked to engage in tasks before, during, or immediately after exposure. These measures might include, for example, clicking, listing thoughts, or indicating liking at some point during treatment exposure. Alternately, participants might be fitted with sensors that measure skin conductance, heart rate, or the contraction of facial muscles indicative of specific affective states. Although these intrusive measures often provide insight into participants’ thoughts and feelings with regard to particular media content, they also interrupt or alter the exposure experience thereby reducing the extent to which exposure matches an actual viewing experience.

Selective Exposure

Media exposure is largely self-directed and governed by an individual’s needs, wants, interests, and social, physical, and media environment at the time of exposure. Research paradigms that focus on this selection include selective exposure, including mood management, and uses and gratifications. Research in these paradigms has demonstrated that the media content to which audiences are ultimately exposed is neither random nor necessarily representative of the overall media environment. Instead, variation in needs, wants, goals, personality traits, attitudes, predispositions, affect, and other traits and states shapes media choices. For example, people tend to systematically select political messages that are consistent with their existing political attitudes and beliefs; men who are angry choose more negatively skewed media content if they anticipate an opportunity for revenge against the individual who angered them; and young people are more likely to select television programs in which members of their own racial or ethnic group play key roles.

Selective exposure limits the external validity of experimental research into media effects. First, the media content being investigated as the cause of a given outcome may be content that the experimental subjects would have entirely avoided on their own. Second, if those subjects had selected that content, it likely would have been under a set of circumstances that influenced the experience, its meaning, and, quite possibly, its effects. Research into the effects of sexually explicit media content is illustrative. Some participants in experiments investigating such effects may eschew sexually explicit content entirely in their regular viewing choices. Other participants may ordinarily view such content only under limited social, relational, or emotional circumstances that substantially shape the meaning of the experience. Selective exposure concerns are addressed somewhat by research methods that employ observation or self-report measures of media exposure rather than manipulation, such as longitudinal surveys or unobtrusive observation.

Self-Reports of Media Use and Consumption

The operationalization and assessment of viewers’ media use and consumption are also of fundamental concern in the study of media effects. Self-report measures, in which respondents are asked about the types of and frequency of exposure to particular media, have a long tradition of use in media effects research. However, self-report measures rely on several assumptions that may be problematic and may therefore suffer from systematic error. In order for these measures to be valid, the researcher must assume that respondents can accurately recall their own media use when cued to do so and that they will report that use in an unbiased way.

Self-report measures of media use and consumption often assume that respondents can accurately recall and estimate the frequency and duration of exposure to various types of media content. These assumptions are unrealistic. The use of these measures, then, may lead to retrospective biases, underestimates of effect sizes, lack of control for spurious variables, and reverse-causality issues. Media effects researchers have attempted to address these biases by using guided recall and recognition measures rather than free recall; such measures may provide, for example, a list of various television programs, movies, and popular publications and ask participants to indicate their experience with each. Such measures are not without their limitations; they assume an exhaustive list of relevant media content, an assumption that is increasingly unfounded in the age of on-demand media content. Media diaries, in which respondents are asked to record their media consumption as it occurs, are another alternative method of measuring media consumption that may reduce some of the error inherent in self-report measures. Media effects researchers may also reduce uncertainty regarding the actual content of media exposure reported by respondents by conducting content analyses of such media to use in conjunction with self-reports.

Motives for Consumption

In addition, self-report measures are frequently used to analyze why respondents consume particular media. This type of assessment assumes that respondents are either conscious of their reasons for consuming particular media content or can be made aware of their motives for consumption through various techniques. Self-reports may also be influenced by the socioeconomic status of respondents, social desirability, and inconsistent conceptualizations. To address this issue, media effects researchers may examine whether various indicators of motives for media consumption correlate with measures of media exposure and selection to determine the relationship between media use, consumption, and motivations. Developing more precise measures of media consumption is necessary to facilitate progress in media effects research.

Behavioral Measures

In addition, postexposure tasks attempting to assess behaviors and/or behavioral intent often do not measure natural behaviors. For example, participants exposed to violent media content may be asked to engage in tasks such as choosing the amount of hot sauce to allocate to a fellow participant (i.e., measuring aggressive behavior) or interrupting a simulated fight outside the laboratory (i.e., measuring prosocial behavior). Although such measures allow researchers to assess the influence of media content on subsequent behaviors, these artificial measures may be problematic because they are not assessments of actual behaviors that occur in natural settings.

Resolving Tensions Between Internal and External Validity

Combining the results of multiple studies that employ different methods, each with differing strengths in terms of internal, ecological, and external validity, allows stronger conclusions to be reached. This approach has been labeled triangulation by many researchers. Triangulation has played an important role in demonstrating media effects in a number of domains including the effect of exposure to violent media on aggression; controlled experiments have demonstrated that violent media play a causal role, surveys have documented that these effects occur with real-world media exposure, and naturalistic observation has documented that these observations are not merely a function of self-report.

Meta-analysis provides an objective, statistical tool for combining the results of studies that share the same question but employ diverse methods.

Ethical Considerations

Ethical considerations around media effects research can be organized around two major concerns. First, experimental protocols that involve exposing individuals to potentially harmful media (e.g., graphically violent material, sexually explicit content) are ethically fraught. Although it seems likely that most effects of experimental exposure to problematic content are relatively brief or can be limited or eliminated by appropriate debriefing, they are not necessarily inconsequential. Various media effects experiments have resulted in arguably harmful outcomes (e.g., some theorists maintain that exposure to pornography leads to a greater tolerance for violence against women and, for this reason, media effects research involving exposure to pornography is seen as potentially harmful). In addition, a substantial portion of media effects research investigates effects of media on children; during relatively vulnerable or impressionable developmental stages, even relatively brief exposure can have lasting effects. Fear responses to media messages, for example, have been found to linger for years in some child viewers.

The second ethical consideration, and the one that is relatively unique to media effects research, deals with exposing people to media content that they may find deeply objectionable. Some types of media content may be perceived as inherently wrong even to view. Viewing any sexually explicit material, for example, is a violation of some religious and moral codes. Depictions of graphic violence are seen as dehumanizing by some viewers. Experimentally exposing individuals to materials they do not wish to view is unethical. For this reason, complete and detailed consent procedures should be employed to prevent such exposure.


Frame Analysis

Frame analysis offers a theoretical, methodological, and critical tool for exploring processes of meaning making and influence among governmental and social elites, news media, and the public. This entry provides an examination of frame analysis by defining its key terms and identifying four relevant methodological questions. This entry then applies frame analysis to a timely case study related to the War on Terror and concludes by discussing future directions for research.

Key Terms

According to Stephen Reese, a frame is a socially shared organizing principle that works symbolically to shape democratic discourse and influence public opinion by creating and promoting particular vocabularies. Frames appear most vividly in media coverage. Consider the journalistic choices that precede a news story about a crime in your neighborhood, an Occupy Wall Street protest in New York, or a terrorist attack in the Middle East. Newspaper readers or television viewers will want to know what happened, why, and what should be done about it. News directors, producers, and journalists will want to answer those questions in a way that resonates with the cognitive schema already in place in the minds of their audience. The frame is the socially shared organizing principle that informs how media coverage can fulfill the audience’s need to make sense of these news events in a way that aligns with their existing orientations.

Frames serve an important heuristic function. According to Robert Entman, frames allow for mental shortcuts. This shortcut function can be compared with how you might remember a new phone number: your brain may have trouble recalling all 10 digits of a phone number on command. It has a much easier time recalling two sets of three digits and one set of four (555-364-1037). Frames work similarly. By turning fragmented symbolic resources into coherent organizing schema, frames can transform complex political, social, cultural, and economic issues into manageable, chunk-able thought structures.

Scholars from several disciplines, including journalism, political science, and communication and rhetorical studies, have used framing to analyze the rhetorical and ideological potency of our sense-making processes. Here is where frame analysis departs from the phone number comparison. Unlike a phone number, frames do not merely produce a neutral account of the world. There is no objective truth that a frame can illuminate. Explaining why a crime occurred, a protest rally was held, or an act of violence was committed may appear natural and commonsensical in the media coverage, but it never is. Frames are always imposing a specific logic on an audience and foreclosing alternative perspectives in subtle and taken-for-granted ways. Frame analysis attracts the attention of scholars interested in power because frames define the terms of debate in strategic ways. Frames shape public opinion through the persuasive use of symbols and, in many cases, end up influencing legislative and public policy decisions.

The process of framing described here can sometimes seem like part of a cynical plot employed by elites, politicians, and media power brokers crowded into smoke-filled rooms deciding how best to manipulate news coverage in a way that conforms to their selfish interests. Fortunately, that top-down description of framing is deeply misguided. Framing is not brainwashing. Frames are not targeted at a deferential, static, and passive audience. The power of a frame is not derived from its capacity to completely shape discourse and opinion. Frames do not work on audiences; they work with audiences. Frames encourage a particular interpretive lens, but because frames are contingent and dynamic, they must derive their appeal from existing cultural narratives, symbolic traditions, and social orientations. The contingent and dynamic nature of framing opens up fresh and exciting lines of inquiry for the communication researcher.

Methodological Issues in Frame Analysis

As a theoretical perspective, frame analysis is concerned with identifying a set of systematic, generalizable principles that illuminate the relationship between governmental elites, media, and the public. More specifically, frame analysis researchers use the following descriptive questions to guide their work.

1. What describes the symbolic foundation of a frame? Because frames are revealed in symbolic expressions, frame analysis researchers begin by looking for specific vocabularies in media coverage. Researchers identify and catalog both the verbal and visual symbols that come together to constitute a specific set of vocabularies. Certain symbols are packaged together creating patterns and allowing for the positioning of a set of symbolic resources within a larger rhetorical environment (see the illustrative sketch following this list).

2. What describes the symbolic patterns and themes used to weave together a coherent frame? Frame analysis is marked by a dialectic of oscillation among power elites, media, and the public. Originating in Fox News production meetings and White House briefing rooms, a variety of symbolic resources are initially deployed. Not all of them stick. Not all of them become frames. The symbols that do are reproduced by the public in a way that confirms the resonance of a particular interpretive lens. Therefore, researchers keep an eye out for consistency, durability, and lasting power. When symbols cohere strongly enough and for long enough, they can lift an isolated event, issue, or person into a larger narrative.

3. What describes the cultural constraints and social situations revealed by the symbolic coherence of particular frames? There is always enough “news” to fill the pages of the newspaper and the minutes of a newscast. Frame analysis researchers are mindful that the journalistic decisions about what to cover and what not to cover hold important implications. Frames are produced by a series of strategic decisions made by news directors, producers, and journalists. Those decisions position an abstract event, issue, or person into a concrete schema in a way that is designed to resonate with an audience. When done effectively, those decisions resonate with the public in a way that will ensure a large audience, along with advertising dollars. The frame reveals the journalist’s perspective on what will attract an audience. By choosing to cover this event (and not that one), media coverage can influence what solutions are proposed by first dictating how problems are defined. Thus, the frame analysis researcher attends to absences and silences and to what is said and unsaid.

4. What describes the power relationships produced by a particular frame? Framing is an exercise in power. Frames are often constructed and disseminated in the service of social and institutional interests. While we know the effect is not total and deterministic, frame analysis researchers are aware of the asymmetrical power relationship among elites, media, and the public. Framing researchers are therefore concerned with whose interests are being served by the symbolic production of frames. More specifically, frame analysis researchers explore the hierarchies of power produced by accepting one frame and not another. Accordingly, framing researchers tend to feel more comfortable than quantitative communication scholars making evaluative judgments of artifacts.

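To make the first question more concrete, one might tally how often candidate frame vocabularies appear across a set of news texts. The sketch below is a deliberately simplified, hypothetical illustration (the vocabularies, articles, and labels are invented); frame analysis itself remains an interpretive enterprise, and such counts can only flag symbols that merit closer reading:

```python
from collections import Counter
import re

# Hypothetical candidate vocabularies associated with two possible frames.
frame_vocabularies = {
    "war_on_terror": {"evil", "evildoers", "freedom", "homeland", "attack"},
    "quagmire":      {"withdrawal", "casualties", "exit", "costly", "prolonged"},
}

# A tiny, invented "corpus" of news sentences.
articles = [
    "Officials vowed to defend freedom against the evildoers behind the attack.",
    "Critics warned of a costly, prolonged conflict and called for withdrawal.",
]

# Count how often each vocabulary's symbols appear across the corpus.
for label, vocabulary in frame_vocabularies.items():
    counts = Counter()
    for text in articles:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t in vocabulary)
    print(label, dict(counts))
```
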
These four descriptive questions can be operationalized in a short analysis of the framing techniques that came together to produce the War on Terror.

Case Study: War on Terror

The terrorist attacks of September 11, 2001, were the defining moment of the 21st century. These horrific events required new sense-making techniques to explain what happened, why, and how we should respond. Put another way, 9/11 required a frame. Government and media elites began to construct a War on Terror frame by deploying symbolic resources designed to move an infinite number of amorphous and complex sense-making techniques into comprehensible structures that could guide public deliberations, foreclose alternatives, and justify subsequent governmental responses. These symbolic resources were first deployed to make sense of urgent questions related to what happened and why. Answers to those questions were found by portraying the attackers as senseless evildoers intent on killing innocent Americans because they hated Americans’ freedom. As these symbols evolved into a coherent frame, potential responses to the 9/11 attacks narrowed. Consider how one responds to a person without sense. Reasoning doesn’t work. The only option this frame allows for is an immediate, war-like response against the perpetrators and the states that protected them.

Although it was hard to see at the time, one can look back years later and see how the War on Terror became concrete, natural, and uncontestable. The War on Terror became an internalized, taken-for-granted description of what appeared to be inevitable domestic and foreign policy choices costing trillions of dollars and leading to invasions of privacy and 12 years of war. Entman outlined a potential alternative in media coverage of the Black Hawk Down debacle. In that situation, pictures of dead U.S. soldiers being dragged through the streets of Somalia prompted a flight response based on an anti-interventionist, quagmire frame, fueling the rapid withdrawal of U.S. troops from the region and contributing to President Bill Clinton’s reluctance to intervene in the genocide in Rwanda less than a year later. Without the War on Terror frame, it might have been possible to disavow military action after 9/11 in favor of diplomacy and economic sanctions, such as those used against North Korea and Iran. Even less plausibly, the attackers could have been framed as freedom fighters striking a blow for justice against the arrogant, imperialist, and decadent American empire; consequently, the United States might have engaged in critical self-reflection about the root causes of terror.

Why did the War on Terror frame succeed so completely? Frame analysis cannot say for sure. Frame analysis does not deal in causality but rather in plausibility. It is the researcher’s methodological imperative to put forth enough evidence for the reader to be convinced.

But it seems likely that the War on Terror frame succeeded, in part, because it imposed a lower cognitive cost; it appealed to an organic understanding and to already existing mental pathways that had connected similar concepts in the past. These mental associations were easier to access and therefore became the widely accepted affective heuristic used to narrow political deliberations and policy decisions.

By defining the key terms, outlining four descriptive questions, and anchoring those questions in the War on Terror, this entry has demonstrated the value of frame analysis. Future research should continue to explore the relationship between government and social elites, media, and the public. As a theoretical, methodological, and critical tool, frame analysis offers the communication researcher a powerful way to illuminate sense-making processes that at times can be harmful and punitive to certain populations. Because it is versatile, researchers can use frame analysis to explore the sense-making techniques that illuminate the rhetorical dimensions of our day-to-day lives.


Who stands to gain?

The Big Lebowski, 1998 – The Dude (Jeff Bridges), Walter Sobchak (John Goodman), and Theodore Donald ‘Donny’ Kerabatsos (Steve Buscemi) referencing “Who Stands to Gain?” by V. I. Lenin, 1913

by V. I. Lenin, 1913

There is a Latin tag cui prodest? meaning “who stands to gain?” When it is not immediately apparent which political or social groups, forces or alignments advocate certain proposals, measures, etc., one should always ask: “Who stands to gain?”

It is not important who directly advocates a particular policy, since under the present noble system of capitalism any money-bag can always “hire”, buy or enlist any number of lawyers, writers and even parliamentary deputies, professors, parsons and the like to defend any views. We live in an age of commerce, when the bourgeoisie have no scruples about trading in honour or conscience. There are also simpletons who out of stupidity or by force of habit defend views prevalent in certain bourgeois circles.

Yes, indeed! In politics it is not so important who directly advocates particular views. What is important is who stands to gain from these views, proposals, measures.

For instance, “Europe”, the states that call themselves “civilised”, are now engaged in a mad armaments hurdle-race. In thousands of ways, in thousands of newspapers, from thousands of pulpits, they shout and clamour about patriotism, culture, native land, peace, and progress—and all in order to justify new expenditures of tens and hundreds of millions of rubles for all manner of weapons of destruction—for guns, dreadnoughts, etc.

“Ladies and gentlemen,” one feels like saying about all these phrases mouthed by patriots, so-called. “Put no faith in phrase-mongering, it is better to see who stands to gain!”

A short while ago the renowned British firm Armstrong, Whitworth & Co. published its annual balance-sheet. The firm is engaged mainly in the manufacture of armaments of various kinds. A profit was shown of £877,000, about 8 million rubles, and a dividend of 12.5 per cent was declared! About 900,000 rubles were set aside as reserve capital, and so on and so forth.

That’s where the millions and milliards squeezed out of the workers and peasants for armaments go. Dividends of 12.5 per cent mean that capital is doubled in 8 years. And this is in addition to all kinds of fees to directors, etc. Armstrong in Britain, Krupp in Germany, Creusot in France, Cockerill in Belgium—how many of them are there in all the “civilised” countries? And the countless host of contractors?

These are the ones who stand to gain from the whipping up of chauvinism, from the chatter about “patriotism” (cannon patriotism), about the defence of culture (with weapons destructive of culture) and so forth!