International Journal of Research and Innovation in Social Science

Social Media Users’ Perspectives on the Utility of Content Warnings by Big Tech

Uju Cecilia Onuchukwu1, Obini Ijeoma Onuchukwu2

1Mass Communication Department, Nnamdi Azikiwe University Awka Anambra State, Nigeria

2Federal Polytechnic, Oko Anambra State, Nigeria

DOI: https://dx.doi.org/10.47772/IJRISS.2024.8120192

Received: 23 November 2024; Accepted: 10 December 2024; Published: 11 January 2025

ABSTRACT

The study investigated social media users’ perspectives on content warnings by Big Tech. It was guided by three specific purposes, three research questions and two hypotheses. The study adopted a descriptive survey design. The population of the study was the Nnamdi Azikiwe University community, comprising staff and students, old and young. A sample of 500 respondents was selected using multi-level sampling techniques. A researcher-made questionnaire titled “Social Media Users’ Perspectives on Utility of Content Warning Survey” (SMUPUC) was used for the study. The SMUPUC was validated by two experienced lecturers in Mass Communication and one in Measurement and Evaluation from Nnamdi Azikiwe University, Awka. A pilot study was conducted to determine the reliability of the instrument; the results, analysed with Cronbach’s Alpha, yielded a reliability coefficient of 0.82. The SMUPUC was administered to respondents with the help of two assistants from Nnamdi Azikiwe University, Awka, who were briefed adequately on the exercise. At the end, a total of 467 copies of the questionnaire were returned, representing a mortality rate of only 6.6 percent. The research questions were analysed with the mean (X) and standard deviation, while the hypotheses were tested with the t-test statistic at the 0.05 level of significance. The findings showed that social media users agreed with the utility of content warnings by Big Tech, and that users’ sex and age did not significantly influence their perspectives on this utility. Based on the findings, it was recommended, among others, that content warnings should continue to be included as a requirement for Big Tech to guard against unwarranted exposure of users to unwanted content, and that Big Tech should improve their apps to guard against underage exposure to adult content on social media.

Key Words: Social Media, Users, Content Warning, Utility, Big Tech

INTRODUCTION

The advanced and improved usage of social media (SM) platforms has been a worldwide phenomenon for quite some time. Though it started as a hobby for computer-literate individuals, it has grown into a social norm and a way of life for people around the world (Nicole, 2017). Social media remains a tool that helps people across the globe and influences their opinions, attitudes and knowledge. According to Adegboyega (2020), social media furnishes individuals with information in all sectors of life. Through the SM, much development and growth have occurred because the platforms allow people to connect with the entire world and contribute to global development.

There are many advantages of social media, such as using web technologies to adapt and convey information on social platforms, e.g. Facebook, Twitter, and so on, in order to teach others and to learn from others (Kaplan & Haenlein, 2010). There are also some pitfalls associated with the SM. As a ‘free market’, its contents can be offensive to the sensibilities of users. Unlike the traditional media, SM contents are more or less uncensored. Pornographic content, violence, fraudulent activities and other potentially harmful material have become so much the stock-in-trade of the SM over the years that various stakeholders and advocates have raised concerns over its contents. SM content refers to content created by individuals or companies for social networks such as Facebook, Instagram or Twitter (Nicole, 2017). These contents come in varying forms such as texts, still pictures, audio and video materials.

The owners of the largest technology companies, often referred to as ‘Big Tech’ or the ‘Big Five’, namely Alphabet/Google, Apple, Meta/Facebook, Microsoft and Amazon, have sought to strike a balance between freedom of speech and content regulation. Big Tech refers to the major technology companies which have inordinate influence on society. These companies have historically had to make their own decisions when it comes to banning individual users who infringe their content policies. In perhaps the most notorious case, the former US President, Donald Trump, was banned from Facebook and Twitter, while YouTube banned him from uploading new videos, in the wake of the January 6 attacks on the US Capitol (Mann, 2022). Self-regulation, however, appears not to work to the satisfaction of all. Hence, countries make laws to curtail the influence of contents on users.

Content warning is an age-long tradition in information dissemination aimed at protecting users’ interests and sensibilities. Content warnings are notes presented at the beginning of content (visual, audio, text, and so on) that alert the audience to potentially distressing material (Vallance & McCallum, 2022). Content warnings were originally called “trigger warnings” and were intended to help people with Post-Traumatic Stress Disorder (PTSD) and other anxiety disorders avoid, or prepare to engage with, material that could trigger a panic attack, flashback, or other distress (Goodwin, 2013). Trigger warnings were first applied on internet message boards in the late 1990s to content that graphically depicted rape (Mannix, 2022). Those posting the warnings were worried that reading such content might trigger panic attacks or, in people with PTSD, intrusive memories. The concept soon expanded and hit the mainstream in the mid-2010s, when students at several American college campuses called on their administrations to add official trigger warnings to potentially harmful material (Mannix, 2022). To mitigate harm, the warning should be provided ahead of such content, with ample space and time for individuals to decide whether they wish to engage with the content, in accordance with their own health, well-being and self-care. Placing this choice in the hands of those potentially affected empowers them to take care of themselves in this way.

In social media, it is common for the author or producer to use CW (content warning) or TW (trigger warning) at the top of a post, along with the relevant information. They are expected to use periods or dashes to create space between the warning and the body of the post. This spacing allows people to choose whether or not to continue reading, without exposing them inadvertently to harm (Vallance & McCallum, 2022). Example: CW// death or dying, abuse, violence. Common areas for content warnings include, but are not limited to, sexual assault, abuse, child abuse/pedophilia/incest, animal cruelty or animal death, self-harm and suicide, eating disorders, body hatred and fat phobia, violence, pornographic content, kidnapping and abduction, death or dying, pregnancy/childbirth, miscarriages/abortion, blood, mental illness and ableism, racism and racial slurs, sexism and misogyny, classism, hateful language directed at religious groups (e.g., Islamophobia, anti-Semitism), transphobia and trans misogyny, and homophobia and heterosexism (Bruce, 2017).

Those opposed to content warnings, however, have their reasons. Apart from the hint of censorship which the requirement of content warnings carries, there is the concern that they have the potential to heighten what they purport to reduce. Content warnings tell people something bad may be about to happen, but most people may not have any idea how to respond. The simplest response might be to stop reading or watching, as the case may be. But in practice, people do not seem to behave that way; content warnings seem to tickle people’s natural curiosity, thereby defeating the very purpose of such warnings. In addition, Nowicki (2019) has raised the concern that rather than providing a ‘novel-length’ content warning which audiences may find tedious to read, content developers ought to use symbols. According to Nowicki, Facebook, for example, has terms of service and related policies that stretch for over 35,000 words. Buried within are clauses that have significant privacy implications, such as granting Facebook a “non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content” (Nowicki, 2019).

As Big Tech battles with the nitty-gritty of content warnings policy, there is a need to understand the perspectives of social media users from this part of the world on the utility of this policy. Most of the hue and cry about content warnings is non-African and may not be an appropriate basis for understanding their utility. It should be noted, too, that social media users, like the audience of the traditional media, are heterogeneous. They comprise the old and the young, female and male, and people from all walks of life. Their perspectives on the utility of content warnings may vary according to their status. This research, therefore, sought to investigate social media users’ perspectives on the utility of content warnings by Big Tech.

Statement of the Research Problem

Recently, social media has become a veritable platform for people to connect and establish relationships with others. As the name implies, social media carries a social undertone in which users are allowed to interact with others in order to promote interpersonal relationships, with a certain effect on the social behaviours of individuals. Some of the contents of the social media, however, may be harmful to users, hence the use of content warnings to prepare users for potentially harmful content. As a matter of policy, the Big Tech companies (Alphabet/Google, Apple, Meta/Facebook, Microsoft and Amazon) have policies, established through self-regulation and at times imposed by relevant authorities, to provide content warnings to users over the potentially harmful effects of their contents. Those who oppose the policy, however, are quick to point out that content warnings are not necessary. There is the concern that these warnings arouse audience curiosity rather than putting the audience off, could amount to censorship in disguise, and constitute a waste of time in some cases. In view of this apparent controversy, it has become necessary to ascertain the utility of content warnings by Big Tech from the perspectives of social media users.

Objectives of the Study

The main objective of this study was to investigate the social media users’ perspectives on the utility of content warnings by Big Tech. Specifically, the study sought to determine:

  1. Whether social media users derive utility from content warnings by Big Tech;
  2. Differences in the perspectives of female and male social media users on the utility of content warnings by Big Tech;
  3. Differences in the perspectives of young and old social media users on the utility of content warnings by Big Tech.

Research Questions

  1. What utility do social media users derive from content warnings by Big Tech?
  2. What are the differences in the perspectives of female and male social media users on the utility of content warnings by Big Tech?
  3. What are the differences in the perspectives of young and old social media users on the utility of content warnings by Big Tech?

Hypotheses

The following hypotheses were formulated and tested in the study at the 0.05 level of significance:

Ho1: There is no significant mean difference in the perspectives of female and male social media users on the utility of content warnings by Big Tech.

Ho2: There is no significant mean difference in the perspectives of the young and old social media users on the utility of content warnings by Big Tech.

Significance of the Study

The findings of the study will benefit social media users, the Big Tech companies, researchers and the general public. If the findings of the study are made available to users through public enlightenment programmes, they will see the need to adhere strictly to content warnings offered by the Big Tech and save themselves from undesirable situations.

The Big Tech companies will benefit from the findings of the study. If the findings of the study are made available to the firms, they will understand the utility value of content warnings and provide further mechanisms to ensure that the warnings work for the intended audience of the social media.

Researchers will benefit from the findings of the study. They need empirical studies such as this to understand trends, appraise situations and make appropriate comparisons aimed at advancing the frontiers of knowledge.

Finally, the general public will benefit from the findings of the study. The findings will enable them to contribute meaningfully to the debate on the appropriateness or otherwise of the content warnings provided by Big Tech. They need studies of this nature to make informed decisions and engage properly in the discourse.

Scope of the Study

The study was delimited to the utility of content warnings in nine areas. It also determined the gender and age differentials in the perspectives of social media users on content warnings by Big Tech. The study was carried out at Nnamdi Azikiwe University, Awka, Anambra State, Nigeria.

REVIEW OF LITERATURE

Conceptual Review

Social Media (SM) is a network of internet facilities built on the technological and ideological foundations of Web 2.0 which provides space for the development of user-generated content (Kaplan & Haenlein, 2010). It is an umbrella term for technologies that provide space for people to create and send content, link up, and connect with others. SM is a platform for people to draw close to each other on the internet, or to make social contacts through connections with individuals. SM is a form of social interaction among people who create, share, and exchange information and ideas in environments such as schools, workplaces, homes and communities. Regardless of distance, SM facilitates communication and the conveyance of information in the form of texts, images, videos and audio. This medium has been used increasingly in many cultures, so the number of users has also increased geometrically over the years.

Today, people use SM daily for its numerous advantages (Dourish, 2011). The Nigerian Communications Commission (NCC, 2015) stated that more than 90 million people in Nigeria use SM, with the majority being children and adolescents. According to Howard and Parks (2012), the SM has three main parts, namely:

  1. the infrastructure and instrument to create and share content
  2. content, such as concepts, ideas, messages, information and news
  3. decoders, users and consumers, e.g. industries, organizations and individuals.

Some of the contents of the social media may harm the sensibilities of users, hence the need for content warnings, often self-imposed by Big Tech or mandated by relevant agencies. Content warnings are alerts about upcoming content that may contain themes related to past negative experiences (Byman, Gao, Meserole & Subrahmanian, 2021). In the United Kingdom, the Online Safety law requires platforms to remove illegal content, remove material that violates their terms and conditions, and give users controls to help them avoid seeing certain types of content specified by the bill. According to Oxfam International (2020), people may be exposed to risky content actively or passively, and it may produce a harmful effect. Content may be illegal to possess or share according to national law, e.g. sexually exploitative images of children or radicalising videos.

Oxfam International (2020) stated that inappropriate and offensive content is more subjective, and includes: commercial adverts or spam; violent, extremist or hateful material; sexually exploitative or sexual material; and content which is discriminatory based on someone’s race, ethnicity, nationality, class, socioeconomic status, age, sex and gender identity/expression, sexual orientation, (dis)ability, religion, language or other status. Oxfam recognises that children and young people are a group who experience specific risks in the digital sphere, and that special measures should be taken to ensure that they are protected from abuse, harm and exploitation. Experts from the School of Health Science, the Department of History, the School of Culture, Languages and Area Studies and the School of Psychology at the University of Nottingham, together with the University of Illinois and members of the Lived Experience Advisory Panel, developed a common language to use for content warnings, after their research found that current warnings do not adequately take account of the needs of the intended audience. According to them, the warning should read: “WARNING: Viewer Discretion is Advised” (University of Nottingham, 2022).

Proponents of content warnings, however, claim that warnings help people to emotionally prepare for, or completely avoid, distressing material. Critics argue that warnings both contribute to a culture of avoidance at odds with evidence-based treatment practices and instill fear about upcoming content.

Empirical Review

Recently, a body of psychological research has begun to investigate the claims surrounding content warnings empirically. Bellet, Jones, and McNally (2018) were among the first to experimentally test the effect of content warnings. In a crowd-sourced sample of individuals who had not experienced past trauma, they found that content warnings given before literature passages had no significant effect on anxiety. Furthermore, they found that content warnings undermined participants’ sense of their resilience to potential future traumatic events and their sense of the resilience of others. They also reported a moderation effect—among individuals who believed that words were emotionally harmful, content warnings acutely increased anxiety reactions. Bridgland, Jones and Bellet (2022) presented the results of a meta-analysis of all empirical studies on the effects of these warnings. Overall, they found that warnings have no effect on affective responses to negative material nor on educational outcomes (i.e., comprehension). However, warnings reliably increase anticipatory affect. Findings on avoidance were mixed, suggesting either that warnings have no effect on engagement with material, or that they increase engagement with negative material under specific circumstances.

In another study, Charles, Hare-Duke, Nudds, Franklin, Llewellyn-Beardsley and Rennick-Egglestone (2022) carried out a study to develop a typology of content warnings and to identify the contexts in which content warnings are used. They developed the Narrative Experiences Online (NEON) content warning typology, which comprises 14 domains: violence, sex, stigma, disturbing content, language, risky behaviours, mental health, death, parental guidance, crime, abuse, socio-political, flashing lights and objects. They also identified the sectors in which they were used, and the intended audience. The final list of categories included violence, sex, stigma, disturbing content, risky behaviors, mental health, crime, and abuse. The study concluded by developing a common language for content warnings across sectors and contexts.

Jones, Bellet and McNally (2020) conducted a preregistered replication and extension of a previous experiment. Trauma survivors (N = 451) were randomly assigned to either receive or not to receive trigger warnings before reading passages from world literature. The study found no evidence that trigger warnings were helpful for trauma survivors, for participants who self-reported a posttraumatic stress disorder (PTSD) diagnosis, or for participants who qualified for probable PTSD, even when survivors’ trauma matched the passages’ content. The study also found substantial evidence that trigger warnings counter therapeutically reinforced survivors’ view of their trauma as central to their identity. Regarding replication hypotheses, the evidence was either ambiguous or substantially favored the hypothesis that trigger warnings have no effect. In summary, the study found that trigger warnings were not helpful for trauma survivors.

Similarly, Samson, Strange, and Garry (2019) concluded that content warnings had trivially small effects overall. Across six studies of varying sample characteristics, they found that negative affect and intrusive memories were similar regardless of whether individuals received content warnings. Bridgland, Green, Oulton and Takarangi (2019) similarly found that content warnings had trivially small effects on arousal levels when participants viewed photos. However, their results differentiated anticipatory anxiety from response anxiety. Anticipatory anxiety refers to levels of anxiety after viewing the content warning but before viewing the stimulus, whereas response anxiety refers to anxiety after viewing the stimulus (Bridgland et al, 2019). Although content warnings appeared to have a trivial effect on response anxiety, they reliably increased anticipatory anxiety.

In most studies in the extant literature, content warnings were treated in the context of the classroom environment rather than the social media, which is the interest of this study. Teacher-student interactions in the classroom, and the effect which content warnings on literature may have on students, may differ markedly from the expectations of content warnings in the social media. A study to bridge this gap is therefore imperative.

Theoretical Foundation

The Medical-Trauma, Informed Consent and Asshole Models

Petey (2015) developed three models to explain content warnings in the school context which are also relevant to this study.

Medical Trauma:

This theme organizes content warnings around the metaphor of disease and treatment. Under this model, proponents of content warnings tend to mobilize two different variants of the argument. The first, which draws on knowledge of PTSD and related disorders, is that certain kinds of content might ‘trigger’ emotional responses to past trauma. The second, which draws on knowledge of allergens, is that certain kinds of content might induce negative, ‘allergic’ reactions.

On the other side of the debate, opponents of content warnings tend to respond by arguing that both trauma and allergies should be treated with exposure therapy to desensitize any negative response. The debate then becomes about what kind of ‘treatment’ should be ‘administered’ and under what conditions.  As such, this metaphor moves warnings (and the content they are warning about) out of the domain of political disagreement into the domain of medical expertise. The effect of this move is to simultaneously depoliticize and professionalize the discourse by making it the kind of claim that can only be debated and settled by scientists.

In other words, if certain contents could trigger certain reactions, it should be the business of the medics to manage them, and not an issue for legislation. Going by its tenets, the model explains the likelihood that social media users may not accept the utility of content warnings.

Informed Consent:

This theme organizes content warnings as corresponding to the content ratings that have been ‘voluntarily’ applied to, for example, movies and video games by professional organizations. Under this model, proponents tend to argue that warnings are not a restriction of information but, in fact, more speech, in the form of meta-speech that characterizes speech. Censorship says “Read what we tell you”. The opposite of censorship is “Read whatever you want”. The philosophy of censorship is “We know what is best for you to read”. The philosophy opposite censorship is “You are an adult and can make your own decisions about what to read”. And part of letting people make their own decisions is giving them relevant information and trusting them to know what to do with it. Uninformed choices are worse choices. Content warnings are an attempt to provide users with the information to make good free choices.

Opponents of content warnings tend to argue that rating models are censorship by another name because they attempt to enforce a top-down, universal classification of content as “appropriate” or “inappropriate”, which has a chilling effect on what kinds of things can be published by marking certain things as requiring a warning (while others do not). These critics position warnings as moves by the ‘moral authoritarian left’ to deploy tactics long used by the reactionary right.

An Asshole:

This model organizes warnings into the ethical sphere, where proponents see warnings as an acknowledgment of actually existing differences in experiences and social power, and see refusals to offer warnings as a move to force the conversation to happen on one’s own terms, a tactic only available to the socially empowered. Content warnings are fundamentally about empathy. They are a polite plea for more openness, not less; for more truth, not less. They allow taboo topics and the experience of hurt and pain, often by marginalised people, to be spoken of frankly. Some opponents of this view argue that there are things that will make people uncomfortable yet they must be exposed to them, even at the risk of being called an asshole; others simply say that they have a right to be an asshole in a free society.

METHODOLOGY

The study adopted a descriptive survey design. This design involves collecting data in order to test hypotheses or answer research questions generated in the study. It attempts to determine the current status of a population with respect to one or more variables by collecting data from members of that population (Ogunwuyi, 2014). The area of the study was Anambra State, while the population of the study comprised the staff and students of Nnamdi Azikiwe University, Awka, who use the social media. This population was chosen because these categories of users are enlightened and would be able to understand content warnings and their likely utility. Because the population was large, scattered and unspecified, the researchers adopted a multi-level sampling technique. At level 1, the researchers decided on a sample size of 500, comprising 200 staff (representing the older people) and 300 students (representing the young). This size was chosen because, in a survey, the larger the sample size, the better the generalizability of the findings (Onyeme, 2020). At level 2, the researchers employed a purposive sampling technique to select approximately equal numbers of female and male participants because of the importance of the gender variable in the study. Through this process, 257 female and 243 male participants were selected.

A researcher-made questionnaire titled “Social Media Users’ Perspectives on Utility of Content Warning Survey” (SMUPUC) was developed after a literature review, based on the three research questions and two hypotheses guiding the study. The SMUPUC has two sections, A and B. Section A captures the respondents’ variables of age and sex, while Section B comprises one cluster addressing the likely utility value of content warnings. In this section, four weighted options (Strongly Agree-SA, Agree-A, Disagree-D, and Strongly Disagree-SD) are provided for the items, and respondents were expected to check the option that best explained their perspective. The options were weighted 4 to 1 respectively. The SMUPUC was validated by two experienced lecturers in Mass Communication and one in Measurement and Evaluation from Nnamdi Azikiwe University, Awka. The validators’ suggestions were incorporated into the final draft to ensure the face validity of the SMUPUC. A pilot study was conducted to determine the reliability of the instrument: 20 copies of the SMUPUC were administered to a similar population at Chukwuemeka Odumegwu Ojukwu University, Igbariam. The results were analysed using Cronbach’s Alpha, which yielded a reliability coefficient of 0.82, suggesting that the instrument was reliable.
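For readers who wish to see how the reliability coefficient is obtained, the sketch below shows a minimal Cronbach’s Alpha computation. The raw pilot responses are not published in this paper, so the respondent-by-item matrix used here is simulated purely for illustration; only the formula reflects the analysis described above.

```python
# Minimal sketch of a Cronbach's Alpha computation for the pilot study.
# The real pilot responses are not reported, so `pilot` below is a simulated
# stand-in: 20 respondents x 9 four-point Likert items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of Likert ratings (1-4)."""
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot = np.random.default_rng(1).integers(1, 5, size=(20, 9))  # illustrative data only
print(round(cronbach_alpha(pilot), 2))  # the study reports 0.82 on the actual pilot data
```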

The SMUPUC was administered to respondents with the help of two assistants from Nnamdi Azikiwe University, Awka, who were briefed adequately on the exercise. At the end, a total of 467 copies of the questionnaire (comprising 246 female and 221 male responses; and 188 staff and 279 students) were returned, representing a mortality rate of only 6.6 percent. The research questions were analysed with the mean (X) (at a 2.50 cut-off point) and standard deviation, while the hypotheses were tested with the t-test statistic at the 0.05 level of significance.
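As a worked illustration of this decision rule, the sketch below recovers the mean, standard deviation and remark for item 1 of Table 1 from its SA/A/D/SD frequency counts, using the 4-to-1 weighting and the 2.50 cut-off described above. The helper function name is illustrative, not part of the instrument.

```python
# Sketch: item mean, standard deviation and remark from Likert frequency counts.
import math

WEIGHTS = (4, 3, 2, 1)  # Strongly Agree, Agree, Disagree, Strongly Disagree

def item_stats(counts):
    """counts: frequencies for (SA, A, D, SD) responses on one item."""
    n = sum(counts)
    mean = sum(w * c for w, c in zip(WEIGHTS, counts)) / n
    variance = sum(c * (w - mean) ** 2 for w, c in zip(WEIGHTS, counts)) / (n - 1)
    return mean, math.sqrt(variance)

item1 = (389, 70, 4, 4)                 # Table 1, item 1 (N = 467)
mean, sd = item_stats(item1)
remark = "Agreed" if mean >= 2.50 else "Disagreed"
print(f"mean = {mean:.2f}, SD = {sd:.2f}, remark = {remark}")  # ~3.81, 0.47, Agreed
```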

DATA PRESENTATION/ANALYSIS

Research Question One: What utility do social media users derive from content warnings by Big Tech?

Table 1: Utility Derived by Social Media Users from Content Warnings by Big Tech (N=467)

S/N Items SA A D SD X S.D Remark
1 It prevents people from watching violent scenes. 389 70 4 4 3.81 0.47 Agreed
2 It prevents people from watching obscene materials 402 60 5 3.85 0.39 Agreed
3 It helps people to avoid unwanted exposure to distress. 388 47 21 11 3.70 0.65 Agreed
4 It enables people to avoid extremist views and contents. 298 132 30 7 3.54 0.68 Agreed
5 It protects the children and vulnerable from undue influence. 21 22 148 276 1.55 0.78 Disagreed
6 It prevents people from being exposed to content relating to self-harm. 413 32 7 15 3.81 0.62 Agreed
7 It prevents people from being exposed to content relating to eating disorders. 14 54 87 312 1.51 0.71 Disagreed
8 It allows people to filter out harmful content they do not want to see. 411 19 21 16 3.77 0.69 Agreed
9 It enables people to avoid content inciting hate on the basis of ethnicity, sex or religion. 361 57 11 38 3.59 0.88 Agreed
Cluster Mean and S.D 3.24 0.65 Agreed

Data in Table 1 show the utility derived by social media users from content warnings by Big Tech. Items 1, 2, 3, 4, 6, 8 and 9 were rated above the cut-off point of 2.50, which means that respondents agreed that these constitute utility derived from the content warnings. Items 5 and 7, however, were rated below the cut-off point, which indicates that respondents disagreed that these constitute utility derived from content warnings. The cluster mean of 3.24, however, indicates that social media users derive utility from content warnings by Big Tech.

Research Question Two: What are the differences in the perspectives of female and male social media users on the utility of content warnings by the Big Tech?

Table 2: Difference in Utility Derived by Female and Male Social Media Users from Content Warnings by Big Tech (N: Female=246; Male=221)

S/N Utility Var. SA A D SD X S.D Remark
1 It prevents people from watching violent scenes. Female 209 33 2 2 3.83 0.46 Agreed
Male 180 37 2 2 3.79 0.49 Agreed
2 It prevents people from watching obscene materials Female 205 38 3 3.82 0.41 Agreed
Male 197 22 2 3.88 0.35 Agreed
3 It helps people to avoid unwanted exposure to distress. Female 199 22 19 6 3.50 0.72 Agreed
Male 189 25 2 5 3.80 0.56 Agreed
4 It enables people to avoid extremist views and contents. Female 161 67 15 3 3.57 0.66 Agreed
Male 137 65 15 4 3.52 0.70 Agreed
5 It protects the children and vulnerable from undue influence. Female 10 11 70 155 1.50 0.76 Disagreed
Male 11 11 78 121 1.60 0.80 Disagreed
6 It prevents people being exposed to content relating to self-harm. Female 214 18 4 10 3.77 0.67 Agreed
Male 199 14 3 5 3.84 0.54 Agreed
7 It prevents people being exposed to content relating to eating disorders. Female 7 20 47 172 1.44 0.76 Disagreed
Male 7 34 40 140 1.58 0.86 Disagreed
8 It allows people to filter out harmful content they do not want to see. Female 217 10 11 8 3.77 0.68 Agreed
Male 194 9 10 8 3.76 0.70 Agreed
9 It enables people to avoid content inciting hate on the basis of ethnicity, sex or religion. Female 193 27 6 20 3.60 0.88 Agreed
Male 168 30 5 18 3.57 0.88 Agreed
Cluster Mean and S.D Female 3.20 0.67 Agreed
Male 3.26 0.58 Agreed

Data in Table 2 show the mean differences in the utility derived by female and male social media users from content warnings by Big Tech. Apart from items 5 and 7, which were rated below the cut-off point, female and male social media users agreed with the rest of the items as constituting utility derived from content warnings by Big Tech. The male users, however, had an overall mean rating of 3.26, while their female counterparts had 3.20. The standard deviations of 0.67 and 0.58 for female and male users respectively reflect relative homogeneity in the opinions of users.

Research Question Three: What are the differences in the perspectives of young and old social media users on the utility of content warnings by the Big Tech?

Table 3: Difference in Utility Derived by Young and Old Social Media Users from Content Warnings by Big Tech (N: Young=279; Old=188)

S/N Utility Var. SA A D SD X S.D Remark
1 It prevents people from watching violent scenes. Young 242 33 2 2 3.85 0.43 Agreed
Old 147 37 2 2 3.75 0.52 Agreed
2 It prevents people from watching obscene materials Young 238 38 3 3.84 0.39 Agreed
Old 164 22 2 3.86 0.37 Agreed
3 It helps people to avoid unwanted exposure to distress. Young 232 22 19 6 3.72 0.68 Agreed
Old 159 25 2 5 3.83 0.61 Agreed
4 It enables people to avoid extremist views and contents. Young 194 67 15 3 3.63 0.64 Agreed
Old 104 65 15 4 3.43 0.73 Agreed
5 It protects the children and vulnerable from undue influence. Young 10 11 70 188 1.44 0.73 Disagreed
Old 11 11 78 88 1.71 0.82 Disagreed
6 It prevents people being exposed to content relating to self-harm. Young 247 18 4 10 3.80 0.40 Agreed
Old 166 14 3 5 3.81 0.59 Agreed
7 It prevents people being exposed to content relating to eating disorders. Young 7 20 47 205 1.39 0.73 Disagreed
Old 7 34 40 107 1.69 0.89 Disagreed
8 It allows people to filter out harmful content they do not want to see. Young 250 10 11 8 3.80 0.64 Agreed
Old 161 9 10 8 2.77 0.99 Agreed
9 It enables people to avoid content inciting hate on the basis of ethnicity, sex or religion. Young 226 27 6 20 3.65 0.84 Agreed
Old 135 30 5 18 3.50 0.94 Agreed
Cluster Mean and S.D Young 3.24 0.61 Agreed
Old 3.15 0.72 Agreed

Data in Table 3 show the mean differences in the utility derived by young and old social media users from content warnings by Big Tech. Apart from items 5 and 7, which were rated below the cut-off point, young and old social media users agreed with the rest of the items as constituting utility derived from content warnings by Big Tech. The younger users, however, had an overall mean rating of 3.24, while their older counterparts had 3.15. The standard deviations of 0.61 and 0.72 for young and old users respectively reflect relative homogeneity in their opinions.

Test of Hypotheses

Ho1: There is no significant mean difference in the perspectives of female and male social media users on the utility of content warnings by the Big Tech.

Table 4: t-Test of Difference in the Perspectives of Female and Male Social Media Users on the Utility of Content Warnings by Big Tech

Variable N X S.D Df t-calc. t-crit. Decision
Female 246 3.20 0.67 465 1.00 1.96 Not Sig.
Male 221 3.26 0.58

Table 4 shows the t-test of significance in the mean ratings of female and male social media users on the utility derived from content warnings by Big Tech. The calculated t-value of 1.00 is less than the critical t-value of 1.96 at the 0.05 level of significance and 465 degrees of freedom. The mean difference, therefore, was not significant. In other words, there is no significant mean difference in the perspectives of female and male social media users on the utility of content warnings by the Big Tech.
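The reported t-value can be approximately reproduced from the summary statistics in Table 4. The sketch below assumes a pooled-variance independent-samples t-test (df = 465), consistent with the degrees of freedom reported; it is a verification aid, not the authors’ original computation.

```python
# Sketch: reproducing the Table 4 t-value from the reported means, SDs and sample sizes,
# assuming a pooled-variance (equal_var=True) independent-samples t-test.
from scipy import stats

t, p = stats.ttest_ind_from_stats(
    mean1=3.20, std1=0.67, nobs1=246,  # female users
    mean2=3.26, std2=0.58, nobs2=221,  # male users
    equal_var=True,                    # pooled variance, df = 246 + 221 - 2 = 465
)
print(round(abs(t), 2), round(p, 3))   # |t| is about 1.03, close to the reported 1.00; p > 0.05
```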

Ho2: There is no significant mean difference in the perspectives of the young and old social media users on the utility of content warnings by the Big Tech.

Table 5: t-Test of Difference in the Perspectives of Young and Old Social Media Users on the Utility of Content Warnings by Big Tech

Variable N X S.D Df t-calc. t-crit. Decision
Young 279 3.24 0.61 465 1.50 1.96 Not Sig.
Old 188 3.15 0.72

Table 5 shows the t-test of significance in the mean ratings of young and old social media users on the utility derived from content warnings by Big Tech. The calculated t-value of 1.50 is less than the critical t-value of 1.96 at the 0.05 level of significance and 465 degrees of freedom. The mean difference, therefore, was not significant. In other words, there is no significant mean difference in the perspectives of young and old social media users on the utility of content warnings by the Big Tech.

DISCUSSION OF FINDINGS

The study listed possible utilities that social media users could derive from content warnings by Big Tech. The list was based on an earlier study by Charles et al. (2022), who developed a typology of content warnings and identified the contexts in which they are used. The findings showed that social media users derive utility from content warnings. Social media users believe that content warnings prevent people from watching violent scenes and obscene materials, help them avoid unwanted exposure to distress and to extremist views and content, and prevent exposure to content relating to self-harm. The warnings also allow them to filter out harmful content they do not want to see and to avoid content inciting hate on the basis of ethnicity, sex or religion. The social media users, however, do not agree that content warnings protect children and the vulnerable from undue influence, or that they prevent people from being exposed to content relating to eating disorders.

The disagreement with the suggestion that content warnings protect children and the vulnerable from undue influence could be a result of the difficulty in distinguishing between adult and child users of the social media. As a result, children who use the social media have access to adult content, as revealed by Oxfam (2020). Although the Big Tech have been obliged to develop mechanisms for controlling users’ age, these have not been as effective as intended. Again, the disagreement with the utility of content warnings in preventing people from being exposed to content relating to eating disorders could be because eating disorders are not an identified problem in this part of the world. In summary, the findings on the utility of content warnings disagreed with Bridgland, Green, Oulton and Takarangi (2019), who found that content warnings had trivially small effects on arousal levels when participants viewed photos.

The study also found that there is no significant difference in the perspectives of social media users on the utility of content warnings by Big Tech based on gender. The homogeneity of opinion of respondents irrespective of gender indicates that both sexes have similar perspectives on the utility of content warnings. In the same vein, the study found no significant difference in the perspectives of social media users based on their age. This indicates that there is homogeneity of opinion among social media users on the utility of content warnings by Big Tech irrespective of the age of users. Both young and old alike consider content warnings as providing some utility value, and equally took exception to their utility in the areas of children’s access and avoidance of content on eating disorders. The findings agreed with the Oxfam (2020) policy guidelines on social media use, which do not discriminate on the basis of the age or sex of social media users.

CONCLUSION

Content warnings can provide some utility value to social media users, especially in avoiding harmful and distressing content on the social media. These users, irrespective of differences in sex and age, believe in this utility. They also agreed that this utility does not extend to children’s access to the social media, nor does it include avoidance of content on eating disorders.

RECOMMENDATIONS

In view of the findings of the study, it is recommended as follows:

  1. Content warnings should continue to be included as a requirement for Big Tech to guard against unwarranted exposure of users to unwanted content;
  2. Big Tech should improve their apps to guard against underage exposure to adult content on the social media;
  3. There is no cultural universal; Big Tech should consider African values in placing limitations on content.

REFERENCES

  1. Adegboyega, L. (2020). Influence of social media on the social behavior of students as viewed by primary school teachers in Kwara State, Nigeria. Mimbar Sekolah Dasar, 7(1), 43- 53. https://doi.org/10.17509/mimbar-sd.v7i1.23479.
  2. Bellet, B. W., Jones, P. J., McNally, R. J. (2018). Trigger warning: Empirical evidence ahead. Journal of Behavior Therapy and Experimental Psychiatry, 61, 134–141.
  3. Bridgland, V. M., Green, D. M., Oulton, J. M., Takarangi, M. K. (2019). Expecting the worst: Investigating the effects of trigger warnings on reactions to ambiguously themed photos. Journal of Experimental Psychology: Applied, 25, 602–617.
  4. Bridgland, V., Jones, P. J. & Bellet, B. W. (2022). A meta-analysis of the effects of trigger warnings, content warnings, and content notes. August 23. https://doi.org/10.31219/osf.io/qav9m
  5. Bruce M. J. (2017). Does trauma centrality predict trigger warning use? Physiological responses to using a trigger warning. Poster presented at the 89th annual meeting of the Midwestern Psychological Association, Chicago, IL.
  6. Byman, D. L., Gao, C., Meserole, C. & Subrahmanian, V. S. (2021). Deep fakes and international conflict. UK: The Brookings Institution.
  7. Charles, A., Hare-Duke, L., Nudds, H., Franklin, D., Llewellyn-Beardsley, J. & Rennick-Egglestone, S. (2022). Typology of content warnings and trigger warnings: Systematic review. PLoS ONE 17(5), e0266722. doi:10.1371/journal.pone.0266722
  8. Dourish, P. (2001). Seeking a foundation for context: Aware computing. Human-Computer Interaction, 15(2-1), 229-241. DOI: 10.1207/S15327051HCI16234_07
  9. Goodwin, D. K. (2013). The bully pulpit: Theodore Roosevelt, William Howard Taft, and the golden age of journalism. New York:  Simon & Schuster, p. 448.
  10. Hirsh-Pasek, K & Blinkoff, E. (2022). Encouraging self-harm to be criminalised in Online Safety Bill. Technology & Innovation, 6(7), 38-46.
  11. Howard, P. N. & Parks, M. R. (2012). Social media and political change: Capacity, constraint, and consequence. Journal of Communication, 62(2). DOI: 10.1111/j.1460-2466.2012.01626.x
  12. Jones, P. J., Bellet, B. W. & McNally, R. J.  (2020). Helping or Harming? The Effect of Trigger Warnings on Individuals with Trauma Histories, 8(5), https://doi.org/10.1177/2167702620921341
  13. Kaplan, A. & Haenlein, M. (2010). The fairyland of second life: About virtual social worlds and how to use them. Business Horizon, 52(26), 563-572. DOI: 10.1016/j.bushor.2009.07.002
  14. Mann, J. (2022). Big Tech lobbyists are calling on the Supreme Court to block a Texas anti-censorship law, warning it would open the floodgates to extremist content. Insider Inc., May 14.
  15. Mannix, R. K. (2022). Fear extinction in rats: Implications for human brain imaging and anxiety disorders. Biological Psychology, 73, 61–71.
  16. Nicole, E. (2007). The benefits of Facebook “Friends”: Social capital and college students’ use of online social network sites. Journal of Computer-Mediated Communication, 4(7), 68-77.
  17. Nowicki, B. L.  (2019). Mechanisms of fear extinction. Molecular Psychiatry, 12, 120–150.
  18. Ogunwuyi, A. O. (2014). Research design. In O. Oluokun, J. O. Adewuyi & G. O. Owoyebi  (Eds.). Fundamentals of educational research (pp. 22-35). Ibadan: KingDave Publishers.
  19. Onyeme, A. C. (2021). Some ungrammatical aspects of good writing. The student-Teacher, 5, magazine of the Teaching Practice Unit, Federal College of Education Technical Umunze, Anambra State.
  20. Oxfam International (2020). One Oxfam digital safeguarding policy. Nairobi, Kenya: Oxfam International.
  21. Samson, M., Strange, D., Garry, M. (2019). Trigger warnings are trivially helpful at reducing negative affect, intrusive thoughts, and avoidance. Clinical Psychological Science, 7, 778–793.
  22. University of Nottingham (2022). Experts develop a common language for trigger and content warnings. NEON study on online mental health recovery narratives, PLoS ONE, May 4.
  23. Vallance, C. & McCallum, S. (2022). Online safety bill: Plan to make big tech remove harmful content axed. BBC News, November 28.
