Visual Elements in Digital Technology Platforms and Visualized Communication in Spreading of False Information
Dr. Diala Edwin Lionel & Dr. Uzowuihe Bertha
Department of Languages and Humanities Alvan Ikoku Federal University of Education, Owerri
DOI: https://dx.doi.org/10.47772/IJRISS.2024.803154S
Received: 23 April 2024; Revised: 07 May 2024; Accepted: 13 May 2024; Published: 29 July 2024
ABSTRACT
Digital technology has demonstrated enormous benefits in the spread of information and in awareness creation. Digital media platforms have become pervasive, as has their use by netizens. Even more puzzling in several discourses is the near absence of cultural reinforcement and of the dynamics of existence, particularly in Nigeria. This represents a new frontier of challenge. The use of visual elements in digital technology platforms and visualized communication in spreading false information in Nigeria should be regulated, as this format is the one most used in promoting the practice. The major platforms used are WhatsApp, Facebook, YouTube and Instagram. The study utilized a survey design to obtain secondary data from the literature, and descriptive statistics were applied to responses gathered with a self-structured questionnaire. The study adopts Technological Determinism theory and Disruptive Technology theory to better understand netizens' behavioural patterns. Findings reveal that digital technology users spread false information to gain attention and that, although significant confidence is placed in digital media platforms, many netizens hide their real identities. The study found that false information spread through these platforms creates panic and tension, and that pseudo identities dominate because sharing personal information violates a long-standing culture of social-identity secrecy entrenched in customary traditions in many parts of Nigeria. It was also found that the emergence and continual spread of false information need to be addressed through societal value re-orientation for digital technology users, building their understanding and educating the masses. The spread of false information has given rise to insecurity in Nigeria; several unknown armed groups, tribal mercenaries and religious fundamentalists are evidence of banditry and insecurity in the country. Beyond accounting for the overt and covert cultural distinctions of digital technology users, the study concludes that software engineers in Nigeria need to develop modified software and re-categorize bot accounts, and chat-bots especially, into algorithmic templates because of the socio-psychological peculiarities of the clime. The paper recommends valuing and strengthening digital technology platforms to reduce the spread of false information in Nigeria.
Keywords: Digital Technology Platforms, Spreading of Information, Visual Elements, Visualized Communication
INTRODUCTION
Digital technology platforms have remained indispensable devices for information dissemination and communication. Digital technologies are examples of internet platforms that facilitate users' sharing and spreading of information. They are platforms that enable participation, conversation, sharing, collaboration and linkage, and they are possibly among the quickest means of disseminating information. Nwabueze (2014) posits that digital technologies have proved to be vital channels of mass enlightenment and mobilisation in society. The use of digital technology platforms in Nigeria has been driven by the adoption of smart mobile phones, and most Nigerians regard them as indispensable tools for information sharing and dissemination.
Consistent with this, Uzuegbunam (2020) notes that Nigeria is among the largest mobile markets in Africa. This has enabled digital technology users to post numerous stories whether or not the facts are right. Most Nigerians are active users of platforms such as Facebook, Twitter, Instagram, WhatsApp and a host of others where stories go viral.
Much false information thrives on digital technology platforms. Joan and Baptista (2020) assert that false information refers to any form of online disinformation, with totally or partially false content, created intentionally to manipulate a specific audience through a format that imitates news or reports, and through false information that may or may not be associated with real events, in order to attract readers' attention and persuade them to believe in falsehood. This explains that the motive behind false information is the purposeful intention to deceive and to create false images and impressions. Insecurity in Nigeria has been attributed, in part, to the sharing and dissemination of false information.
Nigeria is facing a serious challenge largely caused by the heinous activities of bandits and unknown gunmen, who destroy public facilities, kill security operatives and create an atmosphere of fear. Digital technology platforms carry posts about the activities of bandits and unknown gunmen that are designed to ignite certain feelings. Chukwuere and Onyebukwa (2018) state that millions of digital technology users are currently connected to events happening in Nigeria, while some use this medium to promote false information. This study therefore seeks to determine how digital technology is used to propagate false information and its influence on insecurity in Nigeria.
Statement of the Problem
The arrival of digital technology platforms in Nigeria is a welcome development, since they allow people to communicate and to share ideas and information. They have been accepted and adopted as powerful means of communication and social relationship. However, they also propagate false information that threatens the security of the country. Chukwuere and Onyebukwa (2018) observe that such use creates terror through false postings intended to promote propaganda and fear. Because the platforms are not regulated, people who are not well equipped to differentiate between false information and real news fall victim to it.
Several studies have revolved around the phenomenon of false information, its risk to society, and explanations for the spread of fake information, such as the works of Dentith (2017), Roozenbeek and Linden (2018), and Egelhofer and Lecheler (2019). Building on these, this study examines the demographic profile of the persons most likely involved in spreading false information, the tempo at which they spread it, and the nature of the information they frequently spread. Manipulation and misinformation of the public through digital platforms has harmed security in Nigeria at an unprecedented level. The findings seek to determine the role of false information in the escalation of security challenges in the country and will also assist relevant stakeholders in regulating the content of stories and in making informed decisions.
Purpose of the Study
To determine visual elements in digital technology platforms and visualized communication in the spreading of false information.
The specific objectives are to:
- examine the spreading of false information;
- investigate the role of Nigerian audiences in the spreading of false information;
- determine the impacts of false information reports on subscribers of digital technology platforms.
Formulation of Research Questions
The research questions were formulated following the evaluation of the literature; the questions addressed are:
- What are the factors responsible for spreading false information in digital technology platforms?
- What are the roles of Nigerian audiences in the spreading of false information?
- What are the existing challenges of false information?
- What are the existing techniques used to identify false information in digital technology platforms?
- What are the impacts of false information reports on subscribers of digital technology platforms?
Sources of information
The researchers extensively searched journal and conference research articles, books, and magazines as sources of data from which to extract relevant material. They relied on major scientific databases and digital libraries, including Google Scholar, SpringerLink, ScienceDirect, and Scopus.
LITERATURE REVIEW
Digital Technology Platforms and False Information Reports
Previously, the media landscape was dominated by traditional media forms that provided information, entertainment and education. Currently, digital technology platforms have changed information dissemination entirely: people connect and share ideas on them. The advent of digital technology platforms has brought considerable progress in communication, enabling people to interact, communicate and share ideas. Digital technology platforms, according to Nwabueze (2014), are the web-based technologies that transform broadcast media monologues into social media dialogues. They allow involvement and social association with the numerous people who use them. Across digital technology platforms, the spread of false information has been on the rise.
False information, according to Thorsten, Lena, Svenja and Tim (2019), can be classified into misleading false news, fabricated false news, intentional false news and negligent false news. Muigai (2019) notes that inventors of false information fabricate stories to misinform, sway opinions, spread propaganda, incite hate and alter perceptions through opinions, predictions and blatantly fabricated narratives.
Viewpoints on Visualization
Painting dates back to pre-historic times, and the ancient periods bear witness to humankind's instinctive and inherent ability to generate visualizations to support generational knowledge transfer in the pre-literate era. Christianson (2012) lists over 100 diagrams that changed the world, beginning with the Chauvet Cave drawings (30,000 BC), through the Aztec calendar (1479), the Periodic Table (1869) and the World Wide Web (1989), and finishing with the design of the iPod in 2001. These visualizations represent innovations in art, astronomy, cartography, engineering, chemistry, mathematics, history and communications. Visualizations provide graphical objects whose characteristics make them a powerful communication mechanism. Burkhard (2004) notes that images are pre-attentive: they are processed before the conscious mind starts to pay attention, processed before text, and require less effort to comprehend than textual descriptions. When using visualization, the viewer moves through the stages of perception, interpretation and comprehension, each stage relying on the earlier one and, as Kirk (2016) asserts, on pre-existing knowledge and experience.
Visual methods create a powerful medium that positions general communicative frameworks. Gavrilova, Alsufyev and Grinberg (2017) add that any study of knowledge visualization is beneficial considering its relationship with information revelation. Hansen and Johnson (2011) posit that visualization is the transformation of the symbolic into the geometric, achieved by changing the dimensionality of the data. Visual representations combine marks and attributes, including appearance properties such as size, colour and position. Skilful combinations of these marks and attributes make possible suitable representations of the entities and relationships, patterns, trends, clusters and outliers within the data.
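As a minimal illustration of the marks and attributes just described, the following sketch (assuming Python with matplotlib; the data values are hypothetical) encodes three variables at once through position, mark size and colour:

```python
# Illustrative sketch only: encoding data with visual marks and attributes --
# position (x, y), size and colour -- using matplotlib. Data are hypothetical.
import matplotlib.pyplot as plt

# Each record: (x, y, magnitude, category_value)
records = [(1, 4, 30, 0.2), (2, 7, 80, 0.5), (3, 2, 150, 0.8), (4, 9, 60, 0.3)]

xs = [r[0] for r in records]
ys = [r[1] for r in records]
sizes = [r[2] for r in records]      # mark size encodes magnitude
colours = [r[3] for r in records]    # colour encodes a third variable

plt.scatter(xs, ys, s=sizes, c=colours, cmap="viridis")
plt.colorbar(label="category value")
plt.xlabel("x position")
plt.ylabel("y position")
plt.title("Position, size and colour as visual attributes")
plt.show()
```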
Data visualization, in this context, is the use of visual representations of data on factors such as illiteracy, porous borders, and non-compliance with the rule of law that contribute to insecurity in the country, and on its consequences, which include underdevelopment, poverty, insurgency, militancy and drug abuse. Such representation adds meaning and supports the transition of data into information that facilitates understanding.
Information visualization, correspondingly, is the use of computer-supported, interactive, visual representations of abstract, non-physically based data on the root causes of banditry and insecurity in Nigeria, including terrorism, kidnapping, ritual killings, cultism, corruption, injustice, poverty, inflation and bad governance, with the aim of amplifying cognition.
Visualization is an essential part of knowledge mobilization, which Meyer (2010) describes as a field of research focused on the creation and transfer of knowledge, specifically the use of visual representations to support and facilitate the communication of knowledge. Visualization has the distinctive prospect of facilitating knowledge sharing beyond words and text, bridging disciplinary knowledge gaps and making it relevant for exposing the activities of banditry and insecurity.
One key challenge for visualization is the spread of false information on banditry and insecurity in Nigeria. Recognition and detection of such false information remains an intricate, unresolved issue. Detecting it through visualization presents unique characteristics and challenges that make finding a solution non-trivial. Visualization has been leveraged to deceive people through the creation and spread of false content on banditry and insecurity, and detection remains a huge challenge primarily because the content is drawn and pictured to closely resemble the truth, so its veracity is often hard to determine. Recognizing the full range of visualization tools for supporting knowledge presentation and sharing on banditry and insecurity therefore remains a distant goal. Bresciani and Eppler (2015) identify the overall issues relating to visualization as risks and pitfalls, consisting of cognitive, emotional and social risks that can be introduced by the designer's choices or by the users' interpretation of the visualized image.
False Information Spreading
False information has been in existence for a very long time. It is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, and the term has evolved over time to become synonymous with the spread of false information generally (Cooke, 2017). Nakov (2020) reports that false information means different things to different people, while Allcott and Gentzkow (2017) define it as news articles that are intentionally and verifiably false and could mislead readers.
False information, disinformation and misinformation have become a great scourge. "Misinformation is worse than any epidemic: it spreads at the speed of lightning globally and proves deadly when it reinforces misplaced personal bias against all trustworthy evidence." Although online social networks, a form of digital technology platform, have advanced the ease of real-time information, their reach and considerable use have widened the spread of false information by increasing the speed and scope at which it can travel. False information refers to the manipulation of information carried out through the production of false content or the distortion of true information.
False information is spread intentionally by actors who are aware that the information is false. Earlier research principally dealt with textual forms of misinformation, while visual and multi-modal types such as news images, memes and videos have received less attention. False information in this study refers to the deliberate spread of false news in the form of news articles and stories intended to destroy an image. Gentzkow (2017) describes it as news articles that are intentionally and verifiably fake and misleading to readers. The spread of false information, according to Cooke (2017), has become synonymous with fake news. Langin (2018) adds that fake news spreads faster than real news and reaches a wider audience. False information employs several devices, such as visual information media, to draw an audience.
Digital technology platforms have therefore become an influential channel for false information dissemination (Sharma, Qian, Jiang, Ruchansky, Zhang and Liu, 2019; Shu, Sliva, Wang, Tang and Liu, 2017). False information has a significant impact on society and is manipulated with false content, which is easier to generate and harder to detect as disinformation actors change their tactics (Kumar and Shah, 2018).
Falsehood spreads significantly faster, deeper and more widely than the truth in every category of information, and the effects are most noticeable for false news about terrorism and disasters (Vosoughi, Roy and Aral, 2018).
Categorizing Visual False Information
False information is characterised by malicious intent. According to Egelhofer and Lecheler (2019), providing empirical evidence of malicious intent and clearly identifying false information are difficult in a textual context, since falsehoods can be produced unconsciously. Constructing false visual content, by contrast, entails some kind of conscious action and, in some cases, particular skills. This makes the term false information more applicable to the study of visuals. Chadwick and Stanyer (2022) advocate that, instead of focusing on the misperception caused by false content, deception can function as a bridging concept that puts a stronger focus on the origins and intentionality of mediated falsehoods.
Visual false information is classified along two mutually dependent dimensions:
(a) Intensity of richness in audio-visual modality, that is, whether still or moving images are used; and
(b) Intensity of sophistication, that is, whether manipulation occurs using low-intensity or high-intensity creation techniques.
Low Sophistication
False information often emerges in combination with a photograph that creates the appearance of ordinary news. If the visual is clearly referred to in the text and exploited as proof for a false claim, it is referred to as visual false information. Multiple cases of decontextualised images have occurred in connection with banditry, where pictures are used to illustrate or provide evidence for false claims about the activities of bandits. Mislabelled rather than manipulated images comprise the largest part of banditry-related false information identified in fact-checking articles. The same occurs with moving images: video footage is simply mischaracterised with a different date or location.
Low-sophistication false information makes use of undemanding and inexpensive editing techniques such as video filters, speeding up or slowing down footage, or even employing lookalikes. Brennen, Simon and Nielsen (2021) state that although visual false information with a low intensity of sophistication appears to be more common and is easier to create, it has been the subject of only a few empirical studies.
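One hedged illustration of how fact-checkers might flag a decontextualised (reused) image is perceptual hashing: if a viral picture is visually near-identical to an older archived photograph, it is likely being re-circulated under a new, false context. The sketch below assumes the third-party Pillow and imagehash packages; the file names and threshold are hypothetical placeholders, not the authors' method.

```python
# Illustrative sketch only: flagging a possibly reused (decontextualised) image
# by comparing perceptual hashes against an archive of known images.
from PIL import Image
import imagehash

def looks_like_reuse(candidate_path, archive_paths, threshold=8):
    """Return True if the candidate image is perceptually close to any archived image."""
    candidate_hash = imagehash.average_hash(Image.open(candidate_path))
    for path in archive_paths:
        archived_hash = imagehash.average_hash(Image.open(path))
        # Hash difference approximates visual distance; small values suggest reuse.
        if candidate_hash - archived_hash <= threshold:
            return True
    return False

# Hypothetical usage:
# looks_like_reuse("viral_post.jpg", ["2014_incident.jpg", "stock_photo.jpg"])
```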
High Sophistication
The emergence of deepfakes has recently given communication scholars the opportunity to focus more strongly on the visual aspects of false information research. Deepfakes operate at a high intensity of technological sophistication, as they use artificial intelligence to fake someone's entire audio-visual representation. If both the video footage and the person's voice are artificially generated to create a virtual performance, this is classified as the richest form of visual false information.
Vaccari and Chadwick (2020) note that deepfakes have created the opportunity to test the effects of manipulated visuals, whereas research on photoshopped images has remained limited. Hameleers et al. (2020) and Kasra, Shen and O'Brien (2018) point out that sophisticated manipulations of still images, also referred to as 'composition' or 'doctoring', have been possible since the emergence of Photoshop in the 1990s. Recent examples include images of banditry, such as the cases of Boko Haram in Borno, Adamawa and Zamfara States and kidnapping in eastern Nigeria, where image composition was used to make criminal disaster scenes look even more dramatic by photoshopping a burning fire into the background of a normal-looking landscape.
Finally, false infographics and data visualisations constitute visual false information when data are distorted or presented in a way that either hides or exaggerates certain pertinent parts.
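A minimal sketch of one such distortion technique, assuming Python with matplotlib and purely hypothetical figures: the same two values are plotted twice, once on a full axis and once on a truncated axis that exaggerates a negligible difference.

```python
# Illustrative sketch: how a truncated y-axis exaggerates a small difference,
# one common way data visualisations mislead. Values are hypothetical.
import matplotlib.pyplot as plt

labels = ["Group A", "Group B"]
values = [98, 100]   # nearly identical values

fig, (honest, distorted) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 110)          # full scale: bars look almost equal
honest.set_title("Full axis")

distorted.bar(labels, values)
distorted.set_ylim(97, 100.5)    # truncated scale: difference looks dramatic
distorted.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```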
False Information as Satire
False information reports are like satire. They display false correlation: their headlines, visuals or captions do not agree with the content.
- They contain misleading content and use misleading information to frame up news. Sometimes genuine sources are impersonated with false, framed-up sources.
- Content is manipulated to deceive.
- They contain fabricated content designed to deceive and cause harm.
From the foregoing, it is clear that false information reports take different forms. This understanding is needed when discussing false information reports as a societal vice.
Specific Tactics to Spread Fake Information
Bots, People like You, Cookies, Trolls, and Microtargeting
Fake information is spread in ways designed to reach audiences. Sources and tactics for spreading false information include bots and flesh-and-blood people, cookies, trolls and microtargeting.
Bots and Propagation of fake information
Bots are used to spread fake information: they help propagate it and inflate its apparent popularity. Facebook, Twitter and Instagram have become channels for its spread. Bots spread false information by searching for and retrieving information that has not yet been confirmed or verified on the web, and by posting to social media sites uninterruptedly, using trending topics and hashtags as the main strategies to reach a broader audience. Bots spread false information in two ways: they keep "stating" or tweeting false information items, and they use the same pieces of false information to reply to or comment on the postings of real social media users.
Bots are not physical entities; they are generated by people with computer programming skills, reside on social media platforms and consist of nothing but code, that is, computer instructions. Bots are computer algorithms that work in online social network sites to carry out tasks autonomously and repeatedly. They replicate the behaviour of human beings in a social network, interacting with other users and sharing information and messages. Because of the algorithms behind their logic, bots can learn from reaction patterns how to respond to certain situations; that is, they possess artificial intelligence (AI).
Artificial intelligence permits bots to replicate internet users' behaviour, which assists the spread of fake information. For example, on Twitter, bots are capable of emulating social interactions that make them seem like regular people.
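The behavioural signature described above (posting near-continuously and restating the same item) can be operationalised as a simple heuristic. The sketch below is an illustration only, with hypothetical function names and thresholds; it is not tied to any real platform API.

```python
# Illustrative sketch: a naive heuristic for the bot behaviour described above --
# posting the same item uninterruptedly at machine-like pace. Thresholds are
# hypothetical and would need tuning against real data.
from collections import Counter

def looks_like_amplification_bot(posts, min_posts=50, max_gap_seconds=60, repeat_ratio=0.5):
    """posts: list of (timestamp, text) pairs for one account, timestamps as datetime objects."""
    if len(posts) < min_posts:
        return False
    posts = sorted(posts, key=lambda p: p[0])
    gaps = [(b[0] - a[0]).total_seconds() for a, b in zip(posts, posts[1:])]
    machine_pace = sum(g <= max_gap_seconds for g in gaps) / len(gaps)
    top_text, top_count = Counter(text for _, text in posts).most_common(1)[0]
    repetition = top_count / len(posts)
    # Flag accounts that post near-continuously and keep restating the same item.
    return machine_pace > 0.8 and repetition >= repeat_ratio
```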
How Do Bots Help in the Propagation of Fake News?
Bots' tactics achieve their objective because typical social media users have a propensity to believe what they see or what is shared by others without questioning it: likes on Facebook, retweets on Twitter, trending hashtags, among others.
Specific Types of People and Propagation of Fake Information
Fake information websites target audiences and endeavour to send false information to the people most likely to respond to it. They use social media analytics: analytics firms use the information provided by cookies to identify interest groups and find a responsive audience for the messages.
Cookies and Propagation of Fake Information
Cookies are used to track people who visit websites, build personality profiles, and show them the false information content to which they are most receptive. Cookies are files that websites install on the computer to save people's preferences and remember what they look at, shop for, and so on. When users agree to let a website they visit install a cookie in their app or web browser and then make particular selections, such as their preferred language, the cookie tells the website to use the same settings on the next visit. The website recognises the returning user because the cookie saved in the browser carries a unique ID.
Cookies on the computer track people's actions on the web across all the websites they visit. Some of the stored cookies even come from websites other than the ones visited. Each website configured to save these "third-party" cookies on the computer records that the browser, identified by the unique ID of its cookies, has visited it. These cookies are referred to as tracking cookies, or trackers.
Facebook, Google, and other websites that provide trackers analyze the websites people have visited and what they did while viewing them. The websites you visit, combined with the actions you take on them, give valuable information about you to social media analytics firms. From these data, the firms build models to predict your interests and to select and deliver the kinds of messages you are most likely to react to positively.
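The mechanism just described, a unique cookie ID that lets a tracker recognise a returning browser and accumulate a behavioural profile, can be shown with a small sketch. All names and URLs here are hypothetical stand-ins, not any real tracker's implementation.

```python
# Minimal sketch: how a unique cookie ID links page views into one profile.
import uuid

profiles = {}   # tracker-side storage: cookie ID -> list of pages visited

def handle_request(cookie_id, page):
    """Simulate one page view carrying the browser's tracking cookie (if any)."""
    if cookie_id is None:
        cookie_id = str(uuid.uuid4())      # first visit: issue a unique ID via Set-Cookie
    profiles.setdefault(cookie_id, []).append(page)
    return cookie_id                       # the browser stores and resends this ID

# Hypothetical usage: the same ID comes back on every later visit,
# so the tracker can link page views into one behavioural profile.
cid = handle_request(None, "news-site.example/politics")
handle_request(cid, "shop.example/shoes")
print(profiles[cid])   # ['news-site.example/politics', 'shop.example/shoes']
```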
Trolls and the Propagation of Fake News
Trolls are people who set up social media accounts for the sole purpose of spreading fake news and fanning its flames. Trolls, in this study, refers to human beings who hold accounts on social media platforms for the purpose of generating comments that argue with people and insult other users. They attempt to damage the credibility of ideas they do not like and to intimidate the people who post them, and they support false information stories they are ideologically aligned with.
Algorithms’ Roles and How They Contribute To Echo Chambers and Filter Bubbles
Filter Bubbles
A filter bubble is an algorithmic bias that skews or restricts the information an individual user sees on the internet. The bias is caused by the weighted algorithms that search engines and social media sites use to personalize the user experience (UX). Web search results and social media feeds are the most common examples of online filter bubbles, but filter bubbles now exist beyond those platforms: algorithms dictate the suggested movies and series on streaming sites, songs on Spotify, videos on YouTube, and even the content users see first on some news sites.
Filter bubble definition
Internet activist Eli Pariser coined the term "filter bubble" around 2010. He observed that search engines, social media sites, and other platforms use algorithms to personalize content based on a person's previous activity, usually filtering out content that presents dissenting views or opinions.
As a result of the filter bubble, users mostly see posts reinforcing their existing beliefs; content that could challenge those beliefs is simply not there. That is why the filter bubble skews the information people encounter online and can misrepresent their view of reality.
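The personalization step described above can be caricatured in a few lines: items matching topics the user engaged with before are ranked higher, so dissenting content gradually disappears from the feed. This is a toy sketch with hypothetical data, not any platform's actual ranking algorithm.

```python
# Illustrative sketch only: a toy personalisation step of the kind described above.
def personalise(feed, past_clicks):
    """Rank feed items by whether their topic matches something the user clicked before."""
    clicked_topics = {item["topic"] for item in past_clicks}
    return sorted(feed, key=lambda item: item["topic"] in clicked_topics, reverse=True)

past_clicks = [{"topic": "recipes"}, {"topic": "recipes"}, {"topic": "football"}]
feed = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "recipes"},
    {"id": 3, "topic": "football"},
]
print(personalise(feed, past_clicks))   # recipes and football float to the top
```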
Impact of Filter Bubbles
A filter bubble has both positive and negative impacts:
The positive impact of filter bubbles
Filtering information based on preferences exists for a reason – filter bubbles have some clear benefits:
Personalization. Personalized content can improve the overall user experience. You are more likely to find relevant information and engaging content when it’s tailored based on your past behavior. For example, if you love cooking and get mostly recipe videos on TikTok, it’s a win-win for you and TikTok.
Improved efficiency. Personalized content can also help you find what you’re looking for faster. For example, Google search can show you news about your city or state even when you don’t specify it in the search query.
Increased engagement. If you enjoy the content on the platform, you’re more likely to continue using it instead of hopping through sites or looking for alternatives. It benefits you, your social circle, and the platform itself.
Reduced information overload. A filter bubble can help manage the overwhelming amount of information online by narrowing down the content to what is most relevant and valuable for you. For example, if you’re a jazz fan, Spotify won’t be suggesting or auto-playing metal music.
Personal empowerment. A filter bubble can give you a sense of control over the content you see, as it prioritizes the topics that matter most to you.
Negative Impact of Filter Bubbles
Despite the positive impact, a filter bubble can distort the way we see reality:
- Limited exposure to diverse perspectives. A filter bubble can create echo chambers. You only come across content reinforcing your beliefs and opinions, making engaging in conversations with people who hold contrasting views more challenging. It can contribute to a lack of empathy and understanding between different groups, potentially leading to social division and polarization.
- Cognitive biases. When algorithms create a filter bubble based on flawed data or biased assumptions, they can strengthen existing prejudices and lead to unfair or discriminatory results. A filter bubble can also encourage confirmation bias, the tendency to seek out and interpret information in a way that confirms pre-existing beliefs. Critical thinking suffers, and an inability to consider alternative viewpoints builds up.
- Lack of serendipity. A filter bubble limits the opportunities to discover new and unexpected content, limiting creativity and innovation.
- Propaganda and manipulation. Sometimes a filter bubble can be used to manipulate your behavior or opinions by promoting content that aligns with a particular agenda. The Cambridge Analytica scandal is the most famous example.
- Fake news and misinformation. A filter bubble can speed up the spread of fake news and misinformation. People are less likely to encounter information that challenges their (false) beliefs, leading to impaired critical thinking and an inability to distinguish between accurate and inaccurate information. This can have negative consequences for individuals and society as a whole.
- Reduced exposure to important information. A filter bubble prioritizes content that is entertaining or popular over content that is important or relevant, leading to a lack of exposure to critical information, such as uncomfortable news or current events.
- Privacy concerns. Platforms collect and analyze your data to put you in a filter bubble. This raises privacy concerns, especially when you are unaware of or do not consent to the use of your data.
Avoiding Filter Bubbles
- Seek out content diversity
- Engage in active search and discovery. Rather than relying on algorithms to present information to you, actively seek out information and discover new sources of content:
- Visit news sites that don’t lean toward one ideological or political spectrum.
- Follow people and join groups outside of your usual information environment.
- Check sources from different political affiliations, cultures, and countries.
- Evaluate and Fact-Check Your Sources
- Take Control of Your Browsing Experience
- Disable customization and personalization.
- Browse incognito and avoid logging into your accounts.
- Block third-party cookies.
- Opt for a private search engine.
- Switch between multiple search engines.
- Use ad blockers.
Echo Chamber
An echo chamber, literally, is a hollow enclosure used to produce reverberation, usually for recording purposes; it is enclosed in highly sound-reflective surfaces, and echo capture is maximized through the use of directional microphones pointed away from the speakers. Figuratively, an echo chamber lacks information diversity because information sources are restricted: individuals are exposed only to information from like-minded individuals. Echo chambers are commonly characterized by ideological segregation (the tendency of individuals to associate with others who share their viewpoints) and by partisan polarization (the adoption of more extreme views). They are associated with the fragmentation of users into ideologically narrow groups and with "segregation by interest or opinion [that] will … increase political polarization" (Dubois and Blank, 2018) and "foster social extremism" (Barberá, 2015).
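Ideological segregation of this kind is often summarised with a simple homophily score: the share of social ties that connect users holding the same declared viewpoint. The sketch below is a hedged illustration with hypothetical data, not a measure used in the cited studies.

```python
# Hedged sketch: a simple homophily score for ideological segregation --
# the share of ties connecting users with the same viewpoint label.
def segregation_index(edges, viewpoint):
    """edges: list of (user_a, user_b) ties; viewpoint: dict mapping user -> label."""
    if not edges:
        return 0.0
    same = sum(viewpoint[a] == viewpoint[b] for a, b in edges)
    return same / len(edges)

viewpoint = {"u1": "A", "u2": "A", "u3": "B", "u4": "A"}
edges = [("u1", "u2"), ("u1", "u4"), ("u2", "u4"), ("u1", "u3")]
print(segregation_index(edges, viewpoint))   # 0.75: most ties are like-minded
```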
Virtual Private Network (VPN)
Use a VPN to change your IP address and avoid tracking. Websites will not be able to connect your IP address to your previous activity on the site, allowing you to see content without a filter bubble in the way. A VPN can also make it more difficult for algorithms to create a personalized information environment based on your browsing or search history.
Moreover, NordVPN’s Threat Protection feature blocks trackers, further protecting your privacy and preventing filter bubbles. It also blocks ads and malware downloads so that you can browse online with confidence.
Impact of Visual False Information
Considerable research has been conducted to review and study the false information problem in digital technology platforms. Some studies focus not only on false information but also differentiate between false information and rumour (Bondielli and Marcelloni, 2019; Meel and Vishwakarma, 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni, 2019), a crowd intelligence perspective (Guo, Ding, Yao, Liang and Yu, 2020), or a knowledge-based perspective (Zhou and Zafarani, 2020). The present study covers all of these approaches. False information easily reaches and impacts a large number of users within a short time; this study therefore focuses primarily on understanding the false information problem as it relates to the banditry and insecurity challenges in the country.
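To make the machine-learning perspective concrete, the following is a minimal baseline sketch (not the approach of the studies cited above): a bag-of-words classifier over post text, assuming scikit-learn is available; the example texts and labels are hypothetical.

```python
# Illustrative sketch only: a minimal text-based false-information classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Gunmen attack confirmed by police spokesperson with official statement",
    "SHOCKING video shows what they are hiding from you, share before deleted",
    "Governor commissions new road project in state capital",
    "Secret plot exposed, forward to every group now",
]
labels = [0, 1, 0, 1]   # 0 = credible, 1 = false information (hypothetical labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predict on a new, unseen (hypothetical) post.
print(model.predict(["Unverified claim, share now before it is removed"]))
```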
A number of possible effects are inherent to visual false information:
- The defective and flawed processing of information through false visuals is unsettling. The widespread supposition that images do not lie makes visual false information particularly persuasive. Images create a physical link between a photographed object and its spectator; this indexicality of visuals makes them a useful device for framing ideological messages, while their content is questioned less critically by the audience. Kasra et al. (2018) affirm that this is why the general public is moderately bad at detecting visual false information online, as people fail to question its authenticity. Visual false information thus leads to defective processing of information precisely because visuals commonly appear trustworthy and true-to-life.
- Visuals are processed differently from text, not only on a cognitive but also on an emotional level. Powell et al. (2015) note that images in isolation create stronger framing effects than text alone and influence people's opinions and intended behaviour even without accompanying text. Emotions play a mediating role in these effects; Geise (2017) identifies the emotions most commonly measured in visual framing studies as sympathy, fear and anger. Feelings of uncertainty, confusion or frustration while seeking information can turn into affective symptoms of anxiety.
- Connected to images' commonly perceived believability is the problem of the misperceptions that visual false information may cause and of how they can be corrected.
- Another reason why visual false information is considered a problem is its engagement with social media and the ease with which it is shared; this engagement is another primary effect of visual false information on citizens.
- There is a lack of trust in visuals. Giotta (2020) states that digital manipulations of news photographs published in conventional newspapers have in the past led to debates about the general trustworthiness of pictures.
- Mistaking the real for the fake is another unexpected effect that visual false information may have on citizens, to the extent that they start thinking everything is fake. This proposition is connected to the false information label as illustrated by Egelhofer and Lecheler (2019).
- National security decision-making is also affected, particularly the identification of actual threats and the mobilization of resources to ensure the safety of lives and property.
Challenges Related to False Information
Several issues make false information in digital technology platforms a challenging problem. These issues are:
Ahmed (2021) and Dobber et al. (2021) suggest that these include:
- content-based issues (i.e., deceptive content that resembles the truth very closely);
- contextual issues (i.e., lack of user awareness and social bots as spreaders of fake content); and
- the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for false information detection).
- Fake information through videos can damage reputations.
- Fake information through videos can affect public attitudes.
- It has detrimental spillover effects.
- It is more likely to spread faster and deeper.
- It may yield greater effects that are particularly difficult to correct.
Addressing these issues is likely to be fraught with obstacles. Solutions could focus on:
(a) establishing whether something really occurred;
(b) identifying fake information in the hope of neutralizing the communication environment;
(c) in parallel, learning how to live with fake videos and understanding the threats they pose;
(d) increasing digital literacy in algorithmic spaces; and
(e) concentrating on how certain actors can mitigate the effects of fake videos.
Strategies for Curbing False Information Reports in Nigeria
Mosseri (2017) notes that false information reporting is harmful to the country: it grinds down public trust and threatens national security. The scholar further states that, from a telecommunication perspective, the majority of false information reports are financially motivated, and one effective approach to fighting them is to remove the economic incentives of the perpetrators of misleading information.
Measures should be taken to promote media and digital literacy about the issue of false information reports.
THEORETICAL FRAMEWORK
Two Theoretical Frameworks were applicable to this study: Technological Determinism Theory and Disruptive Technology Theory.
Technological Determinism Theory
The technological determinism theory, propounded by Marshall McLuhan, holds that technology determines and shapes activities in society. According to Griffin and Nwabueze (2014), citing McLuhan, changes in the modes of communication determine human existence, and innovations in technology always cause cultural change. Digital technology platforms such as the internet exemplify the theory's position that such innovations help change society: they are changing the structure of society and have completely changed the human communication system. The spread of false information reports on security problems across digital technology platforms is achieved through the internet. The theory is relevant to this study because digital technology platforms have a great impact on society and also influence people's reactions to news.
Disruptive Technology Theory
Disruptive Technology Theory describes the tendency of new innovations to challenge and alter the values and approaches that have defined a given activity. It was advanced by Clayton Christensen (1997) to explain the ways technologies shift marketers' thinking about reaching audiences for their goods and services. Berenger and Taha (2012) note that this sometimes results in flawed strategies that move marketers out of their comfort zones and away from an established customer base. Such technology tends to redefine the existing philosophy and strategy underpinning a given endeavour. Digital technology platforms have altered exposure to news and information, making it possible for people to access the media easily. Unlike the traditional media, where certain restrictions are placed on information gathering and dissemination, digital technology platforms have altered exposure patterns and media use.
METHODOLOGY
The study applied a historical-descriptive research design and used existing secondary data sources such as articles, dailies, journals, documents, books, and libraries on security issues. The design also involved digital technology platforms such as WhatsApp and Facebook: the questionnaire derived for the study was sent to respondents through WhatsApp, Facebook, Twitter and email accounts. The data generated contained pertinent questions designed to capture the demography of the Nigerians responsible for spreading false information on digital media platforms. The questionnaire was designed by the researchers to determine the role of Nigerian audiences, by demography, in the spread of false information and to examine how respondents react to it. Respondents were randomly selected, and the sample size was determined through the Yamane (1967) formula, as presented by Singh and Masuku (2014), which is considered an appropriate formula for selecting sample sizes in social science research.
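For clarity, the Yamane sample-size formula referred to above takes the following standard form; the worked figures in the comment are hypothetical, not the study's actual population.

```latex
% Yamane's formula for a finite population, where n is the sample size,
% N the population size, and e the level of precision (margin of error):
\[
  n = \frac{N}{1 + N e^{2}}
\]
% Worked example with hypothetical figures (N = 400, e = 0.05):
% n = 400 / (1 + 400 * 0.05^2) = 400 / 2 = 200
```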
DISCUSSION
The spread of false information reports is an ugly development that worsens the security challenge in the country. This affirms Joan and Baptista's (2020) observation that false information attracts readers' attention and persuades them to believe in falsehood.
The extreme anxiety that pervades Nigerian society continually provides additional platforms that aid the distribution of false information on the internet. Zhao, Zhao, Sano, Levy, Takayasu, Takayasu, Li, Wu and Havlin (2018) outline the worrying number of blogs that are directed exclusively towards spreading false information. The rate at which such blogs are created is rising greatly as more users search for ways to make money online, further motivated by the willingness of audiences to drive traffic to their pages. There is a prospect that some of the younger generation will continue to engage in creating and spreading false information for financial benefit.
It is necessary for the federal government to understand the difficulties that stand as barriers to curbing the spread of false information. Identifying the sites and their users is vital. False information reports are common on varied digital technology platforms, and the study finds a high degree of acceptance of reports on these platforms by users regardless of the negative influence such reports carry. False information reports contribute to the increase in bandit attacks.
The data produced from the field show that the greater part of the audience involved in spreading false information is aged 21 to 36. Beyond educating the Nigerians involved in the act, it is important to provide them with a sense of fulfilment through credible sources of income and a supportive environment.
CONCLUSION AND RECOMMENDATIONS
The study examined visual elements in digital technology platforms and visualized communication in the spreading of false information in Nigeria. It specifically examined the spread of false information in Nigeria and the people most likely to venture into creating and spreading it. The opportunities this creates have made the unethical role attractive to younger generations who are eager to become involved in lawlessness, bombings, killings, kidnappings, armed robbery and arson.
Recommendations
Addressing the demographic distribution of Nigerians involved in the spread of false information requires an account of their general behaviour. Resolving the problems generated by false information becomes possible with knowledge of Nigerians' demographic involvement, particularly by location, age and gender. The study therefore recommends that:
- There should be increased sensitization on false information reports across digital technology platforms.
- The federal government and relevant organisations should collate research papers/materials particular to demography analysis on the dissemination of false information in Nigeria and use results to identify the specific role of Nigerians in the spreading of false information based on the demography.
- Since the social system is involved in the creation and spread of false information, it is necessary to engage in proper investigations to expose this act.
- Relevant government agencies should recommend and pass into law the necessary bills that punish whoever spreads false information.
- Security agencies should monitor and harness digital technology platforms as avenues for gathering information on distress situations, as most people would rather upload a security breach on social media than call the police.
- The media regulating agencies should insist on professionalism from media houses and professionals in order to avoid their being used as instruments for the spread of false information.
REFERENCES
- Allcott H, Gentzkow M (2017) Digital technology platforms and False information in the 2016 election. J Econ Perspect 31(2):211–36. https://doi.org/10.1257/jep.31.2.211
- Berenger, R & Taha, M. (2012). Technology Disruption theory and Middle East Media. http://academia.edu. On July 15,2021.
- Bondielli A, Marcelloni F (2019) A survey on False information and rumour detection techniques. Inf Sci 497:38–55. https://doi.org/10.1016/j.ins.2019.05.035
- Bresciani, S., & Eppler, M. J. (2015). The pitfalls of visual representations : A review and classification of common errors made while designing and interpreting visualizations. Sage Open, 1–14. https://doi.org/10.1177/2158244015611451
- Burkhard, R. A. (2004). Learning from architects: The difference between knowledge visualization and information visualization. In Proceedings of the Eighth International Conference on Information Visualisation (IV’04) (pp. 519–524). IEEE Computer Society. https://doi.org/10.1109/IV.2004.1320194.
- Chadwick A and Stanyer J (2022) Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: toward a holistic framework. Communication Theory 32(1): 1–24.
- Cooke, N. A. (2017) Post-truth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age, Library Quarterly, Vol. 87, No. 3, July, pp. 211–221. https://doi.org/10.1086/692298
- Dentith, M. (2017) The problem of fake news, Public Reason, Vol. 8, No. 1-2, December, pp. 65-79.
- Dubois, E., and Blank, G. (2018). “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media,” Information, Communication & Society (21:5), pp. 729-745.
- Egelhofer, J. L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: A framework and research agenda. Annals of the International Communication Association, 43, 97–116. https://doi.org/10.1080/23808985.2019.1602782
- Gavrilova, T., Alsufyev, A., & Grinberg, E. (2017). Knowledge visualization : Critique of the St. Gallen School and an analysis of contemporary trends 1. Business Informatics, 3(41), 7–19. https://doi.org/10.17323/1998-0663.2017.3.7.19
- Gentzkow, H. A. (2017). Social Media and Fake News in the 2016 Election, Journal of Economic Perspectives, Vol. 31, No. 2, Spring, pp. 211–236.
- Guo B, Ding Y, Yao L, Liang Y, Yu Z (2020) The future of false information detection on Digital technology platforms : new perspectives and trends. ACM Comput Surv (CSUR) 53(4):1–36. https://doi.org/10.1145/3393880
- Hameleers M, Powell TE, Van Der Meer TG, Bos L (2020) A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via Digital technology platforms. Polit Commun 37(2):281–301. https://doi.org/10.1080/10584609.2019.1674979
- Hansen, C. D., & Johnson, C. R. (2011). Visualization handbook. Elsevier.
- Joan, A. & Baptista. (2020). Understanding False information Consumption: A Review. http://semanticscholar.org
- Kirk, A. (2016). Data visualisation: A handbook for data driven design. Sage.
- Kumar S, Shah N (2018) False information on web and Digital technology platforms: a survey. arXiv preprint arXiv:1804.08559
- Langin, K. (2018) “Fake news spread faster than true news on Twitter- thanks to people not bots”, [Online], sciencemag.org/news/2018/03/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots
- Meel P, Vishwakarma DK (2020) False information, rumor, information pollution in Digital technology platforms and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153:112986. https://doi.org/10.1016/j.eswa.2019.112986
- Meyer, R. (2010). Knowledge visualization. Trends in Information Visualization, 23, 23–30. https://doi.org/10.1007/978-1- 4471-4303-1
- Mosseri, A. (2017). Working to stop misinformation and false news. Available at www.facebook.com. Accessed 2/6/2019.
- Muigai, J. (2019). Understanding fake news. International Journal of Scientific Research. http://researchgate.com. On June 6th,2021.
- Nakov P (2020) Can we spot the “False information ” before it was even written? arXiv preprint arXiv:2008.04374
- Nwabueze, C. (2014). Introduction to Mass Communication. Media Ecology in the Global Village . Owerri: Topshelve Publishers.
- Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65, 997–1017. https://doi.org/10.1111/jcom.12184
- Roozenbeek, J. and Linden, S. (2018) The Fake news game: Actively Inoculating against the risk of misinformation, Journal of Risk Research, Vol. 22, No. 5, February, pp. 1-28.
- Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y (2019) Combating False information : a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 10(3):1–42. https://doi.org/10.1145/3305260
- Shu K, Sliva A, Wang S, Tang J, Liu H (2017) False information detection on Digital technology platforms : a data mining perspective. ACM SIGKDD Explor Newsl 19(1):22–36. https://doi.org/10.1145/3137597.3137600
- Singh, A and Masuku, M. (2014). Sampling Techniques and determination of sample size in applied statistics research: An overview. International Journal of Economics, commerce, and Management, Vol. 2 No.2, November, pp. 1-22.
- Thorsten, Q., Lena, E., Svenja, B. & Tim, E. (2019). Fake News. http://researchgate.com
- von Sikorski, C. (2021). Visual polarization: Examining the interplay of visual cues and media trust on the evaluation of political candidates. Journalism. Advance online publication. https://doi.org/10.1177/1464884920987680
- Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
- Yamane, T. (1967) Statistics: An Introductory Analysis, 2nd ed., Harper and Row, New York.
- Zhao, Zhao, Sano, Levy, Takayasu, Takayasu, Li, Wu and Havlin., (2018) Fake news propagates differently from real news even at early stages of spreading, EPJ Data Science, Vol. 9 No. 1, April, pp. 1–14.
- Uzuegbunam, C. (2020). Communication and Media Studies Multiple Perspectives. Enugu: New Generation Books.
- Chukwuere, J and Onyebukwa, C. (2018). The Impacts of Social Media on National Security: A View from the Northern & South-Eastern Region of Nigeria. http://ecojournals.com June, 6th, 2021.
- Christianson, S. (2012). 100 diagrams that changed the world. Penguin.
- Brennen JS, Simon FM and Nielsen RK (2021) Beyond (mis)representation: visuals in COVID-19 misinformation. The International Journal of Press/Politics 26(1): 277–299.
- Lee J and Shin SY (2021) Something that they never said: multimodal disinformation and source vividness in understanding the power of AI-enabled deepfake news. Media Psychology 25: 531–546.
- Vaccari C and Chadwick A (2020) Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society 6(1): 1–13.
- Kasra M, Shen C and O’Brien JF (2018) Seeing is believing: how people fail to identify fake images on the web. In: Conference on human factors in computing systems—proceedings, Montreal QC, Canada, 21–26 April. New York: ACM.
- Zhou X, Zafarani R (2020) A survey of False information : fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 53(5):1–40. https://doi.org/10.1145/3395046
- Geise S (2017) Visual framing. In: Rössler P (ed.) The International Encyclopedia of Media Effects. Hoboken, NJ: Wiley, pp. 1–12.
- Giotta G (2020) Ways of seeing . . . what you want: flexible visuality and image politics in the post-truth era. In: Zimdars M and McLeod K (eds) Fake News. Cambridge, MA: The MIT Press, pp. 29–44.
- Abel, R. (2004). Encyclopedia of early cinema. Taylor & Francis.
- Adjer, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The state of deepfakes: Landscape, threats, and impact. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
- Ahmed, S. (2021). Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size. Telematics and Informatics, 57, 101508. https:// doi.org/10.1016/j.tele.2020.101508
- Alawode, W., Olorede, J. O. & Azeez, L. D. (2018). False information and public perception of Nigerian’s online media: implications for national security. A paper presented at the 1 st national conference of the academic Staff Union of Polytechnics, Federal Polytechnic Offa, held from 2nd to 5 th October, 2018.
- Alemanno A (2018) How to counter False information ? A taxonomy of anti-False information Eur J Risk Regul 9(1):1–5. https://doi.org/10.1017/err.2018.12
- Allcott, H., Gentzkow, M., & Yu, C. (2019). Trend in the diffusion of misinformation on social media. Research & Politics, 6, 1–8. https://doi.org/10.1177/2053168019848554
- Altay S, Hacquin AS, Mercier H (2022) Why do so few people share False information ? It hurts their reputation. New Media Soc 24(6):1303–1324. https://doi.org/10.1177/1461444820969893
- Amazeen, M. A., Thorson, E., Muddiman, L., & Graves, L. (2018). Correcting political and consumer misperceptions: The effectiveness and effects of rating scale versus contextual correction formats. Journalism & Mass Communication Quarterly, 95(1), 28–48. https:// doi.org/10.1177/10776990166781
- Apuke OD, Omar B, Tunca EA, Gever CV (2022) The effect of visual multimedia instructions against False information spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. https://doi.org/10.1177/09610006221096477
- Arcos R, Gertrudix M, Arribas C, et al. (2022) Responses to digital disinformation as part of hybrid threats: a systematic review on the effects of disinformation and the effectiveness of fact-checking/debunking. Open Research Europe 2: 8.
- Attwood, F. (2007). No money shot? Commerce, pornography and new sex taste cultures. Sexualities, 10, 441–56. https://doi.org/10.1177/1363460707080982
- Bakos, J. Y. 1997. “Reducing Buyer Search Costs: Implications for Electronic Marketplaces,” Management Science (43:12), pp. 1676-1692.
- Bakshy, E., Messing, S., and Adamic, L. A. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook,” Science (348:6239), pp. 1130-1132.
- Banas, J. A., & Rains, S. A. (2010). A meta-analysis of research on inoculation theory. Communication Monographs, 77, 281–311. https://doi.org/10.1080/03637751003758193
- Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about Bad News: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), 1–9. https://doi.org/10.5334/joc.91
- Baum, M., Lazer, D. and Mele, N. (2017) “Combating Fake News: An Agenda for Research and Action”, paper read at Northeastern University and Harvard University, Boston, May.
- Science Focus (2019) “How Much Data Is on The Internet?”, [Online] https://www.sciencefocus.com/future-technology/how-much-data-is-on-the-internet.
- Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317
- Berenger, R. & Taha, M. (2012). Technology Disruption Theory and Middle East Media. Retrieved July 15, 2021, from http://academia.edu.
- Bode L, Vraga EK (2015) In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun 65(4):619–638. https://doi.org/10.1111/jcom.12166
- Bondielli A, Marcelloni F (2019) A survey on fake news and rumour detection techniques. Inf Sci 497:38–55. https://doi.org/10.1016/j.ins.2019.05.035
- Bozdag, E., and van den Hoven, J. 2015. “Breaking the Filter Bubble: Democracy and Design,” Ethics and Information Technology (17:4), pp. 249-265.
- Bresciani, S., & Eppler, M. J. (2015). The pitfalls of visual representations: A review and classification of common errors made while designing and interpreting visualizations. Sage Open, 1–14. https://doi.org/10.1177/2158244015611451
- Burkhard, R. A. (2004). Learning from architects: The difference between knowledge visualization and information visualization. In Proceedings of the Eighth International Conference on Information Visualisation (IV’04) (pp. 519–524). IEEE Computer Society. https://doi.org/10.1109/IV.2004.1320194.
- Burshtein S (2017) The true story on fake news. Intell Prop J 29(3):397–446.
- Cairo A (2015) Graphics lies, misleading visuals. In: Bihanic D (ed.) New Challenges for Data Design. London: Springer, pp. 103–116.
- Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Internet Policy Review 9(2):1–22. https://doi.org/10.14763/2020.2.1481
- Chadwick A and Stanyer J (2022) Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: toward a holistic framework. Communication Theory 32(1): 1–24.
- Chen, M., Ebert, D., Hagen, H., Laramee, R. S., van Liere, R., Ma, K., Ribarsky, W., Scheuermann, G., & Silver, D. (2009). Data, information, and knowledge in visualization. IEEE Computer Graphics and Applications, 29(1), 12–19. https://doi.org/10.1109/MCG.2009.6
- Chiu MM, Oh YW (2021) How fake news differs from personal lies. Am Behav Sci 65(2):243–258. https://doi.org/10.1177/0002764220910243
- Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E et al (2020) Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav 42(4):1073–1095. https://doi.org/10.1007/s11109-019-09533-0
- Cooke NA (2017) Posttruth, truthiness, and alternative facts: information behavior and critical information consumption for a new age. Libr Q 87(3):211–221. https://doi.org/10.1086/692298
- Dame Adjin-Tettey T (2022) Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human 9(1):2037229. https://doi.org/10.1080/23311983.2022.2037229
- Dan V, Paris B, Donovan J, et al. (2021) Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly 98(3): 641–664.
- Dentith, M. (2017) The problem of fake news, Public Reason, Vol. 8, No. 1-2, December, pp. 65-79.
- Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (microtargeted) deepfakes have real effects on political attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
- Dubois, E., and Blank, G. 2018. “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media,” Information, Communication & Society (21:5), pp. 729-745.
- Dunu, I. (2018) Social media and Gubernatorial Elections in Nigeria: A critical Discourse, Journal of Humanities and Social science, Vol. 23, No. 1, January, pp. 6-15.
- Egelhofer, J. L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: A framework and research agenda. Annals of the International Communication Association, 43(2), 97–116. https://doi.org/10.1080/23808985.2019.1602782
- Fasanya, Amodu, Aiyelabola, Kayode-Adedeji and Okorie (2018) “Twitter Exposure on Lagosians’ Football Betting Lifestyle”, paper presented at the 31st International Business Information Management Association Conference (IBIMA 2018): Innovation Management and Education Excellence through Vision 2020, February.
- Finkel, J., Jiang, S., Luo, M., Mears, R., Metaxa-Kakavouli, D., Peeple, C., Sasso, B., Shenoy, A., Sheu, V. and Torres-Echeverry (2019) Fake news and misinformation: The roles of the nations’ digital newsstands, FB, Google, Twitter and Reddit, [Online], Stanford Law School, www-cdn.law.stanford.edu/wp-content/uploads/2017/10/Fake-News-MisinformationFINAL-PDF.pdf.
- Folarin, B. (1998) Theories of Mass Communication: An Introductory Text, Stirling-Horden Publishers (Nigeria Limited), Ibadan.
- Garrett, R. K. 2009. “Echo Chambers Online? Politically Motivated Selective Exposure among Internet News Users,” Journal of Computer-Mediated Communication (14:2), pp. 265-285.
- Gavrilova, T., Alsufyev, A., & Grinberg, E. (2017). Knowledge visualization: Critique of the St. Gallen School and an analysis of contemporary trends. Business Informatics, 3(41), 7–19. https://doi.org/10.17323/1998-0663.2017.3.7.19
- Getz, D. and Page, S. (2016) Progress and prospects for event tourism research, Tourism Management, Vol. 52, February, pp. 593–631.
- Goyanes, M. and Lavin, A. (2018) The Sociology of Fake News: Factors affecting the probability of sharing political fake news online. Media and Communication, Media@LSE Working Paper Series, June, pp. 12–13.
- Guess, A., Nagler, J., and Tucker, J. (2019) Less than you think: Prevalence and predictors of fake news dissemination on Facebook, Science Advances, Vol. 5, No. 1, pp. 1-7.
- Guo B, Ding Y, Yao L, Liang Y, Yu Z (2020) The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR) 53(4):1–36. https://doi.org/10.1145/3393880
- Hameleers M and de Vreese C (2021) Perceived mis- and disinformation in a post-factual information setting. In: Tumber H and Waisbord S (eds) The Routledge Companion to Media Disinformation and Populism. 1st ed. New York: Routledge, pp. 366–375.
- Hameleers M, Powell TE, van der Meer TGLA, Bos L (2020) A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun 37(2):281–301. https://doi.org/10.1080/10584609.2019.1674979
- Hannah MN (2021) A conspiracy of data: Qanon, social media, and information visualization. Social Media + Society 7(3).
- Hansen, C. D., & Johnson, C. R. (2011). Visualization handbook. Elsevier.
- Hegelich, S. (2020). Facebook needs to share more with researchers. Nature, 579. https://doi.org/10.1038/d41586-020-00828-5
- Hemsley, J., & Snyder, J. (2018). Dimensions of visual misinformation in the emerging media landscape. In B. G. Southwell, E. A. Thorson, & L. Sheble (Eds.), Misinformation and mass audiences (pp. 91–106). University of Texas Press. https://doi.org/10.7560/314555
- Igwebuike, Ebuka and Chimuanya, Lily (2021) Legitimating Falsehood in Social Media: A Discourse Analysis of Political Fake News. Discourse & Communication, 15(1), pp. 42–58.
- Jacobson, S., Myung, E., and Johnson, S. L. 2016. “Open Media or Echo Chamber: The Use of Links in Audience Discussions on the Facebook Pages of Partisan News Organizations,” Information, Communication & Society (19:7), pp. 875-891.
- Joan, A. & Baptista (2020). Understanding fake news consumption: A review. http://semanticscholar.org
- Jones-Jang SM, Mortensen T, Liu J (2021) Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci 65(2):371–388. https://doi.org/10.1177/0002764219869406
- Kirk, A. (2016). Data visualisation: A handbook for data driven design. Sage.
- Klein O (2020) Misleading memes: the effects of deceptive visuals of the British national party. Partecipazione e Conflitto 13(1): 154–179.
- Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
- Langin, K. (2018) “Fake news spreads faster than true news on Twitter, thanks to people, not bots”, [Online], sciencemag.org/news/2018/03/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots
- Lanius C, Weber R, MacKenzie WI (2021) Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min 11(1):1–15. https://doi.org/10.1007/s13278-021-00739-x
- Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Metzger, M. J., . . . , Zittrain, J. L. (2018). The science of fake news. Science, 359, 1094–1096. https://doi.org/10.1126/science.aao2998
- Lecheler S and Egelhofer JL (2022) Disinformation, misinformation, and fake news: understanding the supply side. In: Strömbäck J, Wikforss Å, Glüer K, et al. (eds) Knowledge Resistance in High-Choice Information Environments. New York: Routledge, pp. 69–87.
- Lee C, Yang T, Inchoco GD, et al. (2021) Viral visualizations: how coronavirus skeptics use orthodox data practices to promote unorthodox science online. In: Proceedings of the 2021 CHI conference on human factors in computing systems, New York, 6 May 2021, pp. 1–18. New York: ACM
- Lee, T. (2019) The global rise of ‘fake news’ and the threat to democratic elections in the USA, Public Administration and policy: An Asia-Pacific Journal, Vol. 22, No.1, June, pp.15-24.
- Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 1–38. https://doi.org/10.1080/10463283.2021.1876983
- Li, K. G., Mithas, S., Zhang, Z., and Tam, K. Y. 2019. “How Does Algorithmic Filtering Influence Attention Inequality on Social Media?,” in Proceedings of the 40th International Conference on Information Systems, Munich.
- Mazzeo V, Rapisarda A (2022) Investigating fake and reliable news sources using complex networks analysis. Front Phys 10:886544. https://doi.org/10.3389/fphy.2022.886544
- McGonagle, T. (2017). Fake news: False fears or real concerns? Netherlands Quarterly of Human Rights, 35(4), 203–209. Available at www.journals.sagepub.com. Accessed 25/6/2019.
- Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153:112986. https://doi.org/10.1016/j.eswa.2019.112986
- Meijer, I. C. (2007) The Paradox of Popularity. How Young People Experience the News, Journalism Studies, Vol. 8, No. 1, February, pp. 96-116.
- Messaris P and Abraham L (2001) The role of images in framing news stories. In: Reese SD, Gandy OH and Grant AE (eds) Framing Public Life. 1st ed. New York: Routledge, pp. 217–226.
- Meyer, R. (2010). Knowledge visualization. Trends in Information Visualization, 23, 23–30. https://doi.org/10.1007/978-1-4471-4303-1
- Mitchell, A., and Jurkowitz, D. 2014. “Social, Search & Direct: Pathways to Digital News,” Pew Research Center (https://www.journalism.org/2014/03/13/social-search-direct/).
- Mosseri, A. (2017). Working to stop misinformation and false news. Available at www.facebook.com. Accessed 2/6/2019.
- Muigai, J. (2019). Understanding fake news. International Journal of Scientific Research. Retrieved June 6, 2021, from http://researchgate.com.
- Mutz, D. C. 2006. “How the Mass Media Divide Us,” in Red and Blue Nation? Characteristics and Causes of America’s Polarized Politics (Vol. 1), P. S. Nivola and D. W. Brady (eds.), Washington, DC: Brookings Institution Press.
- Mutz, D. C., and Martin, P. S. 2001. “Facilitating Communication across Lines of Political Difference: The Role of Mass Media,” American Political Science Review (95:1), pp. 97-114.
- Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
- Negroponte, N. 1996. Being Digital, New York: Vintage Books.
- Nickerson, R. S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology (2:2), pp. 175-220.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism (1st ed.). New York University Press.
- Nwabueze, C. (2014). Introduction to Mass Communication: Media Ecology in the Global Village. Owerri: Topshelve Publishers.
- Nwanegbo, C. J., & Odigbo, J. (2013). International Journal of Humanities and Social Science, 3(4), 285-291.
- Nygren T, Brounéus F, Svensson G (2019) Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ 18(2):87–109. https://doi.org/10.4119/jsse-917
- Ogbette D. and Kareem, A. (2019) Fake News in Nigeria: Causes, Effects and Management. Journal of Information and Knowledge Management, Vol. 3 No. 2, March, pp. 1-10.
- Ognyanova, K. (2021). Network approaches to misinformation evaluation and correction. In I. Yanovitsky, & M. Weber (Eds.), Networks, knowledge brokers, and the public policymaking process (pp. 1–19). Palgrave Macmillan.
- Ognyanova, K., Lazer, D., Robertson, R. E., & Wilson, C. (2020). Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-024
- Page, S. E. 2010. Diversity and Complexity (Vol. 2), Princeton, NJ: Princeton University Press.
- Paris, B., & Donovan, J. (2020). Deepfakes and cheapfakes: The manipulation of audio and visual evidence [Data & Society Report]. https://datasociety.net/library/deepfakes-and-cheap-fakes/
- Pariser, E. 2011. The Filter Bubble: What the Internet Is Hiding from You, New York: Penguin Press.
- Pennycook G, Rand DG (2020) Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers 88(2):185–200. https://doi.org/10.1111/jopy.12476
- Powell, T. E., Boomgaarden, H. G., De Swert, K., & de Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65, 997–1017. https://doi.org/10.1111/jcom.12184
- Prawesh, S., and Padmanabhan, B. 2011. “The ‘Top N’ News Recommender: Count Distortion and Manipulation Resistance,” in Proceedings of the 5th ACM Conference on Recommender Systems, New York: ACM, pp. 237-244.
- Roozenbeek, J. and van der Linden, S. (2018) The fake news game: Actively inoculating against the risk of misinformation, Journal of Risk Research, Vol. 22, No. 5, February, pp. 1-28.
- Roozenbeek, J., & van der Linden, S. (2020). Breaking Harmony Square: A game that “inoculates” against political misinformation. The Harvard Kennedy School (HKS) Misinformation Review, 1(8). https://doi.org/10.37016/mr-2020-47
- Roozenbeek, J., van der Linden, S., & Nygren, T. (2020). Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures. The Harvard Kennedy School (HKS) Misinformation Review, 1(2). https://doi.org/10.37016/mr-2020-008
- Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
- Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y (2019) Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 10(3):1–42. https://doi.org/10.1145/3305260
- Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF (2019) Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc 21(2):438–463. https://doi.org/10.1177/1461444818799526
- Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
- Shifman L (2013) Memes in a digital world: reconciling with a conceptual troublemaker. Journal of Computer-Mediated Communication 18(3): 362–377.
- Shu K, Sliva A, Wang S, Tang J, Liu H (2017) Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl 19(1):22–36. https://doi.org/10.1145/3137597.3137600
- Silverman C (2016) This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed News. Available at: https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook (accessed 17 February 2022).
- Singh VK, Ghosh I and Sonagara D (2021) Detecting fake news stories via multimodal analysis. Journal of the Association for Information Science and Technology 72(1): 3–17.
- Singh, A. and Masuku, M. (2014). Sampling techniques and determination of sample size in applied statistics research: An overview. International Journal of Economics, Commerce, and Management, Vol. 2, No. 2, November, pp. 1–22.
- Skjerdal, T., and Gebru, S. 2020. “Not Quite an Echo Chamber: Ethnic Debate on Ethiopian Facebook Pages during Times of Unrest,” Media, Culture & Society (42:3), pp. 365-379.
- Statista (2019) Number of Internet Users in Nigeria from 2017 to 2023 (in millions), [Online], statista.com/statistics/183849/internet-users-nigeria.
- Stroud, N. J., Thorson, E., & Young, D. G. (2017). Making sense of information and judging its credibility. Understanding and Addressing the Disinformation Ecosystem. https://firstdraftnews.org/wp-content/uploads/2018/03/The-Disinformation-Ecosystem-20180207-v4.pdf?x33777
- Sundar S (2008) The MAIN Model: a heuristic approach to understanding technology effects on credibility. In: Metzger MJ and Flanagin AJ (eds) Digital Media, Youth, and Credibility. Cambridge, MA: The MIT Press, pp. 73–100.
- Tandoc EC, Lim ZW and Ling R (2018) Defining ‘Fake News’: a typology of scholarly definitions. Digital Journalism 6(2): 137–153.
- Ternovski J, Kalla J and Aronow P (2022) Negative consequences of informing voters about deepfakes: evidence from two survey experiments. Journal of Online Trust and Safety 1(2).
- Teyssou D (2019) Disinformation: the force of falsity. In: Mezaris V, Nixon L, Papadopoulos S, et al. (eds) Video Verification in the Fake News Era. Cham: Springer International Publishing, pp. 339–347.
- The Premium Times (2019) “Premium Times seeks collaboration with NAN to curb fake news”, [Online], premiumtimesng.com/news/more-news/361117-premium-times-seeks-collaboration-with-nan-to-curb-fakenews.html.
- Thomson TJ, Angus D, Dootson P, et al. (2022) Visual mis/disinformation in journalism and public communications: current verification practices, challenges, and future opportunities. Journalism Practice 16: 938–962.
- Thoresten, Q., Lena, E., Svenja, B. & Tim, E. (2019). Fake News. http://researchgate.com
- Tsang SJ (2020) Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. https://doi.org/10.1177/1077699020952129
- Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 44, 157–173. https://doi.org/10.1080/23808985.2020.1759443
- van Duyn E and Collier J (2019) Priming and fake news: the effects of elite discourse on evaluations of news media. Mass Communication and Society 22(1): 29–48.
- Vishwakarma DK, Varshney D, Yadav A (2019) Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res 58:217–229. https://doi.org/10.1016/j.cogsys.2019.07.004
- Vizoso Á, Vaz-álvarez M and López-García X (2021) Fighting deepfakes: media and internet giants’ converging and diverging strategies against hi-tech misinformation. Media and Communication 9(1): 291–300.
- von Sikorski, C. (2021). Visual polarization: Examining the interplay of visual cues and media trust on the evaluation of political candidates. Journalism. Advance online publication. https://doi.org/10.1177/1464884920987680
- Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151. https://doi.org/10.1126/science.aap9559
- Vraga EK, Bode L (2017) Using expert sources to correct health misinformation in social media. Sci Commun 39(5):621–645. https://doi.org/10.1177/1075547017731776
- Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2020.1716500
- Waldman AE (2017) The marketplace of fake news. Univ Pa J Const Law 20:845.
- Weiss AP, Alwan A, Garcia EP, and Garcia J (2020) Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr 16(1):1–30. https://doi.org/10.1007/s40979-019-0049-x
- Wilson, F. and Umar, M. (2019) The effect of fake news on Nigeria’s Democracy within the premise of freedom of expression, Global Media Journal, Vol. 17 No. 32, April, pp. 1-12.
- Woodcock, A. (2019) “Nearly half of Social Media users who share articles have passed on fake news, study suggests”, [Online], independent.co.uk/news/uk/home-news/fake-news-facebook-twitter-share-misinformationsurveya8908361.htn
- Yamane, T. (1967) Statistics: An Introductory Analysis, 2nd ed., Harper and Row, New York.
- Zhao, Zhao, Sano, Levy, Takayasu, Takayasu, Li, Wu and Havlin (2020) Fake news propagates differently from real news even at early stages of spreading, EPJ Data Science, Vol. 9, No. 1, pp. 1–14.
- Zillmann, D., Knobloch, S., & Yu, H. (2001). Effects of photographs on the selective reading of news reports. Media Psychology, 3(4), 301–324. https://doi.org/10.1207/S1532785XMEP0304_01