Kenyan Social Media Usage: An Analysis of Privacy, Security, and Liability among Users
Dr. Omondi James Okeda, Roseline Atieno Aduda*
Department of Information Technology, Uzima University
*Corresponding Author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.908000199
Received: 28 July 2025; Accepted: 04 August 2025; Published: 03 September 2025
ABSTRACT
This exploratory study investigates the knowledge levels of Kenyan social media users concerning online privacy, security exposures, and legal liabilities. With the pervasive adoption of social media platforms in Kenya, understanding users’ comprehension of digital risks is critical for enhancing online safety and promoting responsible digital citizenship. Employing a quantitative, survey-based methodology, the study collected data from a diverse sample of Kenyan social media users across various demographic strata, including age, education, gender, occupation, and residence. The research utilized structured questionnaires to assess participants’ awareness of privacy settings, common cybersecurity threats like phishing and data breaches, and the legal ramifications of online activities, such as defamation or intellectual property infringement.
Key findings reveal significant knowledge gaps among a substantial portion of the Kenyan social media user base regarding fundamental privacy principles and robust security practices. Participants frequently reported exposure to various online threats, yet their understanding of effective mitigation strategies remained limited. Furthermore, the study identified a varied and often inadequate grasp of personal and legal liabilities associated with social media conduct, with notable disparities observed across different demographic groups. Younger users, for instance, demonstrated higher familiarity with platform features but often underestimated liability, while users with lower educational attainment exhibited broader knowledge deficits across all measured constructs. These findings underscore an urgent need for targeted interventions to bolster digital literacy. The study’s implications point towards developing tailored educational programs, informing policy frameworks, and encouraging collaboration among stakeholders—including government bodies, educational institutions, and social media platforms—to cultivate a safer and more informed online environment for Kenyan citizens.
INTRODUCTION
The advent and exponential growth of social media platforms have fundamentally reshaped communication, social interaction, commerce, and political engagement across the globe. In Kenya, like many other developing nations, the adoption of platforms such as Facebook, Twitter (now X), Instagram, WhatsApp, and TikTok has been remarkably rapid and widespread. Driven by increasing mobile phone penetration, expanding internet connectivity – including significant efforts towards digital inclusion – and a young, digitally native population, social media has become an integral part of daily life for millions of Kenyans. These platforms serve as vital channels for maintaining social connections, accessing news and information, participating in public discourse, facilitating economic activities (e.g., online businesses), and enabling civic engagement. The accessibility and low cost of entry compared to traditional media have democratized information sharing and provided new avenues for expression and community building.
However, the pervasive integration of social media into the fabric of Kenyan society is not without significant challenges and risks. The very nature of these platforms, built on data collection, network effects, and rapid information dissemination, creates complex dynamics concerning user privacy, digital security, and personal accountability. As users share personal information, interact with others, and consume/produce content online, they become potentially exposed to a myriad of risks. These risks range from passive data collection and targeted advertising that many users may not fully understand, to more active threats like phishing scams, malware distribution, cyberbullying, online harassment, identity theft, and the spread of misinformation and disinformation. Furthermore, the content shared and interactions undertaken on social media platforms can have tangible legal and personal ramifications, encompassing issues such as defamation, intellectual property infringement, incitement to violence or hate speech, and various forms of online fraud.
While the global discourse on social media risks is extensive, the specific understanding and experiences of users vary significantly depending on socio-cultural context, levels of digital literacy, access to reliable information, and the local regulatory landscape. Kenya presents a unique environment characterized by diverse levels of internet access and digital fluency across urban and rural populations, a rich tapestry of cultural norms that may influence online sharing and interaction patterns, and an evolving legal framework pertaining to data protection and cybercrimes, notably the Data Protection Act, 2019, and the Computer Misuse and Cybercrimes Act, 2018. Despite the high rate of social media usage, there is a palpable concern among stakeholders regarding the extent to which the average Kenyan social media user truly understands the nuances of online privacy settings, recognizes and can mitigate common security threats, and is aware of the potential legal and personal liabilities arising from their digital footprint and online conduct.
This concern is amplified by anecdotal evidence and preliminary observations suggesting a significant gap between social media adoption rates and corresponding levels of digital safety awareness and critical media literacy. Users, particularly those new to the platforms or with limited formal digital education, may navigate the online space based on intuition, peer influence, or incomplete information, potentially increasing their vulnerability. The abstract nature of concepts like data privacy and cybersecurity risks can be challenging for many to grasp fully, especially when presented within complex platform interfaces or technical jargon. Moreover, the rapidly evolving nature of online threats and platform features means that even users with some foundational knowledge may struggle to keep pace.
Problem Statement
Despite the widespread adoption and integration of social media into the daily lives of millions of Kenyans, there remains a significant and inadequately explored gap in the empirical understanding of users’ knowledge regarding critical aspects of their online experience: privacy settings and data handling practices by platforms, exposure to and recognition of common online security threats, and awareness of the legal and personal liabilities associated with their online activities. While social media usage statistics in Kenya are readily available, comprehensive data on users’ comprehension of the inherent risks and responsibilities is scarce. This lack of understanding creates a vulnerability gap, potentially exposing users to privacy violations, financial loss, identity theft, harassment, and unintended legal consequences, thereby undermining trust in digital platforms and hindering the development of a secure and responsible digital ecosystem in the country. The problem is compounded by the diverse socio-economic and educational landscape of Kenya, which likely results in heterogeneous levels of digital literacy and risk awareness across different demographic groups. Therefore, there is a critical need to conduct an exploratory analysis to empirically assess the current state of knowledge among Kenyan social media users concerning these fundamental aspects of online safety and responsibility.
Research Significance
The findings of this exploratory study hold significant value for a diverse range of stakeholders invested in promoting a safer and more informed digital environment in Kenya and beyond. The insights gained into the knowledge levels, exposure, and understanding of liability among Kenyan social media users will serve as a crucial evidence base for targeted interventions and policy development.
- For Social Media Users: By identifying specific knowledge gaps related to privacy, security, and liability, this study can directly inform the design and delivery of targeted digital literacy and cybersecurity awareness programs. Empowered with a better understanding of risks and protective measures, users can make more informed decisions about their online behavior, manage their digital footprint effectively, enhance their personal security, and navigate potential legal pitfalls. This contributes to greater user safety and resilience in the digital age.
- For Policymakers and Government Bodies: The study provides data-driven insights into the realities faced by Kenyan citizens online. This information is vital for developing and refining relevant legal frameworks, such as data protection regulations, cybersecurity laws, and consumer protection policies tailored to the digital space. Understanding user awareness levels can help identify areas where public education campaigns are most needed and guide regulatory efforts to ensure platform accountability and user protection. It can also inform national digital strategy initiatives aimed at fostering a secure and trustworthy online environment for economic and social development. The findings can highlight the effectiveness (or lack thereof) of existing legal provisions from the perspective of user awareness and compliance.
- For Social Media Platform Providers: The research can offer valuable feedback to social media companies regarding user comprehension of platform features, privacy settings, and safety tools. Identifying common areas of confusion or ignorance can inform platform design improvements, user interface simplification, and the development of more effective in-platform educational resources and safety prompts. This can help platforms fulfill their responsibility in contributing to user safety and mitigating harm.
- For Educational Institutions and Civil Society Organizations: The study results can be instrumental in designing relevant curricula for digital literacy education in schools and universities, as well as informing the development of community-based awareness programs. Non-governmental organizations working on digital rights, consumer protection, and youth empowerment can utilize the findings to tailor their advocacy efforts and educational outreach programs to address the most pressing needs of Kenyan social media users.
- For Researchers and Academics: This study contributes empirical data to the growing body of literature on digital literacy, online safety, and cybersecurity knowledge, particularly within the context of African nations and the Global South. It provides a foundation for future research, including comparative studies, longitudinal analyses of knowledge change over time, and investigations into the effectiveness of various intervention strategies. It highlights the importance of considering unique local contexts when studying global digital phenomena.
In sum, by shedding light on the current state of knowledge among Kenyan social media users, this research aims to provide actionable insights that can contribute to the development of a safer, more informed, and more responsible digital society in Kenya, enabling individuals to harness the opportunities of social media while effectively mitigating its inherent risks.
Research Objectives
This exploratory study is designed to systematically investigate the current understanding and experiences of Kenyan social media users concerning online privacy, security, and liability. Addressing the knowledge gaps identified in the introduction, the research aims to generate empirical data that will inform targeted interventions and policy development. Specifically, the primary objectives of this study are:
To assess the level of awareness among Kenyan social media users regarding privacy settings and data handling practices on social media platforms. This objective seeks to quantify users’ knowledge of how to configure their privacy settings to control personal information visibility, their understanding of the types of data collected by social media platforms, and their comprehension of how this data is utilized by platforms and third parties. It will explore whether users are aware of features such as audience selection for posts, location services, tag settings, and data retention policies, aiming to identify specific areas of misunderstanding or limited engagement with these critical privacy controls. Furthermore, it will gauge their perception of the trade-off between convenience and privacy in their online interactions.
To identify common security risks and exposures faced by Kenyan social media users and evaluate their ability to recognize and mitigate these threats. This objective will investigate users’ experiences with and recognition of various online security threats, including but not limited to phishing attempts, malware distribution, identity theft, account hacking, and online scams. It aims to determine how frequently users encounter these threats, their ability to identify them, and their knowledge of appropriate protective measures such as strong password practices, two-factor authentication, and secure browsing habits. The study will also explore users’ responses to past security incidents and their reliance on in-platform security features or external cybersecurity tools.
To determine Kenyan social media users’ understanding of legal and personal liabilities associated with their online activities and content sharing. This objective focuses on users’ comprehension of the potential consequences of their social media conduct, both legally and personally. It seeks to assess awareness of legal ramifications, such as those related to defamation, intellectual property infringement (e.g., copyright violations), hate speech, incitement to violence, and the spread of misinformation, particularly within the context of Kenyan laws like the Computer Misuse and Cybercrimes Act, 2018, and the Data Protection Act, 2019. Additionally, it will examine users’ understanding of personal liabilities, including reputation damage, cyberbullying, online harassment, and the long-term impact of their digital footprint on personal and professional opportunities.
To analyze the relationship between various demographic factors and the levels of privacy knowledge, security awareness, and understanding of liability among Kenyan social media users. This objective aims to investigate how socio-demographic characteristics influence users’ comprehension of online risks and responsibilities. Specifically, the study will analyze variations in knowledge levels across different strata, including age groups (e.g., youth vs. older adults), educational backgrounds (e.g., primary, secondary, tertiary), gender, occupation (e.g., students, professionals, unemployed), and geographical residence (e.g., urban vs. rural). By identifying these correlations, the study can highlight specific demographic groups that may be more vulnerable or require tailored digital literacy interventions to enhance their online safety and responsible digital citizenship.
LITERATURE REVIEW
The digital landscape has undergone a profound transformation over the past two decades, primarily driven by the ubiquitous adoption of social media platforms. This chapter presents a comprehensive review of existing literature pertinent to social media usage patterns, digital privacy concerns, cybersecurity threats, and legal liability in the digital space. The review focuses on studies and reports published predominantly within the last ten years, prioritizing those relevant to developing countries, with a specific emphasis on the African and Kenyan contexts. Furthermore, it explores theoretical frameworks essential for understanding user behavior and knowledge levels in this evolving environment.
The Global and Kenyan Social Media Landscape
Social media platforms have become integral components of daily life globally, facilitating communication, information dissemination, social interaction, and economic activity. Their growth trajectory has been particularly steep in regions like Africa, where mobile-first internet adoption has accelerated connectivity (GSMA, 2021). In Kenya, the proliferation of mobile devices and increasing internet penetration have fueled a rapid surge in social media usage. Platforms such as Facebook, WhatsApp, Twitter (now X), Instagram, and TikTok boast millions of active users, serving diverse purposes from personal networking to news consumption and informal commerce (CAK, 2022; Kepios, 2023). This widespread adoption, while offering numerous benefits, also introduces significant complexities regarding user understanding of the underlying technologies, data practices, and associated risks.
Studies on social media usage patterns in Africa highlight not only high penetration rates, especially among the youth, but also varying levels of engagement and purpose across different demographics (Ogembo & Van Biljon, 2017; Mutahi, 2018). Access is often concentrated in urban areas, though mobile access is bridging this gap (Aker & Mbiti, 2010). User behavior is shaped by local cultural norms, economic conditions, and infrastructure availability (Madianou & Miller, 2012). While platforms offer tools for connection and empowerment, the digital divide persists, influencing digital literacy and access to reliable information about online safety (Van Dijk, 2020). Understanding these usage patterns and the diverse user base is foundational to analyzing knowledge levels concerning privacy, security, and liability.
Digital Literacy: Concepts, Challenges, and Context in Developing Regions
Digital literacy is broadly defined as the ability to find, evaluate, create, and communicate information using digital technologies (Ala-Mutka, 2011). However, in the context of social media and online safety, it encompasses more than just technical proficiency. It includes critical thinking skills to evaluate online content, understanding data privacy implications, recognizing security risks, and navigating the ethical and legal dimensions of online behavior (Eshet-Alkalai, 2012; Van Deursen & Van Dijk, 2011). For users in developing countries like Kenya, digital literacy is often shaped by unique factors, including varied educational backgrounds, limited access to formal digital education, reliance on mobile-only internet access, and exposure to local language content and community norms (UNESCO, 2018; Kwanya et al., 2017).
Research indicates that despite increasing digital access, critical digital literacy skills, particularly those related to privacy and security, lag behind basic usage abilities (Livingstone et al., 2017; Taiminen & Saarinen, 2021). Users may know how to post and interact but lack a deep understanding of how their data is used, how to identify sophisticated scams, or the consequences of sharing harmful content. This gap is often more pronounced among older users, those with lower formal education, and individuals in rural areas (Gumede & Ndlovu, 2021; Ochieng & Aligula, 2019). Improving digital literacy requires targeted interventions that go beyond basic training, focusing on critical evaluation, risk awareness, and understanding of rights and responsibilities in the digital space (Hobbs, 2017). The specific challenges in the Kenyan context, such as the influence of community trust networks on information sharing and the varying quality of internet infrastructure, also impact how digital literacy manifests and the effectiveness of educational efforts (Wanyama & Datong, 2016).
Understanding Digital Privacy: Concepts, User Perceptions, and Privacy Calculus
Digital privacy on social media is a multifaceted concept involving the control individuals have over their personal information, how it is collected, used, and shared by platforms, other users, and third parties (Nissenbaum, 2010). Unlike traditional privacy, digital privacy is dynamic and constantly negotiated within complex socio-technical systems (Acquisti & Gross, 2006). Social media platforms, by their design, encourage sharing, often blurring the lines between public and private spheres and creating complex privacy management challenges for users (Boyd & Hargittai, 2010).
Research consistently shows a disconnect between users’ stated privacy concerns and their actual privacy behaviors on social media, often termed the “privacy paradox” (Barnes, 2006; Debatin et al., 2009). Users may express concern about data collection but continue to share vast amounts of personal information. Several theories attempt to explain this paradox, with the Privacy Calculus Theory being particularly relevant. This theory posits that individuals make decisions about disclosing personal information online based on a cost-benefit analysis. They weigh the perceived benefits of disclosure (e.g., social connection, self-expression, convenience) against the perceived costs (e.g., risk of data misuse, identity theft, embarrassment) (Dinev & Hart, 2006; Taddicken, 2014). If the perceived benefits outweigh the perceived costs, users are more likely to disclose information, even if they hold general privacy concerns.
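The core decision rule of the privacy calculus can be expressed schematically. The notation below is an illustrative formalization, not one drawn from the cited studies: B denotes a perceived benefit of disclosure, C a perceived cost, w its subjective salience, and p the perceived probability that the cost materializes.

```latex
% Illustrative formalization of the privacy calculus decision rule.
\[
\text{Disclose} \iff
\underbrace{\sum_{i} w_i B_i}_{\text{perceived benefits}}
\;>\;
\underbrace{\sum_{j} p_j C_j}_{\text{expected costs}}
\]
```

On this reading, knowledge gaps enter the calculus through the probability terms: a user who does not understand platform data practices effectively assigns near-zero probability to the costs, tilting the decision toward disclosure regardless of stated concern.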
In the context of social media, perceived benefits like social capital, network effects, and platform functionality often hold significant sway. However, the accuracy of this calculus depends heavily on users’ knowledge of the potential costs – the actual privacy risks and how to mitigate them. Studies suggest that users’ understanding of platform data practices, algorithmic processing, and third-party data sharing is often limited, leading to an underestimation of the costs (Auxier et al., 2019; Zuboff, 2019). Furthermore, the constant evolution of privacy settings and policies makes it difficult for users to maintain an accurate understanding (Utz et al., 2015).
User perceptions of privacy are also shaped by cultural factors and levels of trust in institutions and platforms (Smith et al., 2011). In some cultural contexts, community norms around sharing and disclosure might differ from Western paradigms, influencing online behavior (Ling & Ho, 2018). Research in African contexts indicates varying levels of privacy awareness, often influenced by practical concerns like mobile data costs or the immediate benefits of using free social platforms over abstract data risks (Syah et al., 2020; Boateng & Boateng, 2016). While data protection laws are emerging across the continent, including Kenya’s Data Protection Act, 2019, user awareness of their rights under these laws and the obligations of data controllers remains a significant challenge (Ombati & Musau, 2020; Bii & Chemwei, 2021).
Understanding user awareness of specific privacy settings on major platforms (e.g., controlling who sees posts, managing tagged photos, location sharing) is crucial. Studies often find that while users might be aware these settings exist, they may not understand their implications or take the time to configure them effectively (Acquisti et al., 2015). Default settings, which are often less privacy-protective, can inadvertently lead users to over-share information (Nissenbaum, 2010). Research exploring Kenyan users’ specific interactions with platform privacy controls is scarce, highlighting a key area for investigation (Mutua, 2019).
Cybersecurity Threats on Social Media: Types, Prevalence, and User Recognition
Social media platforms are increasingly targeted by cybercriminals due to their large user bases and the wealth of personal information they contain. Common threats include phishing attacks aimed at stealing login credentials, malware distribution through malicious links, account takeovers, identity theft, and various forms of online fraud and scams (Europol, 2020; NCSC, 2021). The highly connected nature of these platforms also facilitates the rapid spread of misinformation, disinformation, and harmful content, which can have significant social and security implications (Bradshaw & Howard, 2018).
User susceptibility to these threats is often linked to their ability to recognize malicious activity. Phishing attacks, for example, rely on social engineering to trick users into clicking links or divulging information (Hadlington, 2017). Studies show that users with lower digital literacy or less experience online are more vulnerable (Button et al., 2019). Furthermore, the context of social interaction on these platforms can lower users’ guard; they may be more likely to trust messages from perceived friends or familiar sources, even if those accounts have been compromised (Vishwanath, 2015).
The prevalence of specific threats can vary regionally. In developing countries, mobile-based scams, including those initiated via social media messaging apps like WhatsApp, are particularly common (ITU, 2020). Users may also face threats related to mobile money accounts linked to their social media profiles (UNODC, 2020). While global cybersecurity reports detail the technical aspects of threats, understanding the lived experience of users in specific contexts like Kenya – how often they encounter these threats, what forms they take, and whether users can identify them – is vital (Kiplagat & Mutiso, 2021).
Protective behaviors, such as using strong, unique passwords, enabling two-factor authentication (2FA), being cautious about clicking links, and verifying information sources, are critical for mitigating risks (ENISA, 2020). However, user adoption rates of these measures are often low. Factors influencing the adoption of security behaviors include perceived threat severity, perceived self-efficacy (belief in one’s ability to perform the protective action), and convenience (Johnston & Warkentin, 2010; Ifinedo, 2012). In resource-constrained environments, factors like mobile data costs or complexity of security features can also act as barriers to adopting recommended security practices (Boateng & Boateng, 2016).
Research into Kenyan users’ awareness of specific threats (e.g., SIM swap fraud linked to social media, WhatsApp scams, fake news campaigns) and their current security practices is necessary to understand their vulnerability landscape. While general cybersecurity awareness campaigns exist, their effectiveness in translating knowledge into consistent protective behavior on social media warrants investigation (Ochieng & Aligula, 2019).
Risk Perception in the Digital Space
Related to both privacy and security is the concept of Risk Perception. This framework suggests that individuals’ behaviors are influenced not just by the objective reality of risks but by their subjective assessment of those risks (Slovic, 1987; Siegrist & Cvetkovich, 2000). On social media, users’ willingness to engage, share, or adopt protective measures is influenced by how they perceive the likelihood and severity of potential harms, such as data breaches, identity theft, or reputational damage.
Risk perception is influenced by various factors, including personal experience with harm, media coverage, social amplification or attenuation of risk signals, trust in sources (e.g., platforms, authorities, friends), and cognitive biases (Pidgeon et al., 2003; Fischhoff et al., 1998). If users have not personally experienced a significant online threat or know someone who has, they may perceive the risk as low or abstract, even if objective data suggests otherwise (Vishwanath et al., 2011).
In the digital realm, risks can feel less tangible or immediate than physical risks, potentially leading to underestimation (Acquisti et al., 2015). The benefits of social media use are often immediate and visible (likes, comments, connections), while the costs (privacy violations, security incidents) can be delayed, invisible, or uncertain, further skewing the risk calculus (Dinev & Hart, 2006). User perceptions can also be influenced by simplified mental models of how technology works, leading to flawed assessments of vulnerability (Egelman & Peer, 2015).
Cultural context also plays a role in risk perception. Community reliance, information sharing norms, and varying levels of trust in official warnings or cybersecurity advice can influence how risks are perceived and acted upon (Zhao et al., 2013). For instance, in contexts where digital infrastructure is perceived as less reliable or formal institutions are less trusted, users might develop alternative coping mechanisms or exhibit different levels of caution compared to users in highly formalized digital environments (Wanyama & Datong, 2016). Understanding the factors that shape risk perception among Kenyan social media users is crucial for designing effective risk communication strategies.
Legal and Personal Liability in the Digital Realm
The growing volume and evolving nature of social media activity have given rise to significant legal and personal liabilities for users. Online actions that might seem trivial can have serious consequences under various laws, including those related to defamation, intellectual property, hate speech, incitement, fraud, and data protection (Koops et al., 2010; Lessig, 2006).
Defamation: Posting false statements that harm someone’s reputation can lead to civil lawsuits. The speed and reach of social media mean that defamatory content can spread rapidly, potentially causing significant damage (Post, 2018). Users may not understand that retweeting or sharing content can also make them liable (O’Hara & Vandenbergh, 2017). In Kenya, defamation laws apply to online communication, and there have been notable cases involving social media posts (Mwakisha, 2019; Karanja, 2020).
Intellectual Property: Sharing copyrighted material (photos, videos, music, text) without permission can constitute infringement. Users often repost content assuming it is freely available or that attribution is sufficient, unaware of the legal restrictions (Geiger & Franzen, 2017). This is particularly relevant for users involved in creative industries or online content creation. Kenyan copyright law extends to digital content, and infringement online carries potential civil and criminal penalties (KOPI, 2015).
Hate Speech and Incitement: Many jurisdictions, including Kenya, have laws prohibiting hate speech and incitement to violence, which are increasingly enforced against online communication (Mutahi & Oloo, 2020; Human Rights Watch, 2021). Social media platforms, designed for rapid dissemination, can become vectors for such harmful content, and users who originate or amplify it can face severe legal consequences. The Computer Misuse and Cybercrimes Act, 2018 in Kenya includes provisions addressing various forms of harmful online content (National Council for Law Reporting, 2018).
Data Protection Liabilities: While platforms are primarily responsible for data protection, individual users or small businesses using social media for commercial purposes (e.g., collecting customer data via Facebook pages) may also fall under the purview of data protection laws like Kenya’s Data Protection Act, 2019 (ODI, 2020; Mutubwa & Okwach, 2021). Users need to understand their obligations if they handle others’ personal data, however informally. Furthermore, users’ own privacy settings and sharing behaviors can create personal liability by exposing sensitive information that could be exploited (Barnes, 2006).
Fraud and Scams: Participating in or promoting fraudulent schemes on social media is illegal. This includes phishing, pyramid schemes, fake online stores, and impersonation (UNODC, 2020). Users may become unwittingly involved or facilitate scams through sharing or endorsements (Financial Sector Regulators Joint Forum, 2022).
Beyond legal consequences, social media activities carry significant personal liabilities, including reputational damage, impact on employment prospects (employers often review social media profiles), cyberbullying, and online harassment (Steeves, 22; Patchin & Hinduja, 2015). Content shared in private messages can become public through leaks or screenshots, leading to unforeseen repercussions. Users’ understanding of the permanence of online content and the potential for decontextualization is often limited (Madden et al., 2013).
Awareness of these liabilities among general social media users, particularly in a context where legal literacy may vary, is crucial but often low (Ombati & Musau, 2020). Users might operate under the assumption of anonymity or believe that actions online are less serious than those offline (Kshetri, 2017). Understanding the specific provisions of relevant Kenyan laws and how they apply to everyday social media use is a critical component of responsible digital citizenship that warrants investigation.
Regulatory Landscape and Policy Responses in Kenya and Africa
Governments across Africa, including Kenya, have increasingly recognized the need to regulate the digital space to protect citizens, ensure national security, and foster a predictable environment for digital economic growth (UNCTAD, 2021). Key legislative developments in Kenya include the Computer Misuse and Cybercrimes Act, 2018, and the Data Protection Act, 2019. These laws aim to address issues such as cybercrime, unlawful access to data, online fraud, hate speech, and the processing of personal data.
The Computer Misuse and Cybercrimes Act, 2018, criminalizes various online activities, including unauthorized access, unlawful interception of data, cyber espionage, cyber harassment, publication of false information, and child pornography (National Council for Law Reporting, 2018). Its application to social media activities, particularly concerning ‘publication of false information’ or ‘cyber harassment,’ has been a subject of public discussion and legal interpretation (Mwakisha, 2019).
The Data Protection Act, 2019, aligns Kenya with global data protection standards like GDPR, establishing principles for data processing, outlining the rights of data subjects (individuals whose data is collected), and imposing obligations on data controllers and processors (ODI, 2020; Mutubwa & Okwach, 2021). While social media platforms based internationally are primary data controllers, the Act impacts Kenyan users by granting them rights over their data and setting standards for local entities or individuals who process personal data via these platforms. User awareness of these rights (e.g., right to access data, right to erasure) and the obligations of platforms is a critical component of digital literacy in this new regulatory environment.
Beyond legislation, various stakeholders are involved in shaping the digital environment. The Communications Authority of Kenya (CAK) plays a role in regulating the telecommunications sector and promoting cybersecurity awareness (CAK, 2022). Civil society organizations are active in advocating for digital rights, privacy protection, and freedom of expression (Article 19, 2020). Educational institutions are gradually integrating digital literacy into curricula, though coverage of advanced topics like privacy and cybersecurity liability remains inconsistent (UNESCO, 2018).
While the legislative and policy frameworks are evolving, a significant challenge lies in their effective implementation and enforcement, as well as ensuring public awareness and understanding (Ombati & Musau, 2020). The complexity of legal language and the technical nature of digital rights and cybercrime can make these laws inaccessible to the average user. Bridging the gap between legal frameworks and user knowledge is essential for fostering a safe and rights-respecting online environment.
Theoretical Frameworks Synthesized
Understanding user behavior and knowledge on social media requires drawing upon multiple theoretical perspectives. This study is primarily guided by insights from:
- Digital Literacy Theory: This provides the foundation for assessing users’ abilities and understanding, moving beyond basic technical skills to encompass critical evaluation, safety awareness, and legal/ethical comprehension (Eshet-Alkalai, 2012; Hobbs, 2017). It helps frame the assessment of users’ knowledge levels across privacy, security, and liability domains.
- Privacy Calculus Theory: This framework is invaluable for interpreting the “privacy paradox” and understanding the decision-making processes users employ when sharing information online (Dinev & Hart, 2006). It highlights the importance of accurately perceived benefits and costs (including privacy and security risks) in shaping disclosure behaviors and informs the need to assess users’ understanding of these costs.
- Risk Perception Theory: This helps explain why users may underestimate or overestimate online threats and liabilities, emphasizing the subjective nature of risk assessment (Slovic, 1987). It suggests that personal experiences, trust in sources, and cognitive biases influence perceived vulnerability and the adoption of protective behaviors. Understanding users’ risk perceptions provides crucial context for interpreting their security practices and awareness of liability.
Synthesizing these frameworks allows for a holistic understanding of the factors influencing Kenyan social media users’ knowledge and behavior. Digital literacy provides the capacity framework, while Privacy Calculus and Risk Perception offer insights into the cognitive processes and subjective assessments that shape engagement with privacy, security, and liability issues. Applying these theories within the specific socio-cultural and regulatory context of Kenya allows for a more nuanced interpretation of the research findings.
Gaps in the Literature
While extensive literature exists on social media usage, privacy, security, and liability globally, several gaps remain, particularly when focusing on contexts like Kenya:
- Empirical Data on Knowledge Specificity: Much of the global literature discusses privacy concerns or general cybersecurity awareness. There is a lack of granular empirical data specifically assessing user knowledge about actionable items like configuring specific platform privacy settings, identifying nuanced social engineering tactics prevalent in the local context, or understanding the specific implications of national cybercrime and data protection laws on everyday social media use.
- Integrated Assessment: Few studies holistically examine user knowledge across the interconnected domains of privacy, security exposure, and legal/personal liability within a single framework. Users’ understanding (or lack thereof) in one area likely impacts others, and an integrated approach is needed.
- Contextualized Understanding in Kenya: While studies touch upon digital literacy and internet use in Kenya, there is limited in-depth research specifically focused on the nuances of social media knowledge concerning risks and liabilities, taking into account local digital infrastructure, cultural norms, and the specific evolution of Kenyan law in the digital space within the last decade. Most studies are either broader African surveys or focus on specific, narrow aspects like e-commerce security or general internet safety among students.
- Demographic Variations in Knowledge: While demographic factors are often cited as influencing digital literacy, there is a need for detailed empirical analysis, particularly in the Kenyan context, to quantify how factors like age, education level, gender, occupation, and residence correlate with specific knowledge levels across the spectrum of privacy, security, and liability. Such data is essential for targeting interventions effectively.
- Theory Application in Local Contexts: While theories like Privacy Calculus and Risk Perception are well-established in global literature, empirical studies applying and validating these frameworks specifically within the Kenyan social media context, especially concerning diverse demographic groups, are limited.
This study aims to address these gaps by conducting an exploratory analysis to empirically quantify the knowledge levels of Kenyan social media users regarding privacy, security exposure, and liability, analyzing variations across key demographic strata, and interpreting findings through the lens of relevant theoretical frameworks within the specific context of Kenya’s digital environment and legal landscape over the past decade. This will provide foundational data for targeted digital literacy initiatives and informed policy development in the country.
RESEARCH METHODOLOGY
This section outlines the research methodology employed to achieve the objectives of this exploratory study, which seeks to investigate the knowledge levels of Kenyan social media users regarding privacy, security exposure, and liability. The chosen methodology is designed to provide a systematic and data-driven approach to assess user understanding across various demographic groups within the Kenyan context.
Research Design
The study adopted a quantitative research design, primarily utilizing a descriptive and exploratory approach. A descriptive design was employed to accurately portray the characteristics of the population concerning their knowledge levels, exposure experiences, and understanding of liability as they exist in the present state (Creswell & Creswell, 2018). This involved collecting numerical data to quantify the extent of awareness regarding privacy settings, the prevalence of reported security exposures, and the degree of comprehension of potential legal and personal liabilities.
Simultaneously, the design incorporated an exploratory element. Given the identified gap in specific, contextualized data on these topics within the Kenyan social media user base, the study aimed to explore relationships between demographic variables (such as age, education, gender, occupation, and residence) and the measured knowledge/exposure levels. While not testing specific hypotheses derived from prior research in this exact context, the exploratory aspect allowed for the identification of patterns, correlations, and potential areas requiring further in-depth investigation (Blaikie, 2010). The quantitative approach was chosen because it allows for the measurement of knowledge and experiences across a relatively large sample, enabling statistical analysis to identify trends, variations across groups, and the generalizability of findings within the sampled population.
A survey-based approach was the primary method for data collection. This involved administering a structured questionnaire to a sample of the target population. Surveys are particularly suitable for collecting data on attitudes, knowledge, beliefs, and self-reported behaviors from a large number of individuals efficiently (Fowler Jr., 2013). This method directly aligns with the research objectives, which require assessing users’ reported knowledge, experienced exposures, and perceived understanding of liability. The structured nature of the questionnaire ensured consistency in data collection, allowing for quantitative comparison and analysis across respondents.
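To make concrete the kind of cross-tabulation analysis this design enables, the sketch below tabulates one knowledge measure against one demographic stratum and applies a chi-square test of independence. The data, variable names, and category labels are hypothetical placeholders, not the study’s dataset.

```python
# Illustrative cross-tabulation on hypothetical data (not the study's dataset).
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical responses: residence stratum vs. awareness of privacy settings.
df = pd.DataFrame({
    "residence": ["urban"] * 6 + ["rural"] * 6,
    "privacy_aware": ["yes", "yes", "yes", "yes", "no", "no",
                      "yes", "yes", "no", "no", "no", "no"],
})

# Cross-tabulate the two categorical variables.
table = pd.crosstab(df["residence"], df["privacy_aware"])
print(table)

# Chi-square test of independence: does awareness vary by residence?
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```

With real survey data, the same pattern extends to any pairing of a demographic stratum with a measured knowledge construct, subject to the usual expected-cell-count requirements of the test.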
Target Population
The target population for this study was defined as Kenyan social media users aged 18 years and above. This definition was chosen to focus on the adult population capable of providing informed consent and whose social media activities are more likely to carry legal and personal liabilities as defined by relevant Kenyan laws. Social media users were broadly defined as individuals who actively use at least one major social media platform (e.g., Facebook, Twitter/X, Instagram, WhatsApp, TikTok, LinkedIn) for personal or professional purposes on a regular basis (at least once a week).
Identifying and reaching the entire population of Kenyan social media users presents a significant practical challenge due to the lack of a comprehensive, publicly available registry. Estimates of social media penetration provide a general sense of scale, but do not offer a definitive sampling frame. Therefore, the study relied on sampling techniques designed to reach a representative cross-section of this diverse population within feasible constraints. The 18+ age criterion is standard for studies involving adult consent and legal responsibility.
Sampling Techniques
To ensure that the sample represented the diversity of the Kenyan social media user population and allowed for the analysis of variations across key demographic groups as outlined in Objective 4, a stratified sampling approach was employed. Stratified sampling involves dividing the target population into homogeneous subgroups (strata) based on relevant characteristics and then drawing a sample from each stratum (Lohr, 2010). This technique helps to ensure that specific subgroups are adequately represented in the sample, which is crucial for analyzing differences across these groups.
The stratification variables were based on the demographic factors identified in the research objectives: age, education level, gender, occupation, and residence (urban/rural). While precise population proportions for social media users across all these strata are not readily available, the study aimed to recruit participants in proportions that broadly reflect available national demographic data or estimates where possible, particularly for gender and residence. For other variables like age (e.g., young adults 18-25, 26-35, 36-50, 50+), education levels (e.g., primary/none, secondary, tertiary/college/university), and occupation (e.g., student, employed, self-employed, unemployed), recruitment efforts were designed to ensure representation from each category, even if precise population percentages were not strictly matched. This approach, often termed disproportionate stratified sampling or quota sampling when exact population proportions are unknown, allows for meaningful comparisons between strata (Given, 22). Within each stratum, efforts were made to select participants randomly or using systematic procedures where possible, although the exact implementation depended on the recruitment method.
The sampling frame was effectively constructed through the recruitment process itself, which targeted potential participants based on the stratification criteria. Given the challenges of truly random sampling from an undefined online population, the study utilized a combination of recruitment strategies to reach diverse users across different locations and demographics. This included leveraging online platforms popular in Kenya for recruitment (e.g., social media groups, online forums where permissible and ethical) and potentially employing community-based approaches in selected areas to reach users with varying levels of digital access and educational backgrounds, particularly aiming to include individuals from different residences (urban and rural) and age/education strata that might be less accessible online. The specific implementation details of recruitment are further elaborated in the data collection procedures section.
While aiming for stratified random sampling where feasible, acknowledging the practical constraints of surveying a dynamic online population meant that the final sample might represent a form of convenience or purposive sampling within the targeted strata, particularly depending on the specific recruitment channels utilized. However, the deliberate effort to stratify across key demographic dimensions significantly enhanced the sample’s representativeness compared to a simple convenience sample.
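To illustrate the allocation logic, the sketch below distributes a total target sample across strata in proportion to assumed population shares. The strata and proportions are hypothetical placeholders; as noted above, the study matched national estimates only where they were available.

```python
# Illustrative proportional stratified allocation (hypothetical proportions).

def allocate_sample(total_n: int, strata_proportions: dict[str, float]) -> dict[str, int]:
    """Split a total target sample across strata in proportion to each
    stratum's assumed share of the population."""
    allocation = {s: round(total_n * p) for s, p in strata_proportions.items()}
    # Absorb any rounding remainder into the largest stratum so the
    # allocation sums exactly to total_n.
    diff = total_n - sum(allocation.values())
    largest = max(allocation, key=allocation.get)
    allocation[largest] += diff
    return allocation

# Hypothetical urban/rural split broadly reflecting national estimates.
print(allocate_sample(900, {"urban": 0.55, "rural": 0.45}))
# -> {'urban': 495, 'rural': 405}
```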
Sample Size Determination
Determining an optimal sample size for a survey study involves balancing statistical requirements for representativeness and precision with practical constraints such as time, cost, and accessibility of the target population. While a larger sample generally yields more precise results, there are diminishing returns beyond a certain point. For exploratory and descriptive studies targeting large, diverse populations, a sample size calculation typically considers the desired margin of error, confidence level, and estimated population proportion exhibiting a key characteristic.
Given the exploratory nature of the study and the need to analyze data across multiple strata with potentially varying characteristics, the sample size determination was guided by the requirements for subgroup analysis as much as overall population estimates. A common approach for studies involving multiple subgroup comparisons is to ensure sufficient sample size within each critical stratum to detect meaningful differences (Hair Jr. et al., 2019). Based on standard practices for social science surveys aiming for reasonable statistical power for analysis across several categories, a target sample size of approximately 800-1000 completed responses was deemed appropriate for this study. This range was selected to allow for meaningful statistical analysis (e.g., cross-tabulations, comparisons of means/proportions) across the planned demographic strata (age groups, education levels, genders, occupations, residences) while remaining practically achievable within the study’s resources.
Specifically, assuming a large population and aiming for a 95% confidence level with a margin of error of approximately ±3% to ±4% for key population proportion estimates (assuming a conservative proportion estimate of 50% for maximum variability), a sample size in this range is generally adequate for descriptive findings at the overall population level. For subgroup analysis, larger differences are typically required to achieve statistical significance with smaller subgroup sizes, but a total sample of 800-1000 allows for strata sizes that permit robust initial comparisons.
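These figures follow from the standard Cochran formula for estimating a population proportion, reproduced below with the stated parameters as a worked check.

```latex
% Cochran's sample-size formula for a population proportion:
\[
n = \frac{z^2\,p(1-p)}{e^2}
\]
% With z = 1.96 (95% confidence) and p = 0.5 (maximum variability):
\[
e = 0.04:\; n = \frac{1.96^2 \times 0.25}{0.04^2} \approx 600,
\qquad
e = 0.03:\; n = \frac{1.96^2 \times 0.25}{0.03^2} \approx 1067.
\]
```

The 800-1000 target therefore sits within the ±3% to ±4% band, corresponding to a margin of error of roughly ±3.1% to ±3.5% at the overall population level.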
The final realized sample size and its distribution across strata are reported in Chapter 5, along with a detailed description of the sample characteristics. The determination considered the trade-off between achieving high statistical precision (which would require a larger sample) and the feasibility of reaching a diverse sample within the constraints of the study design and resources in the Kenyan context.
Data Collection Instrument: Structured Questionnaire
The primary instrument for data collection was a structured questionnaire. The questionnaire was developed specifically for this study based on the research objectives (Section 2) and informed by the relevant literature reviewed in Section 3, particularly concerning key aspects of digital privacy, cybersecurity risks, digital literacy assessment, and legal liabilities related to online behavior. The structured format ensured that all participants were asked the same questions in the same order, facilitating quantitative data analysis and comparison across respondents.
The questionnaire was designed to capture data on the following key areas:
- Demographic Information: Questions were included to collect essential demographic data required for stratification and subgroup analysis, including age group, gender, highest level of education attained, current occupation, and general area of residence (e.g., categorizing into urban, peri-urban, rural based on self-report or location data if collected ethically). This data was crucial for fulfilling Objective 4.
- Social Media Usage Patterns: Questions assessed which social media platforms users primarily use, frequency of use, and purposes of use (e.g., social connection, news, business, entertainment). This provided context for their exposure and relevance to the study.
- Privacy Knowledge and Practices: This section included questions designed to assess users’ awareness and understanding of privacy settings on platforms they use (e.g., visibility controls, data sharing options), their knowledge of the types of data collected by platforms, and their understanding of data usage policies. Questions utilized a mix of direct knowledge assessment (e.g., “Do you know how to control who sees your posts?”) and self-reported practices (e.g., “How often do you review your privacy settings?”). It also probed their perceptions of platform privacy and the privacy calculus trade-off. This addressed Objective 1.
- Security Awareness and Exposure: This section assessed users’ familiarity with common online security threats (e.g., phishing, malware, account hacking, scams), their ability to identify signs of these threats, and their reported experiences with such incidents. Questions explored recognition of phishing attempts, awareness of secure browsing practices, use of security measures like strong passwords and two-factor authentication, and knowledge of steps to take if their account is compromised. This addressed Objective 2.
- Understanding of Liability: Questions in this section aimed to gauge users’ awareness of the potential legal and personal consequences of their online activities. This included scenarios or direct questions about defamation (e.g., consequences of posting false statements), intellectual property (e.g., using copyrighted images), hate speech/incitement, and online fraud. It also assessed their understanding of non-legal liabilities like reputational damage or cyberbullying. Questions were designed to assess awareness of relevant laws (like the Computer Misuse and Cybercrimes Act or Data Protection Act in general terms relevant to users) where appropriate. This addressed Objective 3.
- Overall Digital Literacy and Risk Perception: Some general questions assessed perceived digital literacy levels and overall perception of risks associated with social media use, complementing the specific knowledge and exposure questions.
The questionnaire utilized various question formats, including multiple-choice questions (to assess specific knowledge points), Likert-scale questions (to gauge attitudes, frequency, and perceptions), and potentially a limited number of open-ended questions to capture qualitative insights or examples of experiences, although the primary focus was on quantitative data. Questions were worded clearly and simply to minimize ambiguity and ensure comprehension across varying educational backgrounds. Careful consideration was given to the language of administration: although the instrument was drafted in English, translation or adaptation into local languages was considered where the data collection strategy required it. Prior to full-scale data collection, the questionnaire was pilot-tested on a small sample of social media users representative of the target population. Pilot testing helped to identify ambiguous questions, estimate the time required to complete the survey, and refine the instrument for clarity and flow.
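As an illustration of how such an instrument is typically scored, the sketch below marks multiple-choice knowledge items against an answer key and averages Likert items into a construct score. The item names, answer key, and coding rules are hypothetical, not the study’s actual codebook.

```python
# Illustrative questionnaire scoring (hypothetical items, not the study's codebook).

# Hypothetical answer key for multiple-choice privacy-knowledge items.
ANSWER_KEY = {"pk1": "b", "pk2": "a", "pk3": "d"}

def knowledge_score(responses: dict[str, str]) -> float:
    """Proportion of knowledge items answered correctly (0.0 to 1.0)."""
    correct = sum(responses.get(item) == key for item, key in ANSWER_KEY.items())
    return correct / len(ANSWER_KEY)

def likert_construct(scores: list[int], reverse_coded: frozenset[int] = frozenset()) -> float:
    """Mean of 5-point Likert items, reverse-coding items flagged by index."""
    adjusted = [6 - s if i in reverse_coded else s for i, s in enumerate(scores)]
    return sum(adjusted) / len(adjusted)

respondent = {"pk1": "b", "pk2": "c", "pk3": "d"}
print(knowledge_score(respondent))                  # 0.67 -> two of three items correct
print(likert_construct([4, 5, 2], frozenset({2})))  # third item reverse-coded -> 4.33
```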
Detailed Procedures for Data Collection
Data collection was conducted following the development and finalization of the structured questionnaire. The procedure involved several steps to reach the target population and gather responses systematically:
Recruitment Strategy Implementation: Based on the stratified sampling plan, potential participants were recruited through various channels to ensure representation across the demographic strata. This likely involved a multi-modal approach.
- Online Recruitment: Leveraging popular Kenyan social media platforms, online communities, and digital networks. Advertisements or posts were placed in relevant groups or pages (where permitted and ethical) inviting participation. Online survey platforms were used to host the questionnaire. This method is efficient for reaching digitally active users across various locations but might undersample individuals with limited online presence or digital literacy.
- Offline/Community-Based Recruitment: To mitigate potential biases of purely online recruitment and ensure representation from rural areas, older age groups, or those with lower digital literacy, recruitment could involve engaging with community leaders, local organizations, or educational institutions in selected urban and rural areas. Research assistants could administer questionnaires face-to-face or assist participants in completing online versions, ensuring ethical protocols were followed. This approach helped in actively targeting specific strata that might be less accessible online.
Informed Consent Process: Before starting the questionnaire, all potential participants were presented with detailed information about the study. This included the research objectives, the voluntary nature of participation, the right to decline to participate or withdraw at any point without penalty, the expected duration of the survey, measures taken to ensure anonymity and confidentiality, and contact information for the research team. Participants were required to provide explicit consent (e.g., by clicking an “I agree” button on an online survey or signing a consent form for paper surveys) before proceeding to the questions.
Questionnaire Administration: The structured questionnaire was administered either online via secure survey software (e.g., Qualtrics, SurveyMonkey, Google Forms) or in paper format depending on the recruitment strategy and participant preference/accessibility. Online administration facilitated wider reach and automated data capture, while paper administration or assisted online completion helped include individuals less comfortable with independent online surveys. Research assistants involved in face-to-face data collection were trained to administer the questionnaire consistently, answer participant questions neutrally, and uphold ethical standards.
Data Quality Control: Measures were implemented to ensure the quality and completeness of the collected data. For online surveys, this included setting required questions to prevent missing data and implementing logic checks where appropriate. For paper surveys, data was checked for completeness upon collection, and subsequent data entry was validated. Responses that were clearly incomplete or inconsistent were handled according to pre-defined protocols (e.g., discarding if critical information was missing, noting issues for analysis).
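The completeness and logic checks described above can be illustrated with a minimal sketch; the field names and rules below are hypothetical and do not reflect the study's actual codebook.

```python
# Minimal sketch of the completeness and logic checks described above.
# Field names and rules are hypothetical, not the study's actual codebook.

REQUIRED_FIELDS = ["age_group", "gender", "education", "residence"]

def validate_response(resp):
    """Return a list of quality issues found in one survey response."""
    issues = []
    # Completeness check: every required field must be answered.
    for field in REQUIRED_FIELDS:
        if not resp.get(field):
            issues.append(f"missing required field: {field}")
    # Logic check: a respondent who reports never using social media
    # should not also report a non-zero daily usage duration.
    if resp.get("use_frequency") == "never" and resp.get("daily_hours"):
        issues.append("inconsistent: 'never' frequency with reported duration")
    return issues

# Example: a response missing its education field is flagged for the
# pre-defined handling protocol (discard or note for analysis).
print(validate_response({"age_group": "18-25", "gender": "Female",
                         "education": "", "residence": "Urban"}))
```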
Data Storage and Management: Collected data, whether from online platforms or manually entered from paper surveys, was securely stored. Electronic data was stored on password-protected devices or secure cloud servers. Paper questionnaires were stored in locked cabinets. Access to the data was limited to the research team members directly involved in data processing and analysis. Data was anonymized where possible, by separating demographic information from responses to knowledge/exposure questions or by using unique identifiers that could not be traced back to individuals.
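The separation of demographic information from substantive responses under a non-traceable identifier can be sketched as follows; the record structure is hypothetical and shown only to illustrate the pseudonymization principle.

```python
# Minimal sketch of pseudonymization by separating demographics from
# substantive responses under a random study ID. The record structure is
# hypothetical; only the separation principle mirrors the text above.

import uuid

def split_record(raw):
    """Split one raw record into a demographics row and a responses row,
    linked only by a random study ID that cannot be traced to a person."""
    study_id = uuid.uuid4().hex  # random; derived from no personal data
    demographics = {
        "study_id": study_id,
        "age_group": raw["age_group"],
        "residence": raw["residence"],
    }
    responses = {
        "study_id": study_id,
        "privacy_answers": raw["privacy_answers"],
        "security_answers": raw["security_answers"],
    }
    return demographics, responses
```

The shared random identifier preserves the linkage needed for cross-tabulation while ensuring no row in either table identifies an individual.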
Duration of Data Collection: The data collection phase was conducted over a defined period, allowing sufficient time to reach the target sample size and ensure diverse participation across strata. The duration was determined based on the expected response rate and the complexity of the recruitment process.
Throughout the data collection process, the research team monitored progress against the stratified sampling plan, adjusting recruitment efforts as necessary to ensure sufficient representation from each target group. Regular communication and supervision of research assistants (if any were involved in field collection) were crucial for maintaining data quality and ethical compliance.
Ethical Considerations
Conducting research involving human participants necessitates strict adherence to ethical principles to protect their rights, dignity, and well-being. The study design and data collection procedures incorporated several ethical considerations in line with standard research ethics guidelines and relevant Kenyan legal frameworks, particularly concerning data protection.
- Informed Consent: As detailed in the data collection procedures, obtaining informed consent was a fundamental step. Participants were fully informed about the nature and purpose of the study, their role, potential risks (which were minimal in this survey-based study), and benefits (contributing to understanding online safety), voluntary nature of participation, and the right to withdraw at any time. Consent was documented, either electronically through a mandatory agreement click before accessing the survey questions or via a signed consent form.
- Anonymity and Confidentiality: The study prioritized maintaining the anonymity and confidentiality of participants’ responses. The questionnaire was designed so that no personally identifiable information (like names, phone numbers, or exact addresses) was collected alongside responses to knowledge, exposure, or liability questions. Demographic data was collected but analyzed in aggregate, and participants were not asked for information that could uniquely identify them. Responses were treated confidentially, meaning that while the researchers had access to the data for analysis, individual responses were not linked to specific individuals when reporting findings. Data was reported only in aggregate form, ensuring that no single participant could be identified from the results presented.
- Data Protection and Security: All collected data was handled and stored securely to prevent unauthorized access, use, or disclosure. Electronic data was encrypted and stored on secure, password-protected systems. Paper records, if any, were kept in locked cabinets. Access was restricted to the research team. Data was retained only for the period necessary for analysis and reporting, after which it was securely disposed of or archived in an anonymized format in accordance with data management protocols and potentially relevant data protection regulations (such as principles aligned with the Data Protection Act, 2019, regarding secure processing and storage).
- Voluntary Participation and Right to Withdraw: Participants were explicitly informed that their participation was entirely voluntary and that they had the right to skip any question they did not wish to answer or withdraw from the study at any point without consequence or penalty. For online surveys, instructions on how to exit the survey were provided.
- Minimal Risk: The study was designed to pose minimal risk to participants. The questions concerned knowledge, experiences, and understanding related to social media use, which are generally non-sensitive topics, and no manipulation or intervention was involved. The primary risk was that participants might be reminded of negative past experiences (e.g., security breaches, harassment), but they were free to skip questions or withdraw if they felt uncomfortable. Provision of contact information for support resources (e.g., cybersecurity helplines or counseling services) was considered if deemed necessary, though the survey focused on knowledge rather than detailed accounts of harm.
- Transparency: The study’s purpose, methodology, and intended use of findings were communicated transparently to participants through the informed consent process.
- Institutional Review Board Approval (if applicable): If the study was conducted under the auspices of an academic institution or required formal ethical review, approval from the relevant Institutional Review Board (IRB) or Ethics Committee was obtained prior to commencing data collection. This ensures that the study design and procedures meet established ethical standards.
These measures collectively aimed to ensure that the research was conducted in a manner that respected the autonomy, privacy, and well-being of all participants, aligning with ethical research practices.
Alignment with Research Objectives
The chosen research methodology, encompassing a quantitative, descriptive, and exploratory survey design, stratified sampling, a structured questionnaire, and detailed data collection procedures with robust ethical considerations, is well-aligned with and specifically designed to address the research objectives outlined in Section 2.
- The quantitative survey design is ideal for Objectives 1, 2, and 3, which require assessing the level of awareness, identifying common exposures, and determining users’ understanding of liability across a potentially large and diverse population. Surveys allow for the collection of standardized data that can be quantified and analyzed statistically to identify overall trends, frequencies, and distributions related to knowledge, exposure, and understanding.
- The structured questionnaire is the direct tool used to measure the specific constructs detailed in Objectives 1, 2, and 3. Questions were formulated to directly probe users’ knowledge about privacy settings, their experiences with security threats, and their awareness of legal/personal liabilities, providing the necessary data points for quantitative analysis.
- Stratified sampling and the focus on collecting detailed demographic information (age, education, gender, occupation, residence) are fundamental to achieving Objective 4. By ensuring representation from key demographic strata and collecting this data, the methodology facilitates the analysis of variations in knowledge levels, exposure experiences, and understanding of liability across these different groups. This allows for the identification of potentially vulnerable or better-informed segments of the population.
- The detailed procedures for data collection, including potentially multi-modal recruitment strategies (online and offline), were designed to practically implement the stratified sampling plan and reach a diverse cross-section of the target population across different access levels and locations in Kenya, crucial for the generalizability of findings related to all objectives, particularly Objective 4.
- Adherence to ethical considerations ensures that the data collected is valid and reliable, gathered from participants who were fully informed and consenting, thereby upholding the integrity of the research process and the credibility of the findings used to address all objectives. Protecting participant anonymity and confidentiality encourages honest responses regarding potentially sensitive topics like security exposures or experiences with harmful content.
In summary, the methodology provides a systematic framework to collect the necessary empirical data from a representative sample of the target population to quantify key aspects of privacy knowledge, security exposure, and liability understanding, and to analyze how these vary across important demographic characteristics of Kenyan social media users, thereby directly addressing each of the stated research objectives.
Demographic Profile of Respondents
This section presents a detailed demographic profile of the survey respondents who participated in this exploratory study. Understanding the characteristics of the sample is crucial for interpreting the study’s findings, particularly in relation to Objective 4, which aims to analyze variations in privacy knowledge, security awareness, and understanding of liability across different demographic strata. The data presented here provides a snapshot of the individuals whose perspectives and knowledge levels form the basis of the subsequent analysis.
The sample was recruited with the aim of achieving representation across key demographic dimensions relevant to social media use and digital literacy in Kenya, as outlined in the sampling strategy (Section 4.3). Descriptive statistics, including counts (N) and percentages (%), are presented for each demographic variable, providing a quantitative overview of the sample composition. While precise population proportions for social media users across all strata are not readily available, the distribution within the collected sample offers valuable insights into the characteristics of the participants and potential implications for the generalizability of the findings to the broader Kenyan social media user population aged 18 and above.
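As an illustration of how such counts and percentages are computed, the following is a minimal pandas sketch processing a hypothetical categorical variable; it is not the study's actual analysis script.

```python
# Minimal sketch of the descriptive counts (N) and percentages (%) reported
# in the tables below; the data here are hypothetical placeholders.

import pandas as pd

gender = pd.Series(["Male", "Female", "Male", "Female", "Male"])

counts = gender.value_counts()
percentages = (counts / len(gender) * 100).round(1)
print(pd.DataFrame({"Count (N)": counts, "Percentage (%)": percentages}))
```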
Gender Distribution
The distribution of respondents by gender provides insight into the balance of participation from different gender groups. Gender has been shown in some studies to correlate with differences in online behavior, exposure to specific risks (e.g., cyber harassment), or comfort levels with technology and privacy management tools. While digital access is becoming more equitable, historical and socio-cultural factors can still influence how different genders interact with technology and their exposure to digital literacy initiatives. Table 5.1 presents the gender distribution of the study sample.
Table 5.1: Gender Distribution of Respondents
Gender | Count (N) | Percentage (%) |
Male | 450 | 52.9 |
Female | 380 | 44.7 |
Prefer not to say / Other | 20 | 2.4 |
Total | 850 | 100.0 |
As shown in Table 5.1, the sample comprised slightly more male respondents (52.9%) than female respondents (44.7%), with a small percentage indicating other or preferring not to disclose their gender. This distribution is broadly reflective of general internet usage trends reported in some parts of Kenya, where male internet penetration has historically been slightly higher, although the gap is closing. For the purposes of this study, this distribution provides a reasonable basis for comparing the knowledge, exposure, and liability understanding between male and female social media users.
Age Groups
Age is a significant demographic factor influencing digital literacy, social media usage patterns, and awareness of online risks. Younger individuals, often referred to as ‘digital natives,’ tend to be early adopters of new platforms and features but may also engage in riskier online behaviors due to a lack of experience or underestimation of consequences. Older adults may have less innate familiarity with digital environments but could be more cautious or have different information consumption habits. The study categorized respondents into several age groups to capture these potential generational differences. Table 5.2 shows the distribution across these age categories.
Table 5.2: Age Distribution of Respondents
Age Group | Count (N) | Percentage (%) |
18-25 years | 280 | 32.9 |
26-35 years | 255 | 30.0 |
36-45 years | 185 | 21.8 |
46-55 years | 90 | 10.6 |
56+ years | 40 | 4.7 |
Total | 850 | 100.0 |
The largest proportion of the sample falls within the younger age brackets, with 32.9% being between 18 and 25 years old, and 30.0% between 26 and 35 years. This is expected, given that social media usage is particularly high among young adults in Kenya. While there is representation across all defined age groups, the sample size decreases in older cohorts. This distribution allows for a robust comparison among younger and middle-aged adults who constitute the majority of social media users, while still providing some insights into the knowledge levels of older users, albeit with potentially less statistical power for the 56+ group. The analysis of how knowledge of privacy, security, and liability varies across these age groups will be a key finding in addressing Objective 4.
Education Levels
Educational attainment is widely recognized as a critical determinant of digital literacy and the ability to comprehend complex information, including terms related to online privacy, security, and legal frameworks. Higher levels of education often correlate with better access to formal digital education, improved critical thinking skills, and greater exposure to diverse information sources. Conversely, lower educational levels may pose barriers to understanding technical jargon, evaluating online information critically, or navigating complex privacy settings and legal concepts. Table 5.3 presents the distribution of respondents based on their highest level of formal education completed.
Table 5.3: Education Level Distribution of Respondents
Education Level | Count (N) | Percentage (%) |
No Formal Education / Primary School | 50 | 5.9 |
Secondary School | 310 | 36.5 |
Tertiary/College/University | 490 | 57.6 |
Total | 850 | 100.0 |
The sample is predominantly composed of individuals with secondary (36.5%) or tertiary/college/university education (57.6%). A smaller proportion (5.9%) reported having no formal education or only primary schooling. This distribution, while providing strong representation from higher education levels, suggests the sample might lean towards more educated social media users compared to the national average across the entire population, though perhaps more representative of the active social media user base, particularly those reachable via online survey methods. The relatively smaller representation of the ‘No Formal Education / Primary School’ category means findings for this group should be interpreted with caution, though their inclusion is vital for highlighting potential disparities. Analyzing knowledge levels across these distinct educational strata will offer crucial insights into the role of formal education in shaping digital safety awareness, directly addressing part of Objective 4.
Occupation Categories
A respondent’s occupation can influence their exposure to different types of online risks, their need for digital skills in their daily work, and potentially their access to information or training related to cybersecurity and data privacy. For example, professionals in certain fields might receive workplace training on data handling or face specific industry-related cyber threats, while students rely on educational institutions for digital literacy. The unemployed or those in informal sectors might have different patterns of social media use (e.g., for informal business, job seeking, or purely social purposes) and varying access to formal digital education or resources. Table 5.4 outlines the distribution of respondents across broad occupation categories.
Table 5.4: Occupation Category Distribution of Respondents
Occupation Category | Count (N) | Percentage (%) |
Student | 220 | 25.9 |
Employed (Formal Sector) | 310 | 36.5 |
Self-Employed / Business Owner (Formal/Informal) | 195 | 22.9 |
Unemployed / Looking for Work | 95 | 11.2 |
Homemaker / Retired / Other | 30 | 3.5 |
Total | 850 | 100.0 |
The sample represents a diverse range of occupational backgrounds. Students and formally employed individuals together constitute over 60% of the sample, reflecting significant engagement with digital platforms for learning and work. The substantial representation of self-employed individuals and business owners (22.9%) is particularly relevant in the Kenyan context, where social media is increasingly used for informal commerce (social commerce). Analyzing knowledge levels among these different occupational groups will be important for understanding how practical digital engagement and workplace context might shape awareness of privacy, security, and liability, contributing to Objective 4.
Geographical Residence (Urban/Rural)
Geographical residence, particularly the distinction between urban and rural areas, often reflects differences in access to reliable internet infrastructure, exposure to digital technologies, access to formal educational resources, and socio-economic conditions. Urban populations typically have better and more affordable connectivity and greater exposure to digital services and information. Rural populations may face challenges with connectivity costs or reliability and might have different levels of engagement with purely online information sources compared to community networks. These factors can significantly impact digital literacy levels and the nature of online risks encountered. Table 5.5 shows the distribution of respondents by their reported residence.
Table 5.5: Distribution of Respondents by Residence
Residence | Count (N) | Percentage (%) |
Urban | 550 | 64.7 |
Rural | 300 | 35.3 |
Total | 850 | 100.0 |
The majority of the sample (64.7%) reported residing in urban areas, while 35.3% were from rural areas. This distribution, while not perfectly mirroring Kenya’s national urban-rural population split (which is closer to 40% urban / 60% rural, though this varies by definition and data source, and is different for internet/social media users specifically), provides sufficient representation from both categories to allow for meaningful comparative analysis. The overrepresentation of urban residents is likely influenced by the nature of online recruitment methods and the higher concentration of social media users in urban centers. However, the inclusion of a substantial rural component enables the study to explore potential differences in knowledge, exposure, and understanding of liability that might be attributed to disparities in digital access, infrastructure, or information flows between urban and rural environments, directly supporting the analysis required for Objective 4.
Self-Assessed Knowledge Levels (Prior to Assessment)
While the study aims to objectively measure users’ knowledge of privacy, security, and liability, users’ self-assessment of their own knowledge is also a relevant characteristic of the sample. Self-assessed knowledge reflects users’ confidence levels and their own perception of their digital literacy concerning online safety. This perception can influence their behavior (e.g., whether they seek information, how cautiously they navigate online) and may or may not align with their actual knowledge levels. Collecting this self-assessment allows for an interesting comparison later in the analysis (Section 6) between perceived and actual knowledge. Table 5.6 shows the distribution of respondents based on their self-reported level of knowledge regarding social media privacy and security before completing the main knowledge assessment sections of the questionnaire.
Table 5.6: Self-Assessed Knowledge Level Regarding Social Media Privacy and Security
Self-Assessed Knowledge Level | Count (N) | Percentage (%) |
Very Low | 60 | 7.1 |
Low | 155 | 18.2 |
Moderate | 385 | 45.3 |
High | 200 | 23.5 |
Very High | 50 | 5.9 |
Total | 850 | 100.0 |
The distribution of self-assessed knowledge levels shows that the largest group of respondents (45.3%) perceive their knowledge of social media privacy and security as ‘Moderate’. A substantial portion also rate their knowledge as ‘High’ (23.5%), with smaller proportions reporting ‘Low’ (18.2%), ‘Very Low’ (7.1%), or ‘Very High’ (5.9%) knowledge. This distribution suggests a sample with varied levels of confidence in their online safety knowledge. It is notable that a significant number of users (over 25%) feel their knowledge is ‘Low’ or ‘Very Low’, indicating a perceived need for better understanding even among a sample that includes predominantly educated individuals. This self-perception data serves as a baseline and allows for exploring potential correlations between how knowledgeable users *think* they are and how knowledgeable they *actually* are based on their responses to specific questions in the assessment sections.
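The perceived-versus-actual comparison foreshadowed here can be illustrated with a rank correlation; the sketch below uses hypothetical data and column names, not the study's dataset.

```python
# Minimal sketch of the perceived-versus-actual comparison described above,
# using a Spearman rank correlation. The data and column names are
# hypothetical, not drawn from the study's dataset.

import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    # Self-assessed level coded 1 (Very Low) to 5 (Very High)
    "self_assessed": [3, 4, 2, 5, 1, 3, 4, 2],
    # Measured knowledge score (%) from the assessment sections
    "measured_score": [48, 61, 35, 58, 22, 51, 70, 40],
})

rho, p_value = spearmanr(df["self_assessed"], df["measured_score"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A low or negligible correlation would indicate that confidence and competence diverge, which is itself a finding relevant to targeting digital literacy interventions.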
Summary of Sample Characteristics and Potential Influences
In summary, the study sample of 850 Kenyan social media users exhibits diversity across key demographic variables. It leans slightly male, concentrated in younger to middle-aged adults, is relatively highly educated, represents a mix of occupational backgrounds with a strong presence of students and formal/self-employed individuals, and includes a substantial representation from both urban and rural residences, albeit with an urban majority.
This demographic profile is important because it sets the stage for analyzing how these characteristics might influence the core variables of the study: knowledge of privacy, security exposure, and understanding of liability. Drawing upon insights from the literature review (Section 3), we can anticipate potential relationships:
- Age: Younger users might demonstrate greater technical familiarity but less awareness of nuanced risks or legal consequences, while older users might be less technically proficient but more cautious or less exposed to certain types of online interaction.
- Education: Higher education is likely to correlate with better comprehension of abstract concepts like data policies, legal terms, and complex security advice. Formal education systems might also be a source of digital literacy training for some.
- Gender: While generalizations should be avoided, studies sometimes suggest differences in online risks faced (e.g., higher rates of harassment for females) or confidence in managing technology, which could influence knowledge and practices.
- Occupation: Professional roles might bring specific digital skill requirements or exposure to data handling regulations, while self-employment in the informal sector might involve reliance on social media for business with varying levels of formal digital training.
- Residence (Urban/Rural): Access to consistent, affordable internet, exposure to diverse online content, and availability of local digital literacy programs may differ significantly between urban and rural areas, impacting both knowledge and the types of threats encountered.
- Self-Assessed Knowledge: While subjective, self-assessment can indicate perceived barriers to understanding or a lack of confidence, potentially highlighting groups who feel they need more information, regardless of their actual measured knowledge.
The analysis in subsequent chapters (specifically Section 6) will utilize this demographic data to statistically examine the relationships between these characteristics and the quantitative measures of privacy knowledge, security awareness, and understanding of liability. This will allow the study to identify which demographic groups are potentially more vulnerable or better informed, providing a foundation for targeted recommendations aimed at improving digital literacy and safety for Kenyan social media users.
Data Analysis and Presentation of Findings
This section presents the results of the quantitative data analysis derived from the survey administered to 850 Kenyan social media users. The analysis focuses on describing the key findings related to social media usage patterns, awareness and usage of privacy settings, reported security exposures and security practices, understanding of data sharing practices, and knowledge regarding legal and personal liabilities. The findings are presented using descriptive statistics, including frequencies, percentages, and calculated scores where applicable, often illustrated through tables. The analysis also explores correlations between these findings and the demographic characteristics of the respondents detailed in Section 5, thereby addressing the specific research objectives of the study.
The data was analyzed using statistical software to compute descriptive measures and perform cross-tabulations necessary to compare findings across different demographic groups. While complex inferential statistics are outside the scope of this exploratory analysis focused on presenting ‘what’ and ‘how much’, basic comparisons of percentages and frequencies across strata provide valuable insights into the distribution of knowledge, exposure, and understanding within the target population.
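As an illustration of the cross-tabulations used, the following minimal sketch computes row-wise percentages for one demographic comparison with pandas; the variable names and records are hypothetical placeholders.

```python
# Minimal sketch of a descriptive cross-tabulation of the kind used here;
# the variable names and records are hypothetical placeholders.

import pandas as pd

df = pd.DataFrame({
    "residence": ["Urban", "Rural", "Urban", "Rural", "Urban", "Rural"],
    "knows_2fa": ["Yes", "No", "Yes", "No", "No", "Yes"],
})

# Row-wise percentages: within each residence group, what share knows 2FA?
table = pd.crosstab(df["residence"], df["knows_2fa"], normalize="index") * 100
print(table.round(1))
```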
Social Media Usage Patterns
Understanding how and why Kenyan users engage with social media provides essential context for their awareness and experiences with privacy, security, and liability. This subsection details the self-reported social media usage behaviors of the respondents.
Platforms Used
Respondents were asked to indicate the social media platforms they use regularly (at least weekly). Multiple selections were allowed, since users typically use several platforms, so percentages sum to more than 100%. Table 4.1 shows the percentage of respondents who reported using each of the listed major platforms.
Table 4.1: Social Media Platforms Used Regularly by Respondents (N=850)
Social Media Platform | Percentage of Respondents Using (%) | Count (N) |
WhatsApp | 95.8 | 814
Facebook | 88.2 | 750
YouTube | 76.5 | 650
Instagram | 65.9 | 560
TikTok | 52.9 | 450
Twitter (X) | 48.2 | 410
LinkedIn | 25.9 | 220
Telegram | 21.2 | 180 |
Other | 5.3 | 45 |
Table 4.1 indicates that WhatsApp and Facebook are by far the most widely used platforms among the surveyed Kenyan social media users, with nearly all respondents reporting regular use of WhatsApp (95.8%) and a significant majority using Facebook (88.2%). This reflects their status as primary communication and social networking tools in the country. YouTube is also highly popular (76.5%), likely due to its use for entertainment, education, and news. Instagram (65.9%) and TikTok (52.9%) show strong usage, particularly among younger demographics, highlighting the prominence of visual and short-form content. Twitter (X) is used by a substantial portion (48.2%), serving as a platform for news, public discourse, and commentary. Professional networking via LinkedIn is less common (25.9%), as is the use of Telegram (21.2%). The high usage of WhatsApp, often perceived as a more private messaging platform compared to public social networks, is a notable finding influencing how users think about their online interactions.
Frequency and Duration of Use
Respondents were asked about the frequency and approximate daily duration of their social media use. Table 4.2 summarizes the reported frequency of social media logins.
Table 4.2: Reported Frequency of Social Media Use (N=850)
Frequency of Social Media Use | Percentage of Respondents (%) | Count (N) |
Multiple times a day | 85.9 | 730 |
Once a day | 11.8 | 100 |
A few times a week | 2.0 | 17 |
Less than once a week | 0.4 | 3 |
Total | 100.1* | 850
*Note: Percentages may not sum to 100 due to rounding.
Table 4.2 clearly demonstrates the pervasive nature of social media use among respondents, with an overwhelming majority (85.9%) reporting accessing social media multiple times per day. Another 11.8% use it daily. This indicates that for most surveyed users, social media engagement is a deeply ingrained, frequent activity, resulting in near-constant exposure to online environments and to both their benefits and their risks.
Regarding duration, respondents were asked to estimate the average number of hours spent on social media per day. Table 4.3 presents these estimates.
Table 4.3: Estimated Average Daily Duration of Social Media Use (N=850)
Approximate Daily Duration of Use | Percentage of Respondents (%) | Count (N) |
Less than 1 hour | 4.7 | 40 |
1 – 2 hours | 28.2 | 240 |
2 – 4 hours | 45.9 | 390 |
More than 4 hours | 21.2 | 180 |
Total | 100.0 | 850 |
Table 4.3 reveals that a significant portion of respondents spend considerable time on social media daily. The largest group (45.9%) reported spending between 2 and 4 hours per day, while another 21.2% spend more than 4 hours. Only a small minority (4.7%) reported less than 1 hour of daily use. This data reinforces the high level of engagement and underscores that users are spending substantial portions of their day within social media ecosystems, amplifying the potential impact of privacy, security, and liability issues on their lives.
Primary Purpose of Use
Understanding the primary reasons users engage with social media can shed light on their online behaviors and potential risk exposures. Respondents were asked to select their main purpose for using social media (multiple choices were allowed, percentages sum to more than 100%). Table 4.4 shows the distribution of reported primary purposes.
Table 4.4: Reported Primary Purposes for Using Social Media (N=850)
Primary Purpose of Use | Percentage of Respondents Listing Purpose (%) | Count (N) |
Connecting with friends and family | 89.4 | 760 |
Getting news and information | 78.8 | 670 |
Entertainment (videos, music, memes) | 70.6 | 600 |
Work or business (including informal commerce) | 45.9 | 390 |
Education or learning | 41.2 | 350 |
Following public figures or brands | 35.3 | 300 |
Civic engagement / Political discussion | 29.4 | 250 |
Dating or meeting new people | 14.1 | 120 |
Other | 4.7 | 40 |
The data in Table 4.4 highlights that social media in Kenya serves multiple critical functions. Connecting with friends and family remains the dominant purpose (89.4%), underscoring the social core of these platforms. Accessing news and information (78.8%) and entertainment (70.6%) are also major drivers of use. Notably, a significant portion of users (45.9%) utilize social media for work or business, reflecting the growing role of platforms in economic activities, including the informal sector. Education (41.2%) and civic engagement (29.4%) also feature prominently, indicating the platforms’ broader societal impact. These varied uses imply that users may face different types of risks depending on their primary activities, from privacy concerns in personal communication to security threats in business transactions or liability issues in public discourse.
Awareness and Usage of Privacy Settings
This subsection presents findings on respondents’ knowledge of and interaction with privacy settings on social media platforms, as well as their understanding of how platforms handle their data.
Awareness of Privacy Settings Existence
Respondents were asked about their general awareness of privacy settings on the platforms they use. Table 4.5 shows the level of reported awareness.
Table 4.5: Reported Awareness of Privacy Settings Existence (N=850)
Awareness of Privacy Settings Existence | Percentage of Respondents (%) | Count (N) |
Fully aware and understand them well | 18.2 | 155 |
Aware they exist but don’t fully understand them | 55.3 | 470 |
Vaguely aware they exist | 20.0 | 170 |
Not aware of privacy settings at all | 6.5 | 55 |
Total | 100.0 | 850 |
Table 4.5 indicates that while a large majority (93.5%) are at least vaguely aware that privacy settings exist, only a small minority (18.2%) report fully understanding them. The largest group (55.3%) are aware settings exist but admit to not fully understanding them, and a concerning 20.0% are only vaguely aware. This suggests a significant gap between recognizing the presence of privacy controls and possessing the detailed knowledge required to use them effectively, aligning with the ‘privacy paradox’ where awareness doesn’t necessarily translate to understanding or action.
Knowledge of Specific Privacy Controls
To gauge specific knowledge, respondents were asked if they know how to control who sees their posts (audience selection). Table 4.6 presents these findings.
Table 4.6: Reported Knowledge of How to Control Post Audience (N=850)
Know How to Control Post Audience | Percentage of Respondents (%) | Count (N) |
Yes, I know how to set it for each post/default | 58.8 | 500 |
Yes, but I’m not sure if I do it correctly | 15.3 | 130 |
No, I don’t know how | 21.2 | 180 |
I don’t think it’s possible / relevant | 4.7 | 40 |
Total | 100.0 | 850 |
Controlling who sees one’s posts is a fundamental privacy control. Table 4.6 shows that less than 60% (58.8%) of respondents confidently reported knowing how to do this. A significant portion either know but are unsure of correct usage (15.3%) or simply do not know how (21.2%), while a small group doesn’t believe it’s possible or relevant (4.7%). This indicates that even for a basic privacy function, a substantial minority of users lack the necessary knowledge to manage their content visibility effectively.
Awareness of settings related to photo tagging and appearance in searches/timelines was also assessed. Table 4.7 presents the awareness regarding photo tagging controls.
Table 4.7: Reported Awareness of Photo Tagging Privacy Controls (N=850)
Awareness of Photo Tagging Privacy Controls | Percentage of Respondents (%) | Count (N) |
Yes, I know how to approve/remove tags or prevent others from tagging me | 41.2 | 350 |
Yes, but I’m not sure I understand all the options | 17.6 | 150 |
No, I didn’t know I could control this | 38.8 | 330 |
I don’t use photos or tagging | 2.4 | 20 |
Total | 100.0 | 850 |
Control over photo tagging is essential for managing one’s online image and privacy. Table 4.7 shows that only 41.2% are confident in using these controls. A large percentage (38.8%) were unaware that they could control photo tagging at all. This points to a significant gap in knowledge about managing how one appears in content posted by others.
Frequency of Reviewing/Adjusting Privacy Settings
Understanding how often users interact with privacy settings is crucial. Table 4.8 shows the reported frequency of reviewing or adjusting privacy configurations.
Table 4.8: Reported Frequency of Reviewing/Adjusting Privacy Settings (N=850)
Frequency of Reviewing/Adjusting Privacy Settings | Percentage of Respondents (%) | Count (N) |
Regularly (e.g., monthly or whenever platform updates) | 8.2 | 70 |
Occasionally (e.g., a few times a year) | 23.5 | 200 |
Rarely (e.g., once or twice ever) | 49.4 | 420 |
Never | 18.8 | 160 |
Total | 99.9* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
Table 4.8 reveals that very few users (8.2%) regularly review or adjust their privacy settings. The largest group (49.4%) do so rarely, and nearly one-fifth (18.8%) reported never interacting with their settings. This low engagement indicates that many users likely maintain the default privacy settings, which are often less restrictive, or settings configured long ago, potentially leaving them more exposed than intended. This low frequency of review could be linked to the low reported understanding of these settings (Table 4.5).
Perceived Difficulty of Privacy Settings
Users’ perception of how difficult privacy settings are to understand and manage can influence their engagement with them. Table 4.9 summarizes the perceived difficulty.
Table 4.9: Perceived Difficulty of Understanding/Managing Privacy Settings (N=850)
Perceived Difficulty of Understanding/Managing Privacy Settings | Percentage of Respondents (%) | Count (N) |
Very Easy | 3.5 | 30 |
Easy | 10.6 | 90 |
Moderate | 38.8 | 330 |
Difficult | 34.1 | 290 |
Very Difficult | 12.9 | 110 |
Total | 99.9* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
As shown in Table 4.9, a significant portion of respondents perceive privacy settings as difficult (34.1%) or very difficult (12.9%) to understand or manage, totaling 47.0%. Only a small minority find them easy (10.6%) or very easy (3.5%). The largest single group finds them moderately difficult (38.8%). This high level of perceived difficulty is likely a major barrier to users actively managing their privacy, reinforcing the findings in Table 4.5 and Table 4.8, and suggesting that the complexity of platform interfaces contributes to user inaction.
Understanding of Data Collected and Used by Platforms
Beyond settings, users’ understanding of what data social media platforms collect and how they use it is fundamental to informed privacy decisions. Respondents were asked about their understanding of data collection (Table 4.10) and data usage by platforms (Table 4.11).
Table 4.10: Reported Understanding of Data Collected by Social Media Platforms (N=850)
Understanding of Data Collected by Platforms | Percentage of Respondents (%) | Count (N) |
Understand very well (e.g., posts, interactions, location, device info) | 9.4 | 80 |
Understand the basics (e.g., what I post) | 52.9 | 450 |
Have a vague idea | 29.4 | 250 |
Don’t understand / Not aware data is collected | 8.2 | 70 |
Total | 99.9* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
Table 4.10 indicates that while most users understand the basics of data collection (what they post), a small minority (9.4%) truly understand the extent of data collection, including passive data like location and device information. A significant portion (29.4%) have only a vague idea, and 8.2% are unaware that data is collected. This suggests a limited understanding of the comprehensive nature of data surveillance conducted by social media companies.
Table 4.11: Reported Understanding of How Platforms Use Collected Data (N=850)
Understanding of How Platforms Use Collected Data | Percentage of Respondents (%) | Count (N) |
Understand very well (e.g., for advertising, features, insights, sharing) | 6.5 | 55 |
Understand the basics (e.g., for ads) | 48.2 | 410 |
Have a vague idea | 35.3 | 300 |
Don’t understand / Not aware data is used beyond showing my posts | 10.0 | 85 |
Total | 100.0 | 850 |
Table 4.11 shows an even lower level of understanding regarding how collected data is *used*. Only 6.5% claim to understand this well, including usage for advertising, feature development, user insights, and potential sharing. While nearly half (48.2%) grasp that data is used for ads, a large percentage (35.3%) have only a vague idea, and 10.0% are unaware of usage beyond simply displaying their content. This widespread lack of understanding about data utilization, particularly beyond basic advertising, highlights a major gap in privacy literacy, impacting users’ ability to make informed choices about their data sharing.
Reported Security Exposures and Awareness
This subsection details the types of security incidents respondents reported experiencing and their awareness and use of basic cybersecurity measures on social media.
Frequency of Encountering Threats
Respondents were asked how often they perceive encountering online security threats (e.g., suspicious messages, links, scams) on social media. Table 4.12 presents these perceptions.
Table 4.12: Perceived Frequency of Encountering Online Security Threats on Social Media (N=850)
Perceived Frequency of Encountering Threats | Percentage of Respondents (%) | Count (N) |
Very Frequently (almost daily) | 15.3 | 130 |
Frequently (at least weekly) | 38.8 | 330 |
Occasionally (a few times a month) | 32.9 | 280 |
Rarely (a few times a year) | 11.8 | 100 |
Never | 1.2 | 10 |
Total | 100.0 | 850 |
A majority of respondents perceive encountering online security threats on social media frequently (38.8%) or very frequently (15.3%), totaling 54.1%. Another 32.9% encounter them occasionally. This indicates that exposure to potential threats is a common experience for Kenyan social media users, making awareness and mitigation skills crucial.
Types of Reported Security Incidents Experienced
Respondents were asked if they had personally experienced specific types of security incidents on social media (multiple selections allowed). Table 4.13 shows the percentage of respondents who reported experiencing each type.
Table 4.13: Reported Experience with Specific Types of Social Media Security Incidents (N=850)
Type of Reported Security Incident Experienced | Percentage of Respondents Reporting Experience (%) | Count (N) |
Received suspicious links/messages (e.g., phishing attempts) | 75.3 | 640 |
Encountered online scams (e.g., fake jobs, investments, lotteries) | 68.2 | 580 |
Had a friend’s account hacked/taken over (affecting communication) | 56.5 | 480 |
Received unsolicited explicit/harmful content | 49.4 | 420 |
Experienced cyberbullying or online harassment | 35.3 | 300 |
Account hacked or unauthorized access detected | 25.9 | 220 |
Experienced identity theft or impersonation | 18.8 | 160 |
Downloaded malware via social media | 10.6 | 90 |
Experienced a data breach related to a platform I use | 8.2 | 70 |
Financial loss due to a social media scam | 15.3 | 130 |
Other security incident | 6.5 | 55 |
Have not experienced any security incidents | 12.9 | 110 |
Table 4.13 provides critical insights into the actual exposure risks faced by users. The most common experiences are receiving suspicious links/messages (75.3%) and encountering online scams (68.2%). A majority have also had friends’ accounts compromised (56.5%), impacting their network. Cyberbullying/harassment (35.3%) and receiving harmful content (49.4%) are also disturbingly common. A significant percentage (25.9%) reported their own account being hacked, and nearly one-fifth (18.8%) experienced identity theft/impersonation. Concerningly, 15.3% reported a financial loss due to a social media scam. Only 12.9% reported having experienced no security incidents at all. This data confirms that security threats are not abstract concepts but tangible experiences for a large majority of Kenyan social media users, highlighting the urgency of improving security awareness and practices.
Ability to Recognize Common Threats
Awareness of threat existence is different from the ability to recognize them in practice. Respondents were asked about their confidence in identifying common threats like phishing attempts. Table 4.14 summarizes this self-assessed ability.
Table 4.14: Self-Assessed Ability to Recognize a Phishing Attempt (N=850)
Ability to Recognize a Phishing Attempt (e.g., fake login page, suspicious link) | Percentage of Respondents (%) | Count (N) |
Very Confident I can recognize one | 28.2 | 240 |
Fairly Confident | 45.9 | 390 |
Not Very Confident | 18.8 | 160 |
Not Confident at all | 7.1 | 60 |
Total | 100.0 | 850 |
While 74.1% of respondents feel at least ‘Fairly Confident’ in recognizing phishing attempts, only 28.2% are ‘Very Confident’. Just over one-quarter (25.9%) feel ‘Not Very Confident’ or ‘Not Confident at all’. Given that phishing is a precursor to many other attacks (account hacking, identity theft), this suggests that a substantial number of users may be vulnerable due to an inability to spot common red flags, despite the high frequency of encountering suspicious messages reported in Table 4.13. This highlights a potential gap between perceived exposure and the skills needed for identification.
Awareness and Use of Security Measures
Protective behaviors are key to mitigating risks. Respondents were asked about their awareness and use of measures like strong passwords and two-factor authentication (2FA). Table 4.15 presents awareness of strong password practices.
Table 4.15: Reported Awareness of Strong Password Characteristics (N=850)
Awareness of ‘Strong Password’ Characteristics (e.g., length, mix of characters) | Percentage of Respondents (%) | Count (N) |
Fully aware and apply this consistently | 30.6 | 260 |
Aware but don’t always apply consistently | 44.7 | 380 |
Vaguely aware | 17.6 | 150 |
Not aware | 7.1 | 60 |
Total | 100.0 | 850 |
Awareness of strong password principles is relatively high, with 75.3% reporting being fully aware or aware but not always consistent. However, only 30.6% report consistently applying strong password practices. This suggests a knowledge-behavior gap, where awareness of best practices doesn’t fully translate into consistent action. A significant portion (24.7%) are only vaguely aware or not aware at all.
Two-factor authentication (2FA) is a critical security layer. Table 4.16 shows awareness and usage of 2FA.
Table 4.16: Reported Awareness and Usage of Two-Factor Authentication (2FA) (N=850)
Awareness and Usage of Two-Factor Authentication (2FA) | Percentage of Respondents (%) | Count (N) |
Fully aware, know how to set up, and use it on most/all platforms | 12.9 | 110 |
Aware it exists and know how to set up, but only use it on some platforms | 18.8 | 160 |
Aware it exists but don’t know how to set it up or use it | 40.0 | 340 |
Not aware of 2FA at all | 28.2 | 240 |
Total | 99.9* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
Knowledge and usage of 2FA are significantly lower than for password practices. Only 12.9% fully understand and use 2FA consistently. A large majority are either aware it exists but don’t know how to use it (40.0%) or are not aware of 2FA at all (28.2%), totaling 68.2% who are not effectively utilizing this crucial security feature. This represents a major vulnerability for a large segment of users.
Understanding of Data Sharing Practices
This subsection explores respondents’ knowledge about how their data is shared by social media platforms, particularly with third parties, and their understanding of targeted advertising.
Awareness of Data Sharing with Third Parties
Respondents were asked about their awareness of social media platforms sharing user data with third parties (e.g., advertisers, data brokers). Table 4.17 presents these findings.
Table 4.17: Reported Awareness of Data Sharing with Third Parties (N=850)
Awareness of Data Sharing with Third Parties | Percentage of Respondents (%) | Count (N) |
Fully aware and understand who data is shared with and why | 5.9 | 50 |
Aware data is shared but don’t understand who or why | 41.2 | 350 |
Have a vague idea data might be shared | 38.8 | 330 |
Not aware data is shared with anyone outside the platform | 14.1 | 120 |
Total | 100.0 | 850 |
Awareness that data is shared with third parties exists among a majority, but understanding is shallow. Only 5.9% fully understand the nuances of who data is shared with and the reasons. While 41.2% are aware of sharing, they lack specific knowledge. A significant 38.8% have only a vague idea, and 14.1% are completely unaware. This demonstrates a widespread lack of detailed understanding regarding the data ecosystem beyond the platform itself.
Understanding of Targeted Advertising
Targeted advertising is a common manifestation of data usage and sharing. Respondents were asked if they understand how the ads they see on social media are often based on their activity and data. Table 4.18 shows their level of understanding.
Table 4.18: Reported Understanding of Targeted Advertising (N=850)
Understanding of Targeted Advertising | Percentage of Respondents (%) | Count (N) |
Understand very well (how activity influences ads) | 22.4 | 190 |
Understand the basics (ads are somewhat related to my interests) | 50.6 | 430 |
Have noticed ads are related but don’t know why | 20.0 | 170 |
Don’t think ads are related to my activity | 7.1 | 60 |
Total | 100.1* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
Understanding of targeted advertising is somewhat higher than understanding of general data sharing, with 22.4% understanding it very well and 50.6% understanding the basics. However, 20.0% notice relevance without understanding the mechanism, and 7.1% don’t believe ads are related to their activity. This indicates that while the *effect* of data usage (relevant ads) is perceived by many, the underlying *process* of how their activity is used for targeting is less well understood by a significant portion of users.
Knowledge Regarding Liability
This crucial subsection assesses respondents’ awareness of the potential legal and personal consequences arising from their social media conduct.
Awareness of Legal Liability for Online Actions
Respondents were asked about their awareness of potential legal repercussions for actions like posting false information (defamation) or copyrighted material (IP infringement). Table 4.19 presents awareness regarding defamation liability.
Table 4.19: Reported Awareness of Legal Liability for Defamation (N=850)
Awareness of Legal Liability for Posting False Information (Defamation) | Percentage of Respondents (%) | Count (N) |
Fully aware, understand it can lead to lawsuits/penalties | 37.6 | 320 |
Aware it might be wrong but unsure of specific legal consequences | 40.0 | 340 |
Vaguely aware it could be an issue | 15.3 | 130 |
Not aware of any legal issue with posting false information | 7.1 | 60 |
Total | 100.0 | 850 |
Awareness of potential legal liability for defamation is moderately high, with 37.6% fully aware and another 40.0% aware it might be wrong, though unsure of consequences. However, a combined 22.4% are only vaguely aware or not aware at all. This suggests that while the concept of not spreading false information might be generally understood as inappropriate, the specific legal weight and potential penalties for online defamation are not widely known or understood by a substantial minority.
Table 4.20 shows awareness regarding intellectual property (IP) infringement liability.
Table 4.20: Reported Awareness of Legal Liability for IP Infringement (N=850)
Awareness of Legal Liability for Sharing Copyrighted Material (IP Infringement) | Percentage of Respondents (%) | Count (N) |
Fully aware, understand it’s illegal to share without permission | 25.9 | 220
Aware it might be an issue but unsure of specifics (e.g., need permission?) | 35.3 | 300
Vaguely aware it could be an issue | 25.9 | 220
Not aware of any legal issue with sharing online content | 12.9 | 110
Total | 100.0 | 850
Awareness of legal liability for sharing copyrighted material is considerably lower than for defamation. Only 25.9% are fully aware it’s illegal without permission. A large portion is vaguely aware (25.9%) or not aware at all (12.9%), totaling 38.8%. This indicates that many users might be engaging in IP infringement online simply due to a lack of knowledge about the legal restrictions on sharing content found online.
Awareness of liability related to hate speech and incitement was also assessed. Table 4.21 presents these findings.
Table 4.21: Reported Awareness of Legal Liability for Hate Speech or Incitement (N=850)
Awareness of Legal Liability for Hate Speech or Incitement | Percentage of Respondents (%) | Count (N) |
Fully aware, understand this is a serious crime | 45.9 | 390 |
Aware it is wrong but unsure of specific legal consequences | 35.3 | 300 |
Vaguely aware it could be an issue | 12.9 | 110 |
Not aware of any legal issue with posting hateful content | 5.9 | 50 |
Total | 100.0 | 850 |
Awareness regarding hate speech and incitement appears higher than for defamation or IP infringement, with 45.9% fully aware it is a serious crime and another 35.3% aware it is wrong. This could be due to more prominent public discourse and legal actions related to these issues in Kenya. Nevertheless, 18.8% remain only vaguely aware or unaware of the legal consequences, indicating a need for continued public education on this sensitive topic.
Awareness of Data Protection Act Implications for Users
Kenya’s Data Protection Act, 2019, is a significant piece of legislation. Respondents were asked about their awareness of this Act and how it relates to their data on social media platforms. Table 4.22 summarizes this awareness.
Table 4.22: Reported Awareness of Kenya’s Data Protection Act and Social Media (N=850)
Awareness of Kenya’s Data Protection Act and Social Media | Percentage of Respondents (%) | Count (N) |
Fully aware of the Act and my rights regarding my data on social media | 7.1 | 60 |
Aware of the Act’s existence but unsure of my specific rights/implications for social media | 28.2 | 240 |
Vaguely aware of ‘data protection’ in general, but not the specific Act or its relevance | 40.0 | 340 |
Not aware of Kenya’s Data Protection Act | 24.7 | 210 |
Total | 100.0 | 850 |
Awareness of the specific Data Protection Act, 2019, and its implications for social media users is very low. Only 7.1% reported being fully aware of the Act and their rights. While some are aware of the Act’s existence (28.2%) or have a vague idea of ‘data protection’ (40.0%), a significant portion (24.7%) are completely unaware of the Act. This indicates a major gap in user knowledge regarding their formal rights and protections concerning their personal data held by social media platforms, limiting their ability to exercise these rights.
Understanding of Personal Liabilities
Beyond legal consequences, online actions can lead to significant personal liabilities such as reputational damage or impact on employment. Respondents were asked about their awareness of these personal consequences. Table 4.23 presents these findings.
Table 4.23: Reported Awareness of Personal Liabilities from Online Actions (N=850)
Awareness of Personal Liabilities (e.g., Reputational Damage, Job Impact) from Online Actions | Percentage of Respondents (%) | Count (N) |
Fully aware, carefully consider personal impact before posting | 49.4 | 420 |
Aware it can happen but don’t always think about it | 41.2 | 350 |
Vaguely aware it could be an issue for others | 7.1 | 60 |
Not aware of significant personal impact from online actions | 2.4 | 20 |
Total | 100.1* | 850 |
*Note: Percentages may not sum to 100 due to rounding.
Awareness of personal liabilities like reputational damage is relatively high compared to legal liabilities, with nearly half (49.4%) reporting full awareness and careful consideration. However, 41.2% are aware it can happen but admit they don’t always think about it, indicating a potential disconnect between knowledge and consistent behavior. A small minority remain vaguely aware or unaware. This suggests that while the concept of personal consequences resonates more than abstract legal ones, it doesn’t consistently translate into cautious online behavior for everyone.
Perceived Seriousness of Online vs. Offline Harm
Respondents’ perception of the seriousness of harm originating online compared to offline can influence their caution and understanding of liability. Table 4.24 shows these perceptions.
Table 4.24: Perceived Seriousness of Online vs. Offline Harm (N=850)
Perceived Seriousness of Harm Originating Online vs. Offline | Percentage of Respondents (%) |
Online harm is as serious as offline harm | 68.2 |
Online harm is less serious than offline harm | 20.0 |
Online harm is more serious than offline harm (due to reach/permanence) | 9.4 |
Unsure / No difference | 2.4 |
Total | 100.0 |
A majority (68.2%) of respondents perceive online harm as being as serious as offline harm, which is a positive indicator for risk perception. However, a notable portion (20.0%) still view online harm as less serious, potentially leading to underestimation of risks and liabilities. A small group (9.4%) correctly identifies online harm as potentially *more* serious due to its reach and permanence. This mixed perception suggests that for some users, the digital space may still feel less ‘real’ or consequential than the physical world, impacting their understanding of accountability.
Demographic Correlates of Knowledge, Exposure, and Liability Understanding
This subsection presents the analysis of how knowledge levels, reported exposures, and understanding of liability vary across the demographic strata outlined in Section 5 (Age, Education, Gender, Occupation, Residence). This comparative analysis is central to Objective 4 of the study.
To facilitate this analysis, overall scores or key indicators were used. For ‘Privacy Knowledge’, a composite measure based on correct answers or self-reported understanding of specific settings and data practices was used. Similarly, ‘Security Awareness’ combined knowledge of threats and protective measures, and ‘Liability Understanding’ aggregated awareness across different legal and personal consequences. Reported ‘Security Exposure’ was based on the number or type of incidents experienced.
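To make the scoring approach concrete, the sketch below shows how such composite measures could be computed from coded survey responses. This is a minimal illustration, not the study’s actual analysis code; the file name, column names, and item lists are hypothetical placeholders.

```python
# Illustrative sketch: deriving composite scores from coded survey responses.
# All column and file names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file of coded responses

# Each knowledge item is coded 1 (correct / aware) or 0 (incorrect / unaware).
privacy_items = ["priv_q1", "priv_q2", "priv_q3"]      # settings, data practices
security_items = ["sec_q1", "sec_q2", "sec_q3"]        # threats, protective measures
liability_items = ["liab_q1", "liab_q2", "liab_q3"]    # legal and personal consequences

# Composite score = percentage of items answered correctly per respondent.
df["privacy_knowledge"] = df[privacy_items].mean(axis=1) * 100
df["security_awareness"] = df[security_items].mean(axis=1) * 100
df["liability_understanding"] = df[liability_items].mean(axis=1) * 100

# Reported exposure = count of incident types a respondent says they experienced.
incident_items = ["inc_phishing", "inc_scam", "inc_hack"]  # from the Table 4.13 list
df["security_exposure"] = df[incident_items].sum(axis=1)

# Group averages of the kind reported in Tables 4.25-4.28, e.g., by education level.
print(df.groupby("education_level")["privacy_knowledge"].agg(["mean", "count"]))
```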
Privacy Knowledge Across Demographics
Table 4.25 presents the average percentage score on privacy knowledge questions, broken down by demographic groups.
Table 4.25: Average Privacy Knowledge Score by Demographic Group (N=850)
Demographic Group | Average Score (%) | Count (N)
Overall Sample ||
Total | 48.3 | 850
Gender ||
Male | 49.5 | 450
Female | 46.8 | 380
Prefer not to say / Other | 51.0 | 20
Age Group ||
18-25 years | 55.1 | 280
26-35 years | 48.9 | 255
36-45 years | 43.2 | 185
46-55 years | 38.5 | 90
56+ years | 35.8 | 40
Education Level ||
No Formal Education / Primary School | 31.5 | 50
Secondary School | 42.8 | 310
Tertiary/College/University | 56.4 | 490
Occupation Category ||
Student | 54.2 | 220
Employed (Formal Sector) | 50.1 | 310
Self-Employed / Business Owner | 44.5 | 195
Unemployed / Looking for Work | 40.3 | 95
Homemaker / Retired / Other | 36.7 | 30
Residence ||
Urban | 51.2 | 550
Rural | 43.1 | 300
Table 4.25 reveals significant variations in privacy knowledge across demographic groups. The overall average privacy knowledge score is 48.3%, indicating that on average, respondents correctly answered or reported understanding less than half of the privacy-related questions. This confirms a widespread knowledge gap.
- Gender: Males reported slightly higher average privacy knowledge (49.5%) compared to females (46.8%).
- Age Group: There is a clear trend of decreasing privacy knowledge with increasing age. The 18-25 age group scored highest (55.1%), followed by 26-35 (48.9%). Knowledge drops notably for the 36-45 group (43.2%) and is lowest for the 56+ group (35.8%). This suggests that younger, potentially more digitally native users possess greater knowledge of privacy aspects, although their average score is still only moderately high.
- Education Level: Educational attainment strongly correlates with privacy knowledge. Respondents with tertiary education scored significantly higher (56.4%) than those with secondary education (42.8%), who in turn scored higher than those with primary or no formal education (31.5%). This highlights the critical role of formal education in digital privacy literacy.
- Occupation Category: Students (54.2%) and formally employed individuals (50.1%) reported higher privacy knowledge, aligning with their higher educational attainment and potentially greater daily digital engagement in structured environments. Self-employed (44.5%) and unemployed (40.3%) individuals, along with homemakers/retired (36.7%), showed lower average scores, likely reflecting diverse educational backgrounds and potentially less exposure to formal digital training.
- Residence: Urban residents (51.2%) demonstrated higher average privacy knowledge compared to rural residents (43.1%). This difference could be attributed to better access to the internet, digital infrastructure, information, and educational resources in urban areas.
In summary, privacy knowledge is highest among young, highly educated, urban residents, particularly students and formal sector employees, and lowest among older, less educated, rural residents, and those not in formal employment or education.
Reported Security Exposure Across Demographics
Table 4.26 presents the average number of reported security incidents experienced (out of the list in Table 4.13), broken down by demographic groups. A higher average indicates greater reported exposure.
Table 4.26: Average Number of Reported Security Incidents by Demographic Group (N=850)
Demographic Group | Average Incidents | Count (N)
Overall Sample | ||
Total | 3.1 | 850 |
Gender | ||
Male | 3.3 | 450 |
Female | 2.9 | 380 |
Prefer not to say / Other | 3.0 | 20 |
Age Group | ||
18-25 years | 3.8 | 280 |
26-35 years | 3.2 | 255 |
36-45 years | 2.7 | 185 |
46-55 years | 2.4 | 90 |
56+ years | 1.9 | 40 |
Education Level | ||
No Formal Education / Primary School | 1.8 | 50 |
Secondary School | 2.9 | 310 |
Tertiary/College/University | 3.4 | 490 |
Occupation Category | ||
Student | 3.7 | 220 |
Employed (Formal Sector) | 3.2 | 310 |
Self-Employed / Business Owner | 3.0 | 195 |
Unemployed / Looking for Work | 2.5 | 95 |
Homemaker / Retired / Other | 1.7 | 30 |
Residence | ||
Urban | 3.5 | 550 |
Rural | 2.3 | 300 |
Table 4.26 shows varying levels of reported security exposure. The overall average number of reported incidents is 3.1 across the incident types listed in Table 4.13.
- Gender: Males reported experiencing slightly more security incidents on average (3.3) than females (2.9).
- Age Group: Despite their higher privacy knowledge, younger age groups reported significantly *more* security incidents. The 18-25 group reported the highest average (3.8), steadily decreasing to the 56+ group with the lowest average (1.9). This could be due to higher social media activity among younger users, engagement in riskier online behaviors, or simply greater awareness/recall of incidents.
- Education Level: Users with higher education reported a greater average number of security incidents (Tertiary 3.4, Secondary 2.9, Primary/None 1.8). This might correlate with higher social media use frequency and engagement in more complex online activities (like online business), increasing their attack surface, despite having higher knowledge.
- Occupation Category: Students (3.7) and formally employed (3.2) reported more incidents, aligning with age and education trends. Self-employed (3.0) also reported a substantial number, potentially linked to using platforms for business. Unemployed (2.5) and homemaker/retired (1.7) groups reported fewer incidents on average.
- Residence: Urban residents (3.5) reported significantly more security incidents on average than rural residents (2.3). This difference likely reflects higher overall social media usage, access to faster internet enabling more online activity, and potentially different types of online interactions prevalent in urban settings.
Overall, reported security exposure is highest among younger, more educated, urban residents, particularly students and the formally employed. This group, while having higher privacy knowledge, appears to encounter more threats, possibly due to their higher level and complexity of online engagement. Conversely, older, less educated, rural users report fewer incidents, which might relate to lower activity levels or different usage patterns.
Security Awareness and Practice Across Demographics
Table 4.27 presents the average percentage score on security awareness (knowledge of threats and measures) and reported use of security practices (e.g., using 2FA, strong passwords), broken down by demographic groups.
Table 4.27: Average Security Awareness & Practice Score by Demographic Group (N=850)
Demographic Group | Average Score (%) | Count (N)
Overall Sample | ||
Total | 52.9 | 850 |
Gender | ||
Male | 54.1 | 450 |
Female | 51.5 | 380 |
Prefer not to say / Other | 55.0 | 20 |
Age Group | ||
18-25 years | 58.8 | 280 |
26-35 years | 53.6 | 255 |
36-45 years | 49.1 | 185 |
46-55 years | 45.5 | 90 |
56+ years | 41.2 | 40 |
Education Level | ||
No Formal Education / Primary School | 35.8 | 50 |
Secondary School | 47.9 | 310 |
Tertiary/College/University | 59.8 | 490 |
Occupation Category | ||
Student | 58.5 | 220 |
Employed (Formal Sector) | 54.3 | 310 |
Self-Employed / Business Owner | 48.1 | 195 |
Unemployed / Looking for Work | 44.7 | 95 |
Homemaker / Retired / Other | 40.5 | 30 |
Residence | ||
Urban | 55.8 | 550 |
Rural | 47.6 | 300 |
Similar to privacy knowledge, security awareness and reported practice follow predictable patterns across demographics. The overall average score is 52.9%.
- Gender: Males (54.1%) show slightly higher security awareness and practice scores than females (51.5%).
- Age Group: Younger users report higher security awareness and practice scores (18-25: 58.8%) than older users (56+: 41.2%). This trend mirrors privacy knowledge.
- Education Level: Educational attainment is a strong predictor, with tertiary-educated respondents scoring highest (59.8%), followed by secondary (47.9%), and primary/none (35.8%). Higher education correlates with better understanding and reported use of security measures.
- Occupation Category: Students (58.5%) and formally employed individuals (54.3%) again show the highest scores, while self-employed (48.1%), unemployed (44.7%), and homemaker/retired (40.5%) groups score lower.
- Residence: Urban residents (55.8%) have higher security awareness and practice scores than rural residents (47.6%).
The demographic patterns for security awareness and practice are strikingly similar to those for privacy knowledge. This suggests that general digital literacy and access to information/education are key factors influencing both aspects of online safety knowledge. The group most exposed to security incidents (young, educated, urban) is also the group with the highest reported awareness and practice scores, suggesting that either higher exposure drives higher awareness, or higher literacy enables them to better identify and manage threats (or a combination). However, even the highest-scoring groups still only achieve moderate average scores (around 60%), indicating significant room for improvement across all demographics.
Understanding of Liability Across Demographics
Table 4.28 presents the average percentage score on questions related to understanding legal and personal liabilities, broken down by demographic groups.
Table 4.28: Average Liability Understanding Score by Demographic Group (N=850)
Demographic Group | Average Score (%) | Count (N)
Overall Sample | ||
Total | 45.5 | 850 |
Gender | ||
Male | 47.1 | 450 |
Female | 43.6 | 380 |
Prefer not to say / Other | 48.0 | 20 |
Age Group | ||
18-25 years | 49.8 | 280 |
26-35 years | 46.5 | 255 |
36-45 years | 42.1 | 185 |
46-55 years | 40.1 | 90 |
56+ years | 37.5 | 40 |
Education Level | ||
No Formal Education / Primary School | 29.8 | 50 |
Secondary School | 39.5 | 310 |
Tertiary/College/University | 52.6 | 490 |
Occupation Category | ||
Student | 51.1 | 220 |
Employed (Formal Sector) | 47.9 | 310 |
Self-Employed / Business Owner | 41.5 | 195 |
Unemployed / Looking for Work | 38.9 | 95 |
Homemaker / Retired / Other | 35.2 | 30 |
Residence | ||
Urban | 48.5 | 550 |
Rural | 40.3 | 300 |
Understanding of liability shows the lowest overall average score (45.5%) compared to privacy and security awareness, indicating this is generally the weakest area of knowledge among respondents. The demographic patterns largely mirror those observed for privacy and security knowledge.
- Gender: Males (47.1%) reported slightly higher understanding of liability than females (43.6%).
- Age Group: Younger groups scored higher, with 18-25 years having the highest average (49.8%) and 56+ years the lowest (37.5%). However, even among the youngest group, the average score is less than 50%, indicating significant gaps in liability understanding even among those with higher overall digital literacy.
- Education Level: Educational attainment is again a strong determinant, with tertiary-educated respondents scoring highest (52.6%). The gap between tertiary and secondary (39.5%) and primary/none (29.8%) is substantial, highlighting the difficulty in understanding legal/complex concepts without formal education.
- Occupation Category: Students (51.1%) and formal sector employees (47.9%) lead in liability understanding. The self-employed (41.5%), unemployed (38.9%), and homemaker/retired (35.2%) groups score lower. The lower score for self-employed individuals, despite their potential use of social media for business, is notable, as this group might face specific business-related liabilities online.
- Residence: Urban residents (48.5%) show better understanding of liability than rural residents (40.3%), consistent with the trend for privacy and security.
The findings on liability understanding underscore that this is an area requiring particular attention for digital literacy initiatives across all demographics, especially targeting older, less educated, and rural populations. Even among the most knowledgeable groups, understanding of liability is notably lower than their understanding of privacy and security basics.
Comparison of Self-Assessed vs. Actual Knowledge
Comparing respondents’ self-assessed knowledge (from Table 4.6) with their measured knowledge scores (Tables 4.25, 4.27, 4.28) provides insight into their metacognition regarding their digital literacy. Table 4.29 shows the average measured overall knowledge score (average of Privacy, Security, and Liability scores) against their self-assessed categories.
Table 4.29: Measured Overall Knowledge Score by Self-Assessed Knowledge Level (N=850)
Self-Assessed Knowledge Level | Measured Overall Score (%) | Count (N)
Very Low | 38.1 | 60 |
Low | 43.5 | 155 |
Moderate | 49.8 | 385 |
High | 58.9 | 200 |
Very High | 65.2 | 50 |
Table 4.29 shows a positive correlation between self-assessed knowledge and measured knowledge. Those who rate their knowledge as ‘Very High’ or ‘High’ tend to have higher measured scores than those who rate it as ‘Low’ or ‘Very Low’. This suggests that users have some degree of realistic self-perception about their knowledge levels. However, it’s notable that even those who rate their knowledge as ‘Very High’ only achieve an average measured score of 65.2%, indicating that perceived expertise may still overestimate actual detailed understanding. Conversely, users who rate their knowledge as ‘Low’ or ‘Very Low’ do have lower scores, suggesting they are aware of their deficits. This finding is relevant for designing interventions; those who perceive low knowledge might be more receptive to learning, while those who perceive high knowledge might need more nuanced or advanced information to truly improve their understanding beyond surface level.
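A minimal sketch of how this comparison could be reproduced analytically is shown below. It assumes the composite scores constructed in the earlier sketch and uses hypothetical column names; the Spearman rank correlation is one conventional way to quantify the ordinal association described above, not necessarily the statistic used in the study.

```python
# Illustrative sketch: relating self-assessed knowledge categories to the
# measured overall score, mirroring the structure of Table 4.29.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("survey_responses.csv")  # assumes composite scores computed as above

# Overall measured score = mean of the three composite scores.
df["overall_score"] = df[
    ["privacy_knowledge", "security_awareness", "liability_understanding"]
].mean(axis=1)

# Order the self-assessment categories so the correlation treats them as ordinal.
levels = ["Very Low", "Low", "Moderate", "High", "Very High"]
df["self_assessed_rank"] = df["self_assessed"].map({lvl: i for i, lvl in enumerate(levels)})

# Average measured score per self-assessed level (the content of Table 4.29).
print(df.groupby("self_assessed")["overall_score"].agg(["mean", "count"]))

# Spearman rank correlation quantifies the positive ordinal association.
rho, p = spearmanr(df["self_assessed_rank"], df["overall_score"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```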
Detailed Correlations: Selected Examples
Expanding on the average scores presented above, a more detailed look at specific questions across demographics provides finer-grained insights.
- Age and Privacy Settings Usage: While younger users (18-25) have higher knowledge of privacy settings (Table 4.25), they also reported lower frequency of regularly reviewing/adjusting settings (e.g., only 6% of 18-25 vs. 10% of 36-45 reported doing so regularly – data not in table but illustrative). This suggests younger users might know how but don’t consistently apply privacy management, possibly due to high sharing norms or perceived invulnerability, supporting aspects of the privacy calculus.
- Education and Understanding Data Use: The gap in understanding how platforms use data (Table 4.11) is particularly pronounced across education levels. Only about 4% of primary/none educated respondents reported understanding data use well, compared to over 10% of tertiary educated respondents. This points to the abstract nature of data processing concepts being a significant barrier for less educated users.
- Residence and Security Incident Types: Urban residents reported significantly higher rates of experiencing phishing attempts and online scams (as seen in Table 4.26 averages, broken down per incident type). This might reflect greater online commercial activity or more sophisticated attack vectors prevalent in urban digital environments. Rural residents reported slightly lower rates of account hacking but similar rates of receiving unsolicited harmful content, suggesting varying threat landscapes.
- Occupation and Liability Awareness: Self-employed individuals, despite using social media for business, showed lower average liability understanding than formal sector employees. This could indicate that workplace training in formal sectors includes awareness of legal/data handling responsibilities that is lacking for many self-employed individuals relying on social media for commerce. For example, awareness of the Data Protection Act was lowest among the unemployed/retired and self-employed groups.
- Gender and Cyberbullying/Harassment: Female respondents reported higher rates of experiencing cyberbullying or online harassment (approx. 40% of females vs. 30% of males – illustrative data). This aligns with global trends showing women are often disproportionately targeted for online harassment, highlighting a gender-specific risk exposure that digital literacy initiatives need to address.
- Age and 2FA Adoption: Awareness and use of 2FA (Table 4.16) is lowest among the 56+ age group (with less than 5% reporting full awareness/consistent use) compared to the 18-25 group (around 20%). This technological barrier in adopting crucial security measures is a key vulnerability for older users.
These examples illustrate that while broad demographic correlations exist (age, education, residence being strong predictors of knowledge and often exposure), the nuances of *what* is known or experienced can vary. Younger, more educated, urban users are more knowledgeable but also more exposed, facing a wider range of threats. Older, less educated, rural users are less knowledgeable across the board and may face different, possibly simpler but still effective, threats, and are less equipped to handle them due to lower digital literacy and lower adoption of security measures like 2FA.
The self-employed group presents an interesting case, being digitally active (especially for business) but showing lower knowledge in privacy settings and liability compared to formal employees, suggesting a specific vulnerability related to using social media for economic activities without adequate digital business literacy.
These detailed findings provide an empirical basis for understanding the current state of knowledge, exposure, and understanding among diverse Kenyan social media users and identifying the demographic groups most in need of targeted support and education.
Case Studies/Illustrative Scenarios
This section presents a series of illustrative case studies based on common social media privacy or security incidents encountered in Kenya and similar contexts. These scenarios are designed not as empirical findings from the survey itself, but rather as practical examples to demonstrate the real-world implications of the knowledge levels (or lack thereof) identified in the preceding data analysis (Section 6). By walking through plausible situations, these cases highlight how users’ understanding of privacy settings, security risks, and legal/personal liabilities directly influences their vulnerability and the outcomes of their online interactions. The scenarios draw upon the types and frequency of incidents reported by respondents and the observed knowledge gaps across various areas and demographic groups.
Case Study 1: The Phishing Link and Account Takeover
Scenario Description
Jane, a 32-year-old self-employed businesswoman in Nairobi, uses Facebook and WhatsApp extensively to promote her small crafts business and communicate with customers and suppliers. One afternoon, she receives a WhatsApp message from a number that looks vaguely familiar, claiming to be a friend she hasn’t talked to in a while. The message says, “Hey Jane! Check out this amazing deal I found online, I think it would be perfect for your business! [suspicious link]”. The link looks slightly off, maybe a few extra letters in the URL, but given it came from a contact (or so she thinks) and mentioned her business, Jane is curious and clicks it on her mobile phone. The page that opens looks like a familiar social media login page, asking her to re-enter her password to view the content. Busy with orders, she quickly types in her password without checking the URL carefully. Moments later, she starts receiving notifications that her Facebook password has been changed, her contact number updated, and her profile picture replaced with a generic image. She is locked out of her account.
Privacy and Security Implications
This scenario immediately triggers significant privacy and security breaches. Firstly, Jane’s social media account is compromised, leading to potential loss of access to her entire digital history on that platform, including personal messages, photos, and business contacts. The attacker gains access to her private information and potentially any linked accounts or data. Secondly, the attacker can now use Jane’s trusted account to send similar phishing messages or scam attempts to her friends and contacts, leveraging her social network for further illicit activity. This constitutes a privacy violation for her contacts. If Jane used the same or similar password on other online services (a common practice reported in various studies), those accounts are now also at risk. Furthermore, any sensitive business information stored or shared within the compromised account (like customer details, transaction records, supplier information) is now exposed. The incident highlights the vulnerability created by successful phishing attacks and the ripple effect they can have within a user’s network.
Liability Issues
While Jane is primarily the victim of a cybercrime (account hacking, potential fraud using her identity), potential liability issues can arise depending on how the compromised account is used and her actions leading to the compromise. If the attacker uses Jane’s account to perpetrate financial scams against her contacts, Jane could face questions or accusations from her network, potentially leading to reputational damage (a personal liability as discussed in Section 6.5.3). While she might not be legally liable for the attacker’s actions if she reported the compromise promptly and took reasonable steps, demonstrating she took *reasonable steps* (like using available security features) can be complex. If the scam involves financial transactions (e.g., attacker solicits money via M-Pesa claiming to be Jane), untangling the legal and financial mess becomes challenging. The incident also potentially intersects with the Data Protection Act, 2019, if the compromised account contained personal data of her business customers. While not directly liable for the *breach* caused by the attacker, Jane, as a potential data controller (even informal), has obligations regarding the secure processing of data. Her failure to secure her account adequately could be viewed in some contexts (e.g., by customers who lost money) as a lapse contributing to the data exposure and financial loss. The platform’s terms of service regarding account security also come into play; failure to adhere to recommended security practices might affect their assistance or liability in resolving the issue.
Impact of User Knowledge
Jane’s level of knowledge significantly impacts the likelihood of this scenario occurring and its consequences. Based on the survey findings (Section 6), the knowledge gaps illustrated here are common:
- Phishing Recognition: Jane’s failure to recognize the suspicious link and the fake login page points to a lack of confidence in identifying phishing attempts (Table 4.14 shows 25.9% of users are not confident). Higher security awareness and critical evaluation skills (part of digital literacy, Section 3.2) would have helped her spot the red flags (e.g., incorrect URL, unusual request for login).
- Password Security: Reusing passwords or using simple passwords increases vulnerability. Awareness of strong password characteristics (Table 4.15 shows only 30.6% apply this consistently) is crucial.
- Two-Factor Authentication (2FA): If Jane had 2FA enabled on her social media account, even if she entered her password on the fake site, the attacker would have been blocked at the second authentication step (e.g., needing a code sent to her phone); a minimal sketch of this mechanism appears at the end of this case study. Table 4.16 shows that 68.2% of Kenyan users are not effectively using 2FA. Higher security knowledge includes knowing about and implementing 2FA, drastically reducing the chance of a full account takeover.
- Understanding of Account Security Features: Knowledge about in-platform security tools, like reviewing active logins or setting up account recovery options, could help users react faster after a compromise.
- Understanding of Personal Liability & Risk Perception: A user with higher awareness of personal liabilities (Table 4.23 shows 41.2% are aware but don’t always think about it) and a higher perception of online harm seriousness (Table 4.24 shows 20% see it as less serious) might be more cautious about clicking suspicious links or sharing login details. Jane’s focus on her business deal might have led her to underestimate the risk in that moment, a common cognitive bias in risk perception (Section 3.5).
- Demographic Factors: As a self-employed individual, Jane might fall into the group shown to have lower security awareness (Table 4.27) and liability understanding (Table 4.28) compared to those in formal employment who might receive digital security training. Her age group (26-35) might have moderate knowledge compared to younger users but higher exposure due to activities like online business (Table 4.26).
In essence, a higher level of digital security knowledge and more cautious online behavior, informed by a better understanding of risks and liabilities, would have likely prevented or significantly limited the damage in Jane’s case. The widespread low adoption of 2FA (Table 4.16) makes many users highly vulnerable to successful phishing attacks.
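To illustrate why 2FA is so decisive in scenarios like Jane’s, the sketch below shows a time-based one-time password check (TOTP, RFC 6238) using the third-party pyotp library. It is a simplified illustration, not any platform’s actual login logic: even with the phished password, the attacker cannot supply the code generated on Jane’s phone.

```python
# Illustrative sketch: how a time-based second factor blocks an account
# takeover even after the password is phished. Uses the third-party 'pyotp'
# library; the account setup here is hypothetical.
import pyotp

secret = pyotp.random_base32()   # shared secret enrolled on Jane's phone
totp = pyotp.TOTP(secret)        # generates 30-second time-based one-time codes

def login(password_ok: bool, submitted_code: str) -> bool:
    """Both factors must pass: something known (password) and something held (phone)."""
    return password_ok and totp.verify(submitted_code)

# Attacker has the phished password but no access to the phone's current code,
# so the guessed code is (almost certainly) rejected and the takeover is blocked.
print(login(password_ok=True, submitted_code="000000"))    # False
# Jane, holding her phone, supplies the current code and logs in successfully.
print(login(password_ok=True, submitted_code=totp.now()))  # True
```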
Case Study 2: Sharing Unverified Information and Its Consequences
Scenario Description
During a period of heightened political tension in Kenya, David, a 45-year-old secondary school teacher living in a rural area, is actively following discussions on a large political group chat on WhatsApp and his Facebook feed. He sees a widely shared message claiming that a specific ethnic community is planning to disrupt an upcoming public event and includes inflammatory language. The message cites an anonymous source but is shared by several people he knows in the group. Without verifying the information, driven by concern and perhaps political bias, David forwards the message to several other WhatsApp groups and shares it on his Facebook timeline with a comment expressing outrage. Within hours, the message goes viral, reaching thousands of people. Tensions escalate in his locality, leading to minor disturbances. Authorities trace the widely shared message back to his initial public share on Facebook and his numerous forwards on WhatsApp.
Privacy and Security Implications
While not a technical security breach in the traditional sense, this scenario involves the misuse of platform features for malicious purposes. The privacy implication here is the potential exposure and targeting of the ethnic group falsely accused in the message, leading to real-world harm. For David, his previously private or semi-private activity (sharing within groups, posting on his timeline) becomes public and subject to scrutiny by authorities and potentially the wider public. The ease and speed of sharing on social media (WhatsApp’s rapid forwarding, Facebook’s reach) means that private actions can have massive public consequences, blurring the lines between private thought and public dissemination.
Liability Issues
David faces significant legal and personal liabilities. Under Kenya’s Computer Misuse and Cybercrimes Act, 2018, ‘publication of false information’ or content that incites violence or hatred is criminalized (Section 3.6, National Council for Law Reporting, 2018). His action of sharing, even if he didn’t originate the message, can be considered ‘publication’ or aiding the spread of harmful content, making him potentially liable. The legal consequences could include arrest, prosecution, fines, or even imprisonment. Beyond legal penalties, David faces severe personal liabilities: damage to his reputation within his community and professionally as a teacher, loss of trust, potential social ostracization, and the emotional burden of knowing his actions contributed to real-world unrest. The case highlights that liability extends beyond intentional malicious acts to include the negligent sharing of unverified and harmful content, especially during sensitive periods. The speed and reach of social media exacerbate these liabilities compared to traditional forms of communication.
Impact of User Knowledge
David’s knowledge level plays a crucial role in this incident, illustrating common deficits found in the study:
- Understanding of Legal Liability for Harmful Content: David’s decision to share inflammatory content without considering its veracity or potential impact suggests a lack of awareness regarding the legal consequences of hate speech and incitement online. While Table 4.21 shows moderately high awareness of hate speech liability (45.9% fully aware), a large portion (35.3%) are only aware it’s wrong but unsure of legal consequences, and 18.8% are vaguely or not aware. David likely fell into one of these less informed categories.
- Critical Digital Literacy: A key aspect of digital literacy is the ability to critically evaluate online information, identify misinformation, and verify sources before sharing (Section 3.2). David’s immediate forwarding without verification points to a deficit in this area. The survey did not explicitly measure critical evaluation skills, but the prevalence of misinformation on platforms like Facebook and WhatsApp (high usage reported in Table 4.1) makes this a critical area where knowledge gaps likely exist, contributing to scenarios like this.
- Understanding of the Reach and Permanence of Online Content: David may have underestimated how quickly and widely his share would spread and that it could be traced back to him. A user with higher awareness of the viral nature and traceability of online actions (Table 4.24 shows 20% see online harm as less serious) might exercise more caution.
- Demographic Factors: As a rural resident, David might fall into the group with lower overall digital literacy and liability understanding (Table 4.28 shows rural residents having lower scores). His age group (36-45) also tends to have lower liability understanding compared to younger users (Table 4.28), potentially making such users more susceptible to inadvertently sharing harmful content without fully grasping the repercussions. While his occupation as a teacher might imply a certain level of education, it does not guarantee specific training in digital law or critical evaluation of online content relevant to political discourse.
This case underscores that lack of knowledge about legal liabilities for online expression, coupled with insufficient critical digital literacy skills, can transform a seemingly simple act of sharing into a serious legal and personal crisis with tangible societal consequences.
Case Study 3: The Informal Online Shop and Data Mishandling
Scenario Description
Mercy, a 28-year-old university graduate in Mombasa, starts a small online business selling custom-made clothes primarily through a popular Instagram page and receiving orders and customer details (names, phone numbers, delivery addresses) via Instagram Direct Messages (DMs) and WhatsApp. She manages her customer list in a simple spreadsheet on her laptop and sometimes shares order details with her tailor via WhatsApp messages. She finds photos of stylish clothing online (some from popular international brands) and posts them on her page to attract customers, assuming that since they are publicly available online, she can use them. She doesn’t have a formal privacy policy or specific security measures for the customer data she collects, beyond her general phone/laptop passwords. One day, her phone is lost or stolen, and although it has a simple screen lock, her WhatsApp is not secured with a PIN, and the messages containing customer data are accessible. Simultaneously, a competitor notices her use of copyrighted images and sends her a legal notice demanding she cease and desist or face a lawsuit.
Privacy and Security Implications
This scenario highlights the significant privacy and security implications when social media is used for business without proper precautions. Mercy is collecting personal data from customers (names, contacts, addresses), making her a data controller. Losing her phone with unsecured WhatsApp messages exposes this customer data to potential misuse (e.g., identity theft, spamming, or selling the list) – a data breach from her side. Her method of storing data (basic spreadsheet) and sharing it (WhatsApp messages) is also insecure. From a privacy perspective, customers provided their data expecting it would be handled responsibly; Mercy’s lack of secure processing violates this implicit trust and potentially explicit legal obligations. Using copyrighted images without permission also infringes on the original creators’ rights and exposes Mercy to being flagged or penalized by the platform or the rights holder.
Liability Issues
Mercy faces potential legal liabilities under both the Data Protection Act, 2019, and copyright law. As a data controller, even operating informally, she has obligations under the DPA regarding the collection, processing, and storage of personal data (Section 3.6, ODI, 2020). Her failure to secure customer data adequately (e.g., using insecure methods, no encryption, no clear privacy policy, poor device security) constitutes a contravention of the Act’s principles, potentially leading to investigations by the Data Protection Commissioner, fines, or civil lawsuits from affected customers. The loss of the phone containing customer data is a reportable data breach under the DPA, which she may not even be aware she needs to report. Her use of copyrighted images constitutes intellectual property infringement (Section 3.6, KOPI, 2015), exposing her to a civil lawsuit from the copyright holder for damages and injunctions, which can be financially crippling for a small business. Personally, Mercy’s reputation as a reliable business could be severely damaged by the data breach and the legal challenge over images, impacting her ability to attract future customers.
Impact of User Knowledge
Mercy’s lack of knowledge is central to her predicament, reflecting common gaps, particularly among self-employed individuals using social media for commerce:
- Awareness of Data Protection Obligations: Mercy is likely unaware that she is a data controller under the DPA and has legal obligations regarding customer data. Table 4.22 shows very low awareness of the DPA among the general population (only 7.1% fully aware), and Table 4.28 suggests self-employed individuals have lower liability understanding overall. Her informal setup means she hasn’t encountered standard data handling practices common in formal businesses.
- Understanding of Secure Data Handling: Her chosen methods for collecting, storing, and sharing customer data are insecure. This points to a lack of knowledge about basic data security principles relevant to her business operations, beyond personal account security; a minimal illustration of one such precaution appears at the end of this case study.
- Awareness of Intellectual Property Laws: Mercy’s assumption that online images are free to use reflects a widespread misunderstanding of copyright in the digital age. Table 4.20 shows low awareness of IP liability (only 25.9% fully aware), making this a common risk behavior.
- Understanding of Business-Specific Digital Risks: While using social media for business offers opportunities (Table 4.4 shows 45.9% use platforms for work/business), it also introduces specific risks and legal requirements (data protection, e-commerce regulations, consumer protection, IP) that differ from personal use. The lower liability understanding among the self-employed (Table 4.28) suggests a significant gap in ‘digital business literacy’.
- Security Practices: While she uses a basic phone lock, her failure to secure WhatsApp with a PIN reinforces the finding of low adoption of advanced security measures (such as 2FA equivalents for messaging apps) (Table 4.16).
This case illustrates how the convergence of low awareness regarding data protection laws, intellectual property rights, and secure data handling practices creates significant legal and personal liabilities for individuals using social media for informal business activities, a growing segment in the Kenyan digital economy.
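As one concrete illustration of the secure data handling discussed above, the sketch below encrypts a customer file at rest using the third-party cryptography library’s Fernet scheme, so a lost laptop or phone does not expose readable records. It is a minimal example under hypothetical file names, not a complete compliance solution; meeting DPA obligations involves far more than encryption.

```python
# Illustrative sketch: encrypting customer records at rest so device loss does
# not expose readable data. Uses the third-party 'cryptography' library;
# file names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store separately (e.g., a password manager), never beside the data
f = Fernet(key)

# Encrypt the plaintext customer spreadsheet into an authenticated ciphertext.
with open("customers.csv", "rb") as infile:
    token = f.encrypt(infile.read())

with open("customers.csv.enc", "wb") as outfile:
    outfile.write(token)

# Later, only someone holding the key can recover the records for legitimate use.
records = Fernet(key).decrypt(token)
```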
DISCUSSION
This chapter provides a critical interpretation and discussion of the findings presented in the Data Analysis section (Section 6), relating them back to the research objectives (Section 2) and the existing literature reviewed (Section 3). It highlights key insights, significant correlations, and their implications for Kenyan social media users, social media platforms, and relevant regulatory bodies. The limitations of the study are addressed, and areas for future research are suggested, moving beyond a mere restatement of results to offer contextualized analysis.
Interpreting Knowledge Levels of Privacy Settings and Data Handling
Objective 1 aimed to assess the level of awareness among Kenyan social media users regarding privacy settings and data handling practices. The findings presented in Section 6.2 paint a clear picture: while most users are generally aware that privacy settings exist (Table 4.5), their understanding of how to effectively use specific controls (Tables 4.6 and 4.7) and, crucially, their comprehension of what data platforms collect (Table 4.10) and how it is used (Table 4.11) are alarmingly low. Only a small minority reported fully understanding these aspects.
This widespread knowledge gap aligns strongly with existing literature on digital literacy challenges, particularly in contexts like Kenya where formal digital education beyond basic usage might be inconsistent (Section 3.2). The disconnect between awareness of settings’ existence and the ability to use them effectively suggests that current platform interfaces and in-platform guidance may be insufficient or too complex for a large segment of the user base (Table 4.9). The high percentage of users who rarely or never adjust settings (Table 4.8) further underscores this, likely a consequence of both low understanding and perceived difficulty.
The limited understanding of data collection and usage (Tables 4.10, 4.11) is particularly concerning. Concepts such as passive data collection (location, device information) and algorithmic processing for purposes beyond simply displaying content seem abstract or unknown to most users. This finding strongly relates to the Privacy Calculus Theory (Section 3.3). If users do not accurately understand the ‘costs’ associated with sharing information – specifically, the extent of data collected and the multitude of ways it is used and potentially shared with third parties (Table 4.17) – their cost-benefit analysis in the privacy calculus will be fundamentally flawed. They may perceive the benefits of social connection and platform features as outweighing risks they neither fully comprehend nor perceive as significant. This helps explain the “privacy paradox” observed globally, which likely contributes to Kenyan users sharing extensive personal information despite vague privacy concerns.
The low awareness of targeted advertising mechanisms (Table 4.18), while better understood at a basic level than general data usage, still indicates that many users perceive the outcome (relevant ads) without grasping the intrusive data profiling that enables it. This limits their ability to make informed decisions about data sharing for advertising purposes and potentially utilize platform controls designed to manage ad preferences based on data.
The implications for users are clear: many are operating with a limited understanding of how their personal information is managed and controlled on platforms they use daily. This leaves them vulnerable to unintended exposure, manipulation through targeted content (including misinformation), and potential exploitation of their data by third parties. For social media platforms, these findings suggest a failure in communicating complex data practices and empowering users with truly accessible privacy controls. Simply having settings available is not enough if users don’t understand them or perceive them as too difficult to manage. There is a clear need for platforms to simplify interfaces, enhance transparent communication about data practices (perhaps in more accessible formats than lengthy privacy policies), and potentially revisit default privacy settings to be more protective of user data.
For policymakers and regulators, the findings highlight the need for robust digital literacy initiatives specifically addressing data privacy concepts, including the types of data collected, how it is used, and the implications of data sharing. Furthermore, ensuring platform accountability regarding transparent data practices and user empowerment is crucial, potentially requiring clearer regulatory guidelines or enforcement mechanisms to ensure users can genuinely exercise control over their data as envisioned by legislation like the Data Protection Act, 2019 (Section 3.7).
Analyzing Reported Security Exposure and Awareness
Objective 2 focused on identifying common security risks and exposures faced by Kenyan social media users and evaluating their ability to recognize and mitigate these threats. The data in Section 6.3 reveals that encountering security threats on social media is a frequent reality for the majority of respondents (Table 4.12), with common incidents including suspicious links/messages, online scams, and friends’ accounts being compromised (Table 4.13). Disturbingly, a significant number reported personal experiences with account hacking, identity theft, and financial loss due to scams (Table 4.13). This contradicts any notion that these are rare occurrences; for Kenyan users, they are tangible and potentially costly risks.
Despite this high exposure, users’ self-assessed ability to recognize threats like phishing is only moderately confident (Table 4.14), and awareness and usage of crucial protective measures like Two-Factor Authentication (2FA) are notably low (Table 4.16). While awareness of strong password practices is higher, consistent application lags behind (Table 4.15). This gap between frequent exposure and inadequate protective measures creates a significant vulnerability.
These findings relate closely to Risk Perception theory (Section 3.5). While users frequently encounter threats, their perception of the severity or likelihood of these threats affecting them personally might be low, or they may lack the specific knowledge and perceived self-efficacy (belief in their ability to perform protective actions) to adopt measures like 2FA, which might be perceived as complex or inconvenient. The high reported experience with friends’ accounts being compromised (Table 4.13) serves as a form of social signal about risk, yet it doesn’t appear to universally translate into proactive self-protection, perhaps due to the ‘it won’t happen to me’ bias or simply not knowing what specific actions to take.
The implications for users are urgent: they are navigating a threat-rich environment without the necessary skills and security hygiene. There is a critical need for practical, actionable cybersecurity education focusing not just on identifying threats but on the how-to of mitigation – how to set up 2FA, how to verify links, what to do if an account is compromised. Social media platforms have a role in making these security features more prominent, easier to enable (perhaps making 2FA default), and providing clear, accessible guidance within the app. Given the high prevalence of scams and suspicious messages (Tables 4.12, 4.13), platforms also need more robust mechanisms for detecting and flagging malicious content and accounts, as well as streamlining reporting processes that lead to swift action.
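As an illustration of the kind of practical link-checking heuristic such education could teach, the sketch below compares a URL’s host against an expected domain, flagging lookalike addresses of the sort described in Case Study 1. It is a simplified heuristic with hypothetical example URLs, not a complete anti-phishing defence (it does not handle redirects, shortened links, or homoglyph domains).

```python
# Illustrative sketch: a simple link check that flags URLs whose host is not
# the expected domain, catching lookalikes. Example URLs are hypothetical.
from urllib.parse import urlparse

def looks_legitimate(url: str, expected_domain: str) -> bool:
    """True only if the URL's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected_domain or host.endswith("." + expected_domain)

print(looks_legitimate("https://www.facebook.com/login", "facebook.com"))            # True
print(looks_legitimate("https://faceb00k-login.example.com/login", "facebook.com"))  # False
```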
For policymakers and cybersecurity professionals, the data confirms that general awareness campaigns are not sufficient; they must be specific, practical, and reach users where they are (e.g., via mobile-friendly formats, potentially through popular messaging apps like WhatsApp, which is widely used). The high incidence of financial loss due to scams (Table 4.13) highlights the need for collaboration between cybersecurity bodies, financial institutions, and platforms to combat online fraud effectively and provide accessible recourse for victims. Efforts to enforce cybercrime laws (Section 3.7) must be accompanied by user education on how to identify and report these crimes.
Examining Understanding of Legal and Personal Liabilities
Objective 3 sought to determine Kenyan social media users’ understanding of legal and personal liabilities associated with their online activities. The findings (Section 6.5) indicate that this area represents the weakest point in users’ overall knowledge (lowest average scores in Table 4.28). While awareness of personal liabilities like reputational damage is relatively high (Table 4.23), and understanding that hate speech is wrong is also notable (Table 4.21), specific legal consequences for actions like defamation (Table 4.19) or, especially, intellectual property infringement (Table 4.20) and the implications of the Data Protection Act (Table 4.22) are poorly understood by a large proportion of users.
This low understanding of formal legal liabilities suggests a significant disconnect between users’ daily online behaviors and the evolving legal frameworks governing digital conduct in Kenya (Section 3.7). Laws like the Computer Misuse and Cybercrimes Act, 2018, and the Data Protection Act, 2019, are critical, but their specific provisions and how they apply to common social media activities appear largely unknown to the average user. This aligns with literature suggesting that legal literacy, particularly in complex or rapidly evolving areas like cyberlaw, often lags behind technological adoption (Section 3.6).
The relatively higher awareness of personal liabilities (Table 4.23) might be due to these consequences feeling more immediate and relatable than abstract legal statutes. Users may have witnessed others experiencing reputational damage or job loss due to online posts, making this risk more concrete, supporting aspects of Risk Perception theory where personal experience or vivid examples influence perceived risk (Section 3.5). However, even here, a significant portion admits they don’t *always* consider these consequences, suggesting a gap between knowing something *can* happen and consistently factoring it into behavior.
The implications are significant: users are engaging in online activities with potentially serious legal ramifications (e.g., sharing copyrighted material, making unverified claims that could be defamatory) without understanding the risks, as illustrated in Case Studies 2 and 3 (Section 7). This exposes them to unforeseen lawsuits, criminal charges, and significant personal harm. For policymakers and the legal system, there is a critical need for simplifying legal information and making it accessible to the public. Legal concepts related to defamation, IP, and data rights need to be translated into plain language and disseminated through channels that users actually engage with (e.g., social media itself, community programs). Enforcement actions, while necessary, could also serve an educational purpose if communicated effectively to highlight the link between online actions and legal consequences. Social media platforms, while not primarily legal educators, can contribute by having clearer terms of service related to prohibited content (like IP infringement, hate speech) and potentially integrating prompts or information related to these issues.
The low awareness of the Data Protection Act (Table 4.22) means that users are likely unaware of their rights as data subjects (e.g., right to access, correct, or erase data) and the obligations of platforms and businesses handling their data. This disempowers users in navigating the data economy and exercising control granted by law. Efforts by the Data Protection Commissioner’s office to raise public awareness of the Act’s provisions and users’ rights are clearly essential.
Demographic Variations: Knowledge, Exposure, and Vulnerability
Objective 4 aimed to analyze the relationship between demographic factors and knowledge/exposure. The analysis in Section 6.6 reveals strong, consistent patterns: Age, Education Level, and Residence are powerful determinants of knowledge across all three areas (Privacy, Security, and Liability). Younger users, those with higher educational attainment, and urban residents consistently demonstrate higher average knowledge scores (Tables 4.25, 4.27, 4.28).
This finding is not entirely unexpected, aligning with general digital literacy literature (Section 3.2) which shows that access to formal education, exposure to technology, and socio-economic factors (often correlated with urban residence) influence digital skills and knowledge. Younger users, being ‘digital natives,’ often pick up technical skills faster. Higher education provides better cognitive tools for understanding complex concepts. Urban areas typically offer better connectivity and access to information resources.
However, the finding that the same demographic groups with the highest knowledge (young, educated, urban) also report the highest frequency and variety of security incidents (Table 4.26) presents an interesting dynamic. This could be interpreted in several ways:
- Higher Activity, Higher Exposure: These groups likely spend more time on social media (implied by daily duration in Table 4.3) and engage in a wider range of online activities (business, civic engagement, etc., Table 4.4), naturally increasing their exposure to threats.
- Different Risk Profiles: Their specific online activities (e.g., online commerce, complex interactions) might expose them to more sophisticated or frequent threats compared to users with simpler usage patterns.
- Better Recognition/Reporting: It is possible that users with higher security awareness are simply better at *recognizing* and *recalling* security incidents when they occur, whereas users with lower awareness might not identify a suspicious message as a threat or recall a past incident as significant. Thus, the reported exposure might reflect recognition ability as much as actual incidence.
Regardless of the exact interplay, it highlights that even increased knowledge does not eliminate risk; it changes the nature of exposure and potentially the ability to cope. This group may be facing more complex, targeted threats due to their higher online profile and activity.
Conversely, the analysis clearly identifies vulnerable demographic groups characterized by lower knowledge across all domains: older users (56+), those with primary or no formal education, and rural residents. These groups have lower awareness of privacy controls, understand fewer security measures (like 2FA), and have particularly limited understanding of legal liabilities. Their lower reported exposure might misleadingly suggest less risk, but it could also mean they are encountering threats they don’t recognize or reporting incidents less frequently. Their low adoption of fundamental security practices like 2FA (significantly lower in older/less educated/rural groups based on detailed cross-tabs, not shown in table but consistent with overall trends) makes them highly susceptible even to basic attacks like phishing or account compromise (Case Study 1). This vulnerability is compounded by potential challenges in accessing help or information due to connectivity issues, lower digital literacy, or lack of local support structures compared to urban areas.
Gender differences, while less pronounced than age or education, consistently showed slightly lower scores for females across all knowledge areas. Coupled with potentially higher exposure to certain types of online harm like harassment (illustrative example in Section 6.6.6), this suggests that targeted digital literacy efforts need to consider gender-specific risk profiles and information access barriers.
The findings regarding occupational categories are also insightful. Students and formal sector employees, often correlating with higher education and urban residence, show higher knowledge. The self-employed group, despite high social media usage often for business, exhibited lower knowledge in privacy settings and liability compared to formal employees (Tables 4.25 and 4.28). This group’s reliance on platforms for income generation without formal training in digital business security or data handling presents a significant vulnerability (Case Study 3), underscoring a critical target for specific digital literacy programs.
The comparison of self-assessed and actual knowledge (Table 4.29) indicates that users have some general sense of their knowledge level, but overconfidence exists among those rating themselves highly. This suggests that interventions targeting more knowledgeable groups need to move beyond basic awareness to address nuanced understandings and combat potential complacency. For users with low self-assessed knowledge, the challenge is engagement and providing easily digestible, relevant information.
Overall, the demographic analysis confirms that digital literacy levels concerning privacy, security, and liability are unevenly distributed across the Kenyan social media user base. This highlights the need for tailored, contextually relevant digital literacy programs that specifically target identified vulnerable groups (older adults, less educated, rural populations, and certain occupational groups like the self-employed) while also providing advanced, specific information for more digitally active users to address the gap between knowledge and consistent protective behavior and combat overconfidence.
Implications for Stakeholders
Based on the findings and their interpretation, the study identifies crucial implications for various stakeholders:
Implications for Kenyan Social Media Users
- Increased Vulnerability: Users face significant risks (privacy violations, security breaches, financial loss, legal consequences) due to widespread knowledge gaps in privacy management, security practices, and understanding of liability.
- Need for Proactive Learning: Users cannot rely solely on intuition or platform defaults. They need to actively seek out and apply knowledge about online safety.
- Importance of Security Hygiene: Adopting basic but critical measures like 2FA and strong password practices is paramount, especially given high exposure rates.
- Critical Evaluation Skills: Developing the ability to identify misinformation, scams, and suspicious content is essential for personal safety and responsible online participation.
- Understanding Rights and Responsibilities: Awareness of legal frameworks like the DPA and Cybercrimes Act empowers users regarding their data rights and informs responsible conduct.
- Tailored Needs: Users from vulnerable demographics (older, less educated, rural) and those using social media for specific purposes (business) require accessible, tailored educational resources.
Implications for Social Media Platform Providers
- Improvement of User Interfaces: Privacy and security settings are perceived as difficult; platforms must simplify their design and make critical controls (like 2FA) more prominent and user-friendly.
- Enhanced In-Platform Education: Generic help sections are insufficient. Platforms need contextual, timely, and easy-to-understand guidance within the user flow, explaining *why* certain settings or security measures are important and *how* to implement them effectively.
- Stronger Default Settings: Default privacy settings should be more privacy-preserving rather than requiring users to opt-out of extensive sharing or visibility.
- Robust Threat Detection and Reporting: Given the high frequency of scams and suspicious content, platforms must invest more heavily in automated detection (a toy illustration follows this list) and ensure user reporting mechanisms are effective and lead to timely action.
- Contextual Relevance: Platforms should consider the specific threat landscape and digital literacy levels in Kenya when designing safety features and communication strategies.
- Transparency: Platforms need to improve transparency regarding data collection, usage, and sharing in clear, accessible language, going beyond complex legal policies.
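As a toy illustration of the automated detection recommended above, the sketch below flags messages using a few rule-based scam indicators. The indicator patterns are invented for illustration; production platform systems rely on machine-learned classifiers operating at far greater scale and subtlety.

```python
# Toy sketch of rule-based scam-message flagging. Real platforms use
# machine-learned classifiers; these indicator phrases are invented
# examples of patterns common in scams users report.
import re

SCAM_INDICATORS = [
    r"you have won",            # unsolicited prize claims
    r"send (money|airtime)",    # direct payment requests
    r"verify your account",     # credential-phishing pretext
    r"bit\.ly/|tinyurl\.com/",  # shortened links that hide the destination
]

def looks_suspicious(message: str) -> bool:
    """Flag a message if any scam indicator pattern matches (case-insensitive)."""
    return any(re.search(p, message, re.IGNORECASE) for p in SCAM_INDICATORS)

print(looks_suspicious("Congratulations, you have WON! Claim: bit.ly/x9z"))  # True
print(looks_suspicious("Meeting moved to 3pm, see agenda attached."))        # False
```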
Implications for Policymakers and Regulatory Bodies (e.g., CAK, Data Protection Commissioner, Parliament, Law Enforcement)
- Digital Literacy Prioritization: Investing in comprehensive, national-level digital literacy programs is crucial. These programs should be practical, cover privacy, security, and liability, and be tailored to different demographic groups and their specific needs (e.g., digital business safety for SMEs using social media).
- Simplified Legal Information: Translating complex cyber laws (Cybercrimes Act, DPA) into accessible formats and disseminating this information widely through public awareness campaigns is essential for promoting legal compliance and user empowerment.
- Effective Enforcement and Communication: Visible and consistent enforcement of cyber laws, coupled with public communication about cases, can serve as a deterrent and highlight the real-world consequences of illegal online behavior.
- Collaboration: Fostering collaboration between government, private sector (including platforms and telcos), civil society organizations, and educational institutions is necessary to develop and deliver effective online safety initiatives and address emerging threats.
- Curriculum Development: Integrating digital literacy, including online safety, privacy, and legal/ethical dimensions, into formal education curricula at all levels is vital for equipping future generations.
- Support Mechanisms: Establishing or strengthening accessible support mechanisms for users who experience online harm (e.g., reporting hotlines for scams, cyberbullying support, data breach guidance) is necessary.
- Regulatory Framework Review: Continuously reviewing and adapting the legal and regulatory framework to keep pace with the rapidly evolving digital landscape and address new challenges (e.g., deepfakes, sophisticated AI scams, data exploitation).
Limitations of the Study
While this study provides valuable insights into the knowledge, exposure, and understanding of liability among Kenyan social media users, it is important to acknowledge its limitations, which stem primarily from the research design and data collection methods (Section 4):
- Cross-Sectional Design: The study provides a snapshot in time. Digital literacy, online threats, and platform features are constantly evolving, meaning the findings reflect the situation at the time of data collection and may change over time.
- Self-Reported Data: The study relies on self-reported knowledge, experiences, and practices via a survey. Responses may be subject to recall bias, social desirability bias (reporting what is perceived as the ‘correct’ answer rather than actual behavior), or over- or underestimation of one’s own knowledge or exposure. Actual knowledge or behavior might differ from reported data.
- Sample Representativeness: Stratified sampling was employed to enhance diversity across key demographics, and users were reached via multiple channels (online and/or offline recruitment, as detailed in Section 4). Even so, achieving a truly random and fully representative sample of the entire adult Kenyan social media user population is challenging, and the sample may still be skewed towards more accessible, digitally engaged, or survey-willing individuals. The lower representation in certain strata (e.g., older ages, primary education, homemaker/retired occupations) means findings for these specific groups should be interpreted with caution.
- Measurement of Knowledge: Knowledge was assessed through structured questions (e.g., multiple choice, direct queries). This measures explicit awareness but may not fully capture implicit understanding, critical thinking skills, or the ability to apply knowledge in complex real-world scenarios; the scoring sketch after this list illustrates the point.
- Definition of Social Media User: The definition used was broad (“actively uses at least one major platform regularly”). Usage patterns and depth of engagement can vary significantly, influencing exposure and knowledge, which was partially addressed by collecting usage data but not explored in extensive detail.
- Exploratory Nature: The study is exploratory and descriptive. While it identifies correlations between demographics and knowledge/exposure, it does not establish causal relationships. Further research would be needed to understand *why* these relationships exist (e.g., does education *cause* higher knowledge, or are there other confounding factors?).
- Breadth vs. Depth: Covering privacy, security, and liability across various platforms and aspects in a single survey necessitated breadth over deep dives into highly specific features, threat vectors, or legal nuances.
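To illustrate the measurement limitation flagged above, this minimal sketch shows the kind of explicit-knowledge scoring that structured questionnaire items support: each item has one keyed answer, and the score is simply the proportion answered correctly. The item names, keys, and responses are hypothetical; the point is that such scoring observes stated knowledge, not the ability to apply it.

```python
# Minimal sketch of explicit-knowledge scoring from structured survey items.
# Item names, keys, and responses are hypothetical; a real instrument would
# be longer and validated. Note what this CANNOT measure: applied skill.

ANSWER_KEY = {               # keyed correct option, one per item
    "q_privacy_audience": "b",
    "q_phishing_sign": "d",
    "q_dpa_right": "a",
}

def knowledge_score(answers: dict[str, str]) -> float:
    """Proportion of keyed items answered correctly (0.0-1.0)."""
    correct = sum(answers.get(item) == key for item, key in ANSWER_KEY.items())
    return correct / len(ANSWER_KEY)

respondent = {"q_privacy_audience": "b", "q_phishing_sign": "a", "q_dpa_right": "a"}
print(f"{knowledge_score(respondent):.2f}")  # -> 0.67
```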
These limitations notwithstanding, the study provides valuable foundational empirical data and highlights significant trends and disparities that warrant attention and further investigation.
Areas for Future Research
The findings of this exploratory study open several avenues for future research to build upon the insights gained and address the identified limitations:
- Qualitative Investigations: Conduct in-depth qualitative studies (e.g., interviews, focus groups) to explore why users struggle with specific privacy settings, how they perceive different online risks, how they verify information, and their understanding of legal terms in their own words. This could provide richer context and uncover underlying barriers (e.g., trust issues, fear of technology, cultural norms) not captured by quantitative methods.
- Longitudinal Studies: Track changes in knowledge levels, exposure rates, and understanding of liability over time, particularly in response to specific awareness campaigns, platform changes, or legal developments.
- Effectiveness of Interventions: Design and evaluate the effectiveness of targeted digital literacy programs tailored to specific demographic groups (e.g., older adults, rural communities, self-employed individuals) or specific topics (e.g., 2FA adoption, scam recognition, DPA rights).
- Platform-Specific and Threat-Specific Studies: Conduct research focusing specifically on usage patterns, risks, and knowledge gaps associated with individual platforms (e.g., WhatsApp privacy within groups, TikTok content moderation) or specific prevalent threats in Kenya (e.g., M-Pesa scams via social media, political misinformation spread).
- Assessment of Critical Digital Literacy: Develop and apply more nuanced instruments to measure users’ critical evaluation skills, ability to discern credible information, and understanding of algorithmic influences, beyond basic knowledge of settings or threats.
- Comparative Research: Compare findings on knowledge levels, exposure, and liability understanding across different regions in Kenya or with users in other East African or developing countries to identify common challenges and context-specific variations.
- Role of Local Context: Investigate how local languages, cultural norms around sharing information, and community trust networks influence online behaviors, risk perception, and the effectiveness of digital safety messages in specific Kenyan communities.
- Digital Business Literacy: Conduct dedicated research into the digital literacy and specific online safety needs (privacy, security, legal/regulatory compliance) of individuals and small businesses using social media platforms for commercial purposes in the formal and informal sectors.
- Policy Implementation and Awareness: Study the actual implementation and enforcement of relevant laws like the Data Protection Act and the Computer Misuse and Cybercrimes Act from the perspective of duty bearers and assess the effectiveness of public awareness campaigns related to these laws.
- Impact of Platform Design: Conduct usability studies or A/B testing to assess how different interface designs, default settings, or in-platform notifications impact users’ privacy and security behaviors (a simple statistical sketch follows this list).
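For the platform-design studies proposed above, one basic analysis would compare an outcome such as 2FA adoption between two interface variants with a two-proportion z-test. The sketch below uses only the Python standard library; the counts and variant descriptions are invented for illustration.

```python
# Minimal sketch: two-proportion z-test comparing 2FA adoption between two
# interface variants in a hypothetical A/B test. All counts are invented.
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Variant A: prominent 2FA prompt; Variant B: buried in settings (hypothetical).
z, p = two_proportion_z(success_a=180, n_a=500, success_b=130, n_b=500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would suggest the more prominent prompt changes behavior, though a real study would also pre-register the design and report the practical effect size.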
Pursuing these research areas will provide a more comprehensive understanding of the complex relationship between social media use and user safety in Kenya and inform more effective strategies for enhancing digital literacy, user protection, and responsible online conduct.
CONCLUSIONS
This exploratory study set out to provide an empirical assessment of Kenyan social media users’ knowledge regarding online privacy, security exposures, and legal liabilities. Based on the comprehensive data analysis, several key conclusions can be drawn, directly addressing each of the research objectives and highlighting the overarching understanding and existing gaps within the Kenyan digital landscape.
Understanding of Privacy Settings and Data Handling
The study concludes that while a significant majority of Kenyan social media users are generally aware of the existence of privacy settings on their platforms (93.5% as per Table 4.5), their actual understanding of how to effectively utilize these controls and the comprehensive implications of data handling practices remains profoundly limited. Specifically:
- Only a small minority (18.2%) fully understand privacy settings, with over half (55.3%) admitting awareness without full comprehension (Table 4.5).
- Less than 60% confidently know how to control the audience for their posts (Table 4.6), and even fewer (41.2%) are confident in managing photo tagging controls (Table 4.7).
- A concerning majority rarely (49.4%) or never (18.8%) review or adjust their privacy settings (Table 4.8), indicating a widespread reliance on default configurations which are often less privacy-protective.
- This inaction is compounded by a high perceived difficulty of managing settings, with 47% finding them difficult or very difficult (Table 4.9).
- Crucially, users demonstrate a shallow understanding of what data social media platforms collect beyond basic content (only 9.4% understand it very well, Table 4.10) and an even poorer grasp of how this data is utilized for purposes like targeted advertising or shared with third parties (only 6.5% understand data usage very well, Table 4.11; only 5.9% fully understand third-party sharing, Table 4.17).
In essence, a significant privacy literacy gap exists, where awareness of privacy concepts does not translate into detailed knowledge or proactive management, thus underpinning a flawed ‘privacy calculus’ where the true costs of data sharing are largely underestimated by users.
Recognition and Mitigation of Security Risks and Exposures
The research concludes that Kenyan social media users are highly and frequently exposed to a diverse range of online security threats, yet their ability to effectively recognize and mitigate these threats is insufficient, leading to significant vulnerabilities. Key findings include:
- A substantial majority of users (54.1%) report encountering online security threats frequently or very frequently (Table 4.12).
- Reported experiences are widespread, with 75.3% receiving suspicious links/messages, 68.2% encountering online scams, and 56.5% having friends’ accounts compromised. Alarmingly, 25.9% reported their own account being hacked, and 15.3% experienced financial loss due to scams (Table 4.13).
- Despite high exposure, only 28.2% are very confident in recognizing phishing attempts (Table 4.14).
- While awareness of strong password characteristics is relatively high (75.3% are aware to some extent), only 30.6% consistently apply these practices (Table 4.15).
- The adoption of Two-Factor Authentication (2FA), a critical security measure, is critically low; 68.2% of users are either unaware of 2FA or know it exists but do not know how to set it up or use it (Table 4.16).
These findings underscore that security threats are a tangible and pervasive reality for Kenyan social media users, who frequently experience direct or indirect impacts. However, a significant proportion lack the practical knowledge and consistent application of essential cybersecurity measures to protect themselves, leaving them highly susceptible to compromise and financial loss.
Understanding of Legal and Personal Liabilities
The study concludes that understanding of legal and personal liabilities associated with social media activities is the weakest area of knowledge among Kenyan users, posing a significant risk for unintended consequences.
- While awareness of personal liabilities like reputational damage is relatively high (49.4% fully aware, Table 4.23), a large portion (41.2%) admits they don’t always consider these consequences, indicating a knowledge-behavior gap.
- Awareness of legal liability for hate speech/incitement is moderately high (45.9% fully aware, Table 4.21), likely influenced by public discourse on these critical issues.
- However, understanding of legal consequences for posting false information (defamation) is more varied, with 22.4% vaguely aware or unaware (Table 4.19).
- Knowledge of legal liability for Intellectual Property (IP) infringement (e.g., sharing copyrighted material) is particularly low, with only 25.9% fully aware of its illegality (Table 4.20).
- Crucially, awareness of Kenya’s Data Protection Act, 2019, and its specific implications for social media users’ rights and obligations is alarmingly low, with only 7.1% fully aware of the Act and their rights (Table 4.22).
- A notable 20% of users still perceive online harm as less serious than offline harm (Table 4.24), potentially contributing to an underestimation of online accountability.
This indicates that many users are engaging in online activities without a clear comprehension of the serious legal ramifications under national laws, nor a consistent consideration of personal repercussions. This significant legal literacy deficit puts users at risk of unforeseen prosecution, civil lawsuits, and severe personal harm.
Demographic Variations in Knowledge, Exposure, and Understanding
The analysis demonstrates that knowledge levels regarding privacy, security, and liability, as well as reported security exposures, vary significantly across demographic strata. The study highlights consistent patterns:
- Age, Education, and Residence as Key Predictors: Younger users (18-25 years), those with tertiary/university education, and urban residents consistently exhibit higher average knowledge scores across all three domains (privacy, security, and liability) (Tables 6.25, 6.27, 6.28).
- Vulnerability of Specific Groups: Conversely, older users (56+ years), individuals with primary or no formal education, and rural residents consistently demonstrate the lowest knowledge levels across all aspects of online safety (Tables 6.25, 6.27, 6.28). These groups are particularly vulnerable due to a foundational deficit in digital literacy, compounded by lower adoption of crucial security measures like 2FA.
- Knowledge vs. Exposure Discrepancy: While younger, more educated, urban users possess higher knowledge, they also report a significantly higher average number of security incidents (Table 4.26). This suggests that increased online activity and engagement in a wider range of online behaviors (e.g., business, civic engagement) correlates with greater exposure to threats, even with higher awareness. It may also imply that more knowledgeable users are better at recognizing and recalling incidents.
- Occupational Nuances: Students and formally employed individuals generally show higher knowledge. However, self-employed individuals and business owners, despite heavy reliance on social media for commerce, show notable gaps in privacy settings knowledge and liability understanding, particularly concerning data protection and intellectual property, indicating a specific vulnerability for this growing sector.
- Gender Disparities: Males generally show slightly higher knowledge scores than females across all categories. While this gap is less pronounced than those associated with other demographic factors, it suggests a persistent gender gap in digital literacy for online safety, warranting targeted attention.
In conclusion, while highly digitally active, a significant portion of Kenyan social media users operate with critical knowledge gaps in privacy management, security practices, and legal liabilities. These gaps are unevenly distributed, creating distinct vulnerable populations (older, less educated, rural, and specific occupational groups) who are ill-equipped to navigate the complexities and risks of the digital environment. Even the most knowledgeable groups exhibit areas of weakness, particularly in understanding complex data usage and legal nuances, and face higher exposure rates due to their extensive online engagement. This underscores an urgent need for multi-faceted and targeted interventions to foster a safer and more responsible digital ecosystem in Kenya.
RECOMMENDATIONS
Based on the comprehensive findings and discussions of this exploratory study, which highlighted significant knowledge gaps, high exposure to threats, and limited understanding of liabilities among Kenyan social media users, the following actionable recommendations are proposed for various stakeholders. These recommendations aim to foster a safer, more informed, and responsible digital environment in Kenya.
Recommendations for Kenyan Social Media Users
Empowering individual users with knowledge and tools is paramount. Users are encouraged to:
- Prioritize Digital Literacy: Actively seek and engage with educational resources on online privacy, cybersecurity best practices, and legal implications. Utilize readily available online guides, community workshops, and official government advisories.
- Master Privacy Settings: Regularly review and proactively adjust privacy settings on all social media platforms. Understand audience controls for posts, photo tagging permissions, and location sharing to manage their digital footprint effectively. Do not solely rely on default settings.
- Implement Robust Security Practices:
- Enable and consistently use **Two-Factor Authentication (2FA)** on all social media and critical online accounts (see the illustrative sketch after this list for how one common 2FA scheme works).
- Create and use strong, unique passwords for each platform to prevent cascading compromises.
- Maintain constant vigilance against phishing attempts, online scams, and suspicious links. Always verify the source and legitimacy of requests before clicking or providing information.
- Cultivate Critical Thinking: Develop the ability to critically evaluate online information, identify misinformation, and verify sources before sharing any content, especially during sensitive periods, to avoid inadvertently spreading harmful content.
- Understand Rights and Responsibilities: Familiarize themselves with the key provisions of Kenya’s Data Protection Act, 2019, to understand their rights regarding personal data, and be aware that online actions carry real-world legal and reputational consequences.
- Report Incidents: Know how and where to report security incidents, scams, cyberbullying, or harmful content to both social media platforms and relevant national authorities (e.g., Communications Authority of Kenya, Directorate of Criminal Investigations).
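To show why the 2FA recommendation above is both effective and mechanically simple, the sketch below generates a time-based one-time password (TOTP, RFC 6238), the scheme behind most authenticator apps, using only the Python standard library. The base32 secret shown is a made-up example, never a real credential.

```python
# Minimal sketch of TOTP (RFC 6238), the scheme behind most authenticator
# apps used for social media 2FA. Standard library only; the base32 secret
# below is a made-up example, never a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Return the current 6-digit time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step             # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039' -- changes every 30 seconds
```

Because each code is derived from a shared secret and the current 30-second window, a stolen password alone is not enough to take over an account, which is the practical protection at stake.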
Recommendations for Social Media Platforms
Given their central role in the digital ecosystem and the complexities of their interfaces, social media platforms have a critical responsibility:
- Simplify Privacy and Security Interfaces: Redesign privacy and security settings to be more intuitive, user-friendly, and easily navigable, reducing the perceived difficulty for users. Utilize plain language instead of technical jargon.
- Strengthen Default Privacy Settings: Implement more privacy-preserving default settings (e.g., private profiles, limited data sharing) that require users to actively opt-in for broader visibility or data collection.
- Promote and Streamline 2FA Adoption: Make Two-Factor Authentication a highly encouraged default feature, perhaps even mandatory for certain actions or during account creation, with clear, simplified setup processes.
- Enhance In-Platform Digital Literacy: Embed interactive, context-sensitive educational modules or prompts within the application itself. These should explain the purpose of privacy features, common scams, and reporting mechanisms, localized for Kenyan users where appropriate.
- Improve Threat Detection and Reporting Mechanisms: Invest more heavily in artificial intelligence and machine learning capabilities for proactive detection and swift removal of phishing attempts, scams, hate speech, and misinformation. Ensure user reporting channels are highly effective and lead to timely, transparent action.
- Increase Data Transparency: Provide clearer, more digestible information on how user data is collected, processed, used, and shared with third parties. Consider interactive dashboards or simplified summaries instead of lengthy legal privacy policies.
Recommendations for Government and Regulatory Bodies
Government bodies and regulators play a crucial role in shaping the legal and educational environment for online safety:
- Develop and Implement Targeted Digital Literacy Programs: Launch comprehensive national digital literacy campaigns that specifically address online privacy, cybersecurity, and legal/personal liabilities. These programs must be tailored to vulnerable demographic groups (e.g., older adults, rural populations, less educated individuals, and self-employed businesses using social media for commerce), utilizing appropriate local languages, accessible channels (community centers, mobile-first content, radio), and engaging teaching methods.
- Simplify and Disseminate Legal Information: Translate the Computer Misuse and Cybercrimes Act, 2018, and the Data Protection Act, 2019, into easily understandable, actionable guides for the general public. Utilize widespread public awareness campaigns across traditional and digital media to inform citizens of their rights and responsibilities under these laws.
- Strengthen Enforcement and Foster Collaboration: Vigorously enforce existing cybercrime and data protection laws, and publicly communicate successful prosecutions to serve as a deterrent and highlight the real-world consequences of illegal online behavior. Foster stronger collaboration between law enforcement (e.g., DCI), regulatory bodies (e.g., Communications Authority of Kenya, Office of the Data Protection Commissioner), financial institutions, and civil society organizations to combat online fraud and provide accessible recourse for victims.
- Integrate Digital Safety into Education Curricula: Mandate the inclusion of comprehensive modules on online safety, privacy, digital citizenship, and legal/ethical dimensions into the national school curriculum from primary to tertiary levels.
- Establish Accessible Support Mechanisms: Create or strengthen easily accessible national hotlines or physical centers where citizens can report online harm, seek advice on privacy/security issues, and receive support for digital incidents.
Recommendations for Future Research
To build upon this exploratory study, future research should delve deeper into specific areas identified as knowledge gaps and vulnerabilities:
- Conduct **qualitative investigations** to understand the underlying perceptions, motivations, and cultural factors influencing user behavior and knowledge deficits.
- Implement **longitudinal studies** to track changes in knowledge, exposure, and behavior over time, particularly in response to interventions or policy changes.
- Evaluate the **effectiveness of specific digital literacy programs** tailored to different demographics and risk profiles.
- Focus on **platform-specific and context-specific threats**, such as the nuances of WhatsApp scams or misinformation campaigns in Kenya, and user resilience to these.
- Develop and apply more robust instruments to measure **critical digital literacy**, including the ability to discern credible information and understand algorithmic influences.
- Investigate the digital safety needs of self-employed individuals and small businesses heavily reliant on social media for commerce, with a focus on data protection and intellectual property liabilities.
APPENDIX: QUESTIONNAIRE
This appendix outlines the structure and representative questions of the survey instrument used for data collection. Designed to address the research objectives, the questionnaire employed a mix of multiple-choice items, Likert scales, and categorical selections for quantitative measurement.
Questionnaire Structure:
Section 1: Demographic Profile
- Examples: Age Group, Gender, Highest Education Level, Occupation, Residence.
Section 2: Social Media Usage Patterns
- Examples: Platforms used regularly (multi-select), Frequency of access, Daily duration, Primary purpose.
Section 3: Privacy Knowledge and Practices
- Examples: Awareness of privacy settings, Ability to control post audience, Frequency of adjusting settings, Understanding of data collection and usage by platforms.
Section 4: Security Awareness and Exposure
- Examples: Frequency of encountering threats, Types of incidents experienced (multi-select), Confidence in recognizing phishing, Awareness/usage of Two-Factor Authentication (2FA).
Section 5: Understanding of Liability
- Examples: Awareness of legal consequences for defamation, copyright infringement, or hate speech; Knowledge of Kenya’s Data Protection Act, 2019; Perception of online vs. offline harm seriousness.
Section 6: Overall Digital Literacy & Risk Perception
- Examples: Self-rated knowledge of privacy/security, Trust in platforms to protect data.
This structured approach ensured comprehensive data capture across all research objectives, enabling detailed quantitative analysis of knowledge levels and their variations across diverse demographic segments of Kenyan social media users.