Gen Z Protests and the Ethics of AI-Generated Political Images: A Sentiment Analysis of Kenyan Twitter Discourse
Boniface Kimwere
KCA University Masters: Knowledge Management and Innovation
DOI: https://doi.org/10.51584/IJRIAS.2025.10040012
Received: 11 March 2025; Accepted: 15 March 2025; Published: 29 April 2025
ABSTRACT
The rapid, unprecedented advancement of artificial intelligence (AI) has transformed political discourse in Kenya, especially in digital activism. The current study examines the recent Gen Z protests and the role of AI-generated political images in shaping public sentiment on Kenyan X, focusing specifically on how these images have been used to criticize the head of state. The study adopts a quantitative research design, with sentiment analysis conducted on 680 tweets comprising 80 to 100 replies to each of seven original AI-generated political images. The researcher harvested the raw data from X using TwReplyExport and developed a Python-based sentiment analysis model that classified each response as positive, negative, or neutral. The findings reveal that AI-generated political imagery attracts high engagement on X, with posts receiving thousands of likes and retweets, and that positive sentiment predominates. While these AI-generated images effectively amplify digital activism and political narratives, they raise serious ethical concerns about digital manipulation, misinformation, and ideological polarization. The neutral sentiments in these tweets could imply that some users are skeptical about this kind of digital activism, which points to a growing need for media literacy and transparency. Moreover, the use of AI-generated political imagery raises concerns about echo chambers and deepening political divides. These problems underline the necessity of ethical guidelines and regulatory changes to address the risks of AI-generated misinformation in political communication.
Keywords: AI-generated political images, digital activism, sentiment analysis, misinformation, political discourse, Kenyan X, ethics
INTRODUCTION
The June 2024 Finance Bill protests marked a watershed moment in the Kenyan political landscape, igniting a serious national conversation on corruption, governance, and social justice. Remarkably, these protests, primarily led by young people now referred to as Gen Z, initially started on X (formerly Twitter) and were characterized by an unprecedented outpouring of activism that transcended traditional and tribal political leadership and party affiliations (Omweri, 2024; Kiprono, 2024). The young protestors labelled themselves ‘leaderless’ and ‘partyless’ and carried the national flag during their distinctive demonstrations. Across the nation, they demanded that President William Ruto immediately withdraw the Finance Bill 2024, which many Kenyans viewed as emblematic of the state’s failure to deal with the pressing social and economic challenges facing the masses. Emboldened by the collective resolve and strength of their generation, the protestors not only called for the immediate repeal of the bill but also directed their anger at President Ruto, demanding that he dissolve his cabinet and institute far-reaching reforms aimed at systemic challenges such as rampant corruption, cronyism, widespread unemployment, the lack of empathy from the political class, and continued poor representation by elected officials, especially members of parliament.
Unfortunately, true to the colonial inheritance highlighted by Pfingst and Kimari (2021), the Kenyan government responded to this challenge with characteristic violence, deploying thousands of security forces, especially in Nairobi, to suppress dissent. The brutal crackdown resulted in the deaths of at least 50 protestors, with countless others suffering physical and psychological harm (Kenya National Commission on Human Rights, 2024). This violent repression culminated in a direct confrontation at the Kenyan Parliament, where peaceful, unarmed protestors marched as Members of Parliament deliberated and eventually voted to pass the controversial Finance Bill. What followed was a tragic scene in which security personnel opened fire on the protestors, whom President Ruto would later describe as criminals and treasonous, leaving several dead and others wounded amidst the chaos.
A few days later, President Ruto caved under pressure, announcing that he would not sign the bill into law. He also dissolved his cabinet, signalling that he was keen on implementing some of the protestors’ demands (Musambi, 2024). Despite these promises of reform, the President backtracked by re-hiring half of his cabinet, including cabinet secretaries whom protestors had labelled unfit to lead because of corruption and integrity issues. Moreover, President Ruto closed ranks with former Prime Minister Raila Odinga to preserve his political power, creating a broad-based government that he suggested would help him resolve outstanding issues. Despite these moves, most young people felt the President had not instituted the radical changes they had demanded during the protests.
As a result, protestors left the streets and took their activism to social media platforms. Platforms like X, Facebook, and TikTok became the new battlegrounds where public sentiment, counter-narratives, and political opinions unfolded (Twinomurinzi, 2024). Among the most significant and controversial acts of activism was the creation of artificial intelligence (AI) images as political imagery. These images, which users either generated entirely with AI or produced by manipulating existing photographs, began circulating in 2024, influencing how individuals perceived their leaders and expressed their frustrations with the ruling class. In this context, the rapid use and dissemination of AI-generated content continues to raise profound questions regarding the role of technology in shaping Kenya’s political discourse, with significant concerns regarding its ethical implications.
Essentially, this paper assesses the intersection of AI, public sentiment, and political imagery in Kenya, with a focus on how AI-generated political images continue to influence discourse surrounding the 2024 Finance Bill protests and the resulting political and judicial implications of these activities, especially on X. A sentiment analysis of different X users and their response to these AI images helps examine how AI-driven content continues to impact public opinion, the ethical concerns it raises, and the broader implications for political integrity and democracy in Kenya.
BACKGROUND
The Evolution of AI
Although it is difficult to pinpoint the origins of AI, researchers have traced this transformative technology to the 1940s. In 1942, Isaac Asimov, an American science fiction writer, published his famous short story, Runaround. In the plot, a robot overseen by the engineers Mike Donovan and Gregory Powell adheres to the three laws of robotics: not injuring or bringing harm to humans, obeying commands from humans except where those orders contravene the first rule, and protecting its own existence within the limits of these guidelines. The short story inspired many scientists in the field of robotics, most notably the American cognitive scientist Marvin Minsky, who later co-founded the MIT AI Lab.
The term artificial intelligence was coined in 1956, when John McCarthy and Marvin Minsky organized the eight-week Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) at Dartmouth College (Haenlein & Kaplan, 2019). The two decades that followed this conference saw significant successes in the field. Early achievements included the ELIZA program and the General Problem Solver, which helped solve problems and improve interactions between humans and computer programs (Natale, 2019). Developments in AI reached a significant turning point when IBM’s Deep Blue chess-playing program defeated the then world champion Garry Kasparov (Newborn, 2012). Despite this, such expert systems failed in areas that lacked formalization, such as recognizing human faces and distinguishing images.
The AI field registered impressive advances from the 2000s to the present. IBM’s triumph in the chess match described above underscored the significance of AI systems and their extensive applications in diverse fields (Newborn, 2012). Further pivotal milestones in the evolution of this technology were reached when experts developed and launched Generative Pre-trained Transformers (GPTs). OpenAI, the firm that later launched ChatGPT, debuted GPT-1 in 2018 (Yenduri et al., 2024). The company followed with GPT-2 and, in 2020, unveiled GPT-3, which became an industry benchmark with 175 billion parameters and one of the most influential language models of the modern era (Yenduri et al., 2024). In 2022, OpenAI launched ChatGPT, featuring a user-friendly chat-based interface built on the GPT-3.5 large language model.
AI Image Generators
Text-to-image AI capabilities are relatively recent developments that have transformed how people engage with AI platforms. Before this development, the Ai-Da robot was reported as the first system to create ultra-realistic artworks in 2019 (Beyan & Rossy, 2023). Ai-Da’s advanced capabilities enabled it to produce paintings, drawings, and sculptures. In 2022, OpenAI opened public access to its image-generation framework DALL-E, which has continued to surprise the public with its ability to create images from text within seconds. By the end of 2022, at least 1.5 million users were utilizing OpenAI’s platforms, generating at least 2 million AI images daily (Beyan & Rossy, 2023). AI image generators have continued to evolve, allowing any technology user to realize the most unique fantasies of the human mind and create detailed images that previously seemed impossible.
Gen Z and Protests in Kenya
Gen Z (also known as Zoomers) describes the demographic cohort born between 1996 and 2010 (Seemiller & Grace, 2018). Individuals in this cohort form the second-youngest generation, positioned between Millennials and Generation Alpha (Seemiller & Grace, 2018). Young people in this age bracket have become a major political force because they are well-informed, well-connected, and politically engaged. More than Millennials and older generations, they use social media to organize, mobilize, and magnify their collective voices. On this basis, Gen Z has challenged long-established political and power structures in the Kenyan political landscape and demanded accountability from leaders.
Specifically, the #RejectFinanceBill2024 movement saw nationwide protests in Kenya, primarily led by young people in the Gen Z cohort. The protests lasted from June 18 to August 8, 2024, and were triggered by the controversial Finance Bill 2024 (Kang’ethe & Onyango, 2024). Widespread discontent with the proposed tax increases, coupled with elected representatives’ reluctance to address it, compelled young people to organize and coordinate protests across the country through social media platforms (Kang’ethe & Onyango, 2024). To disseminate information and mobilize support, protestors utilized different approaches, including non-violent demonstrations, digital activism, and artistic expression. In some cases, young people organized concerts to celebrate the lives of those who had been killed during the protests. The protests remain remarkable for their use of social media and decentralized organization, which culminated in demonstrations in Nairobi and other regions. The primary demands included withdrawing the Finance Bill 2024, dissolving the cabinet, and President William Ruto’s resignation.
Ethical Issues in AI-Generated Political Images
After street protests ended in August 2024, young Kenyans increasingly turned to digital activism. One of the most controversial approaches some users adopted was generating AI images expressing their frustrations with Kenya’s political class. Some of these images show top leaders dead and in coffins, while others depict them naked. In this context, AI-generated political images raise considerable ethical concerns, especially around defamation and the balance between freedom of expression (enshrined in the Kenyan Constitution) and moral responsibility (Afshari & Mohammadi, 2023). AI-generated images can be used to create misleading visual narratives that influence public opinion, distort reality, and affect democratic processes. There is also the matter of reputational harm and defamation, as AI images are increasingly used to damage the credibility of top leaders (Du, 2025). AI images therefore sit at the center of a crucial debate between freedom of expression and ethical boundaries (Romero Moreno, 2024). While Kenyans have the right to express their political and general opinions, the creation and spread of AI-generated political images raises questions about ethical responsibility, particularly when such content incites hatred or misrepresents reality.
Empirical Literature Review
Twinomurinzi (2024) explores how Kenya’s Gen Z leverages social media platforms for leaderless political activism against the Finance Bill 2024. The researcher utilized digital theory and critical realism as the primary analytical frameworks. The study used a mixed-method approach, combining quantitative data with qualitative analysis based on data scraped from different social media pages and online news sources. The findings revealed deep economic grievances, which served as the primary triggers of these nationwide protests and increased demands for government accountability. Social media platforms, including Facebook and X, played crucial roles in mobilizing, financing, and organizing these protests, amplifying the narrative and momentum. While the study provides valuable insights into youth-led political mobilization and the sociopolitical impact of digital activism in Kenya, its limitations include the failure to discuss using AI-generated images as part of digital activism.
Kang’ethe and Onyango (2024) conducted a study that examined the role of the meme culture and social media in shaping Gen Z’s political engagement during Kenya’s 2024 Finance Bill-related protests. The research study demonstrates how social media platforms like Instagram, TikTok, and X have enabled young Kenyans to rapidly share information, leverage AI to improve understanding and participation, and simplify complicated legal documents through regional translations. The study stresses the decentralized nature of this movement, revealing how digital activism among Gen Z is changing and evolving through the use of technology. A major finding of this study was that the protests successfully pressured the government to rescind the Finance Bill, illustrating the power of youth-led digital mobilization in influencing public policy. Despite this, a research gap is evident in attempts to understand the ethical ramifications of AI-generated imagery in influencing online discourse and public opinions. Fundamentally, the explained gap is crucial for exploring how AI-generated images impact political discourse on Kenyan Twitter, aligning with the ethical concerns of AI-driven content in digital activism.
Maina (2024) conducted a study examining AI’s role in digital activism during the Finance Bill 2024 protests. The paper highlights the transformative nature of AI in digital activism, including educating, organizing, and amplifying the movement. AI technologies were crucial in generating impactful content, optimizing social media strategies, simplifying legal information, and sustaining protest efforts. The study underscores how AI-driven tools allowed activists to reach and engage a broader audience, maintain momentum, and effectively communicate grievances, ultimately shaping public discourse on primary issues. While the study offers valuable insights into the potential of AI as a tool for grassroots movements, a significant gap remains regarding the ethical concerns of using AI-generated images as political content. Maina did not address the risks of misinformation, bias in AI-generated images, or the manipulation of public sentiment through deepfakes.
Sarhan and Hegelich (2023) investigated the potential harms of AI-generated image captions in political images, focusing on the representational risks faced by the social groups depicted in such photos. While technology bias and accessibility in AI-based captions have been investigated in disability studies, limited research has examined how these captions can affect the perception of persons and communities in diverse political contexts. The researchers analyzed 1,000 images from various news sources, utilizing Microsoft’s Azure Cloud Service to generate captions. The analysis found that AI models tend to produce overly generic descriptions that can erase or underrepresent marginalized groups, or reinforce existing stereotypes when generating specific captions. This tension reveals a critical challenge in AI-generated political imagery: balancing neutral, broad descriptions that might omit important identity markers against more detailed captions that risk reinforcing biases. While the study is helpful for this review, it concentrates on captions rather than AI-generated images themselves, and it does not address the Kenyan context, where social media users generate AI images to further digital activism and shape online discourses.
The research study by Partadiredja et al. (2020) examined the socio-ethical implications of AI-generated media content, including text, sound, and images. The authors revealed that AI now has advanced capabilities that allow these technologies to produce highly realistic media. The study explored whether people can differentiate between AI-generated and human-made content. The researchers organized their experiment as a trivia game involving 2,383 participants who made 24,502 guesses. The research offers insights into the public perception of AI-generated images and the broader ethical concerns regarding this type of media. Fundamentally, the findings underline the potential for AI-generated content to blur reality, raising questions about authenticity, misinformation, and ethical responsibility. However, while the research study explored the general ethical concerns of AI-generated media, the authors failed to address how AI-generated political images influence public sentiment, engagement, or discourse in specific sociopolitical contexts, especially in the Kenyan political landscape.
Simões and Caldeira (2024) examined ethical issues in the use of AI-generated content, especially computer-generated images, in human communication. The authors confirm that the rapid advancement of AI-driven image generation has transformed how people engage with, interact with, and perceive digital content. The study reviewed insights from academic journals and books, exploring the ethical concerns associated with AI-generated images, and stressed the need for people and organizations to use AI responsibly and follow applicable ethical guidelines. It highlights the societal and philosophical implications of AI-driven communication, underlining the delicate balance between technological advances and ethical considerations. However, the research offers a broad overview of ethics in communication without addressing how AI-generated images influence public sentiment and discourse in politically charged contexts, such as Kenyans engaging in digital activism on X. This gap is critical, given that AI-generated political images continue to shape narratives, reinforce biases, and affect digital activism. Examining this intersection can offer a more profound comprehension of AI’s role in political engagement and inform ethical guidelines for the responsible use of such materials in digital political discourse.
In summation, the reviewed studies demonstrate the profound effect of AI-generated content and digital activism on political discourse, especially during the Finance Bill 2024 protests. Twinomurinzi (2024) showed how Kenya’s Gen Z leveraged digital platforms for leaderless political activism, while Kang’ethe and Onyango (2024) underlined the role of meme culture in shaping how young Kenyans engage in political discourse. Maina (2024) builds on this discussion by analyzing how AI-driven digital activism facilitated the collective organization and amplification of these historic protests. These studies establish the link between existing digital tools, grassroots activism, and political mobilization. Beyond Kenya, Sarhan and Hegelich (2023) focused on the representational harms of AI-generated image captions in political contexts, stressing the risks of bias and erasure in automated descriptions. Partadiredja et al. (2020) explored the socio-ethical implications of the growing field of AI-generated media content, showing the challenges people currently face in distinguishing between AI-generated and human-created materials. Simões and Caldeira (2024) offer a broader discussion of ethics, focusing on concerns regarding the integration of AI images into human engagement and communication and the need for ethical and responsible AI governance.
Despite these valuable contributions, a crucial research gap exists in understanding how AI-generated political images shape public sentiment, especially within Kenya’s active X space. While the studies above evaluate digital activism, AI-generated content, and meme culture separately, none has analyzed AI-generated images and how they shape political movements, especially during widespread protests. Nor have they explored the ethics of AI-generated content with a specific emphasis on Kenyan social media users. Given the increasing sophistication of AI platforms and their ability to generate realistic political imagery, this gap calls for empirical investigation.
METHODOLOGY
The study adopted a quantitative research method to analyze the sentiment of Kenyan X users regarding AI-generated political images. The research activities entailed data collection, data pre-processing, sentiment analysis, and the interpretation of key findings.
Data Collection
The dataset comprised 680 tweets, obtained from the first 80 to 100 replies to each of seven AI-generated political images that remain posted on X. TwReplyExport was used to scrape and extract replies to the selected tweets. The selection criteria for the original tweets were (1) relevance to Kenyan political discourse, (2) a clear indication that the images were AI-generated, and (3) significant user engagement (shares, likes, and replies). Focusing on user replies rather than the original posts enabled a deeper understanding of audience reactions, sentiment shifts, and the general dynamics of public discourse around AI-generated political imagery. Appendix 1 lists the original posts included in this analysis.
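For readers who wish to retrace the collection step, the sketch below shows one way the exported reply files could be loaded and combined with pandas. It is a minimal illustration under stated assumptions, not the researcher's pipeline: the file-name pattern follows the exports listed in Appendix 1, and the text column name (full_text) is inferred from the analysis script in Appendix 2 rather than confirmed against the TwReplyExport format.

import glob
import pandas as pd

# Assumed layout: all TwReplyExport CSVs (named as in Appendix 1) sit in the
# working directory, each containing a 'full_text' column with the reply text.
reply_files = glob.glob("TW_REPLY_*.csv")

frames = []
for path in reply_files:
    df = pd.read_csv(path)
    df["source_file"] = path  # record which original post each reply belongs to
    frames.append(df)

replies = pd.concat(frames, ignore_index=True)
print(f"Loaded {len(replies)} replies from {len(reply_files)} exports")  # expected: 680 replies from 7 exports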
Data Processing and Sentiment Analysis
Once the data was collected, the dataset underwent sentiment analysis using Python. A custom Python script, developed by the researcher, was used to categorize the sentiment of each reply as positive, negative, or neutral. The script leverages natural language processing (NLP) techniques, using the Valence Aware Dictionary and sEntiment Reasoner (VADER) sentiment classifier, which is designed for informal social media text. This approach allowed the researcher to classify the responses based on their emotional tone and polarity.
The steps that the researcher adopted when conducting the sentiment analysis were as follows:
1. Text Pre-processing – The extracted user replies were cleaned by removing unnecessary whitespace to improve sentiment classification accuracy.
2. Tokenization and Sentiment Scoring – Each reply was processed through the Python-based sentiment analysis model, which assigned it a sentiment score. The system categorized the replies as follows:
- Positive (sentiment score > 0)
- Negative (sentiment score < 0)
- Neutral (sentiment score = 0)
3. Data Aggregation – The outcomes were compiled, and tables were generated for each dataset showing the distribution of sentiments across the 680 replies. The results provided an overview of public reactions to AI-generated political images.
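As an illustration of the aggregation step, the sketch below computes per-dataset sentiment counts and converts them to percentages. It assumes a pandas DataFrame named replies with a source_file column (as in the loading sketch above) and a sentiment column produced by applying the VADER classification from Appendix 2; the helper function is hypothetical and not part of the original script.

import pandas as pd

def aggregate_sentiment(replies: pd.DataFrame) -> pd.DataFrame:
    # Count Positive/Negative/Neutral labels per original post (source_file)
    counts = (
        replies.groupby(["source_file", "sentiment"])
        .size()
        .unstack(fill_value=0)
    )
    # Convert counts to row-wise percentages for easier comparison across posts
    percentages = counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)
    return percentages

# Example usage (assuming 'replies' was built and classified as sketched above):
# print(aggregate_sentiment(replies))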
Justification of this Research Design
As highlighted, this research paper adopted a purely quantitative research design that allowed a systematic analysis of sentiment trends through statistical means. The researcher used sentiment analysis as a methodological tool that promotes an objective assessment and a reproducible approach when evaluating online discourse. The sample size of 680 tweets offers a statistically meaningful dataset for sentiment analysis and ensures that the study captures the diverse perspectives on AI-generated political imagery in the Kenyan digital space. The application of this computational approach helps contribute empirical insights regarding how AI-generated images continue to influence public sentiment in online political discussions.
FINDINGS
Tweet 1: MethoDman (@polo_kimanii)
A sentiment analysis of the first 100 replies to this X post revealed that 53% of the responses were neutral, 29% positive, and 18% negative. The tweet garnered 14,000 likes and 4,500 retweets, indicating considerable engagement. While most users responded neutrally, a substantially higher proportion provided positive rather than negative responses, suggesting that this particular AI-generated political image resonated with many users’ beliefs.
Figure 1 Sentiment analysis from MethoDman’s (@polo_kimanii) AI image on X
Tweet 2: TL Elder (@mwabilimwagodi)
This tweet had over 9,900 retweets and 3,700 likes, demonstrating strong engagement. The sentiment analysis found 38% neutral, 33% positive, and 29% negative responses. While the proportion of negative responses was higher than for the previous tweet, the strong positive sentiment indicates that many users received the AI-generated image well and supported this act of digital activism.
Figure 2 Sentiment analysis from TL Elder’s (@mwabilimwagodi) AI image on X
Tweet 3: TL Elder (@mwabilimwagodi)
The second tweet by this user garnered over 11,000 retweets and 3,200 likes. The sentiment distribution was 53% positive, 26% neutral, and 21% negative. With a majority of positive responses, this tweet showed a broadly favorable reception of the AI-generated political imagery among these users.
Figure 3 Sentiment analysis from TL Elder’s (@mwabilimwagodi) AI image on X
Tweet 4: Dalmus Murgor (@dalmus16)
This tweet garnered only 635 retweets but over 4,400 likes, still indicating notable engagement with the original post. The sentiment analysis found 63% positive, 28% neutral, and only 9% negative responses, giving this AI-generated image the most favorable reception among the replies in the dataset.
Figure 4 Sentiment analysis of Dalmus Murgor’s (@dalmus16) AI image on X
Tweet 5: Yoko (@Kibet_bull)
The fifth tweet gathered 707 retweets and over 2,800 likes. The sentiment distribution for this tweet was 43% positive, 28% neutral, and 9% negative. As with the previous tweet, the strong positive sentiment, coupled with the high engagement metrics, suggests that the AI-generated image in this tweet was perceived favorably by a considerable portion of X users.
Figure 5 Sentiment analysis of Yoko’s (@Kibet_bull) AI image on X
Tweet 6: Cyprian, Is Nyakundi (@C_NyaKundiH)
The sixth tweet in this dataset garnered over 3,200 retweets and 9,800 likes, and the original post attracted considerable public engagement, including over 501 replies. The sentiment analysis found 40% neutral, 34% positive, and 26% negative responses. The high engagement and the mixed polarity of responses demonstrate the controversy surrounding the use of AI-generated political imagery and the resulting ethical concerns.
Figure 6 Sentiment analysis of Cyprian, Is Nyakundi’s (@C_NyaKundiH) AI image on X
Tweet 7: Dictator Watch (@DictatorWatch)
The last tweet also saw substantial engagement, with over 10,000 retweets and 3,700 likes. The sentiment analysis found 47% positive, 38% neutral, and 15% negative responses to this original post. The outcome reveals a positive reception of this AI-generated political image created to promote digital activism.
Figure 7 Sentiment analysis of Dictator Watch’s (@DictatorWatch) AI image on X
DISCUSSION AND RECOMMENDATIONS
The sentiment analysis of these AI-generated political images reveals a complicated dynamic in digital political discourse in the Kenyan context. The predominance of positive sentiment across the tweets suggests that Kenyan social media users have embraced AI-generated images as effective tools for political critique. Even so, the ethical implications of this politically charged content must be explored, primarily because most of these tweets have been directed at the head of state and the top Kenya Kwanza leadership. As witnessed, AI-generated political imagery has immense power to influence public perception, shape political narratives, and even redefine the boundaries of political engagement.
One of the most striking observations in the data is the high engagement rate across these tweets, with significant numbers of retweets and likes. This suggests that AI-generated images criticizing political figures resonate strongly with digital audiences, especially in politically active spaces like X. Using AI to represent this kind of dissent visually amplifies the message in a way that conventional criticism could not attain. However, the approach also raises serious ethical concerns regarding accountability and fairness. Unlike traditional techniques such as photojournalism and political cartoons, AI-generated images lack clear editorial and authorship oversight, which undermines credibility and the ability to verify intent. The result could be misleading portrayals that shape public opinion through manipulated imagery rather than factual political critique.
A noteworthy observation was the large number of neutral responses. This could imply that some users remain cautious or skeptical when engaging with AI-generated political images: the images are impactful but may not enjoy universal acceptance. Some users might question the accuracy of these depictions and be concerned about distortion and misinformation. The neutral stance may also reflect how AI-generated images have blurred the line between legitimate political engagement and digital propaganda. If AI-generated criticism becomes prominent on social media platforms and is designed to provoke outrage rather than inform, it could amount to politically motivated misinformation.
Another essential ethical consideration arising from this analysis is the potential of AI-generated political images to amplify political bias and reinforce existing echo chambers. The high level of engagement and positive sentiment suggests that many users share these images across X and other social media platforms, and those sharing them are likely already opposed to the head of state and the top Kenya Kwanza leadership. AI-generated political criticism designed without ethical constraints therefore risks deepening existing ideological divisions instead of promoting constructive debate. The results underline the need for clear ethical guidelines on using AI for political critique, to ensure such creations do not descend into digital sensationalism.
Despite these ethical concerns, the widespread use of AI-generated political images represents a new kind of political expression in the digital age. This content offers marginalized youths an impactful and accessible means of engaging in political discourse, and in periods of censorship or limited political influence it provides an alternative avenue for activism and commentary. The ethical challenge lies in ensuring these images remain grounded in accountability and truth and promote responsible digital citizenship. Without such safeguards, AI-generated political imagery could undermine the very democratic principles its creators are keen to uphold.
CONCLUSION
In summation, this study assessed the role of AI-generated political images in shaping public sentiment on Kenyan X. The study focused on how these widespread digital artifacts affect Kenyan political discourse, especially in criticizing the head of state and other top leaders. The outcomes reveal that AI-generated political images attract wide engagement on X and are well-received by most users, as evidenced by the high levels of positive sentiment and significant engagement metrics, including thousands of retweets and likes. While skepticism is evident in the negative and neutral sentiment distribution, the general reception suggests that AI-generated imagery is emerging as a powerful tool for political expression and mobilization in Kenyan digital activism. Future protests could be waged using AI-generated imagery, increasing mobilization and potentially changing electoral processes and political discourse.
Limitations
While the study offers invaluable insights into the growing role of AI-generated political imagery in shaping public sentiment on X, the researcher acknowledges several limitations. First, the study was based on just 680 tweets, a relatively small sample that might not fully capture the entire digital discussion surrounding AI-generated political content. Second, the study concentrated only on X and did not include other social media platforms like TikTok, Instagram, and Facebook, where political sentiment and engagement could differ. Third, the sentiment analysis model, although effective, might not fully capture the nuances of cultural context, sarcasm, and deeper political meanings embedded in the replies. Fourth, the study did not differentiate between potential bot activity and organic user engagement, which could have artificially inflated negative or positive sentiment trends. Future research should therefore expand the dataset, incorporate multi-platform analysis, and explore the long-term impacts of AI-generated political images to fully comprehend their ethical and societal implications in the Kenyan context.
REFERENCES
- Afshari, N., & Mohammadi, A. (2023). The legal implications of deepfake technology: Privacy, defamation, and the challenge of regulating synthetic media. Legal Studies in Digital Age, 2(2), 13-23.
- Beyan, E. V. P., & Rossy, A. G. C. (2023). A review of AI image generator: influences, challenges, and future prospects for architectural field. Journal of Artificial Intelligence in Architecture, 2(1), 53-65.
- Du, F. (2025). The dilemma of applying reputation rights norms in the context of information authenticity and its institutional optimization: An empirical study based on deep synthesis and text-to-video technologies. Journal of Computing and Electronic Information Management, 16(1), 42-48.
- Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5-14.
- Kang’ethe, B. M., & Onyango, W. O. (2024). Influence of memes culture in shaping Gen Z’s political engagement: A case study of the Reject Finance Bill 2024 Protests. Retrieved from https://www.researchgate.net/profile/Kangethe-Brian/publication/386129712_Case_Study_of_the_Reject_finance_bill_2024/links/67457942868c966b933053fb/Case-Study-of-the-Reject-finance-bill-2024.pdf
- Kenya National Commission on Human Rights. (2024). Statement on Mukuru Murders and updates on the Anti-Finance Bill Protests. Retrieved from https://www.knchr.org/DesktopModules/EasyDNNNews/DocumentDownload.ashx?portalid=0&moduleid=2432&articleid=1201&documentid=133
- Kiprono, Z. M. C. (2024). Breaking the cycle: Corruption, ethnic politics, and the crisis of political trust in Kenya from the 2017-2022 Elections to Gen Z Protests (2024). Retrieved from https://www.researchgate.net/profile/Zipporah-Maureen-Kiprono/publication/385285930_Breaking_the_Cycle_Corruption_Ethnic_Politics_and_the_Crisis_of_Political_Trust_in_Kenya_from_the_2017-2022_Elections_to_Gen_Z_Protests_2024/links/671e47b155a5271cdedebca8/Breaking-the-Cycle-Corruption-Ethnic-Politics-and-the-Crisis-of-Political-Trust-in-Kenya-from-the-2017-2022-Elections-to-Gen-Z-Protests-2024.pdf
- Musambi, E. (2024). Kenya president retains 6 former Cabinet ministers in first batch of appointments. Retrieved from https://apnews.com/article/kenya-cabinet-ministers-william-ruto-ace8a4b5c93fa8fbe090a287e842868c
- Natale, S. (2019). If software is narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA. New Media & Society, 21(3), 712-728.
- Newborn, M. (2012). Kasparov versus Deep Blue: Computer chess comes of age. Springer Science & Business Media.
- Omweri, F. S. (2024). Youth-led policy advocacy in Africa: A qualitative analysis of Generation Z’s mobilization efforts against fiscal legislation in Kenya and its implications for democratic governance in the continent. International Journal of Innovative Scientific Research, 2(3), 1-22.
- Partadiredja, R. A., Serrano, C. E., & Ljubenkov, D. (2020, November). AI or human: The socio-ethical implications of AI-generated media content. In 2020 13th CMI Conference on Cybersecurity and Privacy (CMI)-Digital Transformation-Potentials and Challenges (51275) (pp. 1-6). IEEE.
- Pfingst, A., & Kimari, W. (2021). Carcerality and the legacies of settler colonial punishment in Nairobi. Punishment & Society, 23(5), 697-722.
- Romero Moreno, F. (2024). Generative AI and deepfakes: A human rights approach to tackling harmful content. International Review of Law, Computers & Technology, 38(3), 297-326.
- Sarhan, H., & Hegelich, S. (2023). Understanding and evaluating harms of AI-generated image captions in political images. Frontiers in Political Science, 5, 1-14.
- Seemiller, C., & Grace, M. (2018). Generation Z: A century in the making. Routledge.
- Simões, J. M., & Caldeira, W. (2024). Ethics concerns in the use of computer-generated images for human communication. Journal of Ethics in Higher Education, (4), 169-192.
- Twinomurinzi, H. (2024). From tweets to streets: How Kenya’s Generation Z (Gen Z) is redefining political and digital activism. Retrieved from https://digitalcommons.kennesaw.edu/cgi/viewcontent.cgi?article=1218&context=acist
- Yenduri, G., Ramalingam, M., Selvi, G. C., Supriya, Y., Srivastava, G., Maddikunta, P. K. R., … & Gadekallu, T. R. (2024). GPT (Generative Pre-trained Transformer): A comprehensive review on enabling technologies, potential applications, emerging challenges, and future directions. IEEE Access.
APPENDICES
Appendix 1: Original X Posts
TW_REPLY_mwabilimwagodi_1871469389624865166_100 (TL Elder – @mwabilimwagodi)
TW_REPLY_mwabilimwagodi_1871606074727674035_100 (TL Elder – @mwabilimwagodi)
TW_REPLY_polo_kimanii_1872945975805002019_100 (MethoDman – @polo_kimanii)
TW_REPLY_C_NyaKundiH_1861374604108792000_100 (Cyprian, Is Nyakundi – @C_NyaKundiH)
TW_REPLY_DictatorWatch_1871742478174605412_100 (Dictator Watch – @Dictatorwatch)
TW_REPLY_Kibet_bull_1861398421455724935_80 (Yoko – @Kibet_bull)
TW_REPLY_dalmus16_1878131789719302639_100
Appendix 2: The Python Script Used in Sentiment Analysis
import os

import pandas as pd
import matplotlib.pyplot as plt
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer


def main():
    try:
        # Step 1: Load the dataset
        file_path = "C:\\Users\\user\\Downloads\\TW_REPLY_DictatorWatch_1871742478174605412_100.csv"  # Update this path to your actual file location
        if not os.path.exists(file_path):
            print(f"Error: The file '{file_path}' does not exist. Please check the path.")
            return

        df = pd.read_csv(file_path)
        print(f"Data loaded successfully. Number of records: {len(df)}")

        # Step 2: Initialize the VADER sentiment analyzer
        analyzer = SentimentIntensityAnalyzer()

        # Step 3: Define a function to classify the sentiment of a single reply
        def get_vader_sentiment(text):
            if not isinstance(text, str):  # Ensure the text is a string before analysis
                return "Neutral"
            scores = analyzer.polarity_scores(text)
            if scores["compound"] >= 0.05:
                return "Positive"
            elif scores["compound"] <= -0.05:
                return "Negative"
            else:
                return "Neutral"

        # Step 4: Apply sentiment analysis to the reply text column
        if "full_text" not in df.columns:
            print("Error: 'full_text' column not found in the dataset.")
            return
        df["sentiment"] = df["full_text"].apply(get_vader_sentiment)

        # Step 5: Display the sentiment distribution
        sentiment_counts = df["sentiment"].value_counts()
        print(f"Sentiment distribution:\n{sentiment_counts}")

        # Step 6: Visualize the results
        plt.figure(figsize=(6, 4))
        sentiment_counts.plot(kind="bar", color=["green", "red", "gray"])
        plt.xlabel("Sentiment")
        plt.ylabel("Count")
        plt.title("Sentiment Analysis of Tweets")
        plt.show()

        # Step 7: Wait for user input to exit
        input("Press Enter to exit...")
    except Exception as e:
        print(f"An error occurred: {e}")
        input("Press Enter to exit...")


if __name__ == "__main__":
    main()
    input("Press Enter to exit...")