INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XIII October 2025 | Special Issue on Communication
AI-Generated Content: Transparency, Accountability, and Ethical
Challenges in Journalism – A Case Study
Ms. Radhika H.¹*, Dr. Prathibha Vinod²
¹Research Scholar, School of Media Studies, Presidency University, Bangalore, India
²Assistant Professor, School of Media Studies, Presidency University, Bangalore, India
*Corresponding author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.913COM0049
Received: 04 November 2025; Accepted: 10 November 2025; Published: 19 November 2025
ABSTRACT
This paper systematically examines the ethical implications of AI-generated content in journalism, focusing on the opportunities and challenges that accompany automated writing and synthetic media. The core issues examined include transparency, accountability, truth, and harm, and the necessity of an immediate policy response is underlined. A multimethod approach is employed, combining case-study analysis with surveys of the general audience. The research is organized around live cases of AI applications in sports and financial reporting and incorporates topical cases such as deepfake-related deception. The surveys capture the audience's perspective, reflecting public views on ethical requirements. This discourse assesses ethical disclosure practices and bias in artificial-intelligence (AI) systems against established normative constructs of newsrooms and journalists, namely honesty, autonomy, and accountability, in order to identify how AI can be used responsibly while remaining grounded in the principles of journalism. The resulting suggestions support the creation of ethical codes and industry best practices and, more specifically, encourage audience trust in the industry within its modern digital environment.
Keywords: AI-generated content, Journalism ethics, Transparency and accountability, Synthetic media,
Journalistic integrity
INTRODUCTION
In 2023-2024, AI underwent an unprecedented surge in growth, with many tools and applications quickly rising to prominence, demonstrating how rapidly the technology has advanced. Notably, the term "artificial intelligence" has been in use since the 1950s, when John McCarthy, often called the "father of AI," coined it. The desire to have machines execute tasks that humans have always done stems from a deep drive for efficiency and productivity.
With the progress of AI, its use in journalism represents a significant shift in how news is made and distributed. Starting as a novel tool in the journalistic machine, AI quickly advanced from experimentation to become an essential component of modern media. AI applications have grown far beyond early expectations; they are currently used to analyze data for journalists, allowing them to handle massive volumes of information in extremely short periods. AI also improves the user experience by tailoring content and targeting news to individual reader interests. Additional AI capabilities aid investigative journalism by classifying and analyzing data sets that would otherwise be too large for human reporters to handle on their own. Such capability raises reporting quality and encourages more in-depth study of complicated stories.
AI's effect also extends to editorial decisions, offering insights that can help news companies decide which stories to cover and how to approach them. The introduction of AI in journalism is part of the broader trend of pervasive digital transformation. As technology advances, strategies for obtaining, reporting, and consuming news evolve. The emergence of AI reflects not just technological advancement but also changes in demand and dynamism in news consumption in the digital age. The growing importance of AI in journalism signals a watershed moment in how much these advancements will count, and how they will be perceived, in comprehending the needs of today's audience (Fabia Ioscote et al., 2024). The primary goal of AI research is to create machines capable of performing tasks traditionally associated with human intelligence. This exploration includes the study of how machines can understand and learn the way people do. Much of this work falls within the subfield of AI known as machine learning, which concentrates on developing algorithms that allow computers to adapt their behavior based on empirical data.

The theoretical approaches of media ethics and communication research inform this study by putting the ethical issues of AI in journalism into context. The Social Responsibility Theory of the press offers a normative basis for judging how automated systems should be involved in content creation with respect to truth, accountability, and public welfare. Media Systems Dependency Theory helps explain how audiences become more dependent on technologies that mediate the flow of news, raising the significance of ethical AI governance in shaping perceptions of credibility and trust. Complementing these views, the algorithmic accountability paradigm highlights the need for transparency, explainability, and human control over the use of AI tools in newsrooms. Together, these theoretical models provide the conceptual framework through which the empirical data of this research can be understood and related to larger questions of media governance in the digital era.
Journalism, Law, Transparency, and Accountability
Journalism in an AI environment will need to adapt to AI-related technology in order to be efficient while also maintaining ethical standards. AI has revolutionized journalism. Automation, data analysis, and other technologies enable new ways to personalize information while also raising concerns about bias, transparency, and job displacement. For example, automating journalistic content frees up journalists' time to focus on investigative reporting. Still, it also puts them at risk of becoming reliant on algorithms, which can introduce biases into coverage, mainly due to flaws in an insufficient data set or an opaque decision-making mechanism. At the same time, as journalists increasingly employ AI to fact-check, the fight against disinformation becomes more complicated when synthetic media, such as deepfakes, enter the scene (Wang, 2021). Journalists must make AI-led reporting as transparent as feasible and ensure that algorithmic biases do not corrupt public discourse.
The law's role in disciplining AI includes ensuring that AI is used ethically and appropriately. Legal frameworks must be updated to address issues such as privacy, bias, and liability after an AI system has caused harm. Given that the technology operates across many domains, regulations must account for rapidly changing AI-driven content production, with implications for intellectual property rights and the protection of personal data against AI monitoring (Lutz, 2019). For example, in predictive policing or the criminal justice system, algorithms must work in accordance with established rules to ensure fairness and equity in decision-making, so that they do not perpetuate biases that harm portions of society (García-Avilés, 2014). International cooperation is also required to set standardized norms across borders, ensuring responsible AI use around the world with no loopholes.
AI tools present both solutions and hazards for trust and safety, mostly in online settings. Many AI tools are employed for content filtering when hate speech or misinformation is detected. However, the algorithms are not perfect. Teams responsible for trust and safety must constantly improve their algorithms to reduce the likelihood of errors or bias. Another issue AI raises is the creation of "echo chambers," in which individuals are exposed only to content and feeds that reflect their current opinions, resulting in polarization of public discussion (Verma, 2024). AI moderation struggles to perceive sophisticated human communication, such as sarcasm or cultural nuance, resulting both in over-censorship and in failure to remove damaging content (Nurelmadina et al., 2021).
Ethical Challenges
Publishing with AI raises problems about transparency and where accountability will sit. Material created by AI is frequently indistinguishable from material produced by people, with the source of information blurred, making it difficult for consumers to distinguish between AI-produced content and what was created by a human. According to Abuhamad (2024), "the use of AI in journalism, particularly visual content, raises concerns about the authenticity and validity of AI-generated media, as well as a decrease in public trust." According to Illia (2022), the indirectness of AI-mediated communication creates moral ambiguity and makes responsibility more difficult to assign (Al-Zoubi et al., 2024).
Another significant ethical concern in AI systems is bias amplification. AI relies mainly on the data on which it is trained, and if that data reflects societal biases, whether of race, gender, or other forms, AI will replay and reinforce those biases. Illia (2022) argues that if AI systems are not monitored and managed, they can reproduce and amplify society's biases, leading to discriminatory outcomes (Illia et al., 2022). These biases might manifest themselves in AI-generated reporting and writing, among other areas, and impair the objectivity and fairness of the content delivered by AI systems.
To close the gap between theoretical ethics and newsroom practice, it is critical to translate normative concepts like fairness, autonomy, and accountability into practice standards that guide the application of AI. In deontological media ethics, safeguarding the truth and avoiding harm are paramount; extended to AI systems, this means that algorithmic involvement is fully disclosed, biases are thoroughly tested, and the workings of automated decisions are documented. Concrete measures such as algorithmic audits, transparency labels, and organized human oversight of AI outputs are crucial tools for ensuring that AI-produced material complies with journalistic requirements. These models also help reduce the risks linked to synthetic media, data commodification, and the opacity of algorithmic operations, strengthening the ethical dimension of an increasingly automated newsroom setting.
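As a minimal illustration of how such disclosure and oversight rules might be operationalized, the following Python sketch models a transparency label and a publication gate for AI-assisted articles. The record fields, label wording, and gate logic are hypothetical assumptions, not drawn from any particular newsroom's system.

```python
from dataclasses import dataclass, field

# Hypothetical content-management record; field names are illustrative.
@dataclass
class ArticleRecord:
    headline: str
    body: str
    ai_assisted: bool = False                      # any algorithmic involvement
    ai_tools: list = field(default_factory=list)   # names of tools used
    human_reviewed: bool = False                   # editor signed off

def disclosure_label(article: ArticleRecord) -> str:
    """Compose the reader-facing disclosure line required by the policy."""
    if not article.ai_assisted:
        return "Written and edited by newsroom staff."
    tools = ", ".join(article.ai_tools) or "automated tools"
    review = ("reviewed by a human editor" if article.human_reviewed
              else "not yet reviewed by a human editor")
    return f"This story was produced with assistance from {tools} and {review}."

def may_publish(article: ArticleRecord) -> bool:
    """Block publication of AI-assisted copy that lacks human oversight."""
    return (not article.ai_assisted) or article.human_reviewed

draft = ArticleRecord("Quarterly results", "...", ai_assisted=True,
                      ai_tools=["story generator"])
print(disclosure_label(draft))   # discloses AI use and the missing review
print(may_publish(draft))        # False until an editor signs off
```

In practice, a gate like this would sit inside the publishing workflow, with an audit trail (tool versions, prompts, reviewer identity) logged alongside the story.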
Another ethical issue related to AI is data commodification. AI learns from vast volumes of user data, often without users' awareness or consent, raising privacy and data-ownership concerns. As Labajová (2023) argues, the growing use of AI-generated material on social media raises ethical concerns about the use of data, mainly when an AI system creates media content after collecting and analyzing personal information (Labajová, 2023).
The use of AI in journalism also raises problems for reporting values. Traditionally, journalism has been based on truth and authenticity, but the introduction of AI into the newsroom strains those ideals. According to Abuhamad (2024), journalists may increasingly act as "gatekeepers," navigating the ethical challenges posed by AI-generated content (Al-Zoubi et al., 2024). Ethical practice requirements are needed so that journalistic integrity is maintained even as AI tools become more widely used.
AI also increases the risk of propagating erroneous information. According to Illia (2022), AI-generated content can enable "mass manipulation" by creating credible but deceptive material, eroding people's trust in media sources (Illia et al., 2022). The dangers of deepfakes and other AI-generated media are particularly acute in an age of large-scale audience manipulation.
Journalists in an AI-Driven Newsroom
AI-powered tools are increasingly capable of writing news pieces, performing data analysis, and automating fact-checking, among other tasks. Whether AI will replace human journalists is contentious; nonetheless, many experts believe that AI should primarily serve as an extension of journalists rather than a replacement (Lewis et al., 2021). In this collaborative approach, AI handles mundane tasks such as summarizing data, scanning for sources, and filling reporting templates, freeing journalists to focus on investigation, interpretation, and editorial work that requires nuanced comprehension and ethical sense-making (Carlson, 2020).
In light of AI's growing role in the newsroom, it is critical to recall the distinct human qualities that journalists contribute to their work. While AI can manage large amounts of data to uncover patterns, journalists are responsible for verifying and interpreting these findings for the public (Thurman et al., 2019). Journalists must develop new skills as AI takes a larger role in the newsroom. While conventional journalistic skills, such as interviewing and narrative craft, remain essential, journalists must also be technically savvy in order to work effectively with AI tools. Digital literacy, data-processing skills, and familiarity with AI tools have become an absolute requirement for journalists to comprehend the processes by which AI generates output and to evaluate that output so that coverage remains accurate (Beckett, 2021).
Progress in data ethics and algorithmic accountability will help prevent a news environment from becoming opaque, with the factual compromises that opacity invites under the influence of AI. Training in computational journalism, as well as programming and data analytics for news production, has emerged as a critical skill area. For example, The Washington Post's journalists employ the AI system "Heliograf" to automatically write stories on athletic events or election coverage, allowing them to retain control over the production process rather than creating the content themselves (Anderson, 2020). In such an AI-driven situation, a newsroom's strength lies in journalists who excel at data analysis, coding, and interpreting AI output (Beckett, 2021).
The ethical responsibility of journalists in an AI-powered newsroom cuts to the heart of several crucial issues, including bias, transparency, and accountability in AI-generated material. An essential ethical challenge emerges whenever AI is trained on biased data or when it is unclear how its decision-making process is carried out. Journalists working with AI should ensure that AI-driven outputs, like other kinds of automated knowledge production, do not promote harmful biases or misinformation that might lead to public distrust of the media (Diakopoulos, 2019). Journalists are responsible for fact-checking AI stories, revising AI's work, and being upfront with the public when using AI to create material. Transparency regarding AI's role in reporting may help alleviate concerns about authenticity and accuracy, making it easier for consumers to trust reporters (Carlson, 2020).
Newsroom perspectives also provide a broader understanding of the ethical situation of AI-assisted reporting. Studies indicate that newsroom professionals tend to view AI as a support system rather than a substitute for human judgement, mostly in routine and data-driven tasks (Grimme & Zabel, 2025; Xiao et al., 2025). Nevertheless, they raise long-term doubts about the erosion of editorial independence and the opacity of algorithmic systems that can affect the selection and framing of news without being directly interpretable (Diakopoulos, 2019; Petre Breazu and Katson, 2024). Reporters underline that basic functions of their work, such as investigative journalism, ethical choice, and situational interpretation, are essentially human processes that AI cannot reproduce because it lacks sociocultural and moral awareness (Carlson, 2020; Illia et al., 2022). Editors also emphasize that human intervention is essential in the final phases of publishing, as AI cannot assess cultural nuances, effects on audiences, or the broader implications of news stories for society (Gotfredsen, 2023; Abuhamad and Andersson, 2024). These professional perspectives are essential because they offer a balanced view of the ethical demands of AI-assisted journalism and emphasize the need to maintain human-centered control over the AI-powered newsroom (Porlezza and Schapals, 2024).
REVIEW OF LITERATURE
The development of artificial intelligence (AI) in newsrooms around the world has completely changed how news content is produced, distributed, and consumed. As AI systems make ever more advanced strides in creating human-like text, images, and even video, journalism is challenged more than ever by concerns of transparency, accountability, and ethics. The change raises important questions about the integrity of information at a time when the boundary separating human-made and machine-made content is increasingly blurred.
The Transformation of Modern Journalism Through AI Integration
The use of AI technologies in journalism can be described as one of the most significant technological changes in media production since the introduction of digital publishing. A growing variety of AI products are being used in contemporary newsrooms to assist with automated data analysis, content generation, personalized news curation, and real-time fact-checking. Studies show that about 73 percent of news organizations have embraced AI to automate news writing, 68 percent to analyze data, and 62 percent to personalize their content. Such wide-scale implementation shows the increasing centrality of AI in the everyday work of journalism (Piasecki et al., 2024).
The use of AI in journalism goes much further than mere automation. News organizations currently use advanced language models to create first drafts of stories, write headlines, summarize complex datasets, and even translate multilingual content. AI systems can also assist investigative reporting by analyzing large volumes of data, examining trends in publicly available data, and surfacing potential leads that would be hard to notice manually. These capabilities have greatly boosted the efficiency of journalism and the number of stories that newsrooms with limited resources can undertake (Sinclair, 2025). This technological integration has produced what researchers call algorithmic gatekeeping: a hybrid process in which AI systems gain ever greater control over editorial choices about which stories are published, how they are packaged, and to whom they are distributed. This change represents a paradigm shift from traditional human editorial judgment, and it raises the question of whether journalistic values will be preserved in a media environment shaped by AI algorithms (Voinea, 2025).
The Transparency Imperative: Regulatory Frameworks and Industry Standards
The European Union's AI Act, especially Article 50, has become a key regulatory framework for addressing the transparency needs of AI-generated content in the media. The legislation requires AI applications that generate or manipulate content to make clear to end users that the content is artificially generated. Nevertheless, studies indicate a substantial disconnect between regulatory intent and realization, especially regarding how transparency requirements can feasibly be operationalized in complex newsroom settings (Ramos and Ellul, 2024).
The transparency issues are not confined to disclosure laws alone. Research shows that simply labeling content as AI-generated can backfire, undermining trust even when the content is accurate or useful. People are prone to assuming that AI-labeled content is a fully automated production with no human supervision, which produces higher skepticism irrespective of whether there was genuine editorial intervention. This effect reveals the intricacy of establishing transparency protocols that actually inform audiences without unintentionally hurting credibility (Altay and Gilardi, 2024).
To address these transparency issues, media organizations have started to create internal AI governance frameworks. A thematic review of 37 AI guidelines in 17 countries shows that transparency, accountability, fairness, privacy, and the maintenance of journalistic values recur across them. These principles focus on human control, the explainability of AI systems, the disclosure of automated content, and the security of user information. Nonetheless, the geographical distribution of these standards reveals considerable disparities, with Western organizations leading the development of AI ethics standards while much of the world remains uncovered (de-Lima-Santos, Yeung and Dodds, 2025).
Accountability Challenges in AI-Assisted Journalism
The adoption of AI systems in news production has raised accountability questions that cannot be addressed under conventional journalistic models. The core problem is deciding who is to be held accountable when AI systems cause editorial mistakes, one-sided coverage, or the distribution of misinformation. This is especially pressing given that many AI systems are black boxes, so the drivers of their decisions are obscured even from those who run them (Qingtao Liu, 2025). Research on AI journalism identifies three critical dilemmas. First, the probabilistic character of AI-generated content conflicts with journalism's commitment to accuracy, since language models can present fabricated information as fact through so-called hallucinations. Second, ambiguity over how responsibility is allocated in human-AI collaborative workflows complicates the establishment of accountability for the spread of misinformation. Third, media businesses face a transparency paradox in which under-the-radar AI deployment compromises social confidence, while complete disclosure may provoke unnecessary mistrust in the quality of content (Qingtao Liu, 2025).
The problem of accountability is further complicated by the global character of AI development and deployment. Because most news organizations use AI systems built by third-party technology companies, they create complex chains of responsibility that cross jurisdictions and regulatory models. This raises the question of whether conventional journalistic accountability mechanisms, created for human-centered newsrooms, are sufficient in the age of AI-assisted content creation (Diaz-Rodriguez et al., 2023).
Algorithmic Bias and Editorial Independence
Among the most important ethical issues in AI journalism are algorithmic bias and its effect on editorial autonomy. AI systems trained on historical data risk spreading the biases that exist in society, resulting in skewed coverage of minority communities, the perpetuation of stereotypes, or the systematic exclusion of other worldviews. This bias may occur in a variety of forms: in the sources used, the perspective of the reporting, the language of the report, and even the criteria by which events are selected for coverage (Petre Breazu and Katson, 2024).
Studies of the journalistic performance of ChatGPT-4 show worrying patterns of perspective reproduction, especially on sensitive issues such as immigration coverage. The researchers found that AI systems reproduce the prevailing narratives in their training data, which can amplify media bias rather than deliver impartial reporting. This result poses fundamental questions about whether AI systems can preserve the editorial variety and critical outlook needed in quality journalism (Petre Breazu and Katson, 2024).
The bias issue also applies to automated news curation and recommendation tools that decide which news audiences see. Such algorithms can form filter bubbles that merely reinforce what people already believe and lessen exposure to different opinions. For journalism, which traditionally plays the democratic role of informing public discourse, algorithmically mediated information flow marks a very large departure from the profession's principles of complete and balanced coverage (Voinea, 2025).
The Crisis of Authenticity and Detection Challenges
The development of AI-generated content has produced an authenticity crisis that threatens the fundamental epistemological principles of journalism. Deepfake technology makes it possible to produce highly convincing fake images, audio, and video that are now nearly indistinguishable from genuine media. This capability presents an existential problem for journalism, whose credibility depends on providing audiences with verified, authentic information (Anandhasivam et al., 2024).
Detecting AI-created content has become a serious concern for newsrooms that want to preserve editorial integrity. Studies indicate that although detection technologies are becoming more advanced, they face severe constraints in practice. Research on journalists working with AI verification software shows worrying rates of false positives and their negative consequences, suggesting that existing detection mechanisms are not reliable. This technological cat-and-mouse game between generation and detection imposes persistent uncertainty on news organizations seeking to confirm the authenticity of content (Saniat Javid Sohrawardi et al., 2024). The detection problem is especially severe for breaking news, when journalists must make quick editorial judgments with little time to check facts. Studies with journalists show that unreliable deepfake-detection mechanisms can lead to bad decisions, resulting in the publication of fake content or the refusal to publish legitimate content. These dynamics underline the importance of thorough verification frameworks that integrate technological tools with conventional journalistic verification practices (Saniat Javid Sohrawardi et al., 2024).
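One hedged way to picture such an integrated framework is a decision gate that treats a detector's score as only one signal alongside human provenance checks. The sketch below is illustrative; the thresholds, field names, and workflow stages are assumptions, not a documented newsroom system.

```python
from dataclasses import dataclass

# Hypothetical evidence bundle for a piece of user-submitted footage.
@dataclass
class Evidence:
    detector_fake_prob: float   # synthetic-media detector score in [0, 1]
    source_verified: bool       # provenance confirmed by a journalist
    corroborated: bool          # independent second source found

def publication_decision(e: Evidence) -> str:
    """Combine machine and human signals; never publish on a score alone."""
    if e.detector_fake_prob >= 0.9 and not e.source_verified:
        return "hold: likely synthetic; escalate to forensics review"
    if e.detector_fake_prob >= 0.5 or not e.corroborated:
        return "hold: require further human verification"
    return "proceed: low synthetic risk, verified and corroborated"

# Example: a convincing clip with no verified source stays unpublished.
print(publication_decision(Evidence(0.93, False, True)))
```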
www.rsisinternational.org
Page 563
INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue XIII October 2025 | Special Issue on Communication
Human-AI Collaboration and the Future of Journalistic Roles
The implementation of AI in newsrooms is transforming traditional journalistic roles and creating new modes of human-machine collaboration. One study found hybrid journalist-programmer roles emerging in 52 per cent of the organizations examined, and 38 per cent of those organizations indicated that journalists need to develop AI literacy. This development suggests that successful journalism in the AI era will demand additional skills that merge classic editorial discretion with technical knowledge of AI's possibilities and limitations (Sonni et al., 2024).
Research into AI implementation in newsrooms shows that different models of human-AI collaboration exist, ranging from fully automated systems that need only occasional human intervention to AI-assisted workflows that complement human abilities. The most effective appear to be those that maintain human control while using AI's computational strengths for processing data, drafting initial content, and handling routine chores (Grimme and Zabel, 2025). There remains a high barrier to cross-functional cooperation between AI technologists and journalists. A study of Chinese news agencies revealed considerable hindrances to successful cooperation, including poor communication between technical and editorial personnel, divergent priorities and schedules, and insufficient knowledge of each other's professional needs. These results indicate that effective AI implementation presupposes not only technological deployment but also organizational change and cultural shift in newsrooms (Xiao et al., 2025).
Regulatory Response and Policy Implications
The regulatory environment of AI journalism is still developing, as legislators struggle with the dilemma of balancing incentives for innovation against the protection of public interests. The EU AI Act is the most detailed regulatory framework to date encompassing transparency requirements for AI systems used in content production. Nevertheless, it is not yet fully implemented, and key implementation issues concern enforcement procedures, cross-border applicability, and technical viability (Ramos and Ellul, 2024b).
Published studies on the transparency provisions of the AI Act indicate that the existing requirements might not be adequate to the realities of AI journalism. Disclosure requirements, the core of the legislation, do not address algorithmic responsibility, bias reduction, or quality control. The global character of news delivery also means that content generated under one regulatory framework can be disseminated to jurisdictions with different protection standards (Busuioc, Curtin and Almada, 2022). Policy scholars propose a more sophisticated regulatory approach that takes into account the peculiarities of journalism as a democratic institution. The suggested frameworks stress that sector-specific rules are necessary to maintain editorial autonomy while preserving accountability to society. These guidelines call for algorithm audits, bias testing, and frequent evaluation of the effects of AI systems on the quality and diversity of journalism (Gao et al., 2025).
Case Study 1: ChatGPT in newsrooms: Adherence of AI-generated content to journalism standards and
prospects for its implementation in digital media
Zagorulko examined the compliance of ChatGPT-generated texts with journalistic standards. Several axes of judgment were employed in the analysis, including balance, reliability, and accuracy. The author concluded that the model is often unable to maintain neutrality when writing texts, mainly when it responds to questions carrying an embedded tone. Out of 60 questions about public figures, posed in positive, negative, and neutral tones, ChatGPT produced responses that complied with the rules of balanced reporting only 48% of the time. This is most noticeable with political leaders, as the model tends to emphasize either the positive or the negative depending on the tone of the question asked (Zagorulko, 2023). This example illustrates that AI can produce content very quickly, but human editing is still necessary to ensure journalistic standards.
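A study of this kind amounts to an algorithmic audit, and its skeleton is easy to sketch. In the Python below, generate() stands in for a real language-model call and is_balanced() for the balanced-reporting rubric (which in practice needs human coders); the prompts, keywords, and all names are hypothetical.

```python
# Hypothetical tone-balance audit harness in the spirit of Zagorulko's test.
def generate(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return "The leader drew praise for reforms but faced criticism over delays."

def is_balanced(text: str) -> bool:
    # Naive keyword rubric; a real audit would rely on trained human coders.
    t = text.lower()
    positive = any(w in t for w in ("praise", "achievement", "strength"))
    negative = any(w in t for w in ("criticism", "failure", "controversy"))
    return positive and negative

def audit(figures: list) -> float:
    templates = {
        "positive": "What makes {f} an admirable leader?",
        "negative": "Why has {f} failed as a leader?",
        "neutral": "Describe {f}'s record as a leader.",
    }
    trials = [(f, t) for f in figures for t in templates]
    balanced = sum(is_balanced(generate(templates[t].format(f=f)))
                   for f, t in trials)
    return balanced / len(trials)   # Zagorulko reports roughly 48%

print(f"balanced-response rate: {audit(['Figure A', 'Figure B']):.0%}")
```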
Case Study 2: Associated Press and Automated Financial Reporting with Wordsmith

The AP offers another example of AI integrated into a news service: it uses Automated Insights' Wordsmith platform to produce its financial reports. With automated routine earnings coverage, the AP increased its output from roughly 300 to almost 3,700 reports per quarter without negatively affecting accuracy or efficiency. This work illustrates how AI augments journalistic output in data-intensive tasks and frees human journalists for deeper analysis and more substantial work, such as investigative reporting (Zagorulko, 2023).

Case Study 3: Microsoft's AI Editor Replacement Mishap

Microsoft relied on AI to replace the human editors who oversaw news on its MSN website, which led to unanticipated editorial blunders. In one of the most striking examples, the AI attached a photo to a completely unrelated news post, causing a public outcry over the insensitivity of the mismatch (Zagorulko, 2023). This makes clear that, at present, AI still fails to understand content and context; matters of culture and context still require human input.

Case Study 4: Heliograf at The Washington Post: AI for Political and Sports Reporting

To meet the fast-paced requirements of its newsroom, The Washington Post leveraged the Heliograf AI to produce political and sports content. Heliograf automates coverage of specific events by processing structured data, freeing human reporters to focus on more complex and interpretative work (Zagorulko, 2023). This application demonstrates the strategic use of automation to deliver real-time content updates without diminishing the quality of journalistic output.

Together, these case studies show both the positive and negative dimensions of AI's evolution in journalism, dimensions present in every current AI tool. While AI supports efficiency and scalability, the reliability and ethical standards of journalistic content remain human-dependent.
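The common mechanism behind Wordsmith- and Heliograf-style reporting is template-driven generation from structured data. The Python sketch below is a deliberately minimal stand-in, with an invented template, field names, and thresholds, not the vendors' actual pipelines.

```python
# Illustrative sketch of template-driven story generation; all names,
# phrasings, and thresholds here are hypothetical.
def earnings_story(company: str, quarter: str, eps: float,
                   eps_expected: float, revenue_m: float) -> str:
    beat = eps - eps_expected
    if beat > 0.01:
        verdict = f"beat analyst expectations by ${beat:.2f} per share"
    elif beat < -0.01:
        verdict = f"missed analyst expectations by ${-beat:.2f} per share"
    else:
        verdict = "matched analyst expectations"
    return (f"{company} reported {quarter} earnings of ${eps:.2f} per share "
            f"on revenue of ${revenue_m:,.0f} million, which {verdict}.")

# Example: structured data in, publishable boilerplate out.
print(earnings_story("Acme Corp", "Q3", 1.42, 1.35, 870.0))
```

Because every sentence is traceable to a data field, output of this kind is comparatively easy to verify, which is part of why earnings and sports recaps were early automation targets.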
The Path Forward: Toward Ethical AI Journalism
The problems of AI journalism demand comprehensive answers that address technological, regulatory, ethical, and organizational aspects. Studies consistently highlight the importance of multi-stakeholder cooperation engaging journalists, technologists, policymakers, and civil society groups to produce effective governance frameworks (Samson et al., 2024). The takeaways from existing studies include developing industry-wide ethical guidelines on the use of AI in journalism, investing in AI literacy among journalists, and establishing transparent audit procedures for AI systems used to produce news. Moreover, researchers encourage keeping human editorial judgment over important matters such as story selection, source verification, and ethical decisions, while applying AI to data processing and content optimization (Porlezza and Schapals, 2024).
Given that the profession must continue to carry out its fundamental democratic roles, the future of journalism in the AI era will likely be determined by the profession's ability to balance those roles against its technological realities. This will require not just technical answers but a reinvigoration of journalistic values, public service, and editorial autonomy in an ever more automated media environment. The ever-growing capabilities of AI will also demand that the journalism profession take a proactive role in determining how the technology is used to serve the public interest rather than mere commercial effectiveness. The analysis of AI-created content in journalism reveals a multifaceted terrain of opportunities and challenges that will keep changing as the technology develops. Succeeding through this change will require a sustained focus on transparency, accountability, and ethical considerations, so that journalism remains a vital part of democratic society.
RESULTS AND DISCUSSION
Table 1: Case Processing Summary

| Variable | Valid N | Valid % | Missing N | Missing % | Total N | Total % |
| AP | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| EC | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| PT | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| PI | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| DI | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
Table 1, "Case Processing Summary," reveals that all four variables, AP, EC, PT, and PI, have 100% valid
responses, indicating a high quality of data. This completeness allows for reliable and consistent results in further
analysis, avoiding the risks of biases or inaccuracies due to incomplete records. The study includes specific items
measuring public attitudes toward AI-generated journalism. Awareness and Perception of AI-generated
journalism (AP) measures participants' understanding of AI-generated content in journalism, their trust in such
content, their encounters with AI-generated news, and their preference for transparency around AI use. CEI
challenges respondents to evaluate media outlets' accountability for AI-caused errors, anxiety about bias in
AIdriven journalism, disclosure of AI usage, and whether principles like fairness and accuracy are prioritized
above convenience.
Public Trust and AI (PT) gauges whether AI is affecting how people view media institutions in terms of trust. It solicits opinions from the general public about whether AI is affecting confidence in media organizations, whether respondents would still support news institutions that disclose their use of AI, and whether they see ethical issues with AI-generated journalism. This is important for determining how AI may be influencing the trustworthiness and credibility of media institutions.
The Perceived Impact of AI in Journalism (PI) measures opinions about the potential positive effects of AI journalism, such as speed and accuracy in journalistic work, and also captures ethical concerns related to this new form. It records acceptance of guidelines on ethical AI use and other thoughts people may have about AI's impact on journalism.
The dataset is complete, with no gaps or inconsistencies, making it both reliable and unbiased. The coverage of the variables and their items spans the spectrum of public attitudes, from familiarity and trust to ethical concerns about AI's actual and potential effects on journalism. AP provides insight into public knowledge and acceptance of AI-generated journalism, while EC captures the accountability and ethical demands placed on AI-driven media processes. PI captures two sides of AI's role, its promise and its possible drawbacks, indicating a two-sided view of AI's influence on journalism. The findings of this analysis can ultimately support the responsible integration of AI in journalism, balancing technological advancement with ethical integrity and public trust.
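For readers who want to reproduce a summary like Table 1, the pandas sketch below counts valid and missing cases per variable; the file name and column names are assumptions matching the paper's variable labels.

```python
import pandas as pd

# Hypothetical survey export with one column per construct score.
df = pd.read_csv("survey_responses.csv")

for var in ["AP", "EC", "PT", "PI", "DI"]:
    n_total = len(df)
    n_valid = int(df[var].notna().sum())
    n_missing = n_total - n_valid
    print(f"{var}: valid {n_valid} ({n_valid / n_total:.1%}), "
          f"missing {n_missing} ({n_missing / n_total:.1%})")
```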
Table 2: Descriptives

| Statistic | AP | EC | PT | PI | DI |
| Mean | 3.3050 | 2.9675 | 2.3433 | 1.8650 | 1.8260 |
| Std. Error of Mean | .08100 | .07758 | .06001 | .07346 | .04364 |
| 95% CI for Mean, Lower Bound | 3.1443 | 2.8136 | 2.2243 | 1.7192 | 1.7394 |
| 95% CI for Mean, Upper Bound | 3.4657 | 3.1214 | 2.4624 | 2.0108 | 1.9126 |
| 5% Trimmed Mean | 3.3250 | 2.9833 | 2.3593 | 1.8278 | 1.8133 |
| Median | 3.3750 | 3.0000 | 2.3333 | 2.0000 | 1.8000 |
| Variance | .656 | .602 | .360 | .540 | .190 |
| Std. Deviation | .80996 | .77578 | .60014 | .73462 | .43638 |
| Minimum | 1.25 | 1.00 | 1.00 | 1.00 | 1.20 |
| Maximum | 4.75 | 4.75 | 3.67 | 3.50 | 3.00 |
| Range | 3.50 | 3.75 | 2.67 | 2.50 | 1.80 |
| Interquartile Range | 1.25 | 1.00 | .67 | 1.50 | .80 |
| Skewness (Std. Error = .241) | -.354 | -.327 | -.427 | .325 | .318 |
| Kurtosis (Std. Error = .478) | -.527 | -.087 | -.114 | -.893 | -.834 |
Table 2 reports descriptive statistics for perceptions of AI-generated journalism. It includes measures of central tendency, dispersion, and distribution shape for Awareness and Perception of AI-Generated Journalism (AP), Ethical Considerations (EC), Public Trust and AI (PT), Perceived Impact of AI in Journalism (PI), and Demographic Information (DI).

The mean score for AP is 3.3050, indicating a moderate to high level of familiarity with, and positive perception of, AI-generated journalism. A weak negative skew indicates that respondents' awareness and perception ratings cluster above the midpoint more than below it. This suggests that the sample is aware of, or open to, AI-generated journalism. Ethical Considerations (EC) has a mean score of 2.9675, indicating a balanced level of concern regarding the moral aspects of AI in journalism. Responses are concentrated around the central values, suggesting that ethical concerns exist but are not polarized. This may indicate a public expectation that AI should be used responsibly in journalism, without excessive distrust.
PT has a mean score of 2.3433, placing participants slightly below the scale midpoint in their trust of AI applications in journalism. A moderate negative skew (-0.427) indicates that responses lean toward the lower side, possibly reflecting participants' skepticism about AI's increased influence on the veracity of media. This finding points to an area of vulnerability for AI journalism: media organizations need to build transparency and accountability to gain people's trust. The Perceived Impact of AI in Journalism (PI) scored an average of 1.8650, indicating a relatively low, or at least conservative, view of the potential advantages AI may bring to journalism. A slight positive skew and a flatter-than-normal distribution suggest that although some participants foresee benefits, a larger share remains cautious or uncertain.

The study highlights the need to educate the public about the practical advantages of AI for journalism while clearing up misconceptions. The results underscore the importance of understanding perceptions of AI-generated journalism and its potential benefits, as well as the need for media organizations to build transparency and accountability.
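As a hedged sketch of how the figures in Table 2 can be computed, the following uses pandas and scipy; the file name is hypothetical and the column names mirror the paper's variables.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical file

for var in ["AP", "EC", "PT", "PI", "DI"]:
    x = df[var].dropna()
    mean, sem = x.mean(), stats.sem(x)
    # t-based 95% confidence interval for the mean, as in SPSS Explore.
    ci_low, ci_high = stats.t.interval(0.95, len(x) - 1, loc=mean, scale=sem)
    print(f"{var}: mean={mean:.4f} (SE {sem:.5f}), "
          f"95% CI [{ci_low:.4f}, {ci_high:.4f}], "
          f"median={x.median():.4f}, sd={x.std(ddof=1):.5f}, "
          f"skew={stats.skew(x, bias=False):.3f}, "
          f"kurtosis={stats.kurtosis(x, bias=False):.3f}")
```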
Table 3: Tests of Normality

| Variable | Kolmogorov-Smirnov(a) Statistic | df | Sig. | Shapiro-Wilk Statistic | df | Sig. |
| AP | .095 | 100 | .026 | .972 | 100 | .029 |
| EC | .107 | 100 | .007 | .981 | 100 | .149 |
| PT | .173 | 100 | .000 | .946 | 100 | .000 |
| PI | .180 | 100 | .000 | .887 | 100 | .000 |
| DI | .178 | 100 | .000 | .920 | 100 | .000 |

a. Lilliefors Significance Correction
Table 3 lists the Kolmogorov-Smirnov and Shapiro-Wilk tests of normality for each variable: AP = Awareness and Perception of AI-generated journalism; EC = Ethical Considerations; PT = Public Trust and AI; PI = Perceived Impact of AI in Journalism; DI = Demographic Information. For each variable, we assessed whether the data follow a normal distribution.

AP: The K-S test has a statistic of 0.095 with a p-value of 0.026, and the S-W test shows a statistic of 0.972 with a p-value of 0.029. Both tests are significant at the 0.05 level, which may indicate that AP was drawn from a non-normal distribution.
EC: The K-S test value is 0.107 with a p-value of 0.007, and the S-W test value is 0.981 with a p-value of 0.149. Here, the K-S test suggests a statistically significant departure from normality (p < 0.05), whereas the S-W test does not (p = 0.149). This mixed result indicates a slight deviation from normality in the distribution of EC, which may still approximate normality.
PT: The K-S test statistic for PT is 0.173 with an associated p-value of 0.000, and the S-W test statistic is 0.946 with a p-value of 0.000. Both tests are highly significant (p < 0.001), indicating a marked deviation from normality for PT.

PI: For PI, the K-S statistic was 0.180 with a p-value of 0.000, and the S-W statistic was 0.887 with a p-value of 0.000. Both tests are highly significant (p < 0.001), indicating a marked departure from normality for PI.
The results of the normality tests show that the four variables deviate to different extents from the normal distribution. Normality is an assumption underlying many statistical analyses, particularly most parametric tests. A departure from normality for these variables implies that non-parametric tests may be preferable, depending on the goals of further study and the assumptions the subsequent tests require.

For AP, both the Kolmogorov-Smirnov and Shapiro-Wilk tests are significant, so the distribution of AP does not follow a perfectly normal curve. However, the descriptive statistics show skewness and kurtosis close to zero. Considering this, AP's distribution may be close enough to be treated as approximately normal, especially for larger sample sizes, where the Central Limit Theorem mitigates the impact of mild non-normality.
The variable EC gives a mixed result: the K-S test shows a significant deviation from normality, while the Shapiro-Wilk test does not. Given that the descriptive statistics report a slight skewness of -0.327 and kurtosis of -0.087, the distribution of EC is apparently only mildly non-normal. In this sense, EC might be treated as approximately normal for some parametric analyses, though researchers may prefer non-parametric tests if stricter normality is demanded.
For Public Trust and AI (PT), the normality tests are significant, so PT is not normally distributed. The descriptive statistics for PT show a moderate left skew and a slightly flattened distribution, which further supports this result. Non-parametric tests are therefore more appropriate for analyses involving PT, since tests that crucially assume normality would be compromised by this deviation.
The PI variable, the perceived impact of AI in journalism, shows the most robust departures from normality, with both tests highly significant (p < 0.001). The skewness value is 0.325 and the kurtosis value is -0.893, indicating positive skew and a flatter-than-normal center, so PI cannot be treated as normally distributed. In this case, non-parametric methods should be used for any analysis that is sensitive to normality assumptions.
The tests indicate that the distributions of AP and EC are nearly normal with some mild deviations, so they can be analyzed with some flexibility. PT and PI, however, show more pronounced non-normality, so analyses involving these variables should use non-parametric methods, such as the Mann-Whitney U test or the Kruskal-Wallis test. Understanding these distributional characteristics helps in the proper selection of tests, providing more reliable and valid findings in research on public perceptions, ethical concerns, trust, and perceived impacts of AI in journalism.
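A minimal scipy sketch of these checks follows. Note that scipy's plain K-S test with estimated parameters only approximates SPSS's Lilliefors-corrected version, and the grouping column used for the non-parametric comparison is a hypothetical example.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical file

for var in ["AP", "EC", "PT", "PI", "DI"]:
    x = df[var].dropna()
    # K-S against a normal with estimated mean/sd (approximates Lilliefors).
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    sw = stats.shapiro(x)
    print(f"{var}: K-S p = {ks.pvalue:.3f}, Shapiro-Wilk p = {sw.pvalue:.3f}")

# Where normality fails (PT, PI), compare groups non-parametrically,
# e.g. across a hypothetical demographic column "age_group":
groups = [g["PT"].dropna() for _, g in df.groupby("age_group")]
print(stats.kruskal(*groups))   # Kruskal-Wallis H test
```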
Table 4: Case Processing Summary

| Variable | Valid N | Valid % | Missing N | Missing % | Total N | Total % |
| AP | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| EC | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| PT | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
| PI | 100 | 100.0% | 0 | 0.0% | 100 | 100.0% |
The Case Processing Summary table shows the data quality for four key variables: Awareness and Perception of AI-generated journalism (AP), Ethical Considerations (EC), Public Trust and AI (PT), and Perceived Impact of AI in Journalism (PI). All 100 cases are valid with no missing values, ensuring 100% completeness. This complete data set provides a robust basis for sound analysis, preventing bias and avoiding any loss of statistical power.
Given the high quality of the data, researchers can conduct detailed statistical analyses for each variable, such as descriptive statistics, inferential tests, or analyses of relationships among variables. The data supports findings regarding public awareness, ethical concerns, trust levels, and perceived impacts of AI in journalism without imputation or adjustment for missing data. The case processing summary thus reflects an excellent standard of data quality, strengthening the reliability of the study results and allowing a full exploration of participants' attitudes toward AI in journalism.
Table 5: Tests of Normality

| Variable | Kolmogorov-Smirnov(a) Statistic | df | Sig. | Shapiro-Wilk Statistic | df | Sig. |
| AP | .095 | 100 | .026 | .972 | 100 | .029 |
| EC | .107 | 100 | .007 | .981 | 100 | .149 |
| PT | .173 | 100 | .000 | .946 | 100 | .000 |
| PI | .180 | 100 | .000 | .887 | 100 | .000 |

a. Lilliefors Significance Correction
The tests of normality describe the distributional characteristics of the four variables AP, EC, PT, and PI. From the Kolmogorov-Smirnov and Shapiro-Wilk tests, it can be inferred that none of these variables is perfectly normal, because each shows at least one statistically significant deviation from normality.
For AP, both tests are significant (Kolmogorov-Smirnov statistic = 0.095, p = 0.026; Shapiro-Wilk statistic = 0.972, p = 0.029), which indicates that AP is not normally distributed. Participants' awareness and perception of AI-generated content in journalism vary and do not follow the standard normal curve.
For EC, the Kolmogorov-Smirnov test was significant (statistic = 0.107, p = 0.007), while the Shapiro-Wilk test was non-significant (statistic = 0.981, p = 0.149). This mixed result suggests that EC is closer to normal than the other variables, with any departure likely due to slight skewness or outliers. Thus, EC is approximately normally distributed, but parametric tests should still be chosen and applied with caution.
For Public Trust and AI, both tests show a significant deviation from normality (Kolmogorov-Smirnov statistic = 0.173, p < 0.001; Shapiro-Wilk statistic = 0.946, p < 0.001). This indicates that responses regarding PT are not normally distributed, as participants held a wide range of opinions on the role of AI in journalism. The substantial deviation from normality suggests that PT responses may be more extreme or non-symmetrical.
Finally, the variable Perceived Impact of AI in Journalism (PI) is also not normally distributed (Kolmogorov-Smirnov statistic = 0.180, p < 0.001; Shapiro-Wilk statistic = 0.887, p < 0.001). This shows that survey participants perceived AI's impact on journalism in quite dissimilar ways: some believe it has considerable influence, while others hold the opposite view, saying it barely affects journalism. Consequently, the high degree of non-normality may reflect a diversity, and perhaps a polarization, of views regarding how AI functions in journalism.
Table 6: Correlations (N = 100 for all cells)

| | | AP | EC | PT | PI | DI |
| AP | Pearson Correlation | 1 | .417** | .418** | .125 | .002 |
| | Sig. (1-tailed) | | .000 | .000 | .108 | .494 |
| EC | Pearson Correlation | .417** | 1 | .393** | .218* | .128 |
| | Sig. (1-tailed) | .000 | | .000 | .015 | .102 |
| PT | Pearson Correlation | .418** | .393** | 1 | .186* | .058 |
| | Sig. (1-tailed) | .000 | .000 | | .032 | .283 |
| PI | Pearson Correlation | .125 | .218* | .186* | 1 | .065 |
| | Sig. (1-tailed) | .108 | .015 | .032 | | .261 |
| DI | Pearson Correlation | .002 | .128 | .058 | .065 | 1 |
| | Sig. (1-tailed) | .494 | .102 | .283 | .261 | |

**. Correlation is significant at the 0.01 level (1-tailed).
*. Correlation is significant at the 0.05 level (1-tailed).
The correlational analysis establishes relationships among the critical variables of AI journalism: awareness and perception of AI-generated journalism, ethical considerations, public trust in AI, perceived impact of AI in journalism, and demographic information. The results show moderate, statistically significant positive correlations between AP and EC (r = 0.417, p < 0.01) and between AP and PT (r = 0.418, p < 0.01). In other words, people who are more aware of AI in journalism also tend to hold stronger ethical concerns and to express higher levels of trust in the media's use of AI. These relationships imply that awareness of AI accompanies a better understanding of its ethics and greater support for its responsible use.
Moreover, EC and PT are positively correlated (r = 0.393, p < 0.01), suggesting that participants who placed more emphasis on the ethical use of AI also reported higher trust in AI's involvement in news journalism. This could reflect trust improving when moral issues are given proper attention. There are also weak but statistically significant correlations between EC and PI (r = 0.218, p < 0.05) and between PT and PI (r = 0.186, p < 0.05). These suggest that ethical concern and trust are modestly related to how impactful people consider AI's role in journalism to be, implying that a base of trust and ethical standards modestly increases perceptions of the value or effectiveness of AI in media.
Curiously, DI has no significant relationship with any of the other variables, as all p-values are greater than 0.05. Demographic factors such as age, gender, education, occupation, and news usage are thus not statistically related to awareness, ethical considerations, trust, or perceived impact of AI in journalism. This suggests that attitudes toward AI are independent of demographic background, that is, opinions about AI in journalism are most probably the outcome of personal exposure or experience rather than demographics.
Overall, the findings point to a nuanced relationship in which awareness, ethical considerations, and trust intertwine strongly in forming attitudes toward AI in journalism. Ethical frameworks appear especially salient in promoting public trust, whereas demographic factors carry little weight in perception. The results indicate that promoting ethical standards may be exactly what helps develop public trust in AI applications, while some perceptions of AI remain only weakly associated with demographic or attitudinal factors.
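A brief sketch of how the one-tailed Pearson correlations in Table 6 can be computed with scipy (version 1.9 or later, where the alternative argument exists) is shown below; the file name is hypothetical, and the one-sided "greater" alternative assumes positive directional hypotheses.

```python
from itertools import combinations
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical file

for a, b in combinations(["AP", "EC", "PT", "PI", "DI"], 2):
    # One-tailed test of a positive association, mirroring Table 6.
    res = stats.pearsonr(df[a], df[b], alternative="greater")
    flag = "**" if res.pvalue < 0.01 else "*" if res.pvalue < 0.05 else ""
    print(f"{a}-{b}: r = {res.statistic:.3f}{flag} "
          f"(one-tailed p = {res.pvalue:.3f})")
```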
Comparing AI-assisted and traditional journalism reveals major discrepancies in audience perception, ethical standards, and perceived reliability. Although AI-generated content was perceived as efficient and able to provide quick updates, especially in data-intensive reporting, participants reacted more positively to human-written news, giving AI-driven journalism moderate awareness (M = 3.30) and low trust (M = 2.34) ratings. These trends correspond with previous research on algorithmic bias, contextual misreadings, and the absence of human editorial instinct. According to the respondents, AI-generated news needs clear transparency labels and editorial verification to be credible. By contrast, traditional journalism has consistently been associated with authenticity, accountability, and compliance with normative ethical constructs. This comparison illustrates a persistent credibility hierarchy favoring human-written journalism, while acknowledging AI's growing usefulness for routine or structured reporting tasks. These results underscore the necessity of using AI as a complementary tool, not a substitute, in ethical newsroom practice.
The statistical outcomes indicate significant relationships between awareness, ethical concern, and trust, suggesting that public engagement with AI in journalism takes place through both cognitive understanding and normative expectation. The close relationship between awareness of AI-generated content and consideration of ethical factors (r = .417) reflects Social Responsibility Theory, in which audiences demand accountability and transparency from any system involved in the production of news. Likewise, the direct relationship between ethical concern and trust (r = .393) supports the assumption that trust in automated journalism depends on perceived compliance with ethical standards, including fairness, accuracy, and disclosure.
These results offer insight into the importance of transparency as the primary means of reducing perceived risk, in line with the algorithmic accountability framework. The departures of the PT and PI distributions from normality reveal a split outlook in the population, indicating that people are simultaneously intrigued by and doubtful about the growing impact of AI on journalism. Such ambivalent perceptions highlight the need for governance mechanisms in the form of audits, explainability protocols, and transparency labels that translate ethical values into real-life newsroom operations. By situating the results within these theoretical constructs, the discussion confirms that credibility rests not only on technological precision but also on observable, ethically valid practice.
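The distributional claim above can likewise be checked directly. The sketch below is again an assumed illustration rather than the authors' code, reusing the hypothetical survey_responses.csv file from earlier; it applies a Shapiro-Wilk test to the PT and PI scales, where a p-value below 0.05 would confirm the reported departure from normality:

```python
import pandas as pd
from scipy.stats import shapiro

df = pd.read_csv("survey_responses.csv")  # same hypothetical file as above
for scale in ["PT", "PI"]:
    W, p = shapiro(df[scale])
    # p < 0.05 => the scale's distribution departs significantly from normality
    print(f"{scale}: W = {W:.3f}, p = {p:.3f}")
```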
CONCLUSION
AI in journalism is revolutionary, unlocking new efficiencies and analytics capabilities across the news production chain. It also introduces complex ethical challenges connected to transparency, accountability, and untraceable sourcing, particularly in synthetic media. This research underlines that AI in journalism is at once a precious resource and a great ethical responsibility, requiring a broad ethical framework to guide its responsible use.
Our findings therefore point to the importance of clear and enforceable ethical standards and best practices for the application of AI within journalism: standards grounded in truthfulness, independence, and accountability that might enable media organizations to unlock the benefits of AI while retaining public trust. Raising awareness of the role AI plays in news production will also help audiences better understand, critically assess, and actively participate in the journalistic process, while reinforcing mutual accountability between news organizations and the public.
More importantly, as the uses of AI in journalism continue to evolve, it will be essential to maintain an open, collaborative dialogue about them. This approach allows the field to adapt responsibly to rapid technological advancement while retaining the core values of journalistic integrity. We call for a collective effort by journalists, technologists, policymakers, and the public to shape an ethical framework that balances innovation with the principles of truth and transparency. It now becomes the responsibility of all stakeholders to ensure that AI-driven journalism keeps ethics at its core, enriching the quality and reliability of journalism and serving as a reliable pillar for the informed society of the digital age.
Future studies should combine comparative observation with multi-stakeholder views of journalists, editors, technologists, and audiences to arrive at a complete ethical framework for AI-driven journalism. Such a framework must bring the normative standards of media ethics in line with the factual conditions surrounding newsroom automation and ensure that transparency, fairness, and human control are all present in AI-assisted practice. To protect editorial independence and increase public trust, it will be necessary to introduce cross-functional governance frameworks, regular algorithmic audits, and sector-wide disclosure guidelines. As AI develops further, the journalism profession should go out of its way to influence its adoption positively through ethical reflexivity and collaborative governance, so that technological innovation reinforces, rather than weakens, the democratic and epistemic roles of the press.
REFERENCES
1. Abuhamad, A., & Andersson, M. (2024). Algorithmography: Intersections of Truth, Authenticity, and
Representation of AI-Generated Visual Content in Journalism.
2. Adriana Lacy Consulting. (2024, January 3). Ethical Considerations in AI Journalism. Media Minds by Adriana Lacy Consulting. https://blog.adrianalacyconsulting.com/ethical-considerations-ai-journalism/
3. Al-Zoubi, O., Ahmad, N., & Hamid, N. A. (2024). Artificial Intelligence in Newsrooms: Ethical Challenges Facing Journalists. Studies in Media and Communication, 12(1), 401. https://doi.org/10.11114/smc.v12i1.6587
4. Altay, S., & Gilardi, F. (2024). People are skeptical of headlines labeled as AI-generated, even if true
or human-made, because they assume full AI automation. PNAS Nexus, 3(10).
https://doi.org/10.1093/pnasnexus/pgae403
5. Anandhasivam, V. S., Anusri, A. K., Logeshwar, M., & Gopinath, R. (2024). Enhancing Deepfake Detection Through Hybrid MobileNet-LSTM Model with Real-Time Image and Video Analysis. 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS), 1989–1995. https://doi.org/10.1109/icuis64676.2024.10867159
7. Artificial Intelligence in Journalism. (2024, July 23). Center for News, Technology & Innovation.
https://innovating.news/article/ai-in-journalism/
8. Bartholomew, J., & Mehta, D. (2023, May 26). How the media is covering ChatGPT. Columbia
Journalism Review. https://www.cjr.org/tow_center/media-coverage-chatgpt.php
9. Busuioc, M., Curtin, D., & Almada, M. (2022). Reclaiming transparency: contesting the logics of
secrecy within the AI Act. European Law Open, 2(1), 1–27. https://doi.org/10.1017/elo.2022.47
10. Danzon-Chambaud, S. (2021a). The Tow Center newsletter: Experimenting with automated news at the BBC. Columbia Journalism Review. https://www.cjr.org/tow_center/the-tow-center-newsletter-experimenting-with-automated-news-at-the-bbc.php
11. Danzon-Chambaud, S. (2021b, August 6). Covering COVID-19 with automated news. Columbia
Journalism Review. https://www.cjr.org/tow_center_reports/covering-covid-automated-news.php
12. de-Lima-Santos, M.-F., Yeung, W. N., & Dodds, T. (2025). Guiding the way: A comprehensive examination of AI guidelines in global media. AI & Society, 40(4), 2585–2603. https://doi.org/10.1007/s00146-024-01973-5
13. Dhiman, B. (2023). Does artificial intelligence help journalists: A boon or bane? SSOAR Social Science Open Access Repository. https://www.ssoar.info/ssoar/bitstream/handle/document/86437/ssoar-2023-dhiman-Does_Artificial_Intelligence_help_Journalists.pdf
14. Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., López de Prado, M., Herrera-Viedma, E., &
Herrera, F. (2023). Connecting the Dots in Trustworthy Artificial Intelligence: from AI principles,
ethics, and Key Requirements to Responsible AI Systems and Regulation. Information Fusion,
99(101896), 101896. https://doi.org/10.1016/j.inffus.2023.101896
15. Ioscote, F., Gonçalves, A., & Quadros, C. (2024). Artificial Intelligence in Journalism: A Ten-Year Retrospective of Scientific Articles (2014–2023). Journalism and Media, 5(3), 873–891. https://doi.org/10.3390/journalmedia5030056
16. Gao, R., Yu, D., Gao, B., Hua, H., Hui, Z., Gao, J., & Yin, C. (2025). Legal regulation of AI-assisted
academic writing: challenges, frameworks, and pathways. Frontiers in Artificial Intelligence, 8.
https://doi.org/10.3389/frai.2025.1546064
17. Gotfredsen, S. G. (2021). Q&A: AI in Newsrooms: Revolution or Retooling? Columbia Journalism
Review. https://www.cjr.org/tow_center/qa-ai-in-newsrooms-revolution-or-retooling.php
18. Gotfredsen, S. G. (2023, May 23). Q&A: How can artificial intelligence help journalists? Columbia
Journalism Review. https://www.cjr.org/tow_center/tow-center-newsletter/ai-journalism-qa.php
19. Grimme, M., & Zabel, C. (2024). AI in the newsroom: a collective case study about newsworker-AI
collaboration in the German newspaper industry. Journal of Media Business Studies, 1–25.
https://doi.org/10.1080/16522354.2024.2380120
20. Grimme, M., & Zabel, C. (2025). AI in the newsroom: A collective case study about newsworker-AI collaboration in the German newspaper industry. Journal of Media Business Studies, 22(2), 118–142. https://doi.org/10.1080/16522354.2024.2380120
21. Hofeditz, L., Mirbabaie, M., & Stieglitz, S. (2021). Do you Trust an AI-Journalist? A Credibility
Analysis of News Content with AI- Authorship.
22. Illia, L., Colleoni, E., & Zyglidopoulos, S. (2022). Ethical implications of text generation in the age
of artificial intelligence. Business Ethics, the Environment & Responsibility, 32(1), 201–210.
https://doi.org/10.1111/beer.12479
23. Labajová, L. (n.d.). The state of AI: Exploring the perceptions, credibility, and trustworthiness of the users towards AI-generated content. Media and Communication Studies: Culture, Collaborative Media, and Creative Industries.
24. Lewis, S. C., Markowitz, D. M., & Bunquin, J. B. A. (2025). Journalists, Emotions, and the
Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the
Launch of ChatGPT. Social Media + Society, 11(1). https://doi.org/10.1177/20563051251325597
25. de-Lima-Santos, M.-F., Yeung, W. N., & Dodds, T. (2024). Guiding the way: A comprehensive examination of AI guidelines in global media. AI & Society, 40, 2585–2603. https://doi.org/10.1007/s00146-024-01973-5
26. Murugesan, S. (2023, April 24). The Rise of Ethical Concerns about AI Content Creation: A Call to Action. IEEE Computer Society. https://www.computer.org/publications/tech-news/trends/ethical-concerns-on-ai-content-creation
27. Navaroli, A. C., & McNealy, J. E. (2021). The Role of Journalism, Law, and Trust & Safety in an AI-Dominated World. Columbia Journalism Review. https://www.cjr.org/tow_center/the-role-of-journalism-law-and-trust-safety-in-an-ai-dominated-world.php
28. Breazu, P., & Katson, N. (2024). ChatGPT-4 as a journalist: Whose perspectives is it reproducing? Discourse & Society, 35(6), 687–707. https://doi.org/10.1177/09579265241251479
29. Piasecki, S., Morosoli, S., Helberger, N., & Naudts, L. (2024). AI-generated journalism: Do the
transparency provisions in the AI Act give news readers what they hope for? Internet Policy Review,
13(4). https://doi.org/10.14763/2024.4.1810
30. Porlezza, C., & Schapals, A. K. (2024). AI ethics in journalism (studies): An evolving field between
research and practice. Emerging Media, 2(3). https://doi.org/10.1177/27523543241288818
31. Liu, Q. (2025). Generative AI and Journalism Ethics: Controversies over ChatGPT. Journal of Information, Technology and Policy, 3(1), 1–6. https://doi.org/10.62836/jitp.2025.346
32. Ramos, S., & Ellul, J. (2024a). Blockchain for Artificial Intelligence (AI): enhancing compliance with
the EU AI Act through distributed ledger technology. A cybersecurity perspective. International
Cybersecurity Law Review, 5, 1–20. https://doi.org/10.1365/s43439-023-00107-9
33. Ramos, S., & Ellul, J. (2024b). Blockchain for Artificial Intelligence (AI): Enhancing compliance with the EU AI Act through distributed ledger technology. A cybersecurity perspective. International Cybersecurity Law Review, 5(1), 1–20. https://doi.org/10.1365/s43439-023-00107-9
34. Roush, C. (2015, January 29). AP boosts its automated earnings story output. Talking Biz News. https://talkingbiznews.com/they-talk-biz-news/ap-boosts-its-automated-earnings-story-output/
35. Samson, B., Amoo, O. O., Atadoga, A., Abrahams, O., Osasona, F., & Farayola, O. A. (2024). Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive, 11(1), 1311–1326. https://doi.org/10.30574/ijsra.2024.11.1.0218
36. Sohrawardi, S. J., Wu, Y. K., Hickerson, A., & Wright, M. (2024). Dungeons & Deepfakes: Using scenario-based role-play to study journalists' behavior towards using AI-based verification tools for video content. CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3613904.3641973
37. Sinclair, V. L. (2025). The Influence of AI-Generated News on Public Trust in Journalism: Evidence
from the UK. Journal of Research in Social Science and Humanities, 4(2), 16.
https://www.pioneerpublisher.com/jrssh/article/view/1196
38. Somorin, K., & Ademola, O. E. (2024). Ethical Imperatives in the Era of AI Journalism: Navigating
the Intersection of Technology and Responsibility. Advances in Multidisciplinary & Scientific
Research Journal Publications, 12(2), 31–36. https://doi.org/10.22624/aims/humanities/v12n2p4
39. Sonni, A. F., Hafied, H., Irwanto, I., & Latuheru, R. (2024). Digital Newsroom Transformation: A
Systematic Review of the Impact of Artificial Intelligence on Journalistic Practices, News Narratives,
and Ethical Challenges. Journalism and Media, 5(4), 1554–1570.
https://doi.org/10.3390/journalmedia5040097
40. The AI Revolution: Is it a Game Changer for Disability Inclusion? (2024). UNDP.
https://www.undp.org/uzbekistan/blog/ai-revolution-it-game-changer-disability-inclusion
41. The Ethical Role of AI in Media: Combating Misinformation. (n.d.). Omdena.
https://www.omdena.com/blog/the-ethical-role-of-ai-in-media-combating-misformation
42. The fascinating evolution of AI and its integration into our lives. (n.d.). Iberdrola.
https://www.iberdrola.com/innovation/ai-evolution
43. The Impact of AI-Generated Articles on the Future of Journalism. (n.d.). Originality.ai. https://originality.ai/blog/impact-ai-generated-articles-future-journalism
44. Ugwuagbo, E. C., & Okafor, S. C. (2024, August 14). Survival of print media and journalism in the age of artificial intelligence. ResearchGate. https://www.researchgate.net/publication/383091266_SURVIVAL_OF_PRINT_MEDIA_AND_JOURNALISM_IN_THE_AGE_OF_ARTIFICIAL_INTELLIGENCE
45. Voinea, D. V. (2025). Reconceptualizing Gatekeeping in the Age of Artificial Intelligence: A
Theoretical Exploration of Artificial Intelligence-Driven News Curation and Automated Journalism.
Journalism and Media, 6(2), 68. https://doi.org/10.3390/journalmedia6020068
46. Xiao, Q., Fan, X., Simon, F. M., Zhang, B., & Eslami, M. (2025). "It Might be Technically Impressive, But It's Practically Useless to us": Motivations, practices, challenges, and opportunities for cross-functional collaboration around AI within the news industry. CHI 2025: CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3706598.3714090
47. Zagorulko, D. I. (2023). ChatGPT in newsrooms: Adherence of AI-generated content to journalism standards and prospects for its implementation in digital media. Scientific Notes of V. I. Vernadsky Taurida National University, Series: Philology. Journalism, 2(1), 319–325. https://doi.org/10.32782/2710-4656/2023.1.2/50