INTERNATIONAL JOURNAL OF RESEARCH AND INNOVATION IN SOCIAL SCIENCE (IJRISS)  
ISSN No. 2454-6186 | DOI: 10.47772/IJRISS | Volume IX Issue X October 2025  
Ethical Challenges in the Era of Generative AI: Insights from a  
Practice-Informed Rapid Review  
Nguyen Tat Hiep, PhD  
Faculty of Foreign Languages, University of Labour and Social Affairs (HCM City Campus)
Received: 26 October 2025; Accepted: 04 November 2025; Published: 20 November 2025  
ABSTRACT  
This study investigates the integration of Generative AI (GenAI) into academic research, highlighting both its  
transformative potential and the ethical, methodological, and epistemological challenges it introduces. While  
GenAI enhances efficiency in tasks like text generation, data analysis, and translation, it raises serious concerns  
around authorship, originality, transparency, data privacy, and accountability. Through a rapid review of  
literature from 2022 to 2025, guided by the European Code of Conduct for Research Integrity, the study identifies  
recurring risks such as algorithmic bias, fabricated citations, and diminished scholarly authorship. In response,  
it proposes a five-principle ethical framework (human oversight, accuracy, accountability, data protection, and
institutional governance) and emphasizes that responsible GenAI use requires not only technical safeguards but
also ethical literacy, critical reflection, and transparent disclosure. Ultimately, GenAI should serve as a  
collaborative partner that augments human creativity while preserving the integrity and rigor of scientific  
inquiry.  
Keywords: Generative Artificial Intelligence, research ethics, academic integrity, accountability, transparency,  
data privacy  
INTRODUCTION  
The rapid advancement of Generative Artificial Intelligence (GenAI) has profoundly reshaped academic  
research by enabling the creation of realistic text, images, and even synthetic datasets. These technologies  
promise to accelerate knowledge production, enhance accessibility, and improve the efficiency of research  
workflows. However, their integration into scholarly practice also raises serious ethical, methodological, and  
epistemological questions concerning authorship, originality, data integrity, and accountability (European  
Commission Directorate General for Research and Innovation, 2024).  
Recent evidence underscores the urgency of this issue. Nearly 1% of abstracts submitted to the preprint  
repository arXiv in 2023 displayed indicators of GenAI-generated content (Gray, 2024), suggesting that these  
tools are rapidly becoming embedded within the academic ecosystem. As GenAI grows more sophisticated,
capable even of conducting research with minimal human input (Sakana.ai, 2024), its potential to disrupt
traditional research norms and ethical standards becomes increasingly apparent (Kobak et al., 2024; Liang et al.,  
2024).  
A clear distinction must therefore be drawn between Artificial Intelligence (AI) and Generative Artificial  
Intelligence (GenAI), as their roles and ethical implications differ significantly. According to the European  
Commission (2018, p. 4), AI systems demonstrate “intelligent behaviour by analysing their environment and  
taking actions—with some degree of autonomy—to achieve specific goals.” In contrast, GenAI systems
“generate new content in response to prompts based on their training data” (Lorenz et al., 2023, p. 8). This  
distinction is crucial, as GenAI’s capacity to autonomously produce seemingly original text and data raises  
unique challenges in research authorship, reproducibility, and moral accountability.  
Despite GenAI’s transformative potential, research on its ethical dimensions remains fragmented and  
underdeveloped. The majority of existing scholarship focuses on academic integrity issues among students, such  
as plagiarism and inappropriate AI-assisted writing (Cotton et al., 2024; Foltýnek et al., 2023; Perkins, 2023).  
However, ethical implications for research practices, including data analysis, publication, peer review, and
intellectual property, have received limited attention. Key challenges include the difficulty of replicating results
due to stochastic outputs (Perkins & Roe, 2024c) and the lack of transparency in how these systems generate  
responses (Weber-Wulff et al., 2023).  
Conceptual analyses have identified several recurring ethical risks, including algorithmic bias, inaccuracy, the  
“black box” problem, and the absence of moral agency in AI-driven processes (Resnik & Hosseini, 2024). To  
mitigate these concerns, Kurz and Weber-Wulff (2023) propose three core principles for responsible AI use: it  
must be permitted, transparent, and accompanied by full accountability on the part of the user.  
In response to these challenges, international organizations and regulatory bodies have begun developing ethical  
frameworks to guide AI integration in research. UNESCO (2024) and Miao and Holmes (2023) emphasize the  
potential for GenAI to enhance research productivity and inclusivity while warning of emerging risks related to  
fairness, bias, and data misuse. Similarly, various academic publishers have released AI-use policies, though  
many have been criticized for being overly restrictive or inconsistently implemented across disciplines (Perkins  
& Roe, 2024a).  
Against this backdrop, the present study aims to explore the ethical implications of GenAI integration in  
academic research and to identify strategies for ensuring its responsible use. Specifically, it seeks to address two  
guiding research questions:  
1. What ethical challenges emerge from the use of Generative AI across different stages of the research
process—from conceptualization to dissemination?
2. How can researchers and institutions ensure the responsible and transparent integration of GenAI tools
while upholding the principles of research integrity and academic freedom?
By examining GenAI applications throughout the research lifecycle, this study identifies key ethical concerns,
including data privacy, accuracy of outputs, bias, transparency, intellectual property rights, and research
misconduct, and proposes practical recommendations to support ethical, transparent, and accountable GenAI
use in academic contexts.
LITERATURE REVIEW  
The ethical integration of Generative Artificial Intelligence (GenAI) in academic research represents a critical  
yet underexplored area within contemporary scholarship. While the potential benefits of GenAI are widely  
recognized, such as increased efficiency, accessibility, and automation, the ethical, social, and
epistemological implications of its adoption remain poorly understood.
Conceptualizing Artificial Intelligence and Generative AI  
The distinction between AI and GenAI is foundational to understanding their ethical implications. Artificial  
Intelligence (AI) refers to computational systems that analyze data and perform decision-making tasks with some  
degree of autonomy (European Commission, 2018). Generative AI (GenAI), by contrast, is a subclass of AI that  
creates new content in response to prompts based on patterns learned from large datasets (Lorenz et al., 2023).  
Whereas traditional AI supports analysis and prediction, GenAI participates directly in the creation of research  
materials, including text, images, code, and simulated data, thus introducing profound challenges concerning
authorship, originality, and reproducibility.  
Emerging Ethical Concerns in Research Practice  
The application of GenAI in academic research raises multifaceted ethical concerns. These include transparency  
of tool operation, potential biases embedded in training data, reliability of generated outputs, and the risk of  
inadvertent plagiarism. Resnik and Hosseini (2024) emphasize that GenAI tools lack moral agency, meaning  
that accountability must rest entirely with the human researcher. Moreover, the “black box” nature of many  
GenAI systems, wherein internal decision-making processes are opaque, creates difficulties in verifying or
reproducing results, challenging the foundational principle of scientific transparency (Weber-Wulff et al., 2023).  
Current Research Gaps and Disciplinary Contexts  
While numerous studies have examined AI use in education, most have centered on academic integrity issues  
among students (Cotton et al., 2024; Foltýnek et al., 2023; Perkins, 2023). In contrast, research ethics and  
scientific integrity in GenAI-assisted scholarship have been comparatively neglected. Nonetheless, several  
domain-specific inquiries have begun to emerge: in psychology (Chenneville et al., 2024), scholars debate the  
ethical implications of AI-mediated data analysis; in health research, concerns center on privacy and consent  
(Spector-Bagdady, 2023); and in software engineering, questions arise about authorship and code licensing  
(Kirova et al., 2023). Despite these contributions, a systematic, cross-disciplinary framework for addressing  
GenAI ethics in research is still lacking.  
Ethical Frameworks and Regulatory Responses  
Global and institutional efforts to guide GenAI integration are growing. UNESCO (2024) and Miao and Holmes  
(2023) advocate for ethical frameworks emphasizing transparency, accountability, and human oversight.  
Similarly, Kurz and Weber-Wulff (2023) propose that AI use in research should be permitted by ethical  
guidelines, transparent in disclosure, and accountable through explicit authorship responsibility. However,  
many journal-level and funding-agency policies remain inconsistent, creating confusion regarding acceptable  
AI-assisted practices (Perkins & Roe, 2024a).  
Theoretical Synthesis and Research Need  
Synthesizing across existing literature, several ethical themes consistently emerge:  
Transparency and explainability in AI outputs.  
Bias and fairness in training data and model use.  
Intellectual property and authorship concerns.  
Data privacy and informed consent.  
Accountability and moral responsibility in automated research.  
These recurring issues highlight the urgent need for empirically grounded research to map ethical challenges  
across the entire research lifecycle. This study therefore aims to address this gap by systematically analyzing  
how GenAI affects each phase of academic research and proposing actionable guidelines for ethical  
implementation.  
METHODOLOGY  
Research Design and Rationale  
The ethics of Generative AI (GenAI) in research remain largely underexplored despite its rapid uptake across  
academic disciplines. Given the fast-paced evolution of GenAI tools, a comprehensive evaluation of each tool  
would be impractical and quickly outdated. Therefore, this study employs a practice-informed rapid review  
approach, strategically focusing on representative tools used at different stages of the research process.  
This approach enables the researchers to chart the ethical concerns that emerge as GenAI becomes embedded in  
academic workflows, rather than attempting a full systematic assessment. The study is guided by the European  
Code of Conduct for Research Integrity (ALLEA, 2023) and informed by the interdisciplinary expertise of the  
research team, which spans history, computer science, ethics, linguistics, and medicine.  
Data Sources and Evidence Base  
To build a robust evidence foundation, the study draws upon both peer-reviewed and grey literature published  
since 2022, complemented by hands-on probes of contemporary GenAI services. Because ethical issues vary  
across the research life cycle, the distribution of evidence also differs, being more extensive in areas such as
text generation but sparse in later stages like data analysis or visualization.
Where literature was plentiful, we synthesized existing studies and cited them directly. In domains with limited  
prior research, new empirical probes were conducted to generate illustrative case examples. This blended method  
balances comprehensiveness with timeliness, ensuring that findings remain relevant to rapidly evolving  
technologies.  
Analytical Framework and Case Study Approach  
The study’s analytical design employs detailed case reports derived from real-world applications of GenAI tools  
across multiple research phases. Each case was systematically examined by:  
Comparing GenAI-supported outcomes with conventional research methods,  
Identifying ethical and methodological challenges, and  
Evaluating implications for research integrity.  
Rather than attempting an exhaustive tool catalogue, the focus remains on representative cases that highlight  
recurring ethical patterns. This enables deeper insight into core ethical principles (authorship, transparency,
reproducibility, and accountability) across diverse research contexts.
Research Lifecycle Framework  
The methodological framework encompasses the entire research lifecycle, structured into four key phases:  
a. Conceptualization and Design  
This phase involves idea generation, hypothesis formation, and literature review. GenAI tools may assist with  
multilingual research synthesis, grant proposal drafting, and ethics application writing. These activities raise  
questions about originality, bias in source selection, and authorship attribution.  
b. Data Collection and Analysis  
GenAI supports data-driven processes such as transcription, coding, statistical analysis, and even  
programming/debugging in computational studies. It also assists in generating images or visual data  
representations, each introducing specific ethical risks related to accuracy, bias, and data provenance.  
c. Writing and Communication  
GenAI tools are widely used for text generation, paraphrasing, and editing, which is particularly beneficial for
researchers who are English-as-a-Foreign-Language (EFL) users. They enhance linguistic clarity and visual  
presentation but also raise concerns about intellectual ownership, plagiarism, and the erosion of academic voice.  
d. Dissemination and Review  
In the final stage, GenAI tools can assist with peer review preparation, publication, and outreach. This introduces  
issues surrounding transparency in AI-aided content creation, disclosure requirements, and maintaining integrity  
in public communication of research findings.  
Evaluation and Ethical Integration  
By examining GenAI tools within their research contexts and comparing outcomes with established integrity  
standards, this methodology highlights both opportunities and risks. The study prioritizes ethical reflection over  
technical evaluation, aiming to develop practical recommendations that uphold the principles of transparency,  
accountability, and research quality.  
This approach acknowledges that research activities often occur concurrently rather than sequentially, creating  
complex ethical intersections that require continuous, context-sensitive evaluation.  
RESULTS  
Literature Gathering and Summarization  
Table 1. Ethical Issues Related to Literature Review

Theme: Literature Gathering
Representative Tools: Perplexity, ResearchRabbit, Consensus, Elicit, Litmap
Key Findings / Observations: Tools return mixed-quality results; often include non-academic or predatory sources; may fabricate citations or misunderstand multi-word terms.
Ethical Issues Identified: Inaccuracy; fabricated references; lack of transparency; misleading citations.
Recommendations / Mitigation Strategies: Verify sources and DOIs manually; cross-check with academic databases; prefer tools linking directly to verified references.

Theme: Textual Understanding & Summarization
Representative Tools: Enago Read, SciSummary, Scholarcy, NotebookLM, ChatPDF, ChatGPT
Key Findings / Observations: Capable of concise summaries but prone to errors, hallucinations, and colloquial tone; accuracy depends on the position of information in the text.
Ethical Issues Identified: Hallucinations; superficial understanding of author arguments; loss of context.
Recommendations / Mitigation Strategies: Use for preliminary overviews only; manually verify key claims; pair summaries with human critical evaluation.

Theme: Copyright and IP Concerns
Representative Tools: All above tools
Key Findings / Observations: Users may inadvertently transfer intellectual property rights when uploading copyrighted content.
Ethical Issues Identified: Violation of copyright laws; unclear data use by providers.
Recommendations / Mitigation Strategies: Avoid uploading paywalled papers; read Terms of Service; use platforms without IP transfer clauses.
Table 1 reveals that while Generative AI (GenAI) tools significantly enhance efficiency in literature gathering  
and summarization, they also present notable ethical and reliability concerns. AI-based platforms such as  
Perplexity, ResearchRabbit, Consensus, Elicit, and Litmap streamline access to academic sources but frequently  
return mixed-quality results, including non-academic or predatory materials, fabricated citations, and  
misinterpreted search terms (Foltýnek et al., 2020). Similarly, summarization tools such as Enago Read,  
SciSummary, Scholarcy, NotebookLM, ChatPDF, and ChatGPT offer concise overviews but often produce  
hallucinated content, superficial interpretations, and colloquial tones inconsistent with academic discourse (Liu  
et al., 2023; Fong & Wilhite, 2017). Ethical issues extend to intellectual property (IP) management, as many  
GenAI services require users to upload copyrighted texts, thereby risking unauthorized data sharing or implicit  
transfer of content ownership (Bakos et al., 2014; Steinfeld, 2016). To ensure integrity and compliance,  
researchers should manually verify AI-generated references, cross-check results with established databases, and  
avoid uploading paywalled or copyrighted materials. Overall, while GenAI offers clear benefits in research  
productivity and accessibility, responsible use requires critical human oversight, transparency, and adherence to  
ethical standards to prevent misinformation, bias, and IP violations (ALLEA, 2023; Perkins & Roe, 2024;  
Weber-Wulff et al., 2023).  
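To illustrate the kind of manual verification recommended above, the following minimal sketch (in Python, assuming only the widely used requests package) checks candidate DOIs against the public Crossref API. A DOI that Crossref cannot resolve is not proof of fabrication, but it is a strong signal that the reference needs hand-checking; the second DOI below is a deliberately invented example.

    import requests

    def doi_is_registered(doi: str) -> bool:
        """Return True if Crossref resolves the DOI to a registered work."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    candidate_dois = [
        "10.1162/tacl_a_00638",      # genuine DOI, cited elsewhere in this review
        "10.0000/fabricated.12345",  # hypothetical hallucinated reference
    ]

    for doi in candidate_dois:
        status = "registered" if doi_is_registered(doi) else "not found; verify by hand"
        print(f"{doi}: {status}")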
Study Design and Data Collection  
Table 2. Ethical Issues Related to Study Design and Data Collection

Theme: Ethical Risk Identification
Representative Tools: ChatGPT, Claude, Gemini
Key Findings / Observations: GenAI can identify potential ethical issues but lacks contextual moral reasoning.
Ethical Issues Identified: Overgeneralisation; bias; lack of transparency in ethical judgments.
Recommendations / Mitigation Strategies: Maintain human-led ethics reviews; use AI output as advisory, not authoritative.

Theme: Survey and Interview Design
Representative Tools: ChatGPT, Elicit, Copilot
Key Findings / Observations: Useful for generating questions but risks producing biased or culturally insensitive content.
Ethical Issues Identified: Reinforcement of stereotypes; bias in phrasing; harm to participants.
Recommendations / Mitigation Strategies: Pilot test AI-generated questions; review for inclusivity; retain researcher oversight.

Theme: Informed Consent
Representative Tools: ChatGPT, Claude
Key Findings / Observations: Can help draft plain-language consent text but is prone to hallucination.
Ethical Issues Identified: Inaccurate statements; misleading information.
Recommendations / Mitigation Strategies: Always apply human validation; avoid full automation of consent documents.
The findings summarized in Table 2 illustrate how Generative AI (GenAI) tools are increasingly applied during  
the study design and data collection phases of research, providing new efficiencies while introducing notable  
ethical challenges. In the area of ethical risk identification, tools such as ChatGPT, Claude, and Gemini  
demonstrate the ability to flag potential ethical concerns but lack the depth of contextual or moral reasoning  
necessary for complex judgment (Perni et al., 2023). This limitation results in risks of overgeneralization, bias,  
and insufficient transparency in ethical evaluations (European Commission, 2018). Therefore, GenAI output  
should be regarded as advisory rather than authoritative, ensuring that human-led ethics reviews remain central  
to research governance (ALLEA, 2023).  
For survey and interview design, GenAI systems like ChatGPT, Elicit, and Copilot are effective in rapidly  
generating question sets and identifying thematic gaps. However, these systems may inadvertently reinforce  
stereotypes, use culturally insensitive phrasing, or introduce biases that could harm participants (Currie et al.,  
2023). Ethical best practice involves pilot testing AI-generated questions, reviewing them for inclusivity, and  
maintaining researcher oversight to uphold fairness and contextual sensitivity (Council for International  
Organizations of Medical Sciences, 2016).  
Finally, in the area of informed consent, GenAI tools such as ChatGPT and Claude can assist in producing plain-  
language consent forms that enhance participant comprehension. Nonetheless, these tools are prone to  
hallucinations and inaccuracies, sometimes generating misleading or incomplete statements (Shiraishi et al.,  
2024). To protect participant autonomy and data integrity, researchers must validate all AI-generated consent  
documents manually and avoid full automation of ethical communication.  
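As a small illustration of how automated checks can complement, though never replace, this human validation, the sketch below (Python, assuming the open-source textstat package) estimates the reading level of an AI-drafted consent sentence; the grade-level threshold shown is an illustrative assumption rather than a regulatory standard.

    import textstat  # open-source readability metrics

    draft = ("You are invited to take part in a research study about how "
             "students use translation software in their coursework.")

    grade = textstat.flesch_kincaid_grade(draft)  # approximate US school grade
    print(f"Estimated reading grade level: {grade:.1f}")
    if grade > 8:  # illustrative plain-language target for consent materials
        print("Draft may be too complex; simplify before human review.")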
In summary, while GenAI tools offer valuable support in identifying ethical risks, developing research  
instruments, and drafting consent materials, their use must remain subordinate to human expertise and ethical  
judgment. Proper oversight, transparency, and validation processes are essential to ensure that AI-assisted study  
design aligns with principles of research integrity, participant protection, and cultural sensitivity (Weber-Wulff  
et al., 2023; Perkins & Roe, 2024).  
Transcription and Data Processing  
Table 3. Ethical Issues Related to Transcription and Data Processing

Theme: Audio/Video Transcription
Representative Tools: Zoom, MS Teams, Otter.ai
Key Findings / Observations: Efficient but may misrecognize non-native or minority speech patterns.
Ethical Issues Identified: Data privacy violations; demographic bias; reuse of recordings for training.
Recommendations / Mitigation Strategies: Use local/offline transcription; anonymize data; seek informed consent for storage/use.

Theme: Data Processing
Representative Tools: ChatGPT, Copilot, Code Interpreter
Key Findings / Observations: Can automate cleaning, imputation, or feature extraction but may introduce errors.
Ethical Issues Identified: Methodological inconsistency; data fabrication if undocumented.
Recommendations / Mitigation Strategies: Log all AI-assisted steps; validate outputs with statistical checks; disclose AI use.

Theme: Data Anonymisation
Representative Tools: Textwash, other LLM-based tools
Key Findings / Observations: Effective for structured data; less reliable for text with contextual identifiers.
Ethical Issues Identified: Re-identification risk; data misuse by third parties.
Recommendations / Mitigation Strategies: Combine AI with human review; avoid online tools for sensitive data; ensure compliance with privacy law.
The findings in Table 3 demonstrate that while Generative AI (GenAI) and related technologies greatly improve  
the efficiency of transcription and data processing, they introduce several ethical and methodological risks that  
require careful oversight. In audio and video transcription, tools such as Zoom, Microsoft Teams, and Otter.ai  
enable rapid conversion of speech to text but often misrecognize non-native or minority speech patterns, leading  
to potential demographic bias and inaccuracies in recorded data (Blodgett & O’Connor, 2017). Additionally, the  
use of cloud-based transcription services raises data privacy and consent concerns, particularly regarding the  
storage or reuse of recordings for AI model training (Council for International Organizations of Medical  
Sciences, 2016). To address these challenges, researchers should prioritize local transcription solutions, ensure  
explicit participant consent, and adhere to institutional data protection protocols.  
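A minimal sketch of such a local workflow is given below, using the open-source openai-whisper package so that recordings are transcribed on the researcher's own machine rather than uploaded to a cloud service; the file name and model size are illustrative, and consent and anonymisation obligations still apply to the resulting transcript.

    import whisper  # open-source openai-whisper package; requires ffmpeg

    model = whisper.load_model("base")             # small model, runs locally
    result = model.transcribe("interview_01.wav")  # no data leaves the machine

    with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])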
In the area of data processing, AI systems such as ChatGPT, Copilot, and Code Interpreter can automate tasks  
like data cleaning, imputation, and feature extraction, thereby reducing researcher workload. However, these  
benefits come with the risk of methodological inconsistency and potential data fabrication if outputs are not  
thoroughly documented (Perkins & Roe, 2024). Since GenAI systems operate stochastically, the same prompt  
may yield varying outputs, threatening replicability and research transparency (Weber-Wulff et al., 2023).  
Consequently, it is essential that researchers maintain detailed records of AI-assisted procedures and validate all  
processed data against recognized standards.  
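One lightweight way to keep such records is an append-only audit log. The sketch below (Python standard library only, with illustrative field names) stores the model, the prompt, and a hash of the output for every AI-assisted processing step, so that the workflow can later be disclosed and re-checked.

    import hashlib
    import json
    from datetime import datetime, timezone

    def log_ai_step(logfile: str, model: str, prompt: str, output: str) -> None:
        """Append one AI-assisted processing step to a JSON-lines audit log."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        }
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_ai_step("ai_audit_log.jsonl", "gpt-4",
                "Impute missing ages with the column median.",
                "Imputed 14 missing values; median age = 42.0")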
Finally, in data anonymisation, tools like Textwash and other LLM-based systems are useful for structured  
datasets but remain unreliable for text containing contextual identifiers, where re-identification risks persist  
(Patsakis & Lykousas, 2023). The potential for data misuse by third parties further complicates ethical  
compliance, particularly when sensitive information is transmitted to external servers. Researchers should  
therefore favor institutional or offline anonymisation systems and avoid web-based tools that claim ownership  
or reuse rights over uploaded data.  
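The sketch below illustrates the kind of local, rule-based redaction such systems perform; its two patterns are deliberately simple and would miss names and other contextual identifiers, which is precisely why the automated pass must be followed by human review.

    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with bracketed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    sample = "Contact Ann at ann.smith@example.org or +44 20 7946 0958."
    print(redact(sample))
    # The personal name "Ann" survives: a contextual identifier that
    # rule-based patterns miss and a human reviewer must catch.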
Data Analysis and Visualization  
Table 4. Ethical Issues Related to Data Analysis and Visualization

Theme: Qualitative Analysis
Representative Tools: Claude 3 Opus, ChatGPT
Key Findings / Observations: Can generate codes and themes but also fabricate supporting quotes.
Ethical Issues Identified: Data fabrication; hallucination; lack of traceability.
Recommendations / Mitigation Strategies: Require evidence-backed coding; human validation of quotes; disclose AI involvement.

Theme: Quantitative Analysis
Representative Tools: ChatGPT (Python plugin), Copilot
Key Findings / Observations: Performs statistical tasks rapidly but may enable “p-hacking.”
Ethical Issues Identified: Reproducibility issues; bias amplification; data manipulation.
Recommendations / Mitigation Strategies: Pre-register analysis plans; confirm results manually; emphasize theoretical over statistical significance.

Theme: AI Image Generation
Representative Tools: DALL-E 2, Stable Diffusion, Midjourney, Firefly
Key Findings / Observations: Generates visuals for research but may include copyrighted or misleading content.
Ethical Issues Identified: Copyright infringement; data integrity issues; privacy risks.
Recommendations / Mitigation Strategies: Use ethically trained datasets; disclose AI-generated visuals; retain editable originals for verification.
The findings in Table 4 highlight both the opportunities and ethical risks of applying Generative AI (GenAI)  
tools in data analysis and visualization. In qualitative analysis, systems such as Claude 3 Opus and ChatGPT can  
efficiently generate codes, subthemes, and summaries of large datasets, supporting faster thematic exploration.  
However, these tools often fabricate supporting quotes or misattribute textual evidence, raising concerns about  
data fabrication, hallucination, and traceability (Lee et al., 2024; Perkins & Roe, 2024a). Such issues threaten  
the credibility of qualitative research, as fabricated or unverifiable data undermine the interpretive validity of  
findings. To mitigate these problems, researchers should maintain audit trails, manually verify generated content,  
and use GenAI tools only to assist, not replace, human analysis.
In quantitative analysis, GenAI-enabled tools such as ChatGPT (Python plugin) and Copilot demonstrate  
proficiency in performing statistical operations, modeling, and coding support. While these systems can enhance  
analytical speed and accessibility, they also risk promoting “p-hacking”, bias amplification, and data  
manipulation, particularly when outputs are not rigorously validated (Head et al., 2015; Perkins & Roe, 2024b).  
The stochastic nature of AI-generated outputs further raises concerns about reproducibility, as identical prompts  
can yield differing statistical interpretations (Weber-Wulff et al., 2023). Researchers must therefore ensure  
transparency in data processing, document all AI-assisted steps, and replicate results using independent methods  
to maintain scientific reliability.  
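A concrete form of such independent replication is to recompute any AI-reported statistic with a reference implementation before it enters a manuscript. The sketch below (Python with scipy; all numbers invented for illustration) compares an assistant's claimed p-value against a direct computation.

    from scipy import stats

    group_a = [5.1, 4.9, 5.6, 5.0, 5.4, 4.8]  # illustrative data
    group_b = [4.3, 4.7, 4.1, 4.5, 4.4, 4.6]

    ai_reported_p = 0.003  # p-value claimed by the assistant, to be verified

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"Recomputed: t = {t_stat:.3f}, p = {p_value:.4f}")

    if abs(p_value - ai_reported_p) > 1e-3:
        print("Mismatch with the AI-reported value; investigate before reporting.")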
For AI image generation, tools like DALL-E 2, Stable Diffusion, Midjourney, and Firefly provide novel ways  
to visualize research findings but raise significant ethical issues, including copyright infringement, data integrity  
risks, and privacy concerns (Bendel, 2023). Generated visuals may contain copyrighted elements or misleading  
representations, compromising both scientific accuracy and legal compliance. As a result, researchers are urged  
to use ethically trained models, such as Adobe Firefly, which sources only licensed materials, and to provide
clear disclosure of any AI-generated imagery used in publications (Meyer et al., 2024).  
Programming and Code Generation  
Table 5. Ethical Issues Related to Programming and Code Generation

Theme: Automated Code Generation
Representative Tools: GitHub Copilot, OpenAI Codex, Tabnine, Claude
Key Findings / Observations: Produces runnable code but with limited accuracy and security testing.
Ethical Issues Identified: Licensing violations; insecure code; plagiarism.
Recommendations / Mitigation Strategies: Develop test suites; review for vulnerabilities; check license compliance before use.

Theme: Programming Education
Representative Tools: ChatGPT, Replit, Cursor
Key Findings / Observations: Enhances learning efficiency but may increase reliance on AI outputs.
Ethical Issues Identified: Over-reliance; reduction in independent skill development.
Recommendations / Mitigation Strategies: Encourage critical engagement; use as a scaffolding tool, not a replacement for reasoning.
The findings in Table 5 reveal that Generative AI (GenAI) tools play an increasingly influential role in  
programming and code generation, improving productivity and learning efficiency but also introducing critical  
ethical and technical concerns. In automated code generation, platforms such as GitHub Copilot, OpenAI Codex,  
Tabnine, and Claude can produce runnable code snippets across multiple programming languages, substantially  
reducing development time. However, the generated code frequently suffers from limited accuracy, insufficient  
security validation, and potential licensing violations when outputs reproduce segments of copyrighted material  
from non-free repositories (Yetiştiren et al., 2023; Poldrack et al., 2023). Moreover, the lack of transparency  
regarding model training data raises questions about plagiarism and intellectual property rights, as users may  
unknowingly distribute code derived from proprietary sources. Researchers and developers are therefore  
encouraged to conduct rigorous testing, apply secure coding practices, and verify code provenance before  
deployment (Martin, 2008; Perkins & Roe, 2024).  
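The following sketch shows what such a test suite can look like in practice; normalize_scores stands in for a hypothetical AI-generated helper, and the second test deliberately encodes an edge case (constant input) that generated code of this shape typically fails, which is exactly what a suite is meant to surface before the code enters a research pipeline.

    # Run with: pytest test_normalize.py

    def normalize_scores(scores):
        """Hypothetical AI-generated helper: min-max normalisation."""
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo) for s in scores]

    def test_output_spans_unit_interval():
        out = normalize_scores([2.0, 4.0, 6.0])
        assert min(out) == 0.0 and max(out) == 1.0

    def test_constant_input():
        # Fails against the helper above (ZeroDivisionError): the suite has
        # caught an unhandled edge case before deployment.
        assert normalize_scores([3.0, 3.0]) == [0.0, 0.0]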
In programming education, GenAI systems such as ChatGPT, Replit, and Cursor have demonstrated clear  
pedagogical benefits, notably in enhancing learning efficiency and supporting novice programmers with real-  
time feedback and debugging assistance (Kazemitabaar et al., 2023). However, this convenience comes with the  
risk of over-reliance on AI-generated solutions, which may hinder the development of independent problem-  
solving and coding skills (Uplevel, 2024). Ethical teaching practices should emphasize AI literacy, guiding  
students to critically evaluate AI-generated outputs and use them as learning aids rather than complete substitutes  
for human reasoning.  
Overall, while GenAI offers transformative advantages for software development and programming education,  
its integration must be guided by accountability, transparency, and pedagogical responsibility. Human oversight,  
proper citation of AI-generated code, and explicit awareness of licensing constraints are essential to maintain  
technical integrity and ethical compliance in AI-assisted programming (ALLEA, 2023; Weber-Wulff et al.,  
2023).  
Academic Writing, Editing, and Translation  
Table 6. Ethical Issues Related to Academic Writing, Editing, and Translation

Theme: Grant Proposal Writing
Representative Tools: ChatGPT, Claude, Copilot
Key Findings / Observations: Aids structure and style but may dilute originality and leak sensitive ideas.
Ethical Issues Identified: Confidentiality breach; idea plagiarism.
Recommendations / Mitigation Strategies: Use local instances; avoid uploading unpublished proposals; disclose AI use.

Theme: Text Generation
Representative Tools: ChatGPT, Gemini, Grok, Kahubi
Key Findings / Observations: Enables content creation but risks hallucination and academic restrictions.
Ethical Issues Identified: Misinformation; censorship; bias; academic freedom and authorship ambiguity.
Recommendations / Mitigation Strategies: Fact-check all content; ensure transparency; maintain academic freedom and authorship integrity.

Theme: Text Editing & Proofreading
Representative Tools: PaperPal, Grammarly, WordTune
Key Findings / Observations: Improves clarity but can distort meaning or remove citations.
Ethical Issues Identified: Plagiarism by paraphrase; grammatical inconsistencies.
Recommendations / Mitigation Strategies: Conduct human review after editing; lock technical terms; recheck all citations.

Theme: Translation
Representative Tools: DeepL, Google Translate, ChatGPT, BERT
Key Findings / Observations: Facilitates multilingual research but inconsistent in rare languages.
Ethical Issues Identified: Cultural/semantic bias; detector false positives.
Recommendations / Mitigation Strategies: Apply post-editing by bilingual experts; avoid AI detectors on translated text.
The findings presented in Table 6 show how Generative AI (GenAI) systems have become integral to academic  
writing, editing, and translation, offering substantial productivity gains while simultaneously introducing new  
ethical vulnerabilities. In grant proposal writing, tools such as ChatGPT, Claude, and Copilot assist researchers  
in improving structure, tone, and stylistic consistency. However, these tools also risk breaching confidentiality,  
particularly when users upload unpublished proposals or sensitive intellectual property to cloud-based servers  
(Perkins & Roe, 2024). Furthermore, over-reliance on AI-driven phrasing may dilute originality and lead to  
inadvertent idea plagiarism (Moulin, 2023). To mitigate such risks, researchers should rely on local AI  
deployments, avoid sharing confidential drafts online, and explicitly disclose AI use in funding submissions  
(ALLEA, 2023).  
In text generation, models like ChatGPT, Gemini, Grok, and Kahubi can generate fluent academic prose, aiding  
in idea development and draft expansion. Nonetheless, their tendency to hallucinate references, promote  
censorship bias, and create authorship ambiguity poses significant threats to academic integrity (Weber-Wulff  
et al., 2023). As GenAI models are trained on unevenly distributed data, they can unconsciously reproduce  
ideological or linguistic biases that undermine academic freedom and transparency (Perkins & Roe, 2024). To  
address this, all AI-generated content should undergo human fact-checking and authorship verification, ensuring  
that credit attribution remains accurate and verifiable.  
For text editing and proofreading, AI tools such as PaperPal, Grammarly, and WordTune enhance linguistic  
accuracy and stylistic fluency but may inadvertently distort meaning or remove essential citations, leading to  
plagiarism by paraphrase or grammatical inconsistencies (Lorenz et al., 2023). Ethical practice requires post-  
editing by human experts, particularly in technical disciplines where terminology precision is critical. Scholars  
should also maintain careful control over citation management to preserve traceability and accountability in  
revised texts.  
Finally, in translation, systems such as DeepL, Google Translate, ChatGPT, and BERT play a pivotal role in  
facilitating multilingual research dissemination, especially for scholars writing in non-dominant languages.  
However, these tools remain inconsistent in low-resource or rare languages and may introduce cultural or  
semantic biases, resulting in false positives in AI detection systems (Castro et al., 2024). Consequently, translated  
texts should undergo post-editing by bilingual experts, and researchers should avoid AI detection tools when  
evaluating translated content to prevent unjustified academic penalties.  
Overall, GenAI tools are transforming academic writing by promoting accessibility, efficiency, and multilingual  
collaboration. Yet, responsible integration requires a firm commitment to ethical transparency, originality,  
authorship integrity, and cultural sensitivity to ensure that technological assistance strengthens, rather than
undermines, the credibility of scholarly communication (ALLEA, 2023; Perkins & Roe, 2024; Weber-Wulff et
al., 2023).  
RECOMMENDATIONS  
Based on the synthesis of findings across all research stages, from literature gathering to academic writing,
this section outlines key recommendations for the ethical, transparent, and effective integration of Generative  
AI (GenAI) tools in academic research. These recommendations are organized around four guiding principles:  
accountability, accuracy, authorship integrity, and ethical governance.  
Promote Human Oversight and Accountability  
While GenAI systems such as ChatGPT, Claude, and Copilot offer powerful assistance in research design, data  
processing, and manuscript writing, they should function as supporting tools rather than autonomous decision-  
makers. Human oversight must remain central in every stage of the research workflow, from verifying AI-
generated references to reviewing analytical outputs and editing manuscripts (Perkins & Roe, 2024; Weber-  
Wulff et al., 2023). Institutions should develop clear AI accountability frameworks, requiring researchers to  
document when and how AI systems are used, along with human verification steps undertaken to ensure research  
integrity.  
Ensure Accuracy, Transparency, and Reproducibility  
Given the well-documented risks of AI hallucinations, fabricated citations, and inconsistent reproducibility in  
GenAI outputs (Liu et al., 2023; Head et al., 2015), researchers must apply rigorous validation strategies. These  
include cross-checking references, re-analyzing AI-derived data manually, and retaining transparent records of  
prompt histories and outputs. Journals and funding bodies should mandate disclosure statements specifying the  
extent and purpose of AI use in research and writing.  
Safeguard Intellectual Property and Data Privacy  
Researchers must exercise caution when uploading materials, particularly unpublished manuscripts,
confidential data, or proprietary datasets, to GenAI platforms. Many tools’ terms of service permit data reuse
for model training, creating risks of intellectual property (IP) transfer and data breaches (Bakos et al., 2014;  
Journal of Urology, 2025). To mitigate this, institutions should prioritize local or institutionally hosted AI  
systems that comply with data protection regulations such as GDPR and ensure explicit informed consent when  
human subjects’ data are involved.  
Reinforce Ethical Research Design and Participant Protection  
In study design and data collection, GenAI may inadvertently generate biased or culturally insensitive survey  
questions or misinterpret ethical nuances (Perni et al., 2023). Ethics committees should therefore maintain  
human-led reviews and adopt AI ethics checklists to assess potential harms related to bias, discrimination, or  
participant autonomy. Moreover, informed consent statements generated with AI must undergo human  
validation to avoid misinformation or vague disclosures (Currie et al., 2023).  
Support Responsible AI Use in Education and Writing  
In teaching and academic writing, the overuse of GenAI tools can hinder the development of independent  
reasoning and critical thinking (Kazemitabaar et al., 2023; Uplevel, 2024). Educators should integrate AI literacy  
programs into curricula, emphasizing critical engagement, bias recognition, and citation ethics. In academic  
publishing, researchers should treat GenAI outputs as assistive drafts, not as final text, and perform manual fact-  
checking and sensitivity review for topics involving historical trauma, marginalized groups, or politically  
sensitive issues (Resnik & Hosseini, 2023; Waddington, 2024).  
Develop Institutional and Disciplinary Guidelines  
Universities, funding agencies, and publishers should collaboratively develop discipline-specific AI use policies  
that balance innovation with ethical responsibility. These policies should define acceptable use cases (e.g.,  
grammar correction, summarization), mandate AI use disclosure, and prohibit plagiarism or AI-assisted peer  
review without consent (COPE, 2023). A global standard aligned with the ALLEA (2023) Code of Research  
Integrity could promote consistency in addressing authorship, accountability, and transparency concerns across  
research contexts.  
Encourage Collaboration Between AI Developers and Researchers  
Finally, fostering partnerships between AI developers and academic institutions can promote ethical model  
training, reduce data bias, and improve domain-specific accuracy. Co-developing open-access AI systems  
trained on peer-reviewed scientific literature, rather than uncontrolled internet data, could enhance reliability
while safeguarding research integrity (Meyer et al., 2024).  
In summary, while Generative AI holds immense potential to advance research efficiency, creativity, and  
accessibility, its ethical deployment demands continuous vigilance, transparency, and education. The future of  
responsible AI-assisted scholarship depends on balancing technological innovation with human judgment,  
ensuring that these tools enhance, not erode, the values of integrity, originality, and intellectual rigor that
define academic research.  
CONCLUSION  
This study provides a comprehensive examination of how Generative Artificial Intelligence (GenAI) is reshaping  
the research landscape across multiple stages of the academic process, from literature gathering and study
design to data analysis, code generation, and academic writing. Drawing on a practice-informed rapid review  
and a series of tool-based probes, the findings highlight a clear duality: while GenAI offers unprecedented  
efficiency, accessibility, and creative support, it simultaneously raises profound ethical, methodological, and  
epistemological challenges.  
Across the research lifecycle, GenAI demonstrates strong capabilities in automating repetitive tasks, enhancing  
multilingual collaboration, and improving communication for non-native English speakers. However, recurring  
issues such as fabricated citations, data privacy risks, bias reinforcement, and authorship ambiguity underscore  
the persistent need for human oversight and ethical governance. Particularly concerning are the threats to  
academic freedom, intellectual property integrity, and research transparency, which emerge from the opaque and  
probabilistic nature of large language models (Perkins & Roe, 2024; Weber-Wulff et al., 2023).  
The study also shows that ethical risks are unevenly distributed across the research process. Early stages such as  
literature gathering and summarization suffer from misinformation and copyright transfer risks, while later stages  
like data analysis and image generation reveal reproducibility issues and potential data fabrication. In writing  
and translation, GenAI supports stylistic and linguistic refinement but risks diluting originality and introducing  
semantic or cultural bias. These findings emphasize that the utility of GenAI must be balanced with principled  
restraint and critical reflection.  
Ultimately, the responsible use of GenAI in research depends on embedding transparency, accountability, and  
human critical judgment at every step of the academic workflow. Institutions, publishers, and funding agencies  
must implement clear policies, disclosure requirements, and AI literacy programs to guide ethical use. As AI  
continues to evolve, researchers must move beyond simple adoption toward ethical adaptation, ensuring that  
technology complements rather than compromises the rigor and trustworthiness of scientific inquiry.  
In essence, GenAI should be viewed not as a replacement for human intellect, but as a catalyst for more  
reflective, equitable, and ethically grounded scholarship. Only through continuous dialogue, critical evaluation,  
and collective responsibility can the academic community harness AI’s transformative potential while  
safeguarding the fundamental values of integrity, originality, and academic freedom that underpin the pursuit of  
knowledge.  
REFERENCES  
1. Adelani, D. I. (2024). Meta’s AI translation model embraces overlooked languages. Nature, 630(8018).
2. Adobe. (2024). AI ethics: Everything you need to know. Our approach to Generative AI with Adobe
3. ALLEA (All European Academies). (2023). The European Code of Conduct for Research Integrity (2023 revised edition).
4. Bakos, Y., Marotta-Wurgler, F., & Trossen, D. R. (2014). Does anyone read the fine print? Consumer attention to standard-form contracts. Journal of Legal Studies, 43(1), 1-35.
5. Baron, R. (2024). AI editing: Are we there yet? Science Editor, 47(3), 78-82.
6. Beall, J. (2024). Beall’s list of potential predatory journals and publishers. https://beallslist.net  
7. Bendel, O. (2023). Image synthesis from an ethical perspective. AI & Society.
8. Bijker, R., Merkouris, S. S., Dowling, N. A., & Rodda, S. N. (2024). ChatGPT for automated qualitative  
research: Content analysis. Journal of Medical Internet Research, 26(1), e59050.  
9. Blodgett, S. L., & O’Connor, B. (2017). Racial disparity in natural language processing: A case study of  
social media African-American English [arXiv:1707.00061]. arXiv. http://arxiv.org/abs/1707.00061  
10. Cao, X., & Yousefzadeh, R. (2023). Extrapolation and AI transparency: Why machine learning models  
should reveal when they make decisions beyond their training. Big Data & Society, 10(1).  
11. Chenneville, T., Duncan, B., & Silva, G. (2024). More questions than answers: Ethical considerations at  
the intersection of psychology and generative artificial intelligence. Translational Issues in Psychological  
Science, 10(2), 152-178. https://doi.org/10.1037/tps0000400
12. Committee on Publication Ethics (COPE). (2023). Authorship and AI tools: Position statement.
13. Committee on Publication Ethics & Scientific, Technical & Medical Publishers. (2022). Paper mills  
research. https://doi.org/10.24318/jtbG8IHL  
14. Conroy, G. (2024). Do AI models produce more original ideas than researchers? Nature News.  
15. Copeland, D. E., Radvansky, G. A., & Goodwin, K. A. (2009). A novel study: Forgetting curves and the  
reminiscence bump. Memory, 17(3), 323-336. https://doi.org/10.1080/09658210902729491
16. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic  
integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239.
17. Council for International Organizations of Medical Sciences (CIOMS). (2016). International ethical  
guidelines for health-related research involving humans (4th ed.). World Health Organization.  
18. Currie, G., Robbie, S., & Tually, P. (2023). ChatGPT and patient information in nuclear medicine: GPT-  
3.5 versus GPT-4. Journal of Nuclear Medicine Technology, 51(4), 307-313.
19. DeepL. (n.d.). Languages included in DeepL Pro. https://support.deepl.com/hc/en-us/articles/360019925219-Languages-included-in-DeepL-Pro
20. Else, H. (2021). “Tortured phrases” give away fabricated research papers. Nature, 596(7872), 328-329.
21. Else, H. (2022). Paper-mill detector tested in push to stamp out fake science. Nature, 612(7939), 386–  
22. European Commission Directorate-General for Research and Innovation. (2024). Living guidelines on the responsible use of generative AI in research. https://research-and-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en
23. European Commission. (2018). Artificial Intelligence for Europe. Brussels: European Union  
Publications. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2018:237:FIN  
24. Foltýnek, T., Dlabolová, D., Anohina-Naumeca, A., Razı, S., Kravjar, J., Kamzola, L., Guerrero-Dib, J., Çelik, Ö., & Weber-Wulff, D. (2020). Testing of support tools for plagiarism detection. International Journal for Educational Integrity, 16. https://doi.org/10.1007/s40979-020-00192-4
25. Foltýnek, T., Bjelobaba, S., Glendinning, I., Khan, Z. R., Santos, R., Pavletic, P., & Kravjar, J. (2023).  
ENAI recommendations on the ethical use of artificial intelligence in education. International Journal for  
Educational Integrity, 19(12). https://doi.org/10.1007/s40979-023-00133-4  
26. Fong, E. A., & Wilhite, A. W. (2017). Authorship and citation manipulation in academic research. PLOS ONE.
27. Friðriksdóttir, S. R., & Einarsson, R. H. (2024). Gendered grammar or ingrained bias? Exploring gender  
bias in Icelandic language models. In Proceedings of the 2024 Joint International Conference on  
Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 7596–  
28. Zhang, H., Wu, C., Xie, J., Lyu, Y., Cai, J., & Carroll, J. M. (2024). Redefining qualitative analysis in the AI era: Utilizing ChatGPT for efficient thematic analysis. arXiv. https://doi.org/10.48550/arXiv.2309.10771
29. Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2023).  
Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence  
output detector, plagiarism detector, and blinded human reviewers. NPJ Digital Medicine, 6, Article 75.  
30. Gao, D., Chen, K., Chen, B., Dai, H., Jin, L., Jiang, W., Ning, W., Yu, S., Xuan, Q., Cai, X., Yang, L.,  
& Wang, Z. (2024). LLMs-based machine translation for e-commerce. Expert Systems with Applications.
31. Ghosh, S., & Caliskan, A. (2023). ChatGPT perpetuates gender bias in machine translation and ignores non-gendered pronouns: Findings across Bengali and five other low-resource languages. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 901-912.
32. Google Blog. (2024). 110 new languages are coming to Google Translate.  
33. Gray, A. (2024). ChatGPT contamination: Estimating the prevalence of LLMs in the scholarly literature.  
34. Grudniewicz, A., Moher, D., Cobey, K. D., Bryson, G. L., Cukier, S., Allen, K., Ardern, C., Balcom, L.,  
Barros, T., Berger, M., Ciro, J. B., Cugusi, L., Donaldson, M. R., Egger, M., Graham, I. D., Hodgkinson,  
M., Khan, K. M., Mabizela, M., Manca, A., … Lalu, M. M. (2019). Predatory journals: No definition, no  
defence. Nature, 576, 210-212. https://doi.org/10.1038/d41586-019-03759-y
35. Hacker, P., Mittelstadt, B., Borgesius, F. Z., & Wachter, S. (2024). Generative discrimination: What  
happens when generative AI exhibits bias, and what can be done about it. arXiv.  
36. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLOS Biology, 13(3), e1002106.
37. Hennessy, M., Dennehy, R., Doherty, J., & O’Donoghue, K. (2022). Outsourcing transcription:  
Extending ethical considerations in qualitative research. Qualitative Health Research, 32(7), 1197-1204.
38. Hosseini, M., & Horbach, S. P. J. M. (2023). Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Research Integrity and Peer Review. https://doi.org/10.1186/s41073-023-00133-5
39. Ismail, F., Crawford, J., Tan, S., Rudolph, J., Tan, E., Seah, P., Tang, F. X., Ng, F., Kaldenbach, L. V.,  
Naidu, A., Stafford, V., & Kane, M. (2024). Artificial intelligence in higher education database (AIHE  
V1): Introducing an open-access repository. Journal of Applied Learning & Teaching, 7, Article 1.  
12a. Journal of Urology. (2025). Home page. https://www.auajournals.org/journal/juro  
12b. Kahubi.com. (2023). AI for research. https://kahubi.com/  
40. Kazemitabaar, M., Chow, J., Ma, C. K. T., Ericson, B. J., Weintrop, D., & Grossman, T. (2023). Studying the effect of AI code generators on supporting novice learners in introductory programming. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
41. Keith, K. D. (2013). Serial position effect. In K. D. Keith (Ed.), The encyclopedia of cross-cultural  
psychology (Vol. 3, p. 1155). Wiley. https://doi.org/10.1002/9781118339893.wbeccp482  
42. Kendall, G., & da Teixeira, J. A. (2024). Risks of abuse of large language models, like ChatGPT, in  
scientific publishing: Authorship, predatory publishing, and paper mills. Learned Publishing, 37(1), 55–  
43. Kirova, V. D., Ku, C. S., Laracy, J. R., & Marlowe, T. J. (2023). The ethics of artificial intelligence in  
the era of generative AI. Journal of Systemics, Cybernetics and Informatics, 21(4), 42-50.
44. Kleinberg, B., Davies, T., & Mozes, M. (2022). TextWash: Automated open-source text anonymisation.
45. Kobak, D., González-Márquez, R., Horvát, E.-Á., & Lause, J. (2024). Delving into ChatGPT usage in  
academic writing through excess vocabulary. arXiv:2406.07016. http://arxiv.org/abs/2406.07016  
46. Kurz, C., & Weber-Wulff, D. (2023). Maschinelles Lernen: Nicht so brillant wie von manchen erhofft [Machine learning: Not as brilliant as some had hoped].
erhofft/  
47. Lee, V. V., van der Lubbe, S. C. C., Goh, L. H., & Valderas, J. M. (2024). Harnessing ChatGPT for  
thematic analysis: Are we ready? Journal of Medical Internet Research, 26, e54974.  
48. Liang, W., Zhang, Y., Wu, Z., Lepp, H., Ji, W., Zhao, X., Cao, H., Liu, S., He, S., Huang, Z., Yang, D.,
Potts, C., Manning, C. D., & Zou, J. Y. (2024). Mapping the increasing use of LLMs in scientific papers.  
49. Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2023). Lost in the  
middle: How language models use long contexts. Transactions of the Association for Computational  
Linguistics, 12, 157-173. https://doi.org/10.1162/tacl_a_00638
50. Lorenz, P., Perset, K., & Berryhill, J. (2023). Initial policy considerations for generative artificial intelligence (OECD Artificial Intelligence Papers, No. 1). OECD Publishing.
51. Martin, R. (2008). Clean code: A handbook of agile software craftsmanship. Prentice Hall.
52. Meyer, J. G., Urbanowicz, R. J., Martin, P. C. N., O’Connor, K., Li, R., Peng, P.-C., Bright, T. J.,  
Tatonetti, N., Won, K. J., Gonzalez-Hernandez, G., & Moore, J. H. (2023). ChatGPT and large language  
models in academia: Opportunities and challenges. BioData Mining, 16(1), 20.  
53. Meyer, J., Padgett, N., Miller, C., & Exline, L. (2024). Public Domain 12M: A highly aesthetic image-  
text dataset with novel governance mechanisms. arXiv. http://arxiv.org/abs/2410.23144  
54. Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO.  
55. Mrowinski, M. J., Fronczak, P., Fronczak, A., Ausloos, M., & Nedic, O. (2017). Artificial intelligence in peer review: How can evolutionary computation support journal editors? PLOS ONE, 12(9), Article e0184711.
56. No Language Left Behind (NLLB) Team. (2024). Scaling neural machine translation to 200 languages.  
57. Patsakis, C., & Lykousas, N. (2023). Man vs the machine in the struggle for effective text anonymisation  
in the age of large language models. Scientific Reports, 13, 16026. https://doi.org/10.1038/s41598-023-  
58. Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic  
era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2).  
59. Perkins, M., & Roe, J. (2024a). Academic publisher guidelines on AI usage: A ChatGPT-supported thematic analysis (Version 2; peer review: 3 approved, 1 approved with reservations). F1000Research.
60. Perkins, M., & Roe, J. (2024b). Generative AI tools in academic research: Applications and implications for qualitative and quantitative research methodologies. arXiv.
61. Perkins, M., & Roe, J. (2024c). The use of generative AI in qualitative analysis: Inductive thematic analysis with ChatGPT. Journal of Applied Learning & Teaching, 7(1).
62. Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2024a). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. Journal of Academic Ethics, 22(1), 89–113. https://doi.org/10.1007/s10805-023-09492-6
63. Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024b). Simple techniques to bypass GenAI text detectors: Implications for inclusive education. International Journal of Educational Technology in Higher Education, 21. https://doi.org/10.1186/s41239-024-00487-w
64. Perni, S., Lehmann, L. S., & Bitterman, D. S. (2023). Patients should be informed when AI systems are used in clinical trials. Nature Medicine, 29(8), 1890–1891. https://doi.org/10.1038/s41591-023-02367-8
65. Poldrack, R. A., Lu, T., & Beguš, G. (2023). AI-assisted coding: Experiments with GPT-4. arXiv.  
66. Resnik, D. B., & Hosseini, M. (2024). The ethics of using artificial intelligence in scientific research:  
New guidance needed for a new tool. AI and Ethics. https://doi.org/10.1007/s43681-024-00493-8  
67. Retraction Watch. (2024, July 22). Giant rat penis redux: AI-generated diagram, errors lead to retraction.
68. Sakana.ai. (2024). The AI Scientist: Towards fully automated open-ended scientific discovery.  
69. Shiraishi, M., Tomioka, Y., Miyakuni, A., Moriwaki, Y., Yang, R., Oba, J., & Okazaki, M. (2024). Generating informed consent documents related to blepharoplasty using ChatGPT. Ophthalmic Plastic and Reconstructive Surgery, 40(3), 316–320. https://doi.org/10.1097/IOP.0000000000002574
70. Si, C., Yang, D., & Hashimoto, T. (2024). Can LLMs generate novel research ideas? A large-scale human  
study with 100+ NLP researchers. arXiv. https://arxiv.org/abs/2409.04109  
71. Solohubov, I., Moroz, A., Tiahunova, M. Y., Kyrychek, H. H., & Skrupsky, S. (2023). Accelerating  
software development with AI: Exploring the impact of ChatGPT and GitHub Copilot. In S. Papadakis  
(Ed.), Proceedings of the 11th Workshop on Cloud Technologies in Education (CTE 2023). https://ceur-  
72. Sotolář, O., Plhák, J., & Šmahel, D. (2021). Towards personal data anonymization for social messaging. In K. Ekštein, F. Pártl, & M. Konopík (Eds.), Text, speech, and dialogue (pp. 281–292). Springer International Publishing.
73. Spector-Bagdady, K. (2023). Generative-AI-generated challenges for health data research. The American Journal of Bioethics, 23(10), 1–5. https://doi.org/10.1080/15265161.2023.2252311
74. Steinfeld, N. (2016). I agree to the terms and conditions: (How) do users read privacy policies online? An eye-tracking experiment. Computers in Human Behavior, 55(Part B), 992–1000.
75. Suleiman, A., von Wedel, D., Munoz-Acuna, R., Redaelli, S., Santarisi, A., Seibold, E.-L., Ratajczak,  
N., Kato, S., Said, N., Sundar, E., Goodspeed, V., & Schaefer, M. S. (2024). Assessing ChatGPT’s ability  
to emulate human reviewers in scientific research: A descriptive and qualitative approach. Computer  
Methods and Programs in Biomedicine, 254, 108313. https://doi.org/10.1016/j.cmpb.2024.108313  
76. Šupak Smolčić, V. (2013). Salami publication: Definitions and examples. Biochemia Medica, 23(3).
77. Tenzer, H., Feuerriegel, S., & Piekkari, R. (2024). AI machine translation tools must be taught cultural  
differences too. Nature, 630, 820. https://doi.org/10.1038/d41586-024-02091-4  
78. Tong, S., Mao, K., Huang, Z., Zhao, Y., & Peng, K. (2024). Automating psychological hypothesis generation with AI: When large language models meet causal graph. Humanities and Social Sciences Communications.
79. UNESCO IRCAI. (2024). Systematic prejudices: Bias against women and girls in large language models  
(Report CI/DIT/2024/GP/01). International Research Centre on Artificial Intelligence.  
80. UNESCO. (n.d.). Languages. UNESCO World Atlas of Languages. Retrieved September 13, 2024.
81. University of Leeds. (2025). Policy on the proof reading of student work to be submitted for assessment.  
82. Uplevel. (2024). Can generative AI improve developer productivity?
83. Van Noorden, R., & Perkel, J. M. (2023). AI and science: What 1,600 researchers think. Nature, 621, 672–675.
85. Waddington, L. (2024). Navigating academic integrity in the age of GenAI: A historian’s perspective on censorship. International Center for Academic Integrity (Blog).
86. Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O.,  
Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal  
for Educational Integrity, 19(1), 26. https://doi.org/10.1007/s40979-023-00146-z  
87. Wilkes, L., Cummings, J., & Haigh, C. (2015). Transcriptionist saturation: Knowing too much about sensitive health and social data. Journal of Advanced Nursing, 71(2), 295–303.
88. Yan, L., Echeverria, V., Fernandez-Nieto, G. M., Jin, Y., Swiecki, Z., Zhao, L., Gašević, D., & Martinez-Maldonado, R. (2024). Human–AI collaboration in thematic analysis using ChatGPT: A user study and design recommendations. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1–7). https://doi.org/10.1145/3613905.3650732
89. Yetiştiren, B., Özsoy, I., Ayerdem, M., & Tüzün, E. (2023). Evaluating the code quality of AI-assisted  
code generation tools: An empirical study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT.  
90. Zhang, H., Wu, C., Xie, J., Lyu, Y., Cai, J., & Carroll, J. M. (2024). Redefining qualitative analysis in the AI era: Utilizing ChatGPT for efficient thematic analysis. arXiv.