Ethical Challenges in the Era of Generative AI: Insights from a Practice-Informed Rapid Review
Authors
Faculty of Foreign Languages, University of Labour and Social Affairs (Vietnam)
Article Information
DOI: 10.47772/IJRISS.2025.910000666
Subject Category: Education
Volume/Issue: 9/10 | Page No: 8146-8162
Publication Timeline
Submitted: 2025-10-26
Accepted: 2025-11-04
Published: 2025-11-20
Abstract
This study investigates the integration of Generative AI (GenAI) into academic research, highlighting both its transformative potential and the ethical, methodological, and epistemological challenges it introduces. While GenAI enhances efficiency in tasks like text generation, data analysis, and translation, it raises serious concerns around authorship, originality, transparency, data privacy, and accountability. Through a rapid review of literature from 2022 to 2025, guided by the European Code of Conduct for Research Integrity, the study identifies recurring risks such as algorithmic bias, fabricated citations, and diminished scholarly authorship. In response, it proposes a five-principle ethical framework—human oversight, accuracy, accountability, data protection, and institutional governance—and emphasizes that responsible GenAI use requires not only technical safeguards but also ethical literacy, critical reflection, and transparent disclosure. Ultimately, GenAI should serve as a collaborative partner that augments human creativity while preserving the integrity and rigor of scientific inquiry.
Keywords
Generative Artificial Intelligence, research ethics