The Ethical Implications of Artificial Intelligence Use in Education: A Student Perspective
Bernardo Ramos
Pamantasan ng Lungsod ng Maynila
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0471
Received: 06 August 2025; Accepted: 11 August 2025; Published: 10 September 2025
ABSTRACT
The rapid integration of Artificial Intelligence (AI) tools into academic settings has necessitated a critical examination of their ethical implications from a student’s viewpoint. Employing a sequential explanatory mixed-methods design, this study explored the perceptions of 100 college students regarding the use of AI in their academic work. An online survey was used to quantify usage patterns, while focus group discussions (FGDs) and interviews provided rich qualitative data. Key findings reveal a complex tension between AI’s perceived convenience and students’ concerns about academic integrity, blurred lines of authorship, and unequal access. Thematic analysis indicated a consensus that AI use for tasks like brainstorming and grammar correction is acceptable, but full reliance on AI for content generation is viewed as academically dishonest. The discussion interprets these findings through utilitarian, deontological, and virtue ethics lenses, highlighting the need for clear institutional policies, enhanced digital literacy training, and open dialogue to navigate the evolving educational landscape responsibly.
INTRODUCTION
The integration of Artificial Intelligence (AI) tools, such as ChatGPT and Grammarly, into academic settings is rapidly transforming how students engage with learning and complete assignments. While these technologies offer undeniable benefits in academic writing, research, and general learning support, their increasing prevalence has concurrently amplified a range of ethical concerns. Key issues include the complexities of authorship, maintaining academic honesty, the potential for overdependence, and disparities in access to these advanced tools. As AI continues to embed itself within educational paradigms, a critical examination of student perceptions regarding its ethical implications becomes imperative. This study, therefore, aims to comprehensively explore how students understand and navigate the ethical boundaries and broader impacts of AI in their academic endeavors.
This paper’s primary objectives are to:
- Assess the frequency and purpose of AI tool usage among college students.
- Identify students’ perceptions of the ethical boundaries and acceptable uses of AI in academic work.
- Explore how students’ understanding of authorship and academic honesty is impacted by AI.
- Examine student views on equity and institutional policy related to AI in education.
REVIEW OF RELATED LITERATURE
Existing scholarship has extensively explored the ethical dimensions of AI integration within educational contexts. Floridi et al. (2018) underscore the paramount importance of transparency and accountability in AI systems, particularly given their influence on learning outcomes. Similarly, Luckin et al. (2016) highlight AI’s potential for personalized learning but caution against excessive reliance, which could inadvertently stifle students’ critical thinking and creative capacities.
Within the Philippine context, Garcia and Santos (2023) have documented the burgeoning adoption of AI tools among college students, drawing attention to the moral ambiguities surrounding their use in assessments and academic writing. Complementing these insights, Johnson (2022) argues that despite AI’s utility as an educational aid, the absence of robust institutional guidelines and ethical frameworks has led to inconsistent practices among both students and faculty.
More recent research further emphasizes these challenges. Wang (2023) conducted a comprehensive review of the psychological impact of generative AI on students, finding a significant increase in stress and anxiety related to navigating AI’s ethical use while also noting its benefits for productivity. Similarly, Chua and Lim (2024) explore the concept of “student agency” in the age of AI, arguing that effective policies must empower students to make informed ethical choices rather than simply imposing bans. Collectively, these studies emphasize the urgent need to understand student perspectives to inform the development of equitable and responsible AI usage policies in education.
METHOD
Design
This study employed a sequential explanatory mixed-methods design, integrating both quantitative (online survey) and qualitative (focus group discussions and interviews) approaches to gain a comprehensive understanding of student perceptions.
Participants
A total of 100 college students, representing diverse disciplines and year levels, participated in the study. Participants were selected using purposive sampling to ensure a broad representation of academic backgrounds and experiences with AI tools.
Instruments
- Online Survey Questionnaire: This instrument comprised both multiple-choice and open-ended questions designed to gather data on AI tool usage patterns, frequency, purposes, and perceived ethical implications. Sample questions included:
- “Have you ever used AI tools (e.g., ChatGPT, Grammarly) in your academic work?” (Yes/No)
- “For what purposes do you use AI tools?” (Check all that apply: writing assistance, grammar checking, idea generation, coding, others)
- “How often do you use AI tools for academic tasks?” (Never, Rarely, Sometimes, Often, Always)
- “Do you believe using AI tools compromises academic honesty?” (Likert scale: Strongly disagree to Strongly agree)
- “In your opinion, when is it ethically acceptable to use AI in schoolwork?”
- “Do you think access to AI tools is equal among students? Why or why not?”
- “Should there be school policies regulating AI usage? Please explain.”
- Focus Group Discussion (FGD) Guide Questions: These semi-structured questions facilitated in-depth qualitative data collection:
- “Describe your typical frequency of AI tool use for academic tasks.”
- “What do you perceive as the primary benefits and risks of integrating AI into your schoolwork?”
- “Under what specific circumstances do you consider AI use ethically acceptable or unacceptable?”
- “How does the use of AI impact your understanding of academic honesty and personal authorship?”
- “Do you believe that institutional policies regarding AI use are necessary? Please elaborate.”
Procedure
Participants initially completed the online survey questionnaire. Following the survey, a subset of students was invited to participate in small FGDs, where they engaged in detailed discussions regarding the ethical dimensions of AI use across various academic tasks, including writing assignments, research projects, and exam preparation. Additionally, individual interviews were conducted with selected respondents who offered particularly rich or distinctive insights.
RESULTS
The survey findings indicated that a substantial majority (82%) of students had utilized AI tools for academic purposes at least once. The most frequently reported uses included writing assistance (65%), idea generation (48%), and grammar checking (43%).
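For illustration only, the short pandas sketch below shows how usage shares like those reported above can be tallied from a raw survey export. The column names, delimiter, and responses are hypothetical and do not reproduce the study’s actual data or analysis procedure.

```python
# Illustrative sketch only: hypothetical column names and made-up responses,
# not the study's actual dataset or analysis script.
import pandas as pd

# Each row is one respondent; "purposes" holds their multi-select answers
# as a semicolon-separated string (assumed export format).
responses = pd.DataFrame({
    "used_ai": ["Yes", "Yes", "No", "Yes", "Yes"],
    "purposes": [
        "writing assistance; grammar checking",
        "idea generation",
        "",
        "writing assistance; idea generation",
        "grammar checking",
    ],
})

n = len(responses)

# Share of students who report having used AI tools at least once.
used_share = (responses["used_ai"] == "Yes").mean() * 100
print(f"Used AI tools: {used_share:.0f}%")

# Tally each multi-select purpose across all respondents.
purpose_counts = (
    responses["purposes"]
    .str.split(";")        # split the multi-select string into a list
    .explode()             # one purpose per row
    .str.strip()
    .replace("", pd.NA)    # drop respondents with no purposes listed
    .dropna()
    .value_counts()
)

# Express each purpose as a percentage of all respondents.
print((purpose_counts / n * 100).round(0))
```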
Thematic analysis of the qualitative data from FGDs and interview transcripts revealed four prominent recurring themes:
- Convenience vs. Integrity: While many students acknowledged AI’s utility in alleviating academic pressure, they simultaneously expressed considerable apprehension regarding the potential for inadvertently crossing ethical boundaries, particularly when submitting AI-generated content as their original work.
- Acceptable Use Limits: A consensus emerged among students that employing AI for brainstorming and grammar correction was ethically permissible. However, they largely viewed complete reliance on AI for generating essays or solving complex problems as academically dishonest.
- Access and Equity: A notable concern raised by some students pertained to the unequal access to AI tools, citing both technological infrastructure and financial limitations as significant barriers.
- Blurred Lines of Authorship: A pervasive issue revolved around the concept of authorship, encapsulated by the question: “Whose work is it really if AI helps write it?” This highlighted a fundamental challenge to traditional notions of intellectual ownership.
Furthermore, ethical perceptions appeared to vary by year level: senior students generally demonstrated a more critical and reflective stance on AI’s ethical implications than their first-year counterparts.
DISCUSSION
Student perspectives, as revealed in this study, underscore the inherent complexity of ethically integrating AI into educational practices. Applying established ethical frameworks provides further insight:
- Utilitarian Lens: From a utilitarian viewpoint, students clearly recognized the immediate benefits of AI, such as enhanced efficiency and academic support. However, they also critically questioned whether these short-term gains are ultimately outweighed by potential long-term consequences, including increased dependency and a reduction in genuine learning and skill development. This tension prompts crucial questions about balancing immediate advantages with overarching educational objectives.
- Deontological View: A deontological perspective was evident in students who prioritized adherence to academic rules and established policies. These students largely believed that exceeding acceptable AI usage limits constituted a violation of their duties and responsibilities as learners. This framework reinforces the necessity for institutional clarity, as moral choices, in this view, should be guided by principles regardless of the perceived outcome.
- Virtue Ethics: The influence of personal values, including honesty, diligence, and humility, was apparent through the lens of virtue ethics. Students who articulated a strong commitment to self-improvement typically framed AI as a supplementary tool to augment their learning, rather than a substitute for their own intellectual effort.
The study identified several key risks, including the erosion of academic integrity due to AI misuse, the pervasive confusion surrounding authorship and intellectual ownership, and the exacerbation of the digital divide, which disproportionately disadvantages students with limited access to advanced AI technologies. These challenges highlight an urgent imperative for educational institutions to proactively:
- Establish clear, comprehensive ethical guidelines for AI use.
- Invest significantly in digital literacy education to equip students with critical evaluation skills.
- Facilitate open and ongoing dialogues among students, faculty, and administrators regarding AI’s evolving role in learning.
Encouraging active student participation in the policy-making process is also crucial to ensure that regulations are aligned with the lived experiences and genuine needs of learners.
CONCLUSION
This study unequivocally emphasizes the pressing need for educational institutions to proactively address the multifaceted ethical implications arising from AI integration in academic settings. As AI rapidly transitions from a novel tool to a fundamental component of modern education, fostering its responsible and equitable use becomes paramount. The implementation of clear institutional policies, comprehensive student training programs, and sustained open dialogue are essential strategies to effectively balance technological innovation with academic integrity. Furthermore, cultivating a robust culture of ethical awareness will empower students to leverage AI in ways that genuinely support and enhance, rather than diminish or replace, their intrinsic learning processes. Future research should expand to include the perspectives of educators, administrators, and policy-makers to facilitate the development of a more holistic, enforceable, and adaptable ethical framework that can effectively navigate the rapidly evolving digital landscape of education.
REFERENCES
- Chua, R., & Lim, J. (2024). Student agency and AI: Navigating the new academic frontier. Journal of Educational Technology and Society, 27(1), 121–135.
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
- Garcia, A., & Santos, M. (2023). Ethical Ambiguities in AI-Assisted Academic Work Among Filipino College Students. Philippine Journal of Educational Ethics, 11(2), 44–58.
- Johnson, T. (2022). Rethinking Responsibility: AI, Authorship, and the Future of Academic Integrity. Journal of Higher Education Ethics, 19(1), 101–120.
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education. Retrieved from https://www.pearson.com
- Wang, L. (2023). The psychological impact of generative AI on student learning and well-being. Educational Psychology Review, 35(4), 1–18.