Learners’ Perception in AI Utilization in Education and their Conceptual Understanding in Grade 11 Life Science
Doreen Khrystel P. Gonzales, Edna B. Nabua
Department of Science and Mathematics Education, College of Education, Mindanao State University – Iligan Institute of Technology
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0040
Received: 15 January 2025; Accepted: 19 January 2025; Published: 17 February 2025
ABSTRACT
This study investigates the perceptions of Grade 11 students regarding the integration of Generative AI in education, focusing on its impact on students’ academic achievement in Life Science. A quantitative research design was employed, involving surveys administered to 183 Grade 11 students within the Division of Malaybalay, Bukidnon, Philippines, assessing both content knowledge in Life Science and perceptions of AI integration. The findings revealed that students generally hold positive attitudes towards the use of Generative AI, recognizing its potential to enhance learning experiences. However, a significant number of students struggled academically in Life Science, with many failing to meet expected achievement levels. Additionally, a correlation was found between students’ perceptions of AI integration and their academic performance, suggesting that favorable attitudes towards AI could positively influence learning outcomes. To address these challenges, it is recommended that further research be conducted on the implications of AI integration for Life Science instruction.
Keywords: AI integration in Education, Content Knowledge, Generative AI, Life Science
INTRODUCTION
Life science encompasses a broad range of scientific disciplines focused on the study of living organisms, including humans, animals, plants, and microorganisms. It integrates various fields such as biology, chemistry, and bioinformatics, facilitating interdisciplinary research that enhances understanding of life processes and their interactions with the environment [22] [18]. In Philippine high schools, the subject of Life Science is characterized by a curriculum that emphasizes relevance to students’ everyday lives and integrates ecological studies, reflecting the local context and national developmental goals. The curriculum is designed to be learner-centered and multidisciplinary, incorporating essential topics such as genetics, ecology, and biodiversity, while addressing common misconceptions that impede student understanding [6].
The performance of Filipino students in science exhibits a complex landscape characterized by both strengths and weaknesses. A study indicated that senior high school students generally possess high scientific reasoning and critical thinking skills, with performance levels categorized as “very satisfactory” across various academic strands, particularly in STEM and General Academic Strands [13]. However, the Philippines’ performance in the PISA 2018 assessment revealed a concerning trend, as Filipino students ranked near the bottom in science literacy among 78 countries, highlighting significant challenges in achieving global standards [4].
To combat this trend, researchers have turned to the potential of AI in education. Globally, AI is reshaping educational paradigms by enhancing personalized learning, streamlining administrative processes, and fostering innovative teaching methodologies, a shift accelerated by the COVID-19 pandemic’s push towards online education and the use of extended reality tools [27].
The application of AI in education has been widely explored across various domains, and its potential has been well-documented in scholarly literature. The integration of AI has been shown to boost teachers’ self-efficacy in science instruction according to a descriptive analytical study focusing on a sample of 83 science teachers from government schools in Abu Dhabi [1]. Similar results were seen in a study which highlighted the advantages of integrating AI in schools [16]. AI facilitates continuous, seamless, and just-in-time feedback on students’ performance, which improves the learning process. One well-known AI program discussed in the literature is ChatGPT, which was found to facilitate knowledge access through its human-like conversation interface [10].
In the Philippines, ChatGPT has been investigated as an assistant for language learning, as academic support, and as a formative assessment tool for social studies. ChatGPT’s effectiveness as a classroom tool is further supported by the results of a study of Grade 11 learners in Cebu [16]. Integrating ChatGPT into interactive Q&A formative assessments improved students’ academic performance, enhanced student-teacher engagement, and provided personalized feedback. Like earlier studies, however, it recommended that educators address academic integrity and fairness in the educative process. These findings reinforce the view that AI can significantly improve educational processes, but careful design and continuous refinement are crucial to its success.
Through scaffolded activities, students learn to recognize the limitations of ChatGPT, such as its inability to provide real-time information, and the importance of verifying responses. Similar results are echoed in a study of 178 secondary learners in Cagayan, Philippines. Although the results on the use of ChatGPT as academic support were positive, the authors emphasized that the effective and ethical use of ChatGPT plays a significant role in enhancing student achievement [7].
Despite this transformative potential, AI use carries numerous challenges, and its impact on education remains uncertain. Over-reliance on AI tools may diminish critical thinking and problem-solving skills among students [3]. If students become too dependent on AI for answers or assistance, they may not develop the skills needed to tackle challenges independently. While chatbots like ChatGPT can enhance efficiency and accuracy in certain tasks, they may also mask fundamental gaps in students’ understanding of the subject matter [26]. Additionally, educators and institutions may resist adopting AI technologies due to fear of the unknown, lack of understanding, or skepticism about their effectiveness [17] [20]. This resistance can hinder the potential benefits of AI in enhancing educational practices. The successful integration of ChatGPT into educational settings requires a balanced approach that considers both the advantages and limitations of the technology.
Current studies on AI integration in education highlight that investigating learners’ perceptions of AI is crucial, as students exhibit diverse views, categorizing AI as an essential academic aid, a facilitator of personalized learning, an inhibitor of critical thinking, and an ethical challenger [9]. Although there have been a few studies on the impact of AI in education, as noted above, no study has examined the integration of AI into Life Science classes among secondary learners in the Philippines. Hence, this study provides baseline data on learners’ perceptions of Generative AI integration in education and their understanding of Life Science topics.
Objectives
Specifically, this study sought to:
- Assess the achievement level of Grade 11 learners in Life Science.
- Determine the perceptions of the learners regarding Generative AI integration in education.
- Determine the relationship between the learners’ academic achievement in Life Science and their perception of AI integration in education.
METHODOLOGY
This study employed a quantitative design. The quantitative component involved a content knowledge evaluation and a perception evaluation of the students. The subjects of this study were the Grade 11 General Academic Strand (GAS) students of Bukidnon National High School, Division of Malaybalay City.
Validity and Pilot Testing of the Instruments
The content knowledge questionnaire initially contained 60 items designed to measure learners’ knowledge of the lessons included in Life Science. These main lesson contents were: (1) Introduction to Life Science, (2) Bioenergetics, (3) Perpetuation of Life, (4) How Animals Survive, (5) Process of Evolution, and (6) Interaction and Interdependence, as identified from the Most Essential Learning Competencies (MELCs) by the Department of Education.
To establish the validity and reliability of the instrument, it was subjected to rigorous face validation, content validation, and a series of reliability tests. After the face validation of the instrument with the research adviser, three (3) content experts were identified for content validation. The first is the Department Head of the Secondary Education Department of Bukidnon State University, with 9 years of experience teaching high school biology and a doctoral degree in Science Education and Instructional Systems Design. The second is an Education Program Supervisor in the Department of Education Malaybalay City Division who taught high school biology for 10 years; he holds a master’s degree in General Science and a doctoral degree in Educational Administration. The third is a Special Science Teacher I with 7 years of experience teaching high school biology and a doctoral degree in Science Education. Content validity was determined using five (5) parameters, as summarized in Table 1 in the appendix.
Appropriate corrections and modifications were made as suggested. The content knowledge questionnaire was then pilot tested with 183 Grade 12 Senior High School students under the General Academic Strand (GAS) of Bukidnon National High School. The pilot testing data underwent rigorous item analysis covering the difficulty index, discrimination index, distractor analysis, and reliability.
Difficulty index. Item difficulty can be defined as the percentage of examinees who answered the item correctly; the difficulty value of an individual item is the proportion of a given sample of subjects who actually know the answer to that item [15]. Items are considered easy when the value of “F” is above 0.90 and comparatively difficult when it falls below 0.30. These figures are in accordance with the International Assessment Resource [25], as tabulated below (see Table I):
Table I Evaluation of Item Difficulty for Item Analysis
Item Difficulty Index | Item Evaluation |
Above 0.90 | Very easy item |
0.62 | Ideal value |
Below 0.20 | Very difficult item |
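To make the difficulty index and the cut-offs in Table I concrete, here is a minimal Python sketch; the difficulty index is the proportion of examinees answering each item correctly, as defined above. The `responses` matrix is toy data for illustration, not the study’s pilot data.

```python
import numpy as np

# Toy 0/1 response matrix: rows = examinees, columns = items.
# (Illustrative only; the study's pilot involved 183 examinees and 60 items.)
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
])

# Difficulty index per item: proportion of examinees who answered correctly.
difficulty = responses.mean(axis=0)

# Evaluate each item against the cut-offs in Table I.
for i, p in enumerate(difficulty, start=1):
    if p > 0.90:
        label = "very easy item"
    elif p < 0.20:
        label = "very difficult item"
    else:
        label = "acceptable"
    print(f"Item {i}: difficulty = {p:.2f} ({label})")
```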
Discrimination index. This refers to the extent to which a test item distinguishes between high and low achievers; the discrimination index of an item is its ability to discriminate between superior and inferior examinees [5]. The value of D is acceptable when it ranges from 0.30 to 1.0. Discrimination is 100% when the value of D is more than 0.40, while values below 0.30 show an item’s inability to discriminate [12]. See Table II.
Table II Evaluation of Discrimination Indexes for Item Analysis
Item discrimination index | Item discrimination | Item evaluation |
0.40 and above | Very Good | Very good items; accept |
0.30-0.39 | Good | Reasonably good but may be subject to improvement |
0.20-0.29 | Marginal | Marginal items; usually need improvement |
Below 0.19 | Poor | Poor items; to be rejected or improved by revision |
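A companion sketch for the discrimination index follows. It uses the common upper/lower 27% grouping, which is an assumption here (the study does not state its grouping method), and maps D values to the evaluations in Table II.

```python
import numpy as np

def discrimination_index(responses: np.ndarray) -> np.ndarray:
    """Upper-lower discrimination index per item.

    D = p(correct in upper group) - p(correct in lower group), where the
    groups are the top and bottom 27% of total scorers (assumed convention).
    """
    totals = responses.sum(axis=1)            # each examinee's total score
    order = np.argsort(totals)                # examinees sorted low to high
    k = max(1, round(0.27 * len(responses)))  # size of each extreme group
    lower, upper = responses[order[:k]], responses[order[-k:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

def evaluate_discrimination(d: float) -> str:
    """Map a D value to the evaluations in Table II."""
    if d >= 0.40:
        return "Very good; accept"
    if d >= 0.30:
        return "Reasonably good"
    if d >= 0.20:
        return "Marginal; needs improvement"
    return "Poor; reject or revise"
```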
The average difficulty index of the content knowledge questionnaire is 0.6509, which means the instrument has ideal difficulty overall, with items that on average are neither too difficult nor too easy. In addition, its average discrimination index of 0.3564 is reasonably good. As individual items were analyzed, all items rated “Very difficult” or “Very easy” were discarded, as were items with a “Poor” discrimination index.
After this rigorous item analysis, 43 items remained. These items satisfied the requirements determined by the item analysis procedure, i.e., item difficulty, discrimination power, and multiple-choice distractor analysis.
The reliability of the content knowledge questionnaire was determined using the Kuder-Richardson formulas. KR-20 is used for tests whose items vary in difficulty (easy, moderate, and challenging), while KR-21 is used when all items of a binary (right/wrong) test are equally challenging. A KR-20 reliability coefficient of 0.70 or above is required to ensure reliable scores [14]. The instrument obtained a KR-20 of 0.8166 and a KR-21 of 0.7892, both of which confirm its reliability. A Cronbach’s alpha of 0.8155 further supports its reliability, meaning the questionnaire demonstrates a strong level of internal consistency and is a reliable tool for assessing content knowledge in Life Science.
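As a worked illustration of these coefficients, the sketch below computes KR-20 and KR-21 from a 0/1 response matrix using their standard formulas. This is a minimal sketch, not the study’s actual computation; in particular, it assumes the sample-variance convention (ddof=1), which the study does not specify.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 for a 0/1 response matrix (rows = examinees, cols = items)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    var_total = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

def kr21(responses: np.ndarray) -> float:
    """KR-21: simplified form assuming all items are equally difficult."""
    k = responses.shape[1]
    totals = responses.sum(axis=1)
    m, var = totals.mean(), totals.var(ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))
```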
The questionnaire on students’ perception of Generative AI in education was modified from a validated instrument [8]. It contains two (2) sections. The first collects personal data, including name, school, and gender. The second contains 15 Likert-scale items divided equally among three (3) parameters, namely: Knowledge of Generative AI technologies, Willingness to use ChatGPT, and Attitude towards AI. Participants rated their agreement with these items on a 4-point scale: 4 – Strongly Agree, 3 – Agree, 2 – Disagree, and 1 – Strongly Disagree.
This instrument also underwent face validation with the research professor, content validation with the experts, and reliability testing. Suggestions from the face validation were incorporated, and the instrument was subjected to content validation by the same experts. Statement 4 (“I understand generative AI technologies like ChatGPT may rely too heavily on statistics, which can limit their usefulness in certain contexts”) and Statement 5 (“I understand generative AI technologies like ChatGPT have limited emotional intelligence and empathy, which can lead to output that is insensitive or inappropriate”) under Knowledge of Generative AI technologies, and Statement 2 (“Generative AI technologies such as ChatGPT will limit my opportunities to interact with others and socialize while completing coursework”) under Attitude towards AI, were identified as double-barreled questions and were modified accordingly.
After the necessary modifications, the instrument was pilot tested with 183 students. To determine reliability, Cronbach’s alpha was calculated for each parameter, yielding the following results: 0.718 for Knowledge of Generative AI, 0.816 for Willingness to use ChatGPT, and 0.706 for Attitude towards AI. The overall Cronbach’s alpha of 0.747 indicates acceptable reliability of the instrument.
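Cronbach’s alpha for each parameter can be computed from its standard formula. A minimal sketch follows, assuming a respondents-by-items matrix of 1–4 Likert ratings and the sample-variance convention (ddof=1); this is illustrative, not the study’s actual script.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: rows = respondents, columns = Likert items (1-4)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```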
Data Gathering Procedure
This study underwent the proper protocol in seeking approval to conduct the study from the Division of Malaybalay, the concerned school heads, teachers, students, and their parents. To accomplish this, letters were sent to the named authorities, and consent/assent forms were given to the students. The research instruments were pilot tested in appropriate schools and grade levels, ensuring the validity and reliability of the instruments used in the main study. The pilot test involved a limited number of participants, and the data gathered were not included in the final analysis. This study involved 183 Grade 11 General Academic Strand students and 32 Senior High School Science teachers from the Division of Malaybalay. The achievement level of students was described using a scoring procedure aligned with Department of Education guidelines [11], as shown in Table III. Their perception scores were interpreted using Table IV [2].
Table III Interpretation of Learners’ Performance on the Achievement Test
Description | Grading Scale | Remarks |
Outstanding | 39-43 | Passed |
Very Satisfactory | 36-38 | Passed |
Satisfactory | 34-35 | Passed |
Fairly satisfactory | 32-34 | Passed |
Did not meet expectations | Below 31 | Failed |
Table IV Percentage Scale of Students’ Responses to Perception Questionnaire Items
Scale | Description |
less than 59 | Very low |
60-69 | Low |
70-79 | Medium |
80-89 | High |
90-100 | Very High |
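The “Ratio” columns reported later in Tables VI–VIII appear to express each item’s mean rating as a percentage of the 4-point maximum (e.g., 3.66 / 4 = 91.50%), which is then interpreted against Table IV. A minimal sketch of this mapping follows; treating everything below 60 as “Very low” is an assumption made here to resolve the 59–60 gap in Table IV.

```python
def perception_ratio(mean_rating: float, max_rating: int = 4) -> float:
    """Express a mean Likert rating as a percentage of the scale maximum."""
    return mean_rating / max_rating * 100

def describe(ratio: float) -> str:
    """Interpret a percentage ratio using the bands in Table IV."""
    if ratio >= 90:
        return "Very High"
    if ratio >= 80:
        return "High"
    if ratio >= 70:
        return "Medium"
    if ratio >= 60:
        return "Low"
    # Table IV lists "less than 59"; the 59-60 gap is treated as
    # "Very low" here by assumption.
    return "Very low"

# e.g., Table VI, item 1: mean 3.66 -> ratio 91.5 -> "Very High"
print(describe(perception_ratio(3.66)))
```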
RESULTS AND DISCUSSION
Filipino students have shown poor performance in international and national assessments in science, including topics in Life Science [21] [23] [24]. For this reason, the need for innovative and targeted teaching strategies has been highlighted. In recent years, there has been a rise in AI integration in education with the introduction of various AI tools, including Generative AI like ChatGPT. This research aims to investigate the perception of students on the use of Generative AI in education and their content knowledge in Life Science.
This study employed a quantitative design to measure the perception of students towards Generative AI in education, and the content knowledge of Grade 11 students in Life Science. Data was collected from 183 Grade 11 General Academic Strand students from the Division of Malaybalay. A valid and reliable Perception Questionnaire was administered together with a content knowledge questionnaire.
Data for the perception and content knowledge questionnaires were analyzed using descriptive statistics, including means, standard deviations, and percentages. The achievement level of students was described using a scoring procedure aligned with DepEd Order No. 8, s. 2015. The grading scale was adjusted based on the transmutation table outlined in the aforementioned DepEd Order, as shown in Table V below.
Table V Achievement level of Grade 11 students in Life Science
Achievement Level | Frequency | Percentage | Remarks |
Outstanding | 0 | 0 | Passed |
Very Satisfactory | 0 | 0 | Passed |
Satisfactory | 0 | 0 | Passed |
Fairly satisfactory | 0 | 0 | Passed |
Did not meet expectations | 181 | 100 | Failed |
The table presents the achievement level of Grade 11 students in Life Science based on a content knowledge questionnaire. The results show that all 181 students (100%) fell into the category of “Did not meet expectations,” indicating that none of the students achieved an “Outstanding,” “Very Satisfactory,” “Satisfactory,” or “Fairly Satisfactory” level. Since these Life Science topics have not yet been taught, the results are expected and reflect the students’ lack of prior knowledge in the subject matter. This outcome underscores the need for interventions to address these gaps and improve student achievement. The 100% failure rate emphasizes the importance of introducing engaging, scaffolded, and meaningful learning experiences to help students acquire and retain the necessary knowledge and skills in Life Science.
Further insight into the students’ performance is given by the achievement scores per topic in Life Science, as presented in Figure 1 below.
Figure 1 Achievement Score Per Topic in Life Science
Figure 1 presents the achievement scores of Grade 11 students per topic in Life Science, highlighting notable variations in their performance. Organ Systems recorded the highest score at 50%, suggesting that students have relatively better prior knowledge in this area compared to other topics. In contrast, Interaction and Interdependence had the lowest score at 17%, indicating a significant gap in understanding and familiarity with this topic.
Another notable finding is that the scores for other topics, such as Bioenergetics (41%) and Evolution (38%), remain well below a passing level, reflecting overall low achievement across the subject. The substantial difference between the highest and lowest scores highlights areas of strength and weakness that require attention. Specifically, the poor performance in Interaction and Interdependence suggests the need for more targeted instructional strategies, while the moderate score in Organ Systems can serve as a foundation to build upon. These findings underscore the importance of implementing contextualized and engaging teaching approaches to address knowledge gaps and improve overall student achievement in Life Science.
The following sections discuss the students’ perceptions of integrating Generative AI, specifically ChatGPT, in education. Parameters such as Knowledge of Generative AI, Willingness to use ChatGPT, and Attitude towards AI are explored.
Table VI Knowledge of Generative AI
Items | M | s.d. | Ratio | Awareness Level |
1. I understand generative AI technologies like ChatGPT have limitations in their ability to handle complex tasks. | 3.66 | 0.49 | 91.50 | Very High |
2. I understand generative AI technologies like ChatGPT can generate output that is factually inaccurate. | 3.42 | 0.67 | 85.50 | High |
3. I understand generative AI technologies like ChatGPT can exhibit biases and unfairness in their output. | 3.30 | 0.71 | 82.50 | High |
4. I understand generative AI technologies like ChatGPT may have limited usefulness in certain contexts. | 3.51 | 0.66 | 87.75 | High |
5. I understand generative AI technologies like ChatGPT can lead to output that is insensitive or inappropriate due to its limited emotional intelligence and empathy. | 3.25 | 0.71 | 81.25 | High |
Total | 85.70 | High |
Table VI reveals students’ knowledge of generative AI technologies like ChatGPT, indicating a generally strong awareness, with a total ratio of 85.70% categorized as “High.” Students demonstrate the highest understanding (91.50%) regarding the limitations of generative AI in handling complex tasks, reaching a “Very High” awareness level. Their knowledge of AI’s potential to produce factually inaccurate outputs (85.50%), exhibit biases (82.50%), and have limited usefulness in specific contexts (87.75%) is consistently rated “High.” Additionally, students recognize that generative AI can produce insensitive or inappropriate outputs due to its lack of emotional intelligence (81.25%). These findings highlight a well-informed student body that is aware of both the potential and limitations of generative AI, which is critical for promoting responsible and discerning use of such technologies in educational settings.
Table VII Willingness to Use ChatGPT
Items | M | s.d. | Ratio | Awareness Level |
1. I envision integrating generative AI technologies like ChatGPT into my learning practices in the future. | 3.08 | 0.64 | 77.00 | Medium |
2. Generative AI technologies such as ChatGPT can improve my digital competence. | 3.00 | 0.55 | 75.00 | Medium |
3. Generative AI technologies such as ChatGPT can help me save time. | 3.47 | 0.64 | 86.75 | High |
4. AI technologies such as ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself. | 3.37 | 0.72 | 84.25 | High |
5. I think AI technologies such as ChatGPT can provide me with personalized and immediate feedback and suggestions for my assignments. | 3.00 | 0.73 | 75.00 | Medium |
Total | 79.60 | Medium |
Table VII reflects students’ willingness to use generative AI technologies like ChatGPT, showing an overall “Medium” awareness level with a total ratio of 79.60%. Students recognize ChatGPT’s ability to save time (86.75%) and provide unique insights and perspectives (84.25%), both rated as “High.” However, their willingness to integrate generative AI into their learning practices (77.00%) and its potential to improve digital competence (75.00%) are perceived at a “Medium” level. Similarly, students rate the usefulness of ChatGPT for providing personalized and immediate feedback at 75.00% (“Medium”). These findings suggest that while students appreciate the efficiency and innovative support that AI tools like ChatGPT offer, there is still room to enhance their willingness and confidence in fully integrating these technologies into their learning processes.
Table VIII Attitude Towards AI
Items | M | s.d. | Ratio | Awareness Level |
1. I envision integrating generative AI technologies like ChatGPT into my learning practices in the future. | 3.07 | 0.71 | 76.75 | Medium |
2. Generative AI technologies such as ChatGPT can improve my digital competence. | 2.90 | 0.82 | 72.50 | Medium |
3. Generative AI technologies such as ChatGPT can help me save time. | 2.91 | 0.74 | 72.75 | Medium |
4. AI technologies such as ChatGPT can provide me with unique insights and perspectives that I may not have thought of myself. | 2.79 | 0.98 | 69.75 | Low |
5. I think AI technologies such as ChatGPT can provide me with personalized and immediate feedback and suggestions for my assignments. | 3.14 | 0.74 | 78.50 | Medium |
Total | 74.05 | Medium |
Table VIII, on students’ attitudes towards AI, indicates a “Medium” overall awareness level, with a total ratio of 74.05%. While students recognize the potential of generative AI technologies like ChatGPT to provide personalized feedback (78.50%) and integrate into their future learning practices (76.75%), the scores remain within the “Medium” range. Similarly, students view AI’s ability to improve digital competence (72.50%) and save time (72.75%) moderately. However, the perception that AI can offer unique insights and perspectives (69.75%) falls into the “Low” category, suggesting a gap in appreciation for AI’s creative and supportive potential. These findings highlight a cautious yet positive attitude toward AI, with opportunities to improve students’ confidence and understanding of its broader benefits in education.
With the findings above, students display their perception of generative AI technologies like ChatGPT in educational contexts. While recognizing the potential benefits, such as providing personalized feedback, unique insights, and time-saving opportunities, students also express concerns about potential drawbacks, including the risk of superficial learning, reduced critical thinking, and ethical issues [3].
The research suggests that students acknowledge the immediate usefulness of ChatGPT in enhancing their learning experience, but they are also aware of the need to balance its application to maintain essential skills like critical thinking and problem-solving [3]. Ultimately, the findings highlight the importance of carefully integrating and managing the use of generative AI in education, with a focus on promoting responsible and discerning practices that empower students to leverage these technologies while also developing essential cognitive and social-emotional competencies.
Students recognize the potential benefits of AI technologies but are also aware of their limitations, such as the risk of superficial learning and reduced critical thinking. This supports the claim that students are more critical of AI’s role in education, focusing on its constraints rather than its advantages. However, there are also studies that present a picture that contradicts the results of this study. For instance, it was argued that students may not fully understand the implications of AI in education and may overestimate its capabilities, suggesting that their perceptions may be less nuanced than previously thought [29]. Moreover, research highlights that both students and teachers express concerns about AI potentially widening the educational gap among different student groups, suggesting a shared concern about equity that may not be fully acknowledged in the initial findings [21]. Lastly, research indicates that there can be a misalignment between teachers’ expectations of AI’s role in education and students’ actual experiences with these technologies, highlighting a disconnect that contradicts the notion of a unified perception [23].
To explore the relationship between students’ performance in Life Science and their perception of AI integration in education, a correlation analysis was conducted. The goal was to determine whether students’ openness to and acceptance of AI-based learning approaches are associated with their academic achievement. By examining test scores in Life Science and perception scores toward AI integration, this analysis provides insights into how students’ attitudes toward technology may influence or align with their learning outcomes. Table IX below presents the results of this analysis, including the descriptive statistics (mean and standard deviation), the Pearson correlation coefficient, and the significance level.
Table IX Relationship between the Students’ Achievement Scores in Life Science and Their Perception Score in AI utilization in education
 | Mean | s.d. | N |
Test scores | 17.2120 | 5.69 | 183 |
Perception scores | 3.44 | 1.15 | 183 |
Pearson correlation | 0.389 | ||
Sig (2-tailed) | 0.007 |
The Pearson correlation coefficient of 0.389 indicates a moderate positive correlation between the two variables. This means that students with higher perception scores toward AI integration tend to have higher achievement scores in Life Science. The p-value of 0.007 (Sig 2-tailed) is less than 0.05, indicating that the correlation is statistically significant. The significant positive relationship suggests that students’ openness and positive perception of AI integration may be associated with better performance, even at low achievement levels. This finding highlights the potential of AI-based learning tools to engage students and enhance their learning outcomes in Life Science. Moving forward, integrating AI into instruction could serve as a strategy to address gaps in student achievement while fostering positive attitudes toward innovative learning approaches.
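For readers wishing to replicate this analysis, here is a minimal sketch using scipy’s `pearsonr`. The arrays are synthetic stand-ins drawn to match the reported means and standard deviations, not the study’s raw data, so the printed coefficient will not reproduce r = 0.389.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the paired observations (n = 183):
# Life Science test scores and mean perception scores.
rng = np.random.default_rng(0)
test_scores = rng.normal(17.21, 5.69, 183)
perception_scores = rng.normal(3.44, 1.15, 183)

# Pearson correlation with a two-tailed p-value.
r, p_value = stats.pearsonr(test_scores, perception_scores)
print(f"Pearson r = {r:.3f}, p (2-tailed) = {p_value:.3f}")
# Interpretation per the study: r = 0.389, p = 0.007 -> a moderate positive
# correlation, statistically significant at the 0.05 level.
```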
Studies supporting this claim highlight that students who hold positive perceptions of technology integration, including AI, tend to demonstrate higher academic performance. For instance, it was found that students’ favorable attitudes towards technology significantly influenced their engagement and motivation, which in turn affected their academic success [2]. Similarly, it was emphasized that students who interacted with these tools reported increased interest and motivation, leading to improved performance [28].
Conversely, some studies raised concerns about the effectiveness of AI in improving educational outcomes. It was found that while some students appreciated AI tools, many expressed skepticism about their actual impact on learning, suggesting a disconnect between perception and performance [21]. Additionally, it was argued that students often have a limited understanding of AI technologies, which can lead to inflated perceptions that do not correlate with actual performance [29]. This lack of understanding may hinder effective engagement with AI, potentially undermining academic achievement. Moreover, equity concerns were highlighted, noting that while some students benefit from AI integration, others, particularly those from disadvantaged backgrounds, may face challenges that limit their access to AI tools and resources, resulting in disparities in achievement [21].
In conclusion, while there is substantial evidence supporting the notion that positive perceptions of AI integration correlate with higher achievement in Life Science, the complexities of this relationship cannot be overlooked. The significant positive relationship suggests that AI-based learning tools have the potential to enhance educational outcomes. However, skepticism about AI’s effectiveness, limited understanding of its applications, and equity concerns highlight the need for a more comprehensive approach to integrating AI in educational settings. Future research should continue to explore these dynamics to better understand how to effectively leverage AI while addressing the diverse needs of all students.
CONCLUSION AND IMPLICATIONS
This research examined the perceptions of students, as well as the relationship between students’ academic achievement in Life Science and their views on AI. The study aimed to address the challenges faced by Filipino students in science subjects and explore how AI can support their learning.
Results showed that a significant number of Grade 11 students struggled academically in Life Science, with most failing to meet the expected achievement levels. Additionally, the findings indicated that students generally have a positive perception of Generative AI, recognizing its potential to enhance learning and engagement. However, the attitude of the students on AI integration shows a gap in appreciation for AI’s creative and supportive potential. Lastly, the analysis revealed a correlation between students’ perceptions of AI integration and their academic performance, suggesting that a more favorable attitude towards AI could positively influence their learning outcomes.
RECOMMENDATIONS
Based on the results of the study, the following recommendations are proposed:
- The Department of Education may implement comprehensive training programs for teachers to enhance their understanding of Generative AI tools and their effective integration into the curriculum. This will equip educators with the necessary skills to utilize AI in a way that supports student learning and engagement.
- Schools and teachers may establish targeted support programs for students struggling in Life Science, including tutoring and mentorship initiatives that leverage AI technologies to provide personalized learning experiences. This can help address the academic challenges identified in the study.
- Researchers may conduct further research to continuously assess the impact of AI integration on student learning outcomes and gather feedback from both teachers and students. This will help refine strategies and ensure that the implementation of AI in education meets the diverse needs of learners.
ACKNOWLEDGEMENT
I am profoundly grateful for the support, guidance, and inspiration I have received throughout the journey of completing this research. Without the unwavering presence and assistance of several individuals, this endeavor would not have been possible. I would like to express my heartfelt gratitude to God, my institution, my research advisers, and my family and friends, who stood by me, offered encouragement, and provided much-needed breaks; thank you for being a source of laughter and respite during moments of intensity.
I am humbled and grateful for the opportunities and experiences that have come and will come as I go through this research journey. Each person mentioned here has played an indispensable role, and I am forever indebted to you all.
REFERENCES
- Al Darayseh, A. S. (2023). Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Computers and Education: Artificial Intelligence, 100132. https://doi.org/10.1016/j.caeai.2023.100132
- Alshorman, S. (2024). The readiness to use AI in teaching science: Science teachers’ perspective. Journal of Baltic Science Education, 23(3), 432–448. https://doi.org/10.33225/jbse/24.23.432
- Aruleba, K., Sanusi, I. T., Obaido, G., & Ogbuokiri, B. (2023, December 22). Integrating ChatGPT in a Computer Science Course: Students’ Perceptions and Suggestions. Cornell University. https://doi.org/10.48550/arxiv.2402.01640
- Bernardo, A. B. I., Cordel, M. O., Calleja, M. O., Teves, J. M. M., Yap, S. A., & Chua, U. C. (2023). Profiling low-proficiency science students in the Philippines using machine learning. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-01705-y
- Blood, D.F., and Budd, W.C. (1972). Educational measurement and evaluation. New York: Harper & Row.
- Camara, J. S. (2020). Philippine Biology Education for a Curricular Innovation towards Industrial Revolution 4.0: A Mixed Method. Asian Journal of Multidisciplinary Studies, 3(1), 41–51. https://asianjournal.org/online/index.php/ajms/article/view/212
- Caratiquit, K. D., & Caratiquit, L. J. C. (2023). ChatGPT as an academic support tool on the academic performance among students: The mediating role of learning motivation. Journal of Social, Humanity, and Education, 4(1), 21-33. https://doi.org/10.35912/jshe.v4i1.1558
- Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00411-8
- Chukhno, O. (2024). Teachers and learners’ perspectives on the use of generative AI in foreign language education. Modern Information Technologies and Innovation Methodologies of Education in Professional Training: Methodology, Theory, Experience, Problems, 73, 53–60. https://doi.org/10.31652/2412-1142-2024-73-53-60
- Deng, J., & Lin, Y. (2023). The Benefits and Challenges of ChatGPT: An Overview. Frontiers in Computing and Intelligent Systems, 2(2), 81–83. https://doi.org/10.54097/fcis.v2i2.4465
- DO 8, s. 2015 – Policy Guidelines on Classroom Assessment for the K to 12 Basic Education Program | Department of Education. April 1, 2015
- Ebel, R. L. (1972). Essentials of educational measurement. Prentice-Hall.
- Farillon, L. M. F. (2022). Scientific reasoning, critical thinking, and academic performance in science of selected Filipino senior high school students. Utamax Journal of Ultimate Research and Trends in Education, 4(1), 51–63. https://doi.org/10.31849/utamax.v4i1.8284
- Fraenkel, J., Wallen, N., & Hyun, H. (1993). How to Design and Evaluate Research in Education 10th ed. McGraw-Hill Education.
- Freeman, M. H. (1962). A graphical method of objective forecasting derived by statistical techniques. Quarterly Journal of the Royal Meteorological Society, 88(377), 337–338. https://doi.org/10.1002/qj.49708837715
- Halagatti, M., Gadag, S., Mahantshetti, S., Hiremath, C. V., Tharkude, D., & Banakar, V. (2023). Artificial Intelligence: The New Tool of Disruption in Educational Performance Assessment. RePEc Econpapers. https://econpapers.repec.org/bookchap/emecsefzz/s1569-37592023000110a014.htm
- Liang, Y. (2023, August 22). Balancing: The Effects of AI Tools in Educational Context. 3(8), 7-10. https://doi.org/10.54691/fhss.v3i8.5531
- Madhavan, M., & Mustafa, S. (2022). Systems biology–the transformative approach to integrate sciences across disciplines. Physical Sciences Reviews, 8(9), 2523–2545. https://doi.org/10.1515/psr-2021-0102
- Milloria, B. R. B., Marzon, A. M. D., & Derasin, L. M. C. (2024). Investigating AI-Integrated Instruction in Improving Academic Performance of Senior High School Students in the Philippines. Journal of Harbin Engineering University, 45(6). ISSN 1006-7043.
- Mohammadkarimi, E. (2023, July 17). Teachers’ reflections on academic dishonesty in EFL students’ writings in the era of artificial intelligence. Canadian Philosophy of Education Society, 6(2). https://doi.org/10.37074/jalt.2023.6.2.10
- Orine, P. A., Casipit, J., Fontanilla, A. M., Munar, K. D., Soriano, J. M., & Ong Jr., D. (2024). Cidade da transformação: um material de bolso de intervenção estratégica para a ciência 6. Diversitas Journal, 9(3). https://doi.org/10.48017/dj.v9i3.3066
- Özyiğit, İ. İ. (2020). About Life Sciences and Related Technologies. Frontiers in Life Sciences and Related Technologies, 1(1), 1-11.
- Quimat, R. M., & Picardal, M. (2024). Context-Based Teaching through Education for Sustainable Development in Philippine Secondary Schools: A Meta-analysis. Recoletos Multidisciplinary Research Journal, 12(1), 25–40. https://doi.org/10.32871/rmrj2412.01.03
- Retone, L. E., & Prudente, M. S. (2020, January). Effects of Technology-Integrated Brain-Friendly Teaching on Retention and Understanding in Photosynthesis and Cellular Respiration. In Proceedings of the 2020 11th International Conference on E- Education, E-Business, E-Management, and E-Learning (pp. 59-63). https://doi.org/10.1145/3377571.3377590
- Rezigalla, A. A., Eleragi, A. M. E. S. A., Elhussein, A. B., Alfaifi, J., ALGhamdi, M. A., Ameer, A. Y. A., Yahia, A. I. O., Mohammed, O. A., & Adam, M. I. E. (2024). Item analysis: the impact of distractor efficiency on the difficulty index and discrimination power of multiple-choice items. BMC Medical Education, 24(1). https://doi.org/10.1186/s12909-024-05433-y
- Waseem, M., Das, T., Ahmad, A., Liang, P., Fahmideh, M., & Mikkonen, T. (2024, January 1). ChatGPT as a Software Development Bot: A Project-Based Study. https://doi.org/10.5220/0012631600003687
- Yadav, S. (2024). Reimagining Education With Advanced Technologies: Transformative Pedagogical Shifts Driven by Artificial Intelligence. In Impacts of Generative AI on the Future of Research and Education (pp. 1-26). IGI Global
- Zhai, X. (2024). Transforming Teachers’ Roles and Agencies in the Era of Generative AI: Perceptions, Acceptance, Knowledge, and Practices. In arXiv (Cornell University). Cornell University. https://doi.org/10.48550/arxiv.2410.03018
- Zhang, P., & Tur, G. (2023). A systematic review of ChatGPT use in K‐12 education. European Journal of Education, 59(2). https://doi.org/10.1111/ejed.12599