Unpacking Reductionism in AI-Driven Mathematics Education: A Factor Analysis of Educators’ Insights and Applications
Angel B. Manuel, Julius S. Valderama
College of Arts and Sciences, Nueva Vizcaya State University, Bayombong, Nueva Vizcaya
DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0354
Received: 19 June 2025; Accepted: 23 June 2025; Published: 25 July 2025
ABSTRACT
This study investigates reductionism in AI-driven mathematics education by developing and validating a comprehensive survey instrument grounded in secondary educators’ perceptions. Drawing on a positivist framework, the research surveyed 59 mathematics teachers in Ifugao to assess their understanding of AI’s procedural tendencies, its perceived benefits and potential drawbacks, and their recommendations. Instrument development followed a rigorous sequence of item generation, expert validation, and pilot testing, culminating in a 75-item questionnaire. Exploratory factor analysis using principal axis factoring with oblimin rotation identified nine coherent factors, namely educators’ understanding of reductionism, perceived benefits, six drawbacks (Conceptual Dilution, Over-Reliance on Technology, Fragmentation of Learning, Decreased Metacognitive Engagement, Loss of Mathematical Rigor, and Shallow Learning Outcomes), and Future Implications and Recommendations, together accounting for 74.3% of total variance. Cronbach’s alpha values from .85 to .94 showed that each subscale has high internal consistency. Scrutiny of the data showed that although Ifugao math teachers appreciate AI for scaffolding, visualization, and feedback, they remain seriously concerned about its tendency to fragment knowledge, compromise metacognition, and encourage shallow learning. The data further suggest that deliberate, teacher-mediated integration maintains conceptual depth and helps learners appreciate what AI can genuinely contribute. It is therefore recommended that educators engage in continuous professional development to become better equipped with the skills that sound AI use requires. The validated instrument provides both researchers and educators a consistent tool for the continuous improvement of AI use in mathematics instruction.
Keywords: Reductionism, Mathematics Instruction, Artificial Intelligence, Perception, Benefits and Drawbacks
INTRODUCTION
Background of the Study
Among the advances in twenty-first-century education is the rapid growth of artificial intelligence (AI), including in mathematics courses. AI-powered tools such as GeoGebra, ChatGPT, Cici, and Wolfram Alpha are increasingly integrated into mathematics education. These applications are very helpful for understanding complex mathematical problems because they provide step-by-step solutions; when used improperly, however, they may lead students to rely on them and concentrate on procedural execution instead of understanding the fundamental processes. This, in turn, can impede the development of critical thinking and problem-solving skills.
The use of AI in mathematics illustrates the concept of reductionism: the practice of considering or presenting something complicated in a simpler way so that it is easier to understand. The idea goes back to René Descartes, who proposed in his Discourse on the Method that the world could be understood by studying its parts. In mathematics, reductionism is exemplified in solving a complex problem by working through its specific component solutions, which AI tools make easy to do.
Educators such as Hodge-Zickerman and York (2024) emphasize the importance of incorporating AI knowledge and critical thinking programs for both teachers and students. Without such initiatives, there is a risk of adopting a mechanistic approach to learning that could impede the development of holistic problem-solving abilities. Implementing professional development programs for both teachers and learners can help alleviate these concerns; otherwise, AI use can downplay intuition and creativity, which are vital components of higher-order thinking skills (Rane, Choudhary & Rane, 2023). This underscores the need to balance AI-driven reductionist approaches with holistic mathematical understanding in order to optimize the role of AI in education.
Despite the growing adoption of AI in mathematics learning contexts, researchers have scarcely examined how reductionism, the tendency to decompose complex mathematical ideas into discrete procedural steps, manifests in educators’ perceptions and practices. Existing research on AI integration highlights both benefits and drawbacks but does not examine reductionism as a distinct construct or provide psychometrically sound measures of its influence. Understanding how AI tools affect the way mathematics is taught and learned is important, especially as they become more common in classrooms. This study aims to help curriculum developers, policymakers, and educators find a healthy balance between the speed and convenience AI offers and the need to preserve deep, connected mathematical thinking.
Specifically, the research explores how secondary math educators in Ifugao perceive reductionism within AI-driven math education. It aims to identify the key factors that affect how AI is grasped and used in the teaching-learning process and, more deliberately, to construct a valid research instrument for assessing an integration of AI and reductionism that not only improves efficiency but also supports learners’ cognitive development and critical thinking.
To reach this goal, the study first examines how well educators understand reductionism, what they see as the benefits and drawbacks of AI in math teaching, and what they recommend for its continued use. From these data, a survey tool is developed and carefully validated to ensure its reliability. The resulting instrument is a useful, evidence-based tool that may guide future attempts to enhance AI use in mathematics education.
LITERATURE REVIEW
Reductionism is a philosophical and cognitive paradigm which suggests that complicated phenomena can be understood by examining their fundamental parts (Rescher, 2020). In educational settings, reductionism means teaching strategies that give exploratory investigation less priority than procedural competence (Chen et al., 2021). In mathematics specifically, it is the breaking down of complex problems into simpler, linear solutions. Reductionism thus offers a lens through which AI-driven mathematics instruction can be examined, as it highlights how sequential deconstruction can both clarify and fragment students’ conceptual understanding.
Hwang and Tu (2021) and Mohamed et al. (2022) noted that AI tools including ChatGPT, Cici, GeoGebra, and Wolfram Alpha are helpful in providing instant feedback and support by producing step-by-step explanations. Tulli (2022) found that teachers became more productive and delivered better-quality classroom instruction, and that they were more efficient in administrative duties, including grading and reviewing student work. Similarly, Sumakul, Hamied, and Sukyadi (2022) found that English as a Foreign Language (EFL) teachers had positive perceptions of AI and agreed that it is beneficial for both educators and learners.
However, this same procedural emphasis risks marginalizing students’ chances to develop critical thinking and intuitive investigation (Opesemowo & Adewuyi, 2024). Empirical studies of teachers’ views highlight that they are excited about AI’s ability to automate repetitive tasks but worry about its deeper educational consequences. Research likewise highlights the conflict between the procedural affordances of AI and the objective of developing critical and higher-order thinking in math courses. In the study of Alsharidah et al. (2024), many of the 382 middle school mathematics teachers surveyed questioned AI’s capacity to develop users’ conceptual knowledge. Opesemowo and Ndlovu (2024) indicated that generative AI could worsen surface learning and reduce cooperative discussion, leaving teachers worried.
In the study of Holstein, McLaren, and Aleven (2018), intelligent tutoring systems working together with human teachers were found to be even more effective. In spite of national initiatives that support learner-centered pedagogy and digital transformation, many classrooms continue to use reductionist methods. Educational reforms promote inquiry-based learning and conceptual focus; however, educators usually use AI tools to lead students through repetitive tasks (Chen et al., 2022). This study looks at how secondary math teachers balance these conflicting demands, framed by this continuing contrast between progressive curriculum goals and reductionist implementation.
The study uses a factor-analytic approach to instrument development in order to evaluate these perceptions thoroughly, in accordance with best practices in scale construction (DeVellis, 2016). Each item was designed to capture a different aspect of reductionism, such as comprehension of procedural fragmentation, perceived benefits, perceived drawbacks, and suggestions for balanced integration. The items were based on the Technology Acceptance Model of Davis (1989) and existing measures of AI literacy (Mohamed et al., 2022). Additionally, the structure of the instrument was validated through exploratory factor analysis, with confirmatory factor analysis recommended for future work, to help guarantee its validity and reliability for use in the teaching-learning process of mathematics.
METHODOLOGY
Research Design
This study employed a quantitative, cross-sectional survey design to examine secondary educators’ perception of reductionism in AI-driven mathematics education. This study was grounded in a psychometric approach, utilizing Exploratory Factor Analysis (EFA) to identify the underlying dimensions of educators’ perceived drawbacks, benefits, and future policy recommendations regarding AI integration in mathematics instruction. This design was chosen to establish construct validity and internal consistency of the developed instrument while capturing descriptive patterns in educators’ responses.
Participants and Sampling
Participants were secondary mathematics educators from public and private secondary schools across the province of Ifugao. Since not all secondary teachers utilize AI, this study used purposive sampling to identify educators who have experience integrating or observing AI tools.
Instrument Development and Validation
Five (5) phases were undertaken to develop and validate the instrument.
Phase 1: Gathering Qualitative Information
Math teachers identified as having been exposed to AI tools in their teaching practice underwent semi-structured interviews to generate preliminary data. The interviews were guided by the research objectives, which aimed to elicit educators’ perceptions of the advantages, disadvantages, and implications of integrating AI into mathematics education through the lens of reductionism. The preliminary data acquired were transcribed, coded, and subjected to thematic analysis.
Phase 2: Research Instrument Development
The statements collected were all mapped onto the study’s four main constructs, namely the perception, advantages, disadvantages, and implications of reductionism in AI-driven math education, with 12 statements per dimension for a total of 108 statements. These statements were crafted to capture varying degrees of perception and were formatted into a Likert-type survey instrument.
Phase 3: Content Validation
To ensure the content validity of the developed instrument, it was reviewed by three experts from two different universities in the fields of mathematics education, psychology, educational technology, and philosophy of science. Each expert assessed the statements’ clarity, relevance, and fit with the goals of the study. Subsequently, changes were made to ensure the statements were technically sound and conceptually appropriate.
Phase 4: Pilot Testing
After validation, a group of math teachers who were not involved in the primary study participated in a pilot test of the revised instrument. This phase aimed to assess the reliability and preliminary structure of the instrument. Data from the pilot test were subjected to internal consistency analysis, specifically Cronbach’s alpha, to evaluate the reliability of each construct. Items that did not meet acceptable reliability thresholds were removed.
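For reference, the reliability statistics reported throughout the tables below (Cronbach’s alpha and alpha-if-item-deleted) can be computed as in the following sketch. This is an illustrative implementation, not the study’s actual analysis script; it assumes responses are held in a pandas DataFrame with one Likert item per column.

```python
# Illustrative computation of Cronbach's alpha and alpha-if-item-deleted,
# assuming `items` is a pandas DataFrame with one Likert item per column.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    # Recompute alpha with each item removed in turn (the tables' last column)
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )
```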
Phase 5: Data Analysis
A larger sample of math teachers (n=59) was given the validated instrument. Descriptive statistics, specifically the mean and standard deviation, as well as the item-total correlation, were computed for each item. Internal consistency reliability was assessed with Cronbach’s alpha; according to Taber (2018), values greater than .70 indicate acceptable reliability. Exploratory Factor Analysis (EFA) employing principal axis factoring with oblimin rotation was used to ascertain the latent structure of the items. The suitability of the dataset for factor analysis was first evaluated using Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. Factors were extracted based on eigenvalues greater than 1.0 and inspection of the scree plot. Item retention was guided by factor loadings greater than .40 and cross-loading thresholds.
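A minimal sketch of this analysis pipeline is shown below for readers who wish to reproduce it. It is illustrative rather than the study’s actual script: it assumes the 59 responses sit in a pandas DataFrame (one column per Likert item, read from a hypothetical responses.csv) and uses the third-party factor_analyzer package.

```python
# Illustrative EFA pipeline: suitability checks, principal axis factoring
# with oblimin rotation, eigenvalue inspection, and a .40 loading cutoff.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

df = pd.read_csv("responses.csv")  # hypothetical file: one column per item

# Suitability of the dataset for factoring
chi2, p = calculate_bartlett_sphericity(df)   # Bartlett's test of sphericity
_, kmo_total = calculate_kmo(df)              # KMO sampling adequacy
print(f"Bartlett: chi2={chi2:.2f}, p={p:.4f}; overall KMO={kmo_total:.2f}")

# Principal axis factoring with oblique (oblimin) rotation
fa = FactorAnalyzer(n_factors=9, rotation="oblimin", method="principal")
fa.fit(df)

# Kaiser criterion: count eigenvalues greater than 1.0 (plus scree inspection)
eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1.0:", int((eigenvalues > 1.0).sum()))

# Retain items whose strongest loading exceeds .40
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
retained = loadings[loadings.abs().max(axis=1) > 0.40]
print(f"{len(retained)} of {df.shape[1]} items retained")
```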
RESULTS AND DISCUSSION
Educators’ Understanding of Reductionism in AI
Table 1. Descriptive Statistics and Internal Consistency for Educators’ Understanding of Reductionism in AI
Item No. | Statement | M | SD | α-if-deleted |
1 | AI often breaks down complex problems into separate steps, reducing the richness of interconnected concepts. | 3.28 | 0.82 | 0.902 |
2 | Reductionism in AI favors procedural fluency over deep mathematical understanding. | 3.55 | 0.71 | 0.898 |
3 | AI tools prioritize speed and efficiency, sometimes at the cost of holistic learning. | 3.42 | 0.78 | 0.896 |
4 | The step-by-step outputs of AI can overlook the intuitive grasp of mathematical ideas. | 3.15 | 0.89 | 0.905 |
5 | The modular presentation of AI solutions may fragment students’ cognitive structure of mathematics. | 3.30 | 0.77 | 0.902 |
6 | AI supports focused learning through compartmentalized steps, which can be complemented with strategies that encourage systems thinking in math. | 3.60 | 0.69 | 0.896 |
7 | Reductionism occurs when mathematical procedures are separated from reasoning and proof. | 3.33 | 0.84 | 0.9 |
8 | AI’s simplification may limit students’ opportunities to explore multiple solution pathways. | 3.47 | 0.75 | 0.898 |
Overall | 3.40 | 0.60 | 0.91 |
Table 1 presents the eight-item subscale measuring educators’ perception of reductionism in AI, with descriptive statistics and internal consistency coefficients. With an overall Cronbach’s alpha of .91, the subscale demonstrated outstanding internal reliability and a high level of internal consistency among the items. On a 4-point Likert scale, with 1 denoting strongly disagree and 4 denoting strongly agree, the overall mean score was 3.40 (SD=0.60). This indicates that educators generally agree that AI breaks down mathematical complexity into simple processes.
At the item level, the strongest support was for item 6, on AI’s ability to facilitate focused learning through compartmentalized steps that can be complemented with strategies encouraging systems thinking in math (M=3.60, SD=0.69). This was followed by item 2, stating that reductionism in AI favors procedural fluency over deep mathematical understanding (M=3.55, SD=0.71). On the other hand, item 4, asserting that the step-by-step outputs of AI can overlook the intuitive grasp of mathematical ideas, received the lowest mean (M=3.15, SD=0.89).
Taken together, the data illustrate a consistent awareness among educators that AI, while efficient in procedural delivery, may inadvertently lead to reductionist outcomes in mathematics instruction. The generally high means and low standard deviations reflect agreement across respondents that AI tools emphasize structured problem-solving at the possible expense of holistic, interconnected, and intuitive learning experiences. This suggests a common awareness of the possible dangers of AI use when it is not counterbalanced by practices that encourage more in-depth logic, evidence, and conceptual discussion.
The results of this subscale support earlier studies that express apprehension regarding the pedagogical consequences of AI-powered math education. AI tools are efficient, but they frequently fall short in fostering intuition, reasoning, or systematic understanding in the absence of planned scaffolding (Holmes et al., 2022; Xu & Ouyang, 2022). The educators’ moderate to strong agreement about the loss of conceptual depth reinforces these findings and confirms that the educational benefits of AI depend on its integration under proper guidance and reflection.
Perceived Benefits of AI in Mathematics Education
Table 2. Descriptive Statistics and Internal Consistency for the Perceived Benefits of AI in Mathematics Education
Item No. | Statement | M | SD | α-if-deleted |
1 | AI can scaffold learning but should not replace foundational skill-building. | 4.20 | 0.65 | 0.873 |
2 | With proper guidance, AI can visualize math concepts without undermining conceptual depth. | 4.10 | 0.60 | 0.874 |
3 | AI’s immediate feedback accelerates learning but must be paired with reflective tasks. | 3.95 | 0.70 | 0.876 |
4 | Teachers should bridge AI-driven steps with meaningful mathematical dialogue. | 3.85 | 0.75 | 0.881 |
5 | AI offers representations that reduce cognitive load but must be contextualized. | 3.90 | 0.68 | 0.877 |
6 | While AI simplifies processes, teachers must emphasize why the process works. | 3.88 | 0.85 | 0.878 |
7 | AI enables individual pacing but must support conceptual mastery, not just correctness. | 3.86 | 0.72 | 0.88 |
8 | The efficiency of AI should not outweigh the value of exploratory and inquiry-based learning. | 3.84 | 0.77 | 0.882 |
9 | AI can enhance problem-solving only if students actively engage with the process, not just the result. | 3.92 | 0.66 | 0.875 |
10 | AI must be used to support not supersede the pedagogical aim of critical understanding. | 4.05 | 0.62 | 0.874 |
Overall | 3.96 | 0.70 | 0.88 |
Table 2 presents the 10-item subscale measuring AI’s benefits in math education with its descriptive statistics. The subscale’s high internal reliability (Cronbach’s α=.88) suggests that responses were consistent across items. The overall mean score of 3.96 (SD=0.70) indicates that teachers generally perceive AI as a helpful tool for teaching math when used carefully and correctly.
Item 1 has the highest mean score (M=4.20, SD=0.65), stating that AI can scaffold learning but should not replace foundational skill-building. This shows broad agreement on the importance of striking a balance between the application of AI and core skill development. High support was also given to item 2, which states that with proper guidance, AI can visualize math concepts without undermining conceptual depth (M=4.10, SD=0.60). The remaining items, with mean scores ranging from 3.84 to 4.05, indicate moderately strong agreement regarding the advantages of AI in promoting mathematical comprehension.
Scrutiny of the data shows that teachers are aware of AI’s potential to improve math instruction by providing scaffolding, visualization, and real-time feedback. At the same time, they recognize that maintaining conceptual complexity and critical thinking depends on intentional teaching methods; their insistence that AI must not replace basic knowledge demonstrates educators’ dedication to safeguarding the integrity of the mathematical learning process.
Parallel with these results, themes such as improved assessment and personalized learning through AI-powered math pedagogies, and AI-driven adaptive learning systems that greatly enhanced students’ mathematical proficiency and engagement, demonstrate AI’s capacity to facilitate personalized learning (Luzano, 2024; Dabingaya, 2024).
Perceived Drawbacks of AI in Mathematics Education
The perceived disadvantages of AI in mathematics education through reductionism’s lens are represented by six (6) subscales: Conceptual Dilution, Over-Reliance on Technology, Fragmentation of Learning, Decreased Metacognitive Engagement, Loss of Mathematical Rigor, and Shallow Learning Outcomes. The mean, standard deviation, and Cronbach’s alpha-if-deleted were computed for each item; across the drawback items, alpha-if-deleted values ranged from .897 to .918, supporting the cohesiveness of the construct. Notably, no item produced a higher alpha-if-deleted than the overall alpha (.913), suggesting that each item contributed meaningfully to the scale’s reliability.
Table 3.1. Conceptual Dilution
Item No. | Statement | M | SD | α-if-deleted |
1 | AI gives steps, but learners don’t bother to understand why the steps work. | 4.40 | 0.56 | 0.9 |
2 | Students skip the foundational principles and focus only on results. | 4.29 | 0.58 | 0.9 |
3 | I ask students to explain, and they just repeat what AI gave. | 4.35 | 0.55 | 0.9 |
4 | Students don’t grasp the formula derivation anymore. | 4.24 | 0.63 | 0.89 |
5 | When solving word problems, students can’t explain their solutions logically. | 4.30 | 0.51 | 0.9 |
6 | Conceptual errors are common because learners don’t build solid foundations. | 4.33 | 0.49 | 0.9 |
7 | I notice confusion when problems are slightly modified from what AI shows. | 4.34 | 0.52 | 0.9 |
8 | Students can’t generalize concepts because they rely on specific AI examples. | 4.26 | 0.60 | 0.9 |
9 | Even basic mathematical properties are forgotten. | 4.27 | 0.56 | 0.89 |
Overall | 4.31 | 0.56 | 0.90 |
The Conceptual Dilution subscale exhibited excellent internal consistency, with a Cronbach’s alpha of 0.90. The items contribute substantially to the overall reliability of the scale, as indicated by the alpha-if-deleted values, which ranged from 0.89 to 0.90.
Educators strongly agree that students’ mathematical comprehension suffers from conceptual dilution when they use AI tools, as shown by the overall mean score of 4.31 (SD=0.56). The high mean reflects a pervasive concern about the superficial engagement with mathematical concepts that AI assistance facilitates.
Item 1 received the highest mean score (M=4.40, SD=0.56), highlighting educators’ observation that students often accept AI-generated solutions without seeking the underlying conceptual understanding. Similarly, the high mean score of item 3 (M=4.35, SD=0.55) implies that students may be unable to explain their thinking beyond AI outputs. Items 6 (M=4.33, SD=0.49) and 7 (M=4.34, SD=0.52) likewise suggest that reliance on AI may lead to conceptual errors and confusion when problems are changed.
Such findings are consistent with the fear that students who rely too much on AI tools may passively accept the answers, which prevents them from developing higher-order thinking skills (Pepin et al., 2023).
Table 3.2. Over-Reliance on Technology
Item | Statement | M | SD | α-if-deleted |
1 | Before even trying, they ask to use their application. | 4.3 | 0.6 | 0.87 |
2 | It’s as if AI became the tutor instead of me. | 4.21 | 0.55 | 0.87 |
3 | Some students panic when they lose internet access. | 4.23 | 0.58 | 0.86 |
4 | Students don’t verify, they just accept. | 4.28 | 0.51 | 0.87 |
5 | Even with errors, students follow AI’s answer blindly. | 4.25 | 0.55 | 0.87 |
6 | Students are passive during lessons but active when AI is involved. | 4.22 | 0.63 | 0.87 |
7 | Students no longer attempt mental math. | 4.15 | 0.57 | 0.88 |
8 | Students often say, ‘Sir, I’ll just ask ChatGPT.’ | 4.2 | 0.6 | 0.88 |
Overall | 4.23 | 0.57 | 0.87 |
The Cronbach’s alpha of 0.87 showed outstanding internal consistency among the items of the Over-Reliance on Technology subscale. Likewise, the overall mean score of 4.23 (SD=0.57) implies that educators strongly agree that students rely heavily on AI technology for learning. This indicates widespread worry that students’ reliance on these tools may come at the price of their ability to think critically and acquire fundamental skills.
Item 1, which had the highest mean score of 4.30 (SD=0.60), emphasizes that students frequently turn to applications rather than trying to work out solutions to problems on their own. Correspondingly, item 4, with a mean of 4.28 (SD=0.51), indicates that students may accept AI-generated answers without critical evaluation. Items 5 (M=4.25, SD=0.55) and 3 (M=4.23, SD=0.58) also suggest that reliance on AI contributes to blind acceptance of information and to anxiety when technology is unavailable.
According to the study by Wecks et al. (2024), students who used generative AI tools performed worse on exams than those who did not, implying that deep learning may be hampered by over-reliance on AI tools. Additionally, students who frequently accept AI-generated solutions without critically reviewing and analyzing them fall into misconceptions and errors (Krupp et al., 2023).
Table 3.3. Fragmentation of Learning
Item | Statement | M | SD | α-if-deleted |
1 | They don’t see how one part of the solution connects to the next. | 4.14 | 0.58 | 0.85 |
2 | AI outputs are step-by-step, but not logically cohesive. | 4.19 | 0.62 | 0.85 |
3 | Learners focus on procedures, not patterns. | 4.12 | 0.67 | 0.85 |
4 | They can’t link new topics with prior knowledge. | 4.1 | 0.6 | 0.85 |
5 | AI isolates operations without context. | 4.16 | 0.63 | 0.86 |
6 | They struggle with multi-step word problems. | 4.25 | 0.54 | 0.85 |
7 | Their thinking is linear, not flexible. | 4.2 | 0.55 | 0.86 |
8 | They think math is mechanical, not connected. | 4.17 | 0.59 | 0.86 |
9 | It weakens integrative learning. | 4.28 | 0.49 | 0.85 |
Overall | 4.18 | 0.59 | 0.85 |
The Fragmentation of Learning subscale demonstrated high internal consistency, with a Cronbach’s alpha of 0.85. Each item consistently contributed to the measure of learning fragmentation brought about by AI use in mathematics education, as demonstrated by α-if-deleted values ranging from 0.85 to 0.86.
Additionally, the overall mean score of 4.18 (SD=0.59) shows strong agreement among educators that the integration of AI in math instruction contributes to a fragmented learning experience. Teachers largely perceive that students are unable to synthesize ideas or see the interconnectedness of concepts, a result that reinforces the narrative of compartmentalized understanding.
The highest-rated item was item 9, with a mean of 4.28 (SD=0.49), reflecting a broad concern that AI contributes to disjointed learning structures. Correspondingly, item 6 had a high mean of 4.25 (SD=0.54), suggesting that students have difficulty managing tasks that require integration and sequential reasoning. Other highly rated items, specifically items 2, 5, and 8, indicate that while AI provides linear, stepwise support, it fails to promote a holistic grasp of mathematical processes and structures.
In alignment with these observations, Rane (2023) found that while AI applications enhance access to problem-solving tools, they often reinforce a surface-level understanding of mathematical ideas, making it harder for learners to engage in metacognitive reflection or see the big picture in multi-topic assessments. These findings affirm that while AI has practical value, its use without pedagogical context can fracture students’ cognitive structures and stunt integrative learning.
Table 3.4. Decreased Metacognitive Engagement
Item | Statement | M | SD | α-if-deleted |
1 | AI becomes the checker, not their brain. | 4.36 | 0.5 | 0.89 |
2 | I rarely see students correcting their own mistakes anymore. | 4.29 | 0.48 | 0.9 |
3 | Students wait for AI instead of reviewing their own logic. | 4.33 | 0.47 | 0.9 |
4 | Students trust AI more than their own thinking. | 4.42 | 0.5 | 0.89 |
5 | I can’t teach metacognition when AI’s always doing it. | 4.32 | 0.52 | 0.89 |
6 | Students’ confidence is artificially based on tech, not thought. | 4.38 | 0.48 | 0.89 |
7 | Students don’t plan or monitor their progress. | 4.34 | 0.55 | 0.89 |
8 | I ask ‘how did you know?’ and they say, ‘because ChatGPT said so.’ | 4.41 | 0.46 | 0.89 |
9 | Reflection is no longer part of their math habits. | 4.33 | 0.49 | 0.9 |
Overall | 4.35 | 0.49 | 0.89 |
The Decreased Metacognitive Engagement subscale demonstrated excellent reliability, with a Cronbach’s alpha of .89 and α-if-deleted values between .89 and .90, indicating that each item meaningfully contributes to scale consistency. In addition, the overall mean of 4.35 (SD=0.49) signifies strong educator agreement that AI use diminishes students’ active self-monitoring and reflective behaviors.
Conceptually, these results suggest that AI’s automation of evaluative functions relegates students to passive recipients of solutions rather than active thinkers. The high mean on items such as “students trust AI more than their own thinking” (M=4.42, SD=0.50; α-if-deleted=0.89) reflects educators’ perception that AI tools supplant critical self-evaluation, a core component of metacognitive regulation.
These findings align with the dual-process model of metacognition, wherein monitoring and control are essential for self-regulated learning: when AI assumes the monitoring role, learners’ metacognitive control diminishes (Thi-Nga et al., 2024). Furthermore, Li’s (2024) work on metacognitive awareness underscores that externalizing evaluation to AI can erode students’ metacognitive knowledge and skills, thereby undermining long-term learning.
Table 3.5. Loss of Mathematical Rigor
Item | Statement | M | SD | α-if-deleted |
1 | Math requires patience, but AI offers instant gratification. | 4.18 | 0.58 | 0.85 |
2 | Students skip steps if AI doesn’t provide them. | 4.08 | 0.6 | 0.87 |
3 | Students rarely show solutions, just the final answer. | 4.06 | 0.54 | 0.86 |
4 | There’s little effort to understand symbols and structure. | 4.07 | 0.61 | 0.86 |
5 | Their academic stamina has weakened. | 4.18 | 0.58 | 0.85 |
Overall | 4.11 | 0.58 | 0.86 |
The Loss of Mathematical Rigor subscale demonstrated strong internal consistency, as reflected by its Cronbach’s alpha of .86, with α-if-deleted values between .85 and .87. This confirms reliable measurement of educators’ concerns about diminished rigor when AI tools are overused. The educators’ high agreement, indicated by the overall mean of 4.11 (SD=0.58), implies that AI’s instant feedback and answer-giving threaten the patient persistence and symbolic fluency central to rigorous mathematics learning.
The findings further suggest that AI affordances, specifically swift solutions and automated steps, undermine students’ engagement with disciplinary mathematical practices such as meticulous symbolic manipulation and step-wise proof construction. The high means on items 1 to 5, ranging from 4.06 to 4.18, and low standard deviations between 0.54 and 0.61 underscore educators’ view that AI can erode the cognitive stamina required for deep problem solving and formal reasoning. The strong endorsement of these items calls for pedagogical safeguards: teachers may require students to show all work, annotate each step, and engage in symbol-translation exercises that counterbalance AI’s automation, restoring the rigor that automated solution generation tends to erode.
Recent literature corroborates these concerns. Opesemowo and Adewuyi (2024) found that AI tools like Wolfram Alpha improve speed but risk trivializing the learning of mathematical structure unless paired with conceptual scaffolds. Hwang and Tu’s (2021) systematic mapping revealed similar findings, noting that while AI enhances procedural fluency, it often fails to nurture symbol sense and mathematical patience without intentional instructional design. Moreover, the National Council of Teachers of Mathematics (NCTM, 2024) position statement on AI in mathematics teaching emphasizes that “AI should complement and not replace the development of mathematical reasoning and symbolism.”
Table 3.6. Shallow Learning Outcomes
Item | Statement | M | SD | α-if-deleted |
1 | Students forget the topic by the next week. | 4.21 | 0.6 | 0.88 |
2 | AI learning doesn’t stick during quarterly exams. | 4.32 | 0.52 | 0.88 |
3 | Their retention is low, they only remember what they asked AI. | 4.28 | 0.54 | 0.88 |
4 | In long tests, they’re lost without AI. | 4.30 | 0.58 | 0.89 |
5 | Students can’t apply concepts to real-world problems. | 4.35 | 0.53 | 0.88 |
6 | Students don’t know how to adapt methods without AI. | 4.24 | 0.55 | 0.88 |
Overall | 4.28 | 0.55 | 0.88 |
Table 3.6 shows that the six-item Shallow Learning Outcomes subscale exhibited strong internal consistency (α=.88), with α-if-deleted values uniformly between .88 and .89, confirming each item’s contribution to the overall scale. This implies that educators agree that AI-facilitated learning outcomes are typically temporary and superficial.
These findings reflect educators’ belief that AI can provide instant solutions that students forget by the following week (M=4.21, SD=0.60) and that students do not know how to adapt methods without AI assistance (M=4.24, SD=0.55). This points to a gap between genuine conceptual understanding and procedural fluency.
In consonance with this, Abbas, Jam, and Khan (2024) reported that students who rely on AI perform worse on cumulative tests because they lack sufficient retrieval practice.
Future Implications and Recommendations
Table 4. Descriptive Statistics and Internal Consistency of the Future Implications and Recommendations Subscale
Item No. | Statements | M | SD | α if deleted |
1 | Policies must distinguish AI’s role as a supplement, not a replacement, for instruction. | 4.26 | 0.59 | 0.935 |
2 | AI tool development must prioritize depth of learning over speed of computation. | 4.22 | 0.63 | 0.934 |
3 | Guidelines should require AI use to be coupled with reflective or explanatory tasks. | 4.17 | 0.69 | 0.936 |
4 | Teacher training must focus on mitigating reductionist outcomes in AI use. | 4.14 | 0.72 | 0.937 |
5 | Assessment models should evaluate students’ reasoning, not just AI-derived results. | 4.11 | 0.74 | 0.938 |
6 | Schools should promote awareness of reductionism and its potential learning impacts. | 4.13 | 0.66 | 0.937 |
7 | Curricula must encourage students to move beyond surface-level AI outputs. | 4.08 | 0.71 | 0.939 |
8 | Guidelines should prohibit exclusive reliance on AI for high-stakes assessments. | 4.06 | 0.7 | 0.94 |
9 | Institutional policies must emphasize student agency and inquiry in AI use. | 4.09 | 0.68 | 0.938 |
10 | Professional learning communities should analyze AI tool effectiveness. | 4.05 | 0.65 | 0.938 |
11 | Evaluation frameworks should track gains and losses in conceptual depth due to AI. | 4.12 | 0.7 | 0.937 |
Overall | 4.13 | 0.68 | 0.94 |
Table 4 presents the descriptive statistics and internal consistency for the 11-item scale measuring educators’ recommendations regarding the future use and policy framing of AI in mathematics education. With average ratings between 4.05 and 4.26, the educators agreed that it is critical to integrate AI strategically without sacrificing conceptual depth or the integrity of instruction. The standard deviations, which varied from 0.59 to 0.74, reflect moderate variability in the responses.
The overall internal consistency (α=.94) indicates high reliability. Additionally, Cronbach’s alpha-if-deleted values ranged from .934 to .940, validating the construct’s internal coherence and showing that the items taken together represent a common theme about the implications and recommendations of AI-driven shifts in mathematics instruction.
These suggestions theoretically align with Rogers’s Diffusion of Innovations model, which emphasizes the importance of trialability, observability, and compatibility in effective technology adoption. Echoing educators’ calls for reflective guidelines and assessment reforms, guided integration and ongoing formative assessment are essential to AI’s effectiveness (Xu & Ouyang, 2022).
CONCLUSION
This study has successfully developed and validated a multidimensional instrument for measuring reductionism in AI-driven mathematics education, grounded in the lived experiences and expert insights of secondary educators. Through rigorous exploratory factor analysis, it distilled educators’ perceptions into nine reliable subscales: an overarching understanding of reductionism, perceived benefits, six distinct drawbacks (conceptual dilution, over-reliance on technology, fragmentation of learning, decreased metacognitive engagement, loss of mathematical rigor, and shallow learning outcomes), and future implications and recommendations. Each subscale showed strong internal consistency (Cronbach’s alphas ranged from .85 to .94), confirming the construct validity and coherence of the instrument.
Teachers acknowledged AI’s ability to scaffold procedural fluency, provide instant feedback, and visualize difficult ideas; however, they also expressed serious concerns about reductionist inclinations, the degradation of intuitive reasoning, and the fragmentation of holistic understanding. These conflicting viewpoints highlight a key lesson: AI use is pedagogically effective only with careful, teacher-mediated integration.
This instrument provides a diagnostic tool that educators, curriculum designers, and legislators can use to evaluate how AI is currently being implemented, pinpoint areas where learning is being over-reduced, and create focused interventions. Professional development can use subscale profiles to customize training that prioritizes metacognitive scaffolds, conceptual dialogue, and integrative tasks in order to maintain mathematical richness in AI-enhanced classrooms. At the policy level, the results support recommendations that frame AI as an adjunct to human instruction rather than a substitute, incorporating reflective requirements and weighing benefits against possible drawbacks.
In sum, this study offers a strong measurement framework as well as an evidence-based road map for balancing AI’s procedural efficiencies against the preservation of complex, interconnected mathematical thinking. As AI continues to transform pedagogical approaches, such a framework helps ensure that technology enhances rather than erodes the integrity of mathematics education.
RECOMMENDATION
Three key recommendations are distilled from the validated instrument’s findings, each anchored in best-practice frameworks and recent guidance for AI integration in mathematics education:
Schools and districts may mandate professional development that equips educators with strategies for reflective AI use, including scaffolding AI outputs with metacognitive prompts, conceptual questioning, and error-analysis tasks. This is in line with UNESCO’s AI Competency Framework for Teachers, which highlights the necessity for educators to acquire lifelong learning skills and the ability to critically evaluate AI solutions.
To confirm that the items measure reductionism across subpopulations, future researchers may use multi-group confirmatory factor analysis to confirm configural, metric, and scalar invariance.
Future researchers might add focus groups and cognitive interviews to better understand the motivations behind the statements. Mixed-methods validation may enhance content validity and offer context for interpreting scale scores when items display unexpected response patterns.
ACKNOWLEDGEMENT
The researcher would like to express deepest gratitude to the Ifugao mathematics teachers, both from private and public schools, who generously shared their time and insights as respondents to this study. Your candid perspectives on AI-driven mathematics instruction made this research possible and meaningful.
Sincere recognition is also extended to Dr. Lany Dullas (NVSU), Dr. Deo Indunan (IFSU), and Dr. Donato Abaya (IFSU) for their meticulous validation of the research instrument, which significantly enhanced its rigor and clarity.
Appreciation is also extended to the College of Arts and Sciences at Nueva Vizcaya State University (Bayombong Campus) for its institutional support and resources. Special thanks are due to Professor Julius S. Valderama, whose expert guidance and encouragement were instrumental throughout the research process.
Conflict of Interest
The authors declare that they have no conflicts of interest in relation to this research.
REFERENCES
- Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10.
- Alsharidah, M., & Mokhtar, A. (2024). Teachers’ perceptions towards using artificial intelligence in mathematics education. Revista Amazonia Investiga, 13, 43-60. https://doi.org/10.34069/AI/2024.84.12.3
- bin Mohamed, M. Z., Hidayat, R., binti Suhaizi, N. N., bin Mahmud, M. K. H., & binti Baharuddin, S. N. (2022). Artificial intelligence in mathematics education: A systematic literature review. International Electronic Journal of Mathematics Education, 17(3), em0694.
- Chen, X., Zou, D., Xie, H., Cheng, G., & Liu, C. (2022). Two decades of artificial intelligence in education. Educational Technology & Society, 25(1), 28-47.
- Dabingaya, M. (2022). Analyzing the effectiveness of AI-powered adaptive learning platforms in mathematics education. Interdisciplinary Journal Papier Human Review, 3(1), 1-7.
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
- DeVellis, R.F. (2016) Scale Development: Theory and Applications. Vol. 26, Sage, Thousand Oaks.
- Edutopia. (2025, January 28). How AI Vaporizes Long‑Term Learning. Retrieved from https://www.edutopia.org/video/how-ai-vaporizes-long-term-learning
- Hodge-Zickerman, A., & York, C. S. (2024). Humanizing Mathematics in Online Learning Environments. In Incorporating the Human Element in Online Teaching and Learning (pp. 114-136). IGI Global.
- Holmes, W. (2020). Artificial intelligence in education. In Encyclopedia of education and information technologies (pp. 88-103). Cham: Springer International Publishing.
- Holstein, K., McLaren, B. M., & Aleven, V. (2018). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. In Artificial Intelligence in Education: 19th International Conference, AIED 2018, London, UK, June 27–30, 2018, Proceedings, Part I 19 (pp. 154-168). Springer International Publishing.
- Hwang, G. J., & Tu, Y. F. (2021). Roles and research trends of artificial intelligence in mathematics education: A bibliometric mapping analysis and systematic review. Mathematics, 9(6), 584.
- Krupp, L., Steinert, S., Kiefer-Emmanouilidis, M., Avila, K. E., Lukowicz, P., Kuhn, J., … & Karolus, J. (2024). Unreflected acceptance–investigating the negative consequences of chatgpt-assisted problem solving in physics education. In HHAI 2024: Hybrid Human AI Systems for the Social Good (pp. 199-212). IOS Press.
- Li, W. (2024). Understanding Learners and the Interplay Between Metacognitive Judgements of Learning and AI-Generated Explanations (Doctoral dissertation).
- Luzano, J. F. P. (2024). AI-powered pedagogies in mathematics education: A systematic review.
- National Council of Teachers of Mathematics. (2024). Artificial intelligence and mathematics teaching [Position statement]. https://www.nctm.org/standards-and-positions/Position-Statements/Artificial-Intelligence-and-Mathematics-Teaching/
- Opesemowo, O. A. G., & Adewuyi, H. O. (2024). A systematic review of artificial intelligence in mathematics education: The emergence of 4IR. Eurasia Journal of Mathematics, Science and Technology Education, 20(7), em2478.
- Opesemowo, O. A., & Ndlovu, M. (2024). Artificial intelligence in mathematics education: The good, the bad, and the ugly. Journal of Pedagogical Research, 8(3), 333-346.
- Pepin, B., Buchholtz, N., & Salinas-Hernández, U. (2025). A scoping survey of ChatGPT in mathematics education. Digital Experiences in Mathematics Education, 1-33.
- Rane, N. (2023). Enhancing mathematical capabilities through ChatGPT and similar generative artificial intelligence: Roles and challenges in solving mathematical problems. Available at SSRN 4603237.
- Rane, N., Choudhary, S., & Rane, J. (2023). Education 4.0 and 5.0: Integrating artificial intelligence (AI) for personalized and adaptive learning.
- Rescher, N. (2020). Complexity: A philosophical overview. Routledge.
- Sumakul, D. T. Y., Hamied, F. A., & Sukyadi, D. (2022). Artificial intelligence in EFL classrooms: Friend or foe? LEARN Journal: Language Education and Acquisition Research Network, 15(1), 232-256.
- Thi-Nga, H., Thi-Binh, V., & Nguyen, T. T. (2024). Metacognition in mathematics education: From academic chronicle to future research scenario–A bibliometric analysis with the Scopus database. Eurasia Journal of Mathematics, Science and Technology Education, 20(4), em2427.
- Tulli, S. K. C. (2022). An evaluation of AI in the classroom. International Journal of Acta Informatica, 1(1), 41-66.
- Wecks, J. O., Voshaar, J., Plate, B. J., & Zimmermann, J. (2024). Generative AI Usage and Exam Performance. arXiv preprint arXiv:2404.19699.
- Xu, W., & Ouyang, F. (2022). The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 9(1), 59.