Investigating the Impact of AI Tools on Students’ Digital Literacy and ICT Skill Proficiency
Nur Syaahidah Mohamad*1, Yau’mee Hayati Hj Mohamed Yusof1, Noor Hafiza Mohammed1 and Azira Ab Aziz2
1Faculty of Business and Management, Universiti Teknologi MARA, Dungun Campus, Malaysia
2College of Business Administration, University of Ha’il, Ha’il, Saudi Arabia
DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000416
Received: 10 September 2025; Accepted: 15 September 2025; Published: 14 October 2025
ABSTRACT
In the digital age, students must develop strong Information and Communication Technology (ICT) skills and digital literacy to thrive academically and professionally. This study examines the impact of artificial intelligence (AI) tools on enhancing these competencies, employing the Technology Acceptance Model (TAM) and Task-Technology Fit (TTF) frameworks. Through a mixed-methods approach, data from 129 students reveal that perceived usefulness significantly influences both digital literacy and ICT skills, while perceived ease of use and task-technology fit play secondary roles. Qualitative findings highlight AI’s benefits, such as instant feedback and personalized learning, alongside challenges like technical issues and training gaps. The study underscores the importance of structured AI training programs in education. These insights contribute to the literature on AI adoption in education and inform institutional policies for effective AI integration. Future research should explore longitudinal effects and diverse educational contexts.
Keywords: Technology Acceptance Model (TAM), Task-Technology Fit (TTF), Artificial Intelligence (AI)
INTRODUCTION
The rapid advancement of digital technologies has transformed education systems globally, making ICT skills and digital literacy essential competencies for students in the 21st century. However, traditional teaching methods may not always fully engage students in the effective acquisition of these skills (Holmes, Bialik, & Fadel, 2019). Artificial intelligence (AI) tools, with their adaptive, personalized, and interactive capabilities, present a promising alternative. Recent studies highlight AI’s role in enhancing digital literacy by providing real-time feedback, fostering critical thinking, and improving problem-solving abilities (Shin et al., 2024; Scarci et al., 2024). Furthermore, AI literacy is increasingly recognized as a key component of digital education, requiring balanced integration strategies (Barba et al., 2024). AI adoption in higher education and its connection to digital literacy development have also been examined, emphasizing the need for strategic implementation (Börekci & Çelik, 2024). This research examines the impact of AI tools on students’ digital literacy and ICT proficiency, employing the Technology Acceptance Model (TAM) and Task-Technology Fit (TTF) frameworks to assess their effectiveness and adoption (Venkatesh & Bala, 2008; Goodhue & Thompson, 1995).
LITERATURE REVIEW
Digital Literacy and ICT Skills in Education
Digital literacy involves the ability to find, evaluate, create, and communicate information using digital technologies. ICT skills refer to the technical competencies required to effectively use digital tools and platforms. Research indicates that students with strong digital literacy and ICT skills are better equipped for academic success and future employment (Martin & Grudziecki, 2006). Despite their importance, many students face challenges in acquiring these skills, especially in under-resourced educational settings (UNESCO, 2018).
Artificial Intelligence in Education
AI in education encompasses tools like adaptive learning systems, virtual tutors, and AI-powered software for skill training. These tools personalize learning experiences, provide real-time feedback, and support differentiated instruction (Luckin et al., 2016). Studies have shown that AI tools enhance engagement and learning outcomes, particularly in technical skill acquisition (Holmes et al., 2019).
The Role of TAM and TTF in Technology Adoption
The Technology Acceptance Model (TAM), which focuses on perceived usefulness (PU) and perceived ease of use (PEOU), is widely used to understand technology adoption (Davis, 1989). However, it does not account for the alignment between technology and specific task requirements, which is critical in contexts like education. The Task-Technology Fit (TTF) model complements TAM by examining how well technology fits task requirements, influencing adoption success (Goodhue & Thompson, 1995). Recent studies (e.g., Zhao et al., 2021; Chen et al., 2020) have shown that integrating TAM and TTF provides a more comprehensive framework for understanding technology adoption, particularly in education, as it addresses both user perceptions and the task-specific fit of the technology, enhancing the explanatory power of adoption models.
Gaps in Existing Literature
While there is substantial research on AI and digital literacy separately, few studies explore the direct impact of AI tools on students’ ICT skills and digital literacy, and the interplay between TAM and TTF in this context remains underexplored. TAM and TTF offer complementary lenses for examining how AI integration shapes these competencies, yet their combined application to AI tools in education is not well documented. The following sections draw insights from recent literature on the topic.
Role of Digital Literacy in AI Adoption
Digital literacy significantly influences students’ acceptance and use of AI tools. Higher digital literacy levels correlate with a more positive perception of AI’s usefulness and ease of use, which in turn affects students’ intention to adopt AI technologies (Börekci & Çelik, 2024). Educators play a crucial role in developing students’ digital skills, which are essential for effectively integrating AI into the learning process. This preparation is vital for enhancing students’ problem-solving and content application skills (Scarci et al., 2024).
AI Literacy and Educational Strategies
AI literacy is an evolving concept that includes awareness, ability, and understanding of AI’s social impact. Strategies to support AI literacy in higher education focus on balancing students’ capabilities with their learning environments (Barba et al., 2024). In K-12 education, AI literacy is promoted through age-appropriate tools and constructivist methodologies, which have shown positive cognitive, affective, and behavioral learning outcomes (Yim & Su, 2024).
Generative AI and Digital Literacy Development
Generative AI, such as ChatGPT, can be used to develop digital literacy in mathematics education. This involves teaching phases such as AI utilization, analysis, creation, and critical evaluation, which enhance digital problem-solving and concept-formation skills (Shin et al., 2024). While the research highlights the importance of digital literacy in AI adoption, it also suggests that the interplay between TAM and TTF in this context requires further exploration. Understanding how these models interact could provide deeper insights into optimizing AI tool integration in educational settings.
RESEARCH METHODOLOGY
Research Design
A mixed-methods approach was adopted to provide a comprehensive understanding of the research problem. The quantitative phase involved structured surveys measuring students’ digital literacy, ICT skills, and perceptions of AI tools in education. The survey instrument was developed from established constructs in the Technology Acceptance Model (TAM) and Task-Technology Fit (TTF) frameworks; its items assessed perceived usefulness (PU), perceived ease of use (PEOU), and task-technology fit (TTF), alongside indicators of digital literacy and ICT skill proficiency. A five-point Likert scale was used to ensure consistent responses and facilitate statistical analysis. The population for this study was 8,756 students, and the target sample size, drawn by stratified random sampling, was 367. Stratified random sampling was chosen because UiTM Cawangan Terengganu comprises three campuses: Bukit Besi, Kuala Terengganu, and Dungun. Questionnaires were distributed to students online through Google Forms; however, only 129 students participated. These respondents represented diverse academic backgrounds, which supports the transferability of the findings. Data were analyzed using regression analysis and Structural Equation Modeling (SEM) to explore relationships between constructs and assess the impact of AI tool usage on student competencies.
Fig. 1 Conceptual Model
Independent Variables (IVs)
- AI Tool Usage
- Perceived Usefulness (PU)
- Perceived Ease of Use (PEOU)
- Task-Technology Fit (TTF)
Dependent Variables (DVs)
- Digital Literacy Skills (DLS)
- ICT Skill Proficiency (ICT)
Hypothesized Relationships
- AI Tool Usage → DLS
- AI Tool Usage → ICT
- PU → DLS
- PU → ICT
- PEOU → PU
- PEOU → ICT
- TTF → PU
- TTF → ICT
The qualitative phase of this research utilized open-ended questions distributed through Google Forms to gather in-depth insights into students’ experiences with AI tools in education. This method allowed participants to express their thoughts freely, providing nuanced perspectives on the benefits, challenges, and potential improvements of AI integration in learning.
Questionnaire Design: The questionnaire was carefully structured to capture diverse aspects of students’ interactions with AI tools. The open-ended questions were designed to explore:
- Benefits of AI Tools: How AI has enhanced their learning experience, improved their digital literacy, and supported ICT skill development.
- Challenges Faced: Technical difficulties, accessibility issues, learning curve, and limitations in AI-generated content.
- Perceived Effectiveness: The extent to which AI tools align with their academic tasks and learning styles.
- Suggestions for Improvement: Features they wish to see in AI tools to better support their educational needs.
Data Collection Process: A Google Forms questionnaire was distributed to university students from various disciplines who had used AI tools in their learning process. The form combined closed-ended items for the quantitative strand with the open-ended questions described above. To capture a diverse range of perspectives, the form remained open for responses for a set period, and participants were assured of confidentiality and given sufficient time to respond. No interviews were conducted; the Google Forms survey was the sole data collection instrument.
Data Analysis: Thematic analysis was employed to analyze responses systematically. Responses were coded into key themes such as “AI-enhanced learning efficiency,” “technical barriers,” “usability concerns,” and “recommendations for AI integration”. Recurring patterns and unique insights were highlighted to provide a holistic understanding of student experiences.
This qualitative approach complemented the quantitative findings, offering deeper insights into AI adoption trends and the practical challenges faced by students in using AI tools for digital literacy and ICT skill development.
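Thematic coding of written responses like these is often assisted with a small script before human review. The sketch below is a minimal illustration in Python, assuming the responses sit in a pandas Series; the theme labels are the study’s own, while the keyword lists and sample responses are hypothetical placeholders that a coder would refine against the real data.

```python
# Keyword-assisted first pass at thematic coding of open-ended responses.
# Theme labels follow the study; keyword lists are hypothetical starters.
import pandas as pd

THEMES = {
    "AI-enhanced learning efficiency": ["save time", "faster", "efficient"],
    "technical barriers": ["internet", "connection", "error", "premium"],
    "usability concerns": ["confusing", "difficult", "hard to use"],
    "recommendations for AI integration": ["should", "suggest", "improve"],
}

def code_response(text: str) -> list[str]:
    """Return every theme whose keywords appear in a response."""
    lowered = text.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in lowered for kw in keywords)]

responses = pd.Series([  # hypothetical example responses
    "ChatGPT helps me save time when searching for study materials",
    "The interface is confusing and my internet connection often fails",
])
print(responses.apply(code_response))
```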
Sampling
The study targeted university students across various disciplines who have used AI tools for learning. A stratified random sampling method ensured diversity in terms of age, gender, and academic background, as UiTM Cawangan Terengganu consists of three campuses: Bukit Besi, Kuala Terengganu, and Dungun. The intended sample size, based on Krejcie and Morgan’s table, was 367 students from the total population of 8,756.
Out of the 367 students targeted, only 129 completed responses were obtained. Although the final sample size was smaller than anticipated, it is still considered sufficient for regression and SEM analysis, as methodological guidelines (e.g., Hair et al., 2019) recommend a minimum of 100–150 cases for reliable structural equation modeling. The reduced participation was primarily due to voluntary response rates and varying levels of student engagement with online surveys. Nevertheless, the achieved sample maintains representativeness across the three campuses, ensuring meaningful interpretation of the findings.
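As a quick sanity check on the reported target, the Krejcie and Morgan (1970) sample-size formula can be computed directly. The sketch below (plain Python, no dependencies) yields approximately 368 for a population of 8,756, in line with the study’s tabled figure of 367.

```python
# Krejcie & Morgan (1970) sample-size formula:
#   s = X^2 * N * P(1-P) / (d^2 * (N-1) + X^2 * P(1-P))
# with chi-square X^2 = 3.841 (df=1, 95% confidence), P = 0.5, margin d = 0.05.
def krejcie_morgan(N: int, chi2: float = 3.841, P: float = 0.5,
                   d: float = 0.05) -> float:
    return (chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P))

# A population of 8,756 gives ~368; the small difference from the reported
# 367 reflects rounding in the published table.
print(round(krejcie_morgan(8756)))
```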
Fig. 2 Cronbach’s Alpha for Pilot Study
The reliability analysis conducted for the pilot study yielded a Cronbach’s Alpha value of 0.979, indicating excellent internal consistency for the 25 items in the instrument. The Cronbach’s Alpha based on standardized items was likewise 0.979, confirming the robustness of the scale. This result suggests that the instrument is highly reliable for measuring the constructs under study.
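For readers reproducing this step, Cronbach’s alpha follows directly from the item-score matrix. The sketch below implements the standard formula and runs it on synthetic stand-in data (the actual pilot responses are not reproduced here).

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic stand-in for the 25-item pilot data (n=129): one shared trait
# plus modest noise, which yields a very high alpha as in the pilot.
rng = np.random.default_rng(0)
latent = rng.normal(size=(129, 1))
data = latent + 0.3 * rng.normal(size=(129, 25))
print(f"alpha = {cronbach_alpha(data):.3f}")
```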
Data Collection Instruments
To ensure comprehensive data collection, a single instrument combining closed-ended survey items and open-ended questions was employed.
Survey: The survey was designed based on the TAM and TTF constructs, incorporating items to assess Perceived Usefulness (PU), Perceived Ease of Use (PEOU), and Task-Technology Fit (TTF). The closed-ended questions used a Likert scale to measure students’ perceptions of AI tools, covering aspects such as the effectiveness of AI in learning, ease of interaction, adaptability to educational tasks, and overall impact on digital literacy and ICT skills. The survey was distributed online, ensuring broad accessibility and participation, and reliability analysis confirmed a high Cronbach’s Alpha value, supporting the instrument’s internal consistency.
Open-Ended Questions: A set of open-ended questions, administered through the same Google Forms instrument rather than interviews, allowed participants to elaborate on their use of AI tools, perceived benefits, encountered challenges, and recommendations for improvement. Thematic analysis was applied to categorize the written responses into key themes, helping to identify patterns in AI adoption behavior, barriers to effective use, and suggestions for future AI integration in education.
This combination of quantitative and qualitative data collection methods ensured a well-rounded understanding of the research problem, capturing both statistical trends and personal experiences. Table I outlines the data collection instruments used to measure students’ digital literacy, ICT skill proficiency, and perceptions of AI tools in education. The table categorizes the variables assessed, describes each variable’s purpose, lists the number of items used, and provides specific examples of what was measured.
Table I Data Collection Instruments
| Variable | Description | Number of Items | List of Items |
|---|---|---|---|
| Demographic Information | Background details of respondents | 3 | Programme, Gender, AI Learning Experience |
| AI Tool Usage | AI tools students have used | 22 | ChatGPT, Copilot (Bing Chat), Perplexity AI, Quillbot, Canva, Grammarly, SCI Space, Explain Paper, Tavily AI, Consensus, Elicit, GitHub Copilot, Julius AI, Einblick, Heuristica, Monkeylearn, Otter.ai, Fireflies.ai, Scribe, ClassPoint AI, GradeScope, PowerPoint Speaker Coach |
| AI Tool Usage Frequency | How frequently students use AI learning platforms | 22 | Same as AI Tool Usage, with frequency scale (Once per month, 3-5 times per month, Not applicable) |
| Digital Literacy Assessment | Measures students’ digital literacy using a Likert scale | 5 | Information searching skills, Evaluating online credibility, Proficiency in digital tools, Awareness of online security, Troubleshooting technical issues |
| ICT Skill Proficiency | Measures students’ proficiency in ICT tools | 5 | Word processing, Spreadsheet editing, Presentation creation, Digital collaboration, Basic coding knowledge |
| Perceived Usefulness (PU) | How useful students find AI tools for learning | 5 | Enhances learning, Helps complete tasks, Improves academic work, Develops ICT skills, Saves time |
| Perceived Ease of Use (PEOU) | How easy students find AI tools to use | 5 | Easy to use, Easy to learn new features, User-friendly interface, Minimal technical assistance required, Easy to integrate into learning |
| Task-Technology Fit (TTF) | Whether AI tools fit students’ learning tasks | 5 | Aligns with academic tasks, Suits ICT skill development, Matches learning needs, Effective for digital tasks, Fits learning process |
| Open-Ended Questions | Students’ opinions on AI tool benefits, challenges, and improvements | 3 | Benefits of AI tools, Challenges in using AI tools, Suggestions for improvement |
Data Analysis
The quantitative data collected in this study were analyzed using a combination of statistical techniques. Regression analysis was employed to determine the relationships between the independent variables (perceived usefulness, perceived ease of use, and task-technology fit) and the dependent variables (digital literacy and ICT skill proficiency), identifying the significant predictors of students’ engagement with AI tools. Structural Equation Modeling (SEM) was then used to evaluate the interrelationships among multiple constructs simultaneously; by assessing both direct and indirect effects, SEM provided a more comprehensive picture of AI adoption patterns in education.

For the qualitative data, thematic analysis was applied to extract recurring themes and patterns from the open-ended responses. This process involved coding the data, categorizing themes, and interpreting insights related to students’ experiences with AI tools, uncovering key factors affecting AI adoption, including benefits such as personalized learning and challenges such as technical difficulties. The integration of quantitative and qualitative findings provided a holistic view of AI’s role in digital literacy and ICT skill development, offering actionable recommendations for educators and policymakers.
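As an illustration of the regression step, the sketch below fits an ordinary least squares model with statsmodels; the DataFrame, its column names, and the synthetic outcome are stand-ins for the study’s construct scores, not the actual data.

```python
# OLS regression of digital literacy on PU, PEOU, and TTF (synthetic data;
# in the study, these columns would hold each respondent's construct means).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(129, 3)), columns=["PU", "PEOU", "TTF"])
df["DLS"] = 0.9 * df["PU"] + 0.1 * rng.normal(size=129)  # synthetic outcome

X = sm.add_constant(df[["PU", "PEOU", "TTF"]])
fit = sm.OLS(df["DLS"], X).fit()
print(fit.summary())  # coefficients, t-values, and p-values per predictor
```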
RESULTS AND DISCUSSION
Quantitative Findings
The quantitative analysis provides strong empirical evidence of AI tools’ effectiveness in enhancing students’ digital literacy and ICT skill proficiency. The statistical results indicate a significant relationship between perceived usefulness and both digital literacy and ICT skills, reinforcing the importance of perceived value in educational AI adoption. Structural Equation Modeling (SEM) confirms that students who find AI tools beneficial are more likely to engage with them consistently, leading to improved technical competencies. Perceived ease of use, however, had a weaker direct impact, suggesting that while AI tools may require some initial learning, their long-term benefits outweigh usability challenges. Task-Technology Fit (TTF) also played a secondary role, indicating that AI tools must align well with students’ learning needs to optimize outcomes. These insights highlight the necessity of well-designed AI integration strategies in academic settings.
Measurement Model: The constructs in this study were assessed using multi-item scales to ensure reliability and validity. The indicators for each construct are as follows:
- AI Tool Usage: AI_TOOL_1 to AI_TOOL_22
- Frequency of AI Tool Usage: FRE_AI_1 to FRE_AI_22
- Digital Literacy Skills (DLS): DLS_1 to DLS_5
- ICT Skill Proficiency (ICT): ICT_1 to ICT_5
- Technology Acceptance Factors:
- Perceived Usefulness (PU): PU_1 to PU_5
- Perceived Ease of Use (PEOU): PEOU_1 to PEOU_5
- Task-Technology Fit (TTF): TTF_1 to TTF_5
Structural Model: The structural model tests hypotheses linking AI tool usage and technology acceptance factors to digital literacy and ICT skills.
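One way to make this specification concrete is lavaan-style model syntax. The sketch below uses semopy, a covariance-based SEM library, as a stand-in for the PLS software actually used; the synthetic DataFrame merely demonstrates the required item-level data layout, and estimates from it are not meaningful.

```python
# Measurement model (indicators per construct, as listed above; TTF_5 is
# omitted to match the four retained TTF indicators in Table II) plus the
# structural paths tested in Table IV, fit with semopy on synthetic data.
import numpy as np
import pandas as pd
import semopy

MODEL_DESC = """
PU   =~ PU_1 + PU_2 + PU_3 + PU_4 + PU_5
PEOU =~ PEOU_1 + PEOU_2 + PEOU_3 + PEOU_4 + PEOU_5
TTF  =~ TTF_1 + TTF_2 + TTF_3 + TTF_4
DLS  =~ DLS_1 + DLS_2 + DLS_3 + DLS_4 + DLS_5
ICT  =~ ICT_1 + ICT_2 + ICT_3 + ICT_4 + ICT_5
PU  ~ PEOU + TTF
DLS ~ PU
ICT ~ PU + PEOU + TTF
"""

cols = ([f"PU_{i}" for i in range(1, 6)] + [f"PEOU_{i}" for i in range(1, 6)]
        + [f"TTF_{i}" for i in range(1, 5)] + [f"DLS_{i}" for i in range(1, 6)]
        + [f"ICT_{i}" for i in range(1, 6)])
rng = np.random.default_rng(2)
latent = rng.normal(size=(129, 1))  # placeholder common factor
df = pd.DataFrame(latent + 0.5 * rng.normal(size=(129, len(cols))),
                  columns=cols)

model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```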
Fig. 3 PLS Conceptual Framework
Table II Reliability Analysis and Convergent Validity
| Construct | Item | Loading | CR | AVE | Cronbach’s Alpha |
|---|---|---|---|---|---|
| Digital Literacy | DLS_1 | 0.999 | 0.999 | 0.997 | 0.999 |
| | DLS_2 | 0.999 | | | |
| | DLS_3 | 0.998 | | | |
| | DLS_4 | 0.999 | | | |
| | DLS_5 | 0.998 | | | |
| ICT Skills | ICT_1 | 0.999 | 0.996 | 0.999 | 0.999 |
| | ICT_2 | 0.998 | | | |
| | ICT_3 | 0.999 | | | |
| | ICT_4 | 0.999 | | | |
| | ICT_5 | 0.995 | | | |
| Perceived Ease of Use (PEOU) | PEOU_1 | 1.000 | 1.000 | 0.999 | 1.000 |
| | PEOU_2 | 0.999 | | | |
| | PEOU_3 | 0.999 | | | |
| | PEOU_4 | 0.999 | | | |
| | PEOU_5 | 0.998 | | | |
| Perceived Usefulness | PU_1 | 0.999 | 0.999 | 0.999 | 0.999 |
| | PU_2 | 0.999 | | | |
| | PU_3 | 0.999 | | | |
| | PU_4 | 0.999 | | | |
| | PU_5 | 0.999 | | | |
| Task-Technology Fit (TTF) | TTF_1 | 0.999 | 0.999 | 0.998 | 0.999 |
| | TTF_2 | 0.999 | | | |
| | TTF_3 | 0.999 | | | |
| | TTF_4 | 0.999 | | | |
Table II presents results from the dataset, named Student Digital Literacy and Skills Proficiency (n=129), used to assess the reflective measurement model in Figure 1. The exogenous variables were perceived ease of use (PEOU) with five indicators, perceived usefulness (PU) with five indicators, and task-technology fit (TTF) with four indicators; the endogenous variables were digital literacy with five indicators and ICT skills with five indicators.
Table II also reports the reliability and validity of the measures. Composite reliability (CR) values above 0.70 indicate that the constructs have an adequate level of internal consistency, and the average variance extracted (AVE) values exceed the 0.50 threshold, meaning the items in each construct explain more than 50% of the construct’s variance (Martin & Grudziecki, 2006). Indicator loadings above 0.50 are required for indicator reliability; accordingly, any items with loadings below 0.50 were deleted.
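To make these quantities concrete, composite reliability and AVE follow directly from the standardized loadings. The sketch below reproduces the Digital Literacy row of Table II from its reported loadings.

```python
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# AVE = mean of squared loadings. Loadings are the DLS items from Table II.
import numpy as np

loadings = np.array([0.999, 0.999, 0.998, 0.999, 0.998])  # DLS_1..DLS_5

sum_l = loadings.sum()
error_var = (1 - loadings ** 2).sum()
cr = sum_l ** 2 / (sum_l ** 2 + error_var)
ave = (loadings ** 2).mean()
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # -> CR = 0.999, AVE = 0.997
```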
Table III Discriminant Validity (Fornell-Larcker Criterion)
| | Digital Literacy | ICT Skills | Perceived Ease of Use (PEOU) | Perceived Usefulness | Task-Technology Fit (TTF) |
|---|---|---|---|---|---|
| Digital Literacy | 0.999 | | | | |
| ICT Skills | 0.998 | 0.998 | | | |
| Perceived Ease of Use (PEOU) | 0.997 | 0.997 | 0.999 | | |
| Perceived Usefulness | 0.997 | 0.997 | 0.998 | 0.999 | |
| Task-Technology Fit (TTF) | 0.997 | 0.997 | 0.999 | 0.999 | 0.999 |
Table III applies the Fornell-Larcker decision rule for discriminant validity, which requires the square root of the AVE of each latent variable (shown on the diagonal) to exceed its correlations with every other latent variable.
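The decision rule is mechanical and straightforward to verify in code. The sketch below applies it to the Table III values; note how narrow the margins are when inter-construct correlations approach unity.

```python
# Fornell-Larcker check: the diagonal (sqrt of AVE) should exceed every
# off-diagonal correlation in its row/column. Values are from Table III.
import numpy as np

labels = ["DLS", "ICT", "PEOU", "PU", "TTF"]
m = np.array([  # lower triangle of Table III, mirrored to a full matrix
    [0.999, 0.998, 0.997, 0.997, 0.997],
    [0.998, 0.998, 0.997, 0.997, 0.997],
    [0.997, 0.997, 0.999, 0.998, 0.999],
    [0.997, 0.997, 0.998, 0.999, 0.999],
    [0.997, 0.997, 0.999, 0.999, 0.999],
])
for i, name in enumerate(labels):
    off_diag = np.delete(m[i], i)
    print(f"{name}: sqrt(AVE) = {m[i, i]:.3f}, "
          f"max correlation = {off_diag.max():.3f}")
```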
Table IV Path Coefficients and Hypotheses Testing
| Relationship | Beta | Std. Error | T Value | P Value | Decision |
|---|---|---|---|---|---|
| Perceived Ease of Use (PEOU) -> ICT Skills | 0.180 | 0.168 | 1.074 | 0.283 | Not supported |
| Perceived Ease of Use (PEOU) -> Perceived Usefulness | 0.345 | 0.137 | 2.525 | 0.012 | Supported |
| Perceived Usefulness -> Digital Literacy | 0.997 | 0.177 | 5.626 | 0.000 | Supported |
| Perceived Usefulness -> ICT Skills | 0.523 | 0.145 | 3.603 | 0.000 | Supported |
| Task-Technology Fit (TTF) -> ICT Skills | 0.295 | 0.283 | 1.042 | 0.297 | Not supported |
| Task-Technology Fit (TTF) -> Perceived Usefulness | 0.654 | 0.142 | 4.602 | 0.000 | Supported |
A bootstrapping procedure was applied to test the hypotheses and generate the results for each path relationship in Table IV. Bootstrap subsampling with 1,000 resamples was computed to allow the procedure to estimate the model for each subsample (Hair et al., 2017). Four of the six direct path hypotheses were supported. The path from PU to digital literacy was positive and significant (ß=0.997, p<0.05 at the 95% confidence level), as was the path from PU to ICT skill proficiency (ß=0.523, p<0.05). The path from task-technology fit to ICT skills was rejected because its p-value exceeded 0.05, as was the path from PEOU to ICT skills.
Table V Indirect Path Coefficients and Hypotheses Testing
| Relationship | Beta | Std. Error | T Value | P Value | Decision |
|---|---|---|---|---|---|
| Task-Technology Fit (TTF) -> Perceived Usefulness -> Digital Literacy | 0.652 | 0.189 | 3.455 | 0.001 | Supported |
| Task-Technology Fit (TTF) -> Perceived Usefulness -> ICT Skills | 0.342 | 0.107 | 3.206 | 0.001 | Supported |
| Perceived Ease of Use (PEOU) -> Perceived Usefulness -> Digital Literacy | 0.344 | 0.139 | 2.471 | 0.014 | Supported |
| Perceived Ease of Use (PEOU) -> Perceived Usefulness -> ICT Skills | 0.181 | 0.099 | 1.821 | 0.069 | Not supported |
The indirect path from TTF to digital literacy, mediated by PU, is positive and significant (ß=0.652, p<0.05 at the 95% confidence level), as is the indirect path from TTF to ICT skills through PU (ß=0.342, p<0.05). The indirect effect of PEOU on digital literacy through PU is likewise significant (ß=0.344, p<0.05), whereas the indirect effect of PEOU on ICT skills through PU is not (p=0.069).
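Mechanically, each indirect effect is the product of its two path coefficients, and the bootstrap resamples cases to obtain its sampling distribution. The sketch below illustrates the procedure for the TTF -> PU -> digital literacy path using synthetic data and simple OLS slopes in place of the full PLS model.

```python
# Bootstrap of an indirect effect: resample cases, estimate each leg by OLS,
# and take the product of the slopes (1,000 resamples, matching the study;
# the data here are synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(3)
n = 129
ttf = rng.normal(size=n)
pu = 0.65 * ttf + 0.3 * rng.normal(size=n)   # leg a: TTF -> PU
dls = 1.0 * pu + 0.2 * rng.normal(size=n)    # leg b: PU -> DLS

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

estimates = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    estimates.append(slope(ttf[idx], pu[idx]) * slope(pu[idx], dls[idx]))
estimates = np.array(estimates)
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"indirect effect ~ {estimates.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```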
Table VI Effect Size

| Construct | R² | f² (ICT Skills) | Decision | f² (Digital Literacy) | Decision |
|---|---|---|---|---|---|
| Digital literacy | 0.994 | | | | |
| ICT skills | 0.995 | | | | |
| Perceived usefulness | 0.997 | 0.138 | Medium to large | 171.499 | Large |
| Perceived ease of use (PEOU) | | 0.012 | Small | | |
| Task-technology fit (TTF) | | 0.027 | Small | | |
Table VI presents the coefficient of determination (R²) and the effect size (f²) of the exogenous constructs on the endogenous constructs. The R² of 0.994 indicates that the exogenous variables explain 99.4% of the variance in digital literacy, a substantial explanatory capacity, while the R² of 0.995 indicates that 99.5% of the variance in ICT skills is explained. The f² values show the relative importance of each exogenous construct: following Cohen (1988), 0.138 represents a medium effect, while 0.012 and 0.027 represent small effects. The effect size of perceived usefulness on digital literacy (f²=171.499) is large.
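For readers checking the f² column, Cohen's f² for a single predictor compares the model's R² with and without that predictor: f² = (R²_full − R²_reduced) / (1 − R²_full). The sketch below uses hypothetical intermediate R² values (the study reports only the final f² figures) and shows why f² can far exceed 1 when R² approaches 1, as with the 171.499 entry.

```python
# Cohen's f^2 = (R2_full - R2_reduced) / (1 - R2_full), with thresholds of
# 0.02 / 0.15 / 0.35 for small / medium / large effects (Cohen, 1988).
def f_squared(r2_full: float, r2_reduced: float) -> float:
    return (r2_full - r2_reduced) / (1 - r2_full)

# Hypothetical inputs: when R2_full is near 1, the denominator is tiny,
# so even modest R2 differences produce very large f^2 values.
print(f_squared(0.995, 0.99494))  # ~0.012 -> small (like PEOU on ICT skills)
print(f_squared(0.994, 0.137))    # ~142.8 -> extremely large
```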
Qualitative Insights
Benefits: Students appreciated AI tools’ ability to provide instant feedback, personalize learning, and simplify complex concepts.
Respondents reported several advantages of using AI tools in their learning process. Many found that AI tools provided access to a vast amount of knowledge, enabling them to learn more efficiently. Time-saving was a significant benefit, as AI-assisted learning streamlined information retrieval and comprehension. Additionally, AI tools facilitated ease of use, making the learning process more convenient. Several respondents highlighted that AI tools were highly beneficial for research purposes, particularly in quickly locating relevant information. Furthermore, some respondents noted that AI improved their understanding and competency in various subjects. Figures 4 and 5 show common terms used by respondents to describe the benefits of AI tools.
Fig. 4
Fig. 5
Challenges: Some students faced difficulties due to a lack of training or technical issues.
Despite the benefits, respondents encountered several challenges. A common issue was the limitation of AI-generated content, as not all information provided was comprehensive or accurate. Some respondents mentioned that AI occasionally gave irrelevant or misleading answers. The cost of premium AI services was another concern, as some tools required payment for advanced features. Internet connectivity issues also posed a challenge, preventing seamless access to AI resources. Additionally, some respondents faced difficulties in understanding AI-generated responses, which hindered their learning experience. Figures 6 and 7 show the challenges students faced in using AI tools.
Fig. 6
Fig. 7
Recommendations: Participants suggested integrating AI training into curricula to maximize benefits.
To enhance the effectiveness of AI tools for learning, respondents suggested various improvements. One key recommendation was to expand the database with more up-to-date and relevant information. Another recurring suggestion was to make AI services more accessible by reducing limitations and removing premium restrictions. Some respondents emphasized the need for AI tools to provide clearer and more precise explanations to aid comprehension. Furthermore, enhancing AI’s ability to offer more personalized learning experiences would better cater to students’ individual needs. Figures 8 and 9 show respondents’ recommendations for improving AI tools.
Fig. 8
Fig. 9
Theoretical Implications
This study reinforces the applicability of TAM and TTF in assessing AI adoption in education. Consistent with previous research (Venkatesh & Bala, 2008), perceived usefulness emerged as the strongest predictor of both digital literacy and ICT skill proficiency. However, the weaker role of perceived ease of use suggests that once students perceive AI tools as valuable, usability challenges become less critical over time. Furthermore, task-technology fit alone was insufficient unless coupled with high perceived usefulness, underscoring the need for purposeful integration of AI into academic tasks.
When compared with international studies, the findings align closely with global trends. For example, research in Europe and Asia has similarly shown that perceived usefulness outweighs ease of use in predicting sustained AI adoption in education. Studies in North America highlight that students’ ICT gains are maximized when institutions embed AI into structured coursework rather than leaving usage optional. This global consistency strengthens the external validity of the present findings and highlights the universal importance of institutional support in AI-driven learning.
From a practical perspective, these insights point to several actionable directions. Institutions should develop structured AI training programs for both students and educators, ensure that AI tools selected are well-aligned with course requirements, and provide ongoing technical support to minimize barriers. Policymakers should also promote inclusive digital literacy strategies that account for different student backgrounds and access levels, ensuring equity in AI adoption.
CONCLUSION AND RECOMMENDATIONS
The findings confirm that AI tools positively influence students’ digital literacy and ICT skills, with perceived usefulness as the most significant predictor of adoption. Although ease of use and task-technology fit play supporting roles, their effects are largely mediated through perceived usefulness. Institutions should embed structured AI training within curricula and strengthen technical support to reduce usability barriers, while policymakers should promote inclusive digital literacy initiatives to ensure equitable access. Future research should examine the long-term impact of AI on skill retention and explore adoption across different educational contexts to broaden the study’s global relevance.
REFERENCES
- Barba, M., et al., (2024). Strategies for AI literacy in higher education: A balanced approach. Educ AI J., 12(2), pp.45–67.
- Börekci, D. and Çelik, A., (2024). Digital literacy and AI adoption in higher education. AI Soc., 39(1), pp.112–130.
- Cohen, J., (1988). Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Davis, F.D., (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q., 13(3), pp.319–340.
- Goodhue, D.L. and Thompson, R.L., (1995). Task-technology fit and individual performance. MIS Q., 19(2), pp.213–236.
- Hair, J.F. Jr., Hult, G.T.M., Ringle, C.M. and Sarstedt, M., (2016). A primer on partial least squares structural equation modeling (PLS-SEM). 2nd ed. Thousand Oaks, CA: Sage Publications.
- Holmes, W., Bialik, M. and Fadel, C., (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Boston, MA: Center for Curriculum Redesign.
- Krejcie, R.V. and Morgan, D.W., (1970). Determining sample size for research activities. Educ Psychol Meas., 30(3), pp.607–610.
- Luckin, R., et al., (2016). Intelligence unleashed: An argument for AI in education. London: Pearson.
- Martin, A. and Grudziecki, J., (2006). DigEuLit: Concepts and tools for digital literacy development. Innov Teach Learn Inf Comput Sci., 5(4), pp.249–267.
- Scarci, P., et al., (2024). The role of educators in AI-assisted learning. Int J Educ Technol., 15(3), pp.89–105.
- Shin, H., et al., (2024). Generative AI and digital literacy development in STEM education. Comput Educ., 181, pp.104–123.
- UNESCO, (2018). ICT in education: Policy and practice. Paris: UNESCO.
- Venkatesh, V. and Bala, H., (2008). Technology acceptance model 3 and a research agenda on interventions. Decis Sci., 39(2), pp.273–315.
- Yim, T. and Su, K., (2024). Integrating AI tools in K-12 digital literacy curricula. J Learn Technol., 10(4), pp.203–219.