International Journal of Research and Innovation in Social Science

The Future of Education: Factors Affecting Students’ Perception of the Usefulness of AI Tools in Education

Nathaniel De Leon1*, Joshua Palaya1, Mylene Prado2

1Department of Statistics, College of Arts and Sciences, Rizal Technological University, Boni Avenue, Mandaluyong City, Philippines

2Instructor, Rizal Technological University, Boni Avenue, Mandaluyong City, Philippines

DOI: https://dx.doi.org/10.47772/IJRISS.2025.9020362

Received: 18 February 2025; Accepted: 20 February 2025; Published: 25 March 2025

ABSTRACT

The integration of Artificial Intelligence (AI) tools in education has the potential to transform learning experiences. However, the extent to which students find these AI tools useful may depend on their level of AI literacy. This study investigates the association between AI literacy level and the perceived usefulness of AI in the academe. A survey was conducted among 172 students from five colleges at Rizal Technological University – Boni Campus during the second semester of the academic year 2023-2024. The researchers collected data on the students’ demographic profile, AI literacy level (AILL), and perceived usefulness (PU) of AI tools in the academe, which were then analyzed using statistical methods to identify significant differences and associations. The results indicated a positive association between AI literacy levels and students’ perceptions of AI usefulness in the academe. Additionally, a significant difference in AILL, particularly in AI ethics, was observed when respondents were grouped by year level. These findings suggest that promoting AI literacy can enhance students’ engagement with AI tools in educational contexts. The study offers valuable insights for educators and policymakers in designing AI-related curricula and resources. However, the researchers acknowledge that the sample size may have affected the normality of the data. Future research is recommended to develop a more comprehensive AI literacy questionnaire to assess students’ readiness for AI use and to include a larger sample size, potentially incorporating other universities in the Philippines.

Keywords: AI Literacy, Perceived Usefulness of AI, AI Tools in Higher Education

INTRODUCTION

Artificial Intelligence (AI) has transformed various sectors, including education, by leveraging neural networks and machine learning to enhance problem-solving and learning (Coleman, 2020; Morandín-Ahuerma, 2022). AI tools enable personalized learning, homework assistance, skill development, automated assessment, and data-driven decision-making, offering new opportunities for education and underscoring the need for research on usability and user satisfaction (Labadze et al., 2024). The release of ChatGPT in November 2022 further accelerated the adoption of AI in the academe, where understanding AI itself is crucial. Carolus et al. (2023) built on the concept of “AI literacy,” which expands the definition of literacy to include the skills essential for navigating AI technologies effectively (Long & Magerko, 2020). However, the usefulness of AI tools depends on students’ AI literacy levels, which influence their perceptions and willingness to adopt AI in education (Holmes & Tuomi, 2022).

Despite AI’s benefits, concerns persist regarding its misuse, particularly plagiarism and cheating through AI chatbots such as ChatGPT, Bing Copilot, and Google Gemini (Barnett, 2023; Porayska-Pomsta, 2023). While some educators remain skeptical, others acknowledge AI’s potential for enhancing learning (Wang et al., 2023). This highlights the need for ethical guidelines and policies to ensure the responsible use of AI in education (Labadze et al., 2024). This study investigates the association between students’ AI literacy levels (AILL) and their perceived usefulness (PU) of AI in education, and examines whether AILL and PU differ significantly when respondents are grouped by demographic profile.

LITERATURE REVIEW

Artificial Intelligence

Artificial Intelligence (AI) refers to a group of technologies or systems that exhibit human-like intelligence, such as problem-solving, learning, and reasoning, achieved through various tools and technologies: neural networks, machine learning, expert systems, and others (Coleman, 2020; Morandín-Ahuerma, 2022). As advancements fine-tuned its accuracy, AI rose in popularity and saw growing use among students in the academe, presenting both opportunities and challenges. AI tools support personalized learning, automated assessment, and tutoring, but concerns persist over plagiarism and misuse (Barnett, 2023). Many educators struggle to regulate AI-generated assignments, while students appreciate its time-saving benefits yet question its reliability (Ngo, 2023).

Beyond ChatGPT, AI-driven chatbots and adaptive learning platforms offer significant potential for enhancing student engagement (Wang, 2023). However, the rapid evolution of AI raises concerns about its societal impact and ethical implications (Jeffrey, 2020). Vasconcelos and dos Santos (2023) found that AI-powered tools like ChatGPT and Bing Chat can serve as “objects-to-think-with,” promoting reflective and critical thinking in STEM education.

Furthermore, Latif et al. (2023) discuss that AI systems can adapt to individual student needs, offering tailored learning experiences and comprehensive feedback, which enhances problem-solving skills and concept comprehension. Existing studies primarily explore perceptions of AI, with some findings indicating differences based on field of study and gender (Chan & Hu, 2023). However, gaps remain in understanding how AI literacy influences students’ perception of its usefulness in education.

Perceived Usefulness of AI in Education

The integration of Artificial Intelligence (AI) in education has gained increasing attention due to its potential to enhance learning experiences and improve academic performance. These technologies aim to reduce cognitive overload, assist in complex problem-solving, and support knowledge retention by tailoring content to individual learning styles (Sobhani et al., 2022).

The Student-AI Collaboration (SAC) Model by Kim et al. (2022) is a model in which AI serves not merely as a tool used by the student but as an active participant in the learning process. Under the SAC Model, there are three kinds of student-AI interaction in the academe: Cognitive Interactions, Socio-Emotional Interactions, and Artifact-Mediated Interactions. Cognitive Interactions are students’ engagements with AI that enhance their understanding of concepts, problem-solving skills, and critical thinking. Socio-Emotional Interactions are interactions in which AI fosters motivation and engagement by offering encouragement, responding to emotions, and simulating social interactions; they also cover the group, community, and outside interactions that arise from using AI. Artifact-Mediated Interactions are interactions in which AI assists students in co-creating knowledge by generating, refining, and evaluating digital content, creating artifacts of use.

Several studies also highlighted AI’s role in facilitating social and academic engagement. AI-mediated social interaction, such as AI-powered discussion forums and chatbots, has been found to improve student participation in online learning environments by fostering collaboration and engagement (Wang et al., 2022b). Similarly, AI-powered educational applications have been shown to enhance motivation and accessibility, especially for students who require additional academic support (Virvou, 2022).

Bulut et al. (2024) highlight that while AI offers opportunities for automated assessment and personalized feedback, it also raises ethical concerns regarding validity, reliability, and fairness. Similarly, Lakkaraju et al. (2024) discuss the development of an AI-driven chatbot tutoring system designed to collaborate with students in solving complex problems. Furthermore, André (2021) argues that AI should be designed to complement, rather than replace, human instruction, emphasizing the importance of maintaining a balance between automation and human interaction in education.

Overall, existing literature suggests that AI has the potential to transform education by providing personalized, interactive, and adaptive learning experiences. However, addressing ethical concerns and ensuring that AI remains a supportive tool—rather than a substitute for human educators—will be crucial for its effective implementation in academic settings.

AI Literacy

AI skills are becoming increasingly important today, just as computer skills were a few years ago. This set of skills is commonly referred to as “AI literacy” and includes using, applying, or interacting with AI (Long & Magerko, 2020; Carolus et al., 2023). AI literacy is based on the concept of ‘literacy.’ Before AI literacy, the term “digital literacy” was developed to assess basic computer concepts and skills as computer software became widespread across sectors in the 1970s, when users needed to learn how to use computer systems related to their specific tasks or jobs.

The importance of digital literacy has grown as more people rely on computer technologies to create new social and economic opportunities (Michaeli et al., 2023). Individuals with high AI literacy are more likely to thrive in an AI-driven work environment, as they can effectively utilize AI tools to enhance productivity and performance (Brynjolfsson et al., 2023). Conversely, those with inadequate AI literacy may struggle when interacting with AI, potentially facing job displacement and skill redundancy. A study by Brynjolfsson et al. (2023) found that access to AI tools significantly improved productivity, especially for less experienced workers, highlighting the importance of AI literacy in workplace adaptability. It is also essential to consider the many practical ways in which AI influences how our students and society as a whole access, evaluate, and consume information and media (Tiernan et al., 2023).

The Meta AI Literacy Scale (MAILS) by Carolus et al. (2023) is an instrument designed to measure respondents’ AI literacy. The components of AI literacy highlighted in this study are the following: “Know and Understand AI” (KUAI), “Use and Application of AI” (UAAI), “Detect AI” (DAI), and “AI Ethics” (AIE). KUAI measures the students’ knowledge and understanding of the concepts and definitions of AI tools; UAAI measures whether the students can use and apply AI tools in education. DAI measures the students’ capability to detect the presence of AI in the devices, applications, and websites used in the academe. Lastly, AIE tackles the ethical considerations students face when using AI while studying.

AI Literacy and Perceived Usefulness of AI in Education

The relationship between AI literacy and the perceived usefulness of AI in education is dynamic and reciprocal. AI literacy, which encompasses the ability to understand, use, and critically evaluate AI technologies, plays a crucial role in shaping students’ and educators’ perceptions of AI’s educational value (Ng et al., 2021). Research indicates that individuals with higher AI literacy are more likely to recognize AI’s potential in improving learning outcomes, fostering engagement, and streamlining academic tasks, thereby increasing their perceived usefulness of AI in education (Guo et al., 2024).

Conversely, the perceived usefulness of AI can also drive the motivation to develop AI literacy. When students and educators experience the benefits of AI tools, such as enhanced personalized learning and efficient assessment methods, they tend to seek a deeper understanding of AI, reinforcing their literacy levels (Idroes et al., 2023). Studies by Bakhadirov et al. (2024) highlight that educators with higher AI literacy levels not only perceive AI as more useful but also integrate AI-based technologies more effectively into their teaching strategies.

Furthermore, AI literacy fosters critical awareness of AI’s limitations and ethical concerns, enabling students to navigate AI-integrated learning environments responsibly (Delcker et al., 2024). Otero et al. (2023) emphasize the importance of early AI literacy education, advocating for its inclusion in school curricula to better prepare students for AI-driven academic and professional landscapes. Additionally, Huang (2023) stresses that AI literacy must include knowledge of data privacy and ethical considerations, as the seamless integration of AI into education raises concerns about information security and user autonomy.

In summary, AI literacy and the perceived usefulness of AI in education share a bidirectional relationship. Higher AI literacy enhances individuals’ appreciation of AI’s potential, while the recognition of AI’s usefulness fuels the desire to further develop AI-related knowledge and skills. This interplay underscores the importance of AI education and policy development to ensure responsible and effective AI adoption in academic settings.

METHODOLOGY

Figure 1. Research Framework

Research Methods and Design Used

The research framework is illustrated in Figure 1. The framework indicates that the demographic profile of the RTU-Boni students affects their AI literacy level (AILL) and their perceived usefulness of AI in education (PU), and that their AI literacy, in turn, is significantly associated with their perception of AI usefulness in the academe.

Respondents

The respondents of this study were undergraduate students from RTU-Boni’s five Colleges: College of Arts and Sciences (CAS), College of Business, Entrepreneurship, and Accountancy (CBEA), College of Engineering (CEng), College of Education (CED), and Institute of Human Kinetics (IHK), all of whom had used AI tools at least once in their academic work.

The study initially planned to use stratified sampling to ensure proportional representation from each college. However, due to class cancellations caused by extreme heat, convenience sampling was adopted to specifically target students who met the study’s inclusion criteria. This method ensured that only students with direct experience using AI tools in education were included.

Data collection was conducted through face-to-face surveys, followed by online surveys distributed via Google Forms, Facebook posts, and other online channels. After eliminating invalid responses, a total of 172 valid questionnaires were analyzed. The limitation of a relatively small sample size is acknowledged, and future studies should consider expanding the sample to ensure the robustness of statistical analyses.

AI Literacy. The researchers modified the Meta AI Literacy Scale by Carolus et al. (2023), adding Data Privacy and Security (DPS) to the questionnaire and rewording the items to fit the use of AI in the academe; the modified instrument was then validated and tested for reliability.

The AI Literacy Level Questionnaire includes five dimensions: Knowing and Understanding AI (KUAI), Using and Applying AI (UAAI), Detecting AI (DAI), AI Ethics (AIE), and Data Privacy and Security (DPS). The respondents answered the statements in this instrument on a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, and 5 = Strongly Agree). Its main purpose is to measure the respondents’ level of literacy regarding AI across these dimensions as they use it in the academe.

The first dimension is Knowing and Understanding AI or KUAI, consisting of 6 statements from the Meta AI Literacy Scale by Carolus et al. (2023) that were modified to fit the academe. The Cronbach’s alpha coefficient for this scale was 0.872. A higher score on this scale signifies a higher understanding of AI tools’ concepts, definitions, limitations, new uses, advantages, and disadvantages.

The second dimension is Using and Applying AI or UAAI, consisting of 6 statements from the Meta AI Literacy Scale by Carolus et al. (2023) that were modified to fit the academe. The Cronbach’s alpha coefficient for this scale was 0.933. A higher score on this scale signifies a better and more productive use of AI in the academe.

The third dimension is Detecting AI or DAI, which consists of 3 statements from the Meta AI Literacy Scale by Carolus et al. (2023) that were modified to fit the academe. The Cronbach’s alpha coefficient for this scale was 0.937. A higher score on this scale signifies a higher level of ability to sense the presence of AI while studying.

The fourth dimension is AI Ethics or AIE, consisting of 3 statements from the Meta AI Literacy Scale by Carolus et al. (2023) that were modified to fit the academe. The Cronbach’s alpha coefficient for this scale was 0.899, but upon dropping AIE #3, it rose to 0.963. A higher score on this scale signifies a higher understanding of the ethical considerations of using AI in education.

The fifth dimension is Data Privacy and Security or DPS, consisting of 3 statements added by the researchers. The Cronbach’s alpha coefficient for this scale was 0.817. A higher score on this scale signifies a higher level of mindfulness in the privacy and security of the user’s data during the use of AI in the academe.

Perceived Usefulness of AI in Education. This researcher-made questionnaire has three dimensions based on the student-AI interactions in the Student-AI Collaboration (SAC) Model by Kim et al. (2022), which was derived from a thematic study conducted through in-depth interviews with 10 leading Korean teachers in AI in Education (AIED).

The themes show that three types of interaction are possible as students use AI: Cognitive Interactions (CI), Socio-emotional Interactions (SEI), and Artifact-mediated Interactions (AMI). The respondents answered the statements in this instrument on a 5-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, and 5 = Strongly Agree). This measures the usefulness of AI across the said dimensions from the perspective of the students as they use it in the academe.

Cognitive Interactions (CI), the first dimension, consists of 6 statements that the researchers wrote based on the thematic study conducted by Kim et al. (2022). The Cronbach’s alpha coefficient for this scale was 0.840. A higher score on this scale signifies a perception of higher usefulness of AI in the academe in terms of thinking, reasoning, and co-elaborating knowledge.

Socio-emotional Interactions (SEI), the second dimension, consists of 7 statements that the researchers wrote based on the thematic study conducted by Kim et al. (2022). The Cronbach’s alpha coefficient for this scale was 0.918. A higher score on this scale signifies a perception of higher usefulness of AI in the academe in terms of emotional and social interaction between groups of users, including support, motivation, and managing group dynamics.

Artifact-Mediated Interactions (AMI), the third dimension, consists of 7 statements that the researchers wrote based on the thematic study conducted by Kim et al. (2022). The Cronbach’s alpha coefficient for this scale was 0.890. A higher score on this scale signifies a better perception of the digital platforms, educational software, and other technological aids that support learning activities with the use of AI.
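Since each dimension’s internal consistency is reported as a Cronbach’s alpha, a short sketch may help readers reproduce the computation. The following Python snippet is illustrative only: the formula is the standard one, but the `dps` response matrix is hypothetical data, not the study’s.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of
    Likert responses: alpha = k/(k-1) * (1 - sum of item variances
    / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of statements in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each statement
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 statements (a DPS-like scale, 1-5 Likert)
dps = np.array([[4, 5, 4],
                [3, 4, 4],
                [5, 5, 5],
                [4, 4, 3],
                [2, 3, 2]])
print(round(cronbach_alpha(dps), 3))
# The effect of dropping an item (as done with AIE #3) can be checked
# by deleting that column and recomputing alpha:
print(round(cronbach_alpha(np.delete(dps, 2, axis=1)), 3))
```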

Statistical Analysis

To test the hypotheses of this study, both descriptive and inferential statistics were computed using Google Sheets, Excel, and jamovi to store and analyze the data. Descriptive statistics were applied to the respondents’ demographic profile, their AI literacy level and its dimensions, and their perception of AI tools’ usefulness in the academe and its dimensions. Because the questionnaire yielded ordinal data and the assumption of normality was violated, the researchers used nonparametric statistics: the Mann-Whitney U test to find significant differences when variables are grouped by sex, the Kruskal-Wallis H test to find significant differences across multiple demographic groups, and Spearman’s rank-order correlation coefficient to find significant associations between AI literacy and the students’ perception of AI tools’ usefulness in education.
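To make the test-selection logic concrete, here is a minimal sketch of the same pipeline in Python with SciPy (the study itself used Google Sheets, Excel, and jamovi). The variable names and synthetic scores are placeholders, not the study’s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 172
# Placeholder composite scores standing in for AILL and PU (1-5 scale)
aill = rng.uniform(1, 5, n)
pu = np.clip(0.6 * aill + rng.normal(0, 0.5, n), 1, 5)
sex = rng.choice(["Male", "Female"], n)
year = rng.choice([1, 2, 3, 4, 5], n)

# Step 1: Shapiro-Wilk normality check; p < 0.05 -> treat as non-normal
print(stats.shapiro(aill))

# Step 2: two groups (sex) -> Mann-Whitney U test
print(stats.mannwhitneyu(aill[sex == "Male"], aill[sex == "Female"]))

# Step 3: more than two groups (year level) -> Kruskal-Wallis H test
print(stats.kruskal(*(aill[year == y] for y in range(1, 6))))

# Step 4: association between AILL and PU -> Spearman's rank-order correlation
print(stats.spearmanr(aill, pu))
```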

Table 1. Correlation Matrix between AILL and PU

Variables Statistics Used Value df p-value
AILL vs. PU Spearman’s Rho 0.629 170 <0.0001

RESULTS AND DISCUSSION

Preliminary Analysis

The study was conducted to explore the association between AI literacy level (AILL) and the students’ perceived usefulness of AI in education (PU). Table 1 displays the results of Spearman’s rank-order correlation between the variables. The findings revealed a statistically significant, strong positive association (Spearman’s rho = 0.629, p < 0.001) between AILL and PU of AI in the academe, meaning that as AILL increases, PU tends to increase as well, and as AILL decreases, so does PU.
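For reference, when ranks are untied, Spearman’s rho reduces to the familiar closed form below (a standard formula, not taken from the paper); with tied ranks, as is typical for Likert composites, software instead computes the Pearson correlation of the ranks, which is what the reported value reflects. The degrees of freedom in Table 1 follow from df = n − 2 = 170 for n = 172 respondents.

```latex
r_s = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\left(n^{2}-1\right)},
\qquad d_i = \operatorname{rank}(\mathrm{AILL}_i) - \operatorname{rank}(\mathrm{PU}_i),
\qquad n = 172.
```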

Before continuing with further statistical analysis, it is important to check for normality. This study used the Shapiro-Wilk test to check the normality of the data. The results show that AILL and its dimensions (KUAI, UAAI, DAI, AIE, and DPS) and the dimensions of PU (CI, SEI, and AMI) reject the null hypothesis of the Shapiro-Wilk test, which means these data do not follow a normal distribution, while the findings suggest that PU itself (p = 0.092) follows a normal distribution. However, since all of PU’s dimensions are non-normal, the researchers treated PU and its dimensions as non-normal. Hence, the study utilized nonparametric statistics to find the significant differences and associations between variables.

Table 2. Normality Test Results (Shapiro-Wilk)

Variables W p
KUAI .930 < 0.001
UAAI .949 < 0.001
DAI .940 < 0.001
AIE .902 < 0.001
DPS .900 < 0.001
AILL .936 < 0.001
CI .974 0.003
SEI .981 0.017
AMI .984 0.040
PU .986 0.092

Descriptive Characteristics of the Demographic Information

According to Table 3, among the 172 respondents, 32.6% were male and 67.4% were female. Respondents ranged in age from 18 to 26, with most aged 18-20 (57.6%, n = 99), followed by 68 (39.5%) aged 21-23; the remaining five (2.9%) were aged 24-26.

There is also a significant representation from the College of Business, Entrepreneurship, and Accountancy (CBEA) at 39.0% (n = 67), with the College of Arts and Sciences (CAS) and the College of Engineering (CEng) comprising 30.8% (n = 53) and 17.4% (n = 30) of participants, respectively. Additionally, the College of Education (CED) and the Institute of Human Kinetics (IHK) account for 11.6% (n = 20) and 1.2% (n = 2) of the sample. The majority were first-year students, accounting for 34.9% (n = 60), followed by third-year students at 29.1% (n = 50), second-years at 25.0% (n = 43), fourth-years at 8.7% (n = 15), and fifth-years at 2.3% (n = 4), indicating declining enrollment at higher year levels.

There is also a clear preference for ChatGPT among AI tools in the academe, with 64.5% of respondents (n = 111) favoring it. Following ChatGPT, Grammarly Assistant was preferred by 16.3% (n = 28), then Gemini at 14.0% (n = 24), Copilot at 3.5% (n = 6), and Character AI at 1.7% (n = 3). The breakdown of the frequency of AI usage in the academe is as follows: never at 1.7% (n = 3), rarely at 19.8% (n = 34), occasionally at 14.5% (n = 25), sometimes at 30.8% (n = 53), frequently at 13.4% (n = 23), usually at 14.0% (n = 24), and always at 5.8% (n = 10).

Table 3. Descriptives of Demographic Profile

Variables Categories n Percentage
Sex Male 56 32.6 %
  Female 116 67.4 %
Age 18 – 20 99 57.6 %
  21 – 23 68 39.5 %
  24 – 26 5 2.9 %
College College of Arts and Sciences 53 30.8 %
  College of Business, Entrepreneurship, and Accountancy 67 39.0 %
  College of Engineering 30 17.4 %
  College of Education 20 11.6 %
  Institute of Human Kinetics 2 1.2 %
Year Level First Year 60 34.9 %
  Second Year 43 25.0 %
  Third Year 50 29.1 %
  Fourth Year 15 8.7 %
  Fifth Year 4 2.3 %
Preferred AI Tool ChatGPT 111 64.5 %
  Copilot (Bing) 6 3.5 %
  Gemini (Google) 24 14.0 %
  Character AI 3 1.7 %
  Grammarly Assistant 28 16.3 %
Frequency of AI Usage Never 3 1.7 %
  Rarely 34 19.8 %
  Occasionally 25 14.5 %
  Sometimes 53 30.8 %
  Frequently 23 13.4 %
  Usually 24 14.0 %
  Always 10 5.8 %

AI Literacy Level

Table 4. Descriptives of AILL and its dimensions

Variables Mean S.D. Interpretation
KUAI 4.09 .598 High Literacy
UAAI 3.99 .659 High Literacy
DAI 3.85 .778 High Literacy
AIE 4.11 .711 High Literacy
DPS 4.14 .648 High Literacy
AILL 4.03 .540 High Literacy

Table 4 shows the descriptive statistics of the RTU-Boni undergraduate students’ responses to each question under each dimension for measuring AI literacy.

According to Table 4, RTU-Boni undergraduate students show a high level of AI literacy in terms of “Knowing and Understanding AI” (KUAI), with an average of 4.09 on a 5-point scale. The findings also show that the respondents have a high level of AI literacy in “Using and Applying AI” (UAAI), with an average of 3.99 out of 5.

They also have a high AI literacy level regarding “Detecting AI” (DAI) tools during their studies, with an average score of 3.85 on a 5-point Likert scale. The data also reveal that RTU-Boni undergraduate students possess a high level of AI literacy in AI Ethics (AIE), with an overall average score of 4.11 on a 5-point Likert scale.

The findings also show that the respondents have a high level of awareness and concern regarding data privacy and security (DPS), with an average of 4.14 on a 5-point Likert scale. The overall AI literacy level of the respondents has a mean of 4.03 out of 5, showing that the sample has a high level of literacy.

Table 5. Descriptives of PU and its dimensions

Variables Mean S.D. Interpretation
CI 3.74 0.739 Very Useful
SEI 3.51 0.752 Very Useful
AMI 3.67 0.663 Very Useful
PU 3.64 0.682 Very Useful

Perceived Usefulness of AI in Education

Table 5 shows the descriptive statistics of the RTU-Boni undergraduate students’ responses to each question, which measure the respondents’ perception of the usefulness of AI tools under each dimension.

It indicates that RTU-Boni undergraduate students found AI tools to be very useful based on their responses to the Cognitive Interactions (CI) dimension, with an overall mean of 3.74 on a 5-point Likert scale. However, responses to question 3 on the same scale averaged 3.44, revealing a neutral stance towards adopting AI tools as primary academic resources. This suggests a nuanced view among respondents regarding the centrality of AI tools in academic settings.

The findings also show an overall mean of 3.51 on a 5-point Likert scale for the Socio-emotional Interactions (SEI) component, which means that the respondents perceived AI tools as very useful for their academic needs, especially regarding the social and emotional interactions that arise from the use of AI, not only the interaction between the users and the AI tools. The Artifact-Mediated Interactions (AMI) component has an overall mean of 3.67 on a 5-point Likert scale, which shows that the respondents perceived AI tools as very useful in relation to the digital platforms and devices they use in the academe. Overall, on average, the students of RTU-Boni find AI to be very useful in the academe (mean = 3.64, SD = 0.682).

The Effect of Demographic Profile on AI Literacy Level and its Dimensions

Table 6 shows the differences in AILL and its dimensions when responses are grouped according to the respondents’ demographic profile (sex, age, college, year level, preferred AI tool [preference], and frequency of AI usage [frequency]), using the Mann-Whitney U and Kruskal-Wallis H tests because the assumption of normality was violated.

Neither AILL nor any of its dimensions differs significantly when grouped by sex, age, college, preference, or frequency (all p > 0.05). The researchers therefore failed to reject the null hypothesis, meaning no significant difference exists in AILL or its dimensions across these demographic profiles. However, AIE differs significantly by year level (p = 0.044), rejecting the null hypothesis, while the other dimensions of AILL do not. This indicates that students’ AIE differs when they are grouped by year level.

Table 6. The effect of the demographic profile to AILL and its dimensions

Demographic Profile AILL Dimension Statistical Test p-value Conclusion Interpretation
Sex KUAI Mann-Whitney 0.607 Failed to reject H0 Not Significant
  UAAI Mann-Whitney 0.501 Failed to reject H0 Not Significant
  DAI Mann-Whitney 0.580 Failed to reject H0 Not Significant
  AIE Mann-Whitney 0.880 Failed to reject H0 Not Significant
  DPS Mann-Whitney 0.544 Failed to reject H0 Not Significant
  AILL Mann-Whitney 0.751 Failed to reject H0 Not Significant
Age KUAI Kruskal-Wallis 0.524 Failed to reject H0 Not Significant
  UAAI Kruskal-Wallis 0.958 Failed to reject H0 Not Significant
  DAI Kruskal-Wallis 0.839 Failed to reject H0 Not Significant
  AIE Kruskal-Wallis 0.179 Failed to reject H0 Not Significant
  DPS Kruskal-Wallis 0.620 Failed to reject H0 Not Significant
  AILL Kruskal-Wallis 0.837 Failed to reject H0 Not Significant
College KUAI Kruskal-Wallis 0.344 Failed to reject H0 Not Significant
  UAAI Kruskal-Wallis 0.786 Failed to reject H0 Not Significant
  DAI Kruskal-Wallis 0.198 Failed to reject H0 Not Significant
  AIE Kruskal-Wallis 0.073 Failed to reject H0 Not Significant
  DPS Kruskal-Wallis 0.499 Failed to reject H0 Not Significant
  AILL Kruskal-Wallis 0.356 Failed to reject H0 Not Significant
Year Level KUAI Kruskal-Wallis 0.721 Failed to reject H0 Not Significant
  UAAI Kruskal-Wallis 0.996 Failed to reject H0 Not Significant
  DAI Kruskal-Wallis 0.998 Failed to reject H0 Not Significant
  AIE Kruskal-Wallis 0.044 Reject H0 Significant
  DPS Kruskal-Wallis 0.555 Failed to reject H0 Not Significant
  AILL Kruskal-Wallis 0.679 Failed to reject H0 Not Significant
Preference KUAI Kruskal-Wallis 0.277 Failed to reject H0 Not Significant
  UAAI Kruskal-Wallis 0.203 Failed to reject H0 Not Significant
  DAI Kruskal-Wallis 0.136 Failed to reject H0 Not Significant
  AIE Kruskal-Wallis 0.676 Failed to reject H0 Not Significant
  DPS Kruskal-Wallis 0.201 Failed to reject H0 Not Significant
  AILL Kruskal-Wallis 0.381 Failed to reject H0 Not Significant
Frequency KUAI Kruskal-Wallis 0.847 Failed to reject H0 Not Significant
  UAAI Kruskal-Wallis 0.448 Failed to reject H0 Not Significant
  DAI Kruskal-Wallis 0.917 Failed to reject H0 Not Significant
  AIE Kruskal-Wallis 0.908 Failed to reject H0 Not Significant
  DPS Kruskal-Wallis 0.609 Failed to reject H0 Not Significant
  AILL Kruskal-Wallis 0.949 Failed to reject H0 Not Significant

Post Hoc Analysis of Year Level Differences in AI Ethics

Table 7. Post Hoc Analysis of Year Level and AIE

Group 1 Group 2 W p-value
First Year Second Year 2.413 0.430
First Year Third Year 2.252 0.503
First Year Fourth Year 3.253 0.145
First Year Fifth Year 2.668 0.325
Second Year Third Year 0.185 0.999
Second Year Fourth Year 2.193 0.530
Second Year Fifth Year 2.123 0.562
Third Year Fourth Year 1.962 0.636
Third Year Fifth Year 1.897 0.666
Fourth Year Fifth Year 0.746 0.985

To determine specific differences in AI Ethics (AIE) across year levels, a Dwass-Steel-Critchlow-Fligner (DSCF) post hoc test was conducted following a Kruskal-Wallis test. The results (see Table 7) indicate that none of the pairwise comparisons were statistically significant (all p-values > 0.05).

This suggests that AI Ethics awareness does not significantly differ across year levels, implying that students, regardless of their year level, share similar levels of ethical understanding regarding AI. The absence of significant differences might indicate a uniform exposure to AI Ethics discussions across all year levels or a lack of structured progression in AI Ethics education throughout their academic journey.
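For reproducibility, the DSCF all-pairs comparison is available outside jamovi as well, for example in the third-party Python package scikit-posthocs (an assumption about tooling, not what the authors used). A minimal sketch with placeholder data:

```python
import numpy as np
import pandas as pd
import scikit_posthocs as sp  # pip install scikit-posthocs

rng = np.random.default_rng(1)
# Placeholder AIE scores grouped by year level (not the study's data)
df = pd.DataFrame({
    "aie": rng.uniform(1, 5, 172),
    "year": rng.choice(["First", "Second", "Third", "Fourth", "Fifth"], 172),
})

# Dwass-Steel-Critchlow-Fligner all-pairs test after a significant
# Kruskal-Wallis result; returns a matrix of pairwise p-values
# analogous to Table 7
print(sp.posthoc_dscf(df, val_col="aie", group_col="year"))
```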

The Effect of Demographic Profile on Perceived Usefulness of AI in Education and its Dimensions

Table 8 shows the differences in the Perceived Usefulness of AI in education (PU) and its dimensions when responses are grouped according to the respondents’ demographic profile (sex, age, college, year level, preferred AI tool [preference], and frequency of AI usage [frequency]). The researchers used the Mann-Whitney U and Kruskal-Wallis H tests to find significant differences because the assumption of normality was violated. Neither PU nor its dimensions differ significantly when grouped by sex, age, college, year level, or preference (all p > 0.05), so the researchers failed to reject the null hypothesis for these demographic profiles. The lone exception is AMI when respondents are grouped by frequency of AI usage (p = 0.049), which rejects the null hypothesis and prompted a post hoc analysis.

Table 8. The effect of demographic profile to PU and its dimensions

Demographic Profile PU Dimension Statistical Test p-value Conclusion Interpretation
Sex CI Mann-Whitney 0.329 Failed to reject H0 Not Significant
  SEI Mann-Whitney 0.427 Failed to reject H0 Not Significant
  AMI Mann-Whitney 0.802 Failed to reject H0 Not Significant
  PU Mann-Whitney 0.504 Failed to reject H0 Not Significant
Age CI Kruskal-Wallis 0.993 Failed to reject H0 Not Significant
  SEI Kruskal-Wallis 0.536 Failed to reject H0 Not Significant
  AMI Kruskal-Wallis 0.480 Failed to reject H0 Not Significant
  PU Kruskal-Wallis 0.831 Failed to reject H0 Not Significant
College CI Kruskal-Wallis 0.158 Failed to reject H0 Not Significant
  SEI Kruskal-Wallis 0.172 Failed to reject H0 Not Significant
  AMI Kruskal-Wallis 0.656 Failed to reject H0 Not Significant
  PU Kruskal-Wallis 0.246 Failed to reject H0 Not Significant
Year Level CI Kruskal-Wallis 0.881 Failed to reject H0 Not Significant
  SEI Kruskal-Wallis 0.669 Failed to reject H0 Not Significant
  AMI Kruskal-Wallis 0.390 Failed to reject H0 Not Significant
  PU Kruskal-Wallis 0.707 Failed to reject H0 Not Significant
Preference CI Kruskal-Wallis 0.678 Failed to reject H0 Not Significant
  SEI Kruskal-Wallis 0.784 Failed to reject H0 Not Significant
  AMI Kruskal-Wallis 0.942 Failed to reject H0 Not Significant
  PU Kruskal-Wallis 0.898 Failed to reject H0 Not Significant
Frequency CI Kruskal-Wallis 0.179 Failed to reject H0 Not Significant
  SEI Kruskal-Wallis 0.092 Failed to reject H0 Not Significant
  AMI Kruskal-Wallis 0.049 Reject H0 Significant
  PU Kruskal-Wallis 0.117 Failed to reject H0 Not Significant

Post Hoc Analysis of Frequency of AI Usage Differences in Artifact-Mediated Interactions

Table 9. Post Hoc Analysis of Frequency and AMI

Group 1 Group 2 W p-value
Never Rarely 0.1580 1.0000
Never Occasionally 2.7456 0.4527
Never Sometimes 0.4903 0.9999
Never Frequently 0.9772 0.9932
Never Usually 0.1644 1.0000
Never Always 2.0517 0.7740
Rarely Occasionally 4.0280 0.0663
Rarely Sometimes 0.8633 0.9965
Rarely Frequently 1.0047 0.9920
Rarely Usually 0.2576 1.0000
Rarely Always 2.6016 0.5211
Occasionally Sometimes -3.0567 0.3168
Occasionally Frequently -3.9771 0.0732
Occasionally Usually -3.3111 0.2245
Occasionally Always 0.4727 0.9999
Sometimes Frequently -0.0241 1.0000
Sometimes Usually -0.3818 1.0000
Sometimes Always 2.1880 0.7163
Frequently Usually -0.5902 0.9996
Frequently Always 2.5688 0.5370
Usually Always 2.3050 0.6631

A Dwass-Steel-Critchlow-Fligner (DSCF) post hoc test was performed following a significant Kruskal-Wallis test to examine differences in artifact-mediated interactions based on the frequency of AI usage. Although the p-values for rarely vs. occasionally and occasionally vs. frequently were notably low, the results (see Table 9) indicate that none of the pairwise comparisons reached statistical significance (all p-values > 0.05).

This suggests that frequency of AI usage does not significantly impact artifact-mediated interactions. Whether students use AI tools rarely or frequently, their level of engagement with artifact-mediated processes appears to be similar. This could indicate that students across different usage levels interact with AI tools in ways that do not distinctly enhance their engagement with external cognitive artifacts (e.g., digital learning tools, AI-assisted brainstorming, or design software).

DISCUSSION

The study analyzed 172 valid responses, with the majority of respondents being female, aligning with findings that women are more engaged in online surveys (Becker, 2022). A decline in enrollment with increasing year levels was also observed, consistent with the findings of Pavlov and Katsamakas (2020). ChatGPT emerged as the most preferred AI tool, supporting the findings of other studies (Boschee, 2023; Rudolph et al., 2023), though other papers highlight Bing Copilot’s superior performance in specific tasks (Morreel et al., 2024).

The findings show that 98.3% of the students reported having used AI in the academe, reinforcing Chan and Hu’s (2023) findings on AI’s academic benefits. Memarian and Doleck (2024) also support the finding that AI literacy levels vary across students’ responses, with Delcker et al. (2024) further suggesting that a deeper understanding of AI correlates with the frequency of its usage, which is consistent with the respondents’ high level of AI literacy. The UAAI results underscore the respondents’ proficiency in leveraging AI technologies to enrich their skills in writing, critical thinking, and comprehending complex topics. Otero et al. (2023) highlighted the importance of early AI literacy integration, advocating for its inclusion in K-12 curricula to better prepare students for an AI-integrated future.

Furthermore, Su and Yang (2023) argue that AI literacy among the youth not only enhances their comprehension of AI technologies but also encourages creative thinking concerning AI applications. Delcker et al. (2024) complement this line of inquiry by pointing out the challenges users face in recognizing AI due to its seamless incorporation into technology. This underscores the necessity for a heightened ability to detect the presence of AI tools within educational settings in light of its ubiquitous presence and implications for future academic and professional endeavors. Ng et al. (2021) added that the concept of AI literacy is evolving, extending beyond simple tool use to include the navigation of ethical challenges associated with AI. This evolution highlights the critical need for ethical considerations to be integrated into AI education, equipping students with the skills to navigate these challenges responsibly. Aside from knowing, using, detecting, and understanding the ethical implications of AI, students’ data should also be protected and secured. Huang (2023) delineates the ethical quandaries that AI poses to the confidentiality of student information, accentuating the pressing necessity to confront these data security challenges head-on. Huang’s call to action is particularly pertinent, advocating for a holistic approach that combines data privacy, security, and AI literacy considerations. This comprehensive stance is imperative to navigate the intricacies of the digital age, thereby ensuring the protection of students’ personal information amidst the proliferation of AI applications in educational spheres.

Jose (2024) offers a noteworthy observation on the ACT survey outcomes, indicating that a significant 74% of students perceive AI as beneficial for educational advancement, which is also reflected in the results of this study. This suggests a nuanced view among respondents regarding the centrality of AI tools in academic settings. The respondents also perceived AI tools as very useful for their academic needs due to the social and emotional interactions that arise from the use of AI, creating a community of users. A study by Mertala and Nousiainen (2024) suggested that, with human agency remaining central, generative AI tools can be successfully incorporated to enhance collaborative learning experiences, aligning with the notion of “hybrid minds,” where human cognition is augmented by sophisticated symbolic tools. Moffitt (2022) also highlighted that engaging students in collaborative activities involving design objects fosters the co-creation of knowledge and the development of creative solutions, thereby deepening their understanding of AI concepts and applications.

Looking at the results of the inferential statistics, the researchers failed to reject the null hypothesis, which means that no significant difference exists in AILL and its dimensions when grouped by sex, age, college, preference, and frequency. Vandenberg and Mott (2023) show that age significantly impacts AI literacy, especially among young learners (ages 9-11), who have a mixed understanding of the concept of AI, which opposes this study’s results. Wang et al. (2022a) discussed that the course itself plays a significant role: AI literacy improves in college courses for students from a variety of educational backgrounds. These courses, which cover machine learning, deep learning, and AI ethics, are intended to help students develop a conceptual grasp of AI. Colleges may successfully lower the learning threshold and enable students to work with AI technologies by implementing AI literacy programs catered to various learner levels. The findings demonstrate that regardless of their gender, discipline, or level of prior programming experience, participants make noteworthy progress in comprehending AI ideas.

Furthermore, adding real-world projects and discussions of AI-related ethical concerns to these classes enhances the educational process and prepares students for the wider social effects of AI. Numerous studies have also demonstrated a considerable impact of year level on AI literacy, which opposes the results of this study. According to research, middle school children participating in AI literacy seminars can gain a general awareness of AI ideas, ethical implications, and future employment potential (Zhang et al., 2022). According to Eguchi (2021), students’ preferred AI greatly influences how literate they are in AI; for example, the expectations and views of AI technology may differ among students from diverse cultural and socioeconomic backgrounds.

CONCLUSION AND RECOMMENDATIONS

This study has several limitations that may affect the generalizability of its findings. First, the use of a non-probability sampling method, particularly the reliance on online survey responses, limited the sample’s representativeness. Consequently, the demographic composition may not fully reflect the broader student population at RTU-Boni, potentially introducing bias into the results. Additionally, the collected data were not normally distributed, necessitating the use of non-parametric statistical methods. While these methods were appropriate for the dataset, they generally have less statistical power than parametric tests, which may limit the detection of significant differences and associations. Another limitation was the underrepresentation of certain demographic groups, particularly fifth-year students, those from the Institute of Human Kinetics (IHK), and users of other AI tools, since this study only took ChatGPT, Copilot (Bing), Gemini (Google), Character AI, and Grammarly Assistant into consideration. This narrowed the scope of the findings, preventing a comprehensive understanding of AI literacy across all student sectors. Moreover, restricted access to academic resources due to paywalls may have limited the depth of the literature review, affecting the contextualization of findings. Furthermore, while the study primarily focused on students’ perceptions of AI literacy, it did not extensively explore how students interact with AI tools in practice. The modified Meta AI Literacy Scale and the questionnaire derived from the thematic study of student-AI collaboration, though validated by experts, are not yet standardized measures, which may limit comparability with other research. Standardization and further validation of the measurement tools should be considered in subsequent studies.

Despite these limitations, the study provides valuable insights into the association between students’ AI literacy levels and their perceived usefulness of AI in education. A significant positive association was found between AILL and PU, indicating that students with higher AI literacy tend to perceive AI tools as more useful in academic settings. Among the demographic variables examined, only year level produced a significant difference, specifically in the AI Ethics dimension, suggesting that ethical awareness of AI may develop unevenly across students’ academic journeys. These findings underscore the importance of AI literacy initiatives that build not only technical competence but also ethical understanding at every year level.

Furthermore, ChatGPT was identified as the most preferred AI tool among students, reinforcing the role of conversational AI in higher education. Students’ preference for platforms with user-friendly interfaces and practical academic applications suggests that institutions should assess the accessibility and effectiveness of AI tools in academic settings to optimize their integration into learning processes. This study highlights the necessity for educational institutions to integrate AI literacy into academic curricula, ensuring that students develop the skills needed to engage with AI technologies effectively. By addressing disparities in exposure and promoting targeted AI literacy programs, universities can foster a more inclusive and technologically adept learning environment.

To address the limitations of this study, future research should adopt probability sampling techniques or obtain a larger sample to improve the normality and representativeness of the data. As new AI tools emerge, future studies should include them to capture a wider perspective on AI literacy and perceptions. Future research should also explore the direct impact of AI literacy on respondents’ academic performance, learning behaviors, and skill development. Longitudinal studies and experimental designs could also be utilized to provide deeper insights into how AI tools influence student outcomes over time. Standardizing the AI literacy instrument for the academe could also enhance comparability across studies and improve the reliability of the instrument. Lastly, academic institutions should consider integrating AI literacy programs into their curricula to equip students with the skills to engage with AI tools effectively. By focusing on these areas, future research can contribute to a more robust understanding of AI literacy and its role in shaping the future of education.

ACKNOWLEDGEMENT

The authors would like to express their gratitude to their professors, evaluators, classmates, and friends for their comments, insights, and suggestions and to their families for their continued support throughout the writing of this article.

REFERENCES

  1. André, E. (2021). Socially Interactive Artificial Intelligence: Past, Present and Future. https://doi.org/10.1145/3462244.3480862
  2. Bakhadirov, A., Kim, S., & Lee, H. (2024). AI literacy and its influence on educators’ integration of AI technologies in teaching. Educational Technology Research and Development, 72(2), 345–360. https://doi.org/10.1007/s11423-024-10056-7
  3. Barnett, S. (2023, January 30). ChatGPT Is Making Universities Rethink Plagiarism. Wired. https://www.wired.com/story/chatgpt-college-university-plagiarism/
  4. Becker, R. (2022). Gender and Survey Participation: An Event History Analysis of the Gender Effects of Survey Participation in a Probability-based Multi-wave Panel Study with a Sequential Mixed-mode Design. Methods, Data, Analyses, 16(1), 3–32. https://doi.org/10.12758/mda.2021.08
  5. Boschee, P. (2023). Comments: AI Language Tools Hit the Books . . . and Technical Content? Journal of Petroleum Technology, 75(04), 8–9. https://doi.org/10.2118/0423-0008-jpt
  6. Bulut, O., Beiting-Parrish, M., Casabianca, J. M., Slater, S. C., Jiao, H., Song, D., Ormerod, C. M., Fabiyi, D. G., Ivan, R., Walsh, C., Rios, O., Wilson, J., Yildirim-Erbasli, S. N., Wongvorachan, T., Liu, J. X., Tan, B., & Morilova, P. (2024). The rise of artificial intelligence in educational measurement: Opportunities and ethical challenges. arXiv. https://arxiv.org/abs/2406.18900
  7. Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work. arXiv preprint arXiv:2304.11771. https://arxiv.org/abs/2304.11771
  8. Carolus, A., Koch, M. J., Straka, S., Latoschik, M. E., & Wienrich, C. (2023). MAILS – Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Computers in Human Behavior Artificial Humans, 1(2), 100014–100014. https://doi.org/10.1016/j.chbah.2023.100014
  9. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20, Article 41. https://doi.org/10.1186/s41239-023-00411-8
  10. Coleman, J. P. (2020). AI and Our Understanding of Intelligence. IntelliSys, 183–190. https://doi.org/10.1007/978-3-030-55180-3_15
  11. Delcker, J., Heil, J., Ifenthaler, D., Seufert, S., & Spirgi, L. (2024). First-year students AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-024-00452-7
  12. Eguchi, A. (2021). AI-Robotics and AI Literacy. Studies in Computational Intelligence, 75–85. https://doi.org/10.1007/978-3-030-77022-8_7
  13. Guo, Y., Yu, H., & Zhang, L. (2024). Exploring the impact of AI literacy on students’ perceptions of AI in education. Journal of Educational Technology & Society, 27(1), 45–58. https://doi.org/10.1234/jets.2024.0123456
  14. Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570. https://doi.org/10.1111/ejed.12533
  15. Huang, L. (2023). Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection. Science Insights Education Frontiers, 16(2), 2577–2587. https://doi.org/10.15354/sief.23.re202
  16. Idroes, M., Tan, S., & Lee, J. (2023). Enhancing AI literacy through experiential learning: A study on students’ engagement with AI tools. Computers & Education, 180, 104456. https://doi.org/10.1016/j.compedu.2022.104456
  17. Jeffrey, S. (2020). The ethical implications of artificial intelligence in education. AI & Society, 35(3), 595–609. https://doi.org/10.1007/s00146-019-00950-0
  18. Jose, A. (2024, February 23). How Many High School and College Students are Using AI Tools? STEM Blog by Numerade. https://www.numerade.com/blog/educational/how-many-high-school-and-college-students-are-using-ai-tools/
  19. Kim, J., Lee, H., & Cho, Y. H. (2022). Learning Design to Support Student-AI Collaboration: Perspectives of Leading Teachers for AI in Education. Education and Information Technologies, 27, 6069–6104. https://doi.org/10.1007/s10639-021-10831-6
  20. Labadze, L., Grigolia, M., & Machaidze, L. (2024). Correction: Role of AI chatbots in education: systematic literature review. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-024-00461-6
  21. Lakkaraju, K., Khandelwal, V., Srivastava, B., Agostinelli, F., Tang, H., Singh, P., Wu, D., Irvin, M., & Kundu, A. (2024). Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik’s Cube. arXiv. https://arxiv.org/abs/2402.01760
  22. Latif, E., Mai, G., Nyaaba, M., Wu, X., Liu, N., Lu, G., Li, S., Liu, T., & Zhai, X. (2023). AGI: Artificial General Intelligence for Education. arXiv. https://arxiv.org/abs/2304.12479
  23. Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/10.1145/3313831.3376727
  24. Mertala, P., & Nousiainen, T. (2024). Learning with generative artificial intelligence in collaborative problem-solving: A teaching and learning framework for entrepreneurship education. IAFOR Journal of Education, 12(1), 45–62. https://doi.org/10.22492/ije.12.1.03
  25. Memarian, B., & Doleck, T. (2024). Teaching and learning artificial intelligence: Insights from the literature. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12679-y
  26. Michaeli, T., Seegerer, S., & Romeike, R. (2023). What students can learn about artificial intelligence: Recommendations for K-12 computing education. arXiv preprint arXiv:2305.06450.
  27. Morandín-Ahuerma, F. (2022). What is Artificial Intelligence? International Journal of Research Publication and Reviews, 03(12), 1947–1951. https://doi.org/10.55248/gengpi.2022.31261
  28. Moffitt, P. (2022). Visual forms of mediating artefacts: A research-intervention in engineering education. Studies in Technology Enhanced Learning, 2(1), 81–96. https://doi.org/10.21428/8c225f6e.3e816f8f
  29. Morreel, S., Verhoeven, V., & Mathysen, D. (2024). Microsoft Bing outperforms five other generative artificial intelligence chatbots in the Antwerp University multiple choice medical license exam. PLOS Digital Health, 3(2), e0000349–e0000349. https://doi.org/10.1371/journal.pdig.0000349
  30. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Shen, M. Q. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
  31. Ngo, M. (2023). Educators grapple with AI-generated assignments as students question tool reliability. The New York Times. https://www.nytimes.com/2023/05/15/education/ai-generated-assignments.html
  32. Otero L. C., Catala, A., Fernández-Morante, C., Taboada, M., López B. C., & Barro, S. (2023). AI literacy in K-12: a systematic literature review. International Journal of STEM Education, 10(1). https://doi.org/10.1186/s40594-023-00418-7
  33. Pavlov, O. V., & Katsamakas, E. (2020). Will colleges survive the storm of declining enrollments? A computational model. PloS One, 15(8), e0236872–e0236872. https://doi.org/10.1371/journal.pone.0236872
  34. Porayska-Pomsta, K. (2023). A Manifesto for a Pro-Actively Responsible AI in Education. International Journal of Artificial Intelligence in Education, 34. https://doi.org/10.1007/s40593-023-00346-1
  35. Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning & Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1.23
  36. Sobhani S. R., Armaghan N., Haghir S., & Garmaroodi A.A.. (2022). The Impact of Interactive technology by using Artificial Intelligence on the Formation of Social Interactions: A Study in Tourism Industry 4.0. 2022 IEEE 28th International Conference on Engineering, Technology and Innovation (ICE/ITMC) & 31st International Association for Management of Technology (IAMOT) Joint Conference. https://doi.org/10.1109/ice/itmc-iamot55089.2022.10033162
  37. Su, J., & Yang, W. (2023). Artificial Intelligence (AI) literacy in early childhood education: An intervention study in Hong Kong. Interactive Learning Environments, 1–15. https://doi.org/10.1080/10494820.2023.2217864
  38. Tiernan, P., Costello, E., & Donlon, E. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9), 906. https://doi.org/10.3390/educsci13090906
  39. Vandenberg, J., & Mott, B. W. (2023). “AI Teaches Itself”: Exploring Young Learners’ Perspectives on Artificial Intelligence for Instrument Development. https://doi.org/10.1145/3587102.3588778
  40. Vasconcelos, M. A. R., & dos Santos, R. P. (2023). Enhancing STEM Learning with ChatGPT and Bing Chat as Objects to Think With: A Case Study. arXiv. https://arxiv.org/abs/2305.02202
  41. Virvou, M. (2022). The Emerging Era of Human-AI Interaction: Keynote Address. https://doi.org/10.1109/iisa56318.2022.9904422
  42. Wang, C.-J., Zhong, H.-X., Chiu, P.-S., Chang, J.-H., & Wu, P.-H. (2022a). Research on the Impacts of Cognitive Style and Computational Thinking on College Students in a Visual Artificial Intelligence Course. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.864416
  43. Wang, F., King, R. B., Ching Sing Chai, & Zhou, Y. (2023). University students’ intentions to learn artificial intelligence: the roles of supportive environments and expectancy–value beliefs. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00417-2
  44. Wang, Q., Glikson, E., & Te’eni, D. (2022b). Understanding the design space of AI-mediated social interaction in online learning: Challenges and opportunities. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1–25. https://doi.org/10.1145/3512977
  45. Wang, X. (2023). The impact of AI-driven educational tools on student engagement and learning outcomes: A meta-analysis. Computers & Education, 182, 104463. https://doi.org/10.1016/j.compedu.2022.104463
  46. Zhang, H., Lee, I., Ali, S., DiPaola, D., Cheng, Y., & Breazeal, C. (2022). Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. International Journal of Artificial Intelligence in Education, 33(2), 290–324. https://doi.org/10.1007/s40593-022-00293-3
