International Journal of Research and Innovation in Social Science


Determinants of Students’ Intention to Use AI-Powered Writing Tools in Academic Writing

Saifulnazirah Bongsu1, Wan Nur Mardhiah Nik Mohamed2, Wan Nazihah Wan Mohamed3*

1Universiti Malaya Education Centre Bachok Campus, Kelantan, Malaysia

2Faculty of Computer Science and Mathematics, Universiti Teknologi MARA Cawangan Kelantan, Malaysia

3Akademi Pengajian Bahasa, Universiti Teknologi MARA Cawangan Kelantan, Malaysia

*Corresponding author

DOI: https://dx.doi.org/10.47772/IJRISS.2025.903SEDU0411

Received: 15 July 2025; Accepted: 23 July 2025; Published: 20 August 2025

ABSTRACT

The increasing adoption of Artificial Intelligence (AI) tools by students has significantly transformed various aspects of education, particularly academic writing. This research investigates the factors that affect students’ intention to use AI-based writing tools like ChatGPT and Perplexity in their academic work. Guided by the Value-Based Adoption Model (VAM), a survey involving 219 university students was conducted to assess their views on perceived usefulness, perceived enjoyment, perceived technicality, and perceived cost. Data were analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM) through SmartPLS software version 4.1.0.9. The results indicate that perceived usefulness and perceived enjoyment strongly impact students’ willingness to use AI tools, whereas perceived technicality and perceived cost have no significant effect. These outcomes imply that universities should develop clear policies for responsible AI use while leveraging its potential to support students’ academic development.

Keywords: Value-Based Adoption Model, perceived usefulness, perceived enjoyment, perceived technicality, perceived cost, AI applications

INTRODUCTION

In the current era driven by advanced technologies and the emergence of Industry 5.0, the use of Artificial Intelligence (AI) has increasingly spread across various sectors, including healthcare, finance, marketing, and education. According to Erbas and Maksuti (2024), the rapid advancement of modern technologies has led to AI becoming an integral part of everyday life. AI has also sparked a major transformation in the field of education, particularly in academic contexts. Research has shown that AI tools hold great promise for enhancing academic writing by helping students improve their writing skills, increase productivity, and minimize errors (Utami et al., 2023). Applications such as ChatGPT, ChatBots, and PaperPal are commonly used to support students in completing assignments, conducting research, and refining their writing abilities. These AI tools function through algorithms designed to help users perform specific academic tasks and solve problems. However, utilizing AI in academic writing effectively requires a certain level of readiness. Students must be adequately prepared to benefit from these technologies. Malik et al. (2023) defined students’ preparedness in using AI tools as the extent to which they possess the necessary knowledge, skills, and attitudes to engage with these applications effectively. This includes understanding fundamental AI concepts, being familiar with the features and capabilities of AI tools and having the motivation to use the tools to enhance their academic writing.

As digital technology continues to evolve, especially with the growing integration of AI in education, more students are becoming inclined to use AI tools to support their academic writing. The availability of AI-powered writing applications that assist with tasks such as topic development, research support, and drafting is steadily increasing (Malik et al., 2023). These tools also enhance writing by improving sentence structure, vocabulary, and correcting grammatical, spelling, and punctuation mistakes. Additionally, they assist students in finding relevant sources and organizing research data effectively. However, despite these advantages, many students face challenges in using AI tools due to unfamiliarity or lack of confidence with the technology. A significant number of students have not been formally trained in AI, leading to limited knowledge and usage. This lack of preparedness hinders their ability to fully benefit from AI tools in enhancing their writing and academic performance. It is essential for students to recognize that AI should serve as a support tool, not a replacement, for their original ideas and critical thinking. Rather than relying solely on AI, students should use it to refine their writing skills while maintaining ownership of the content they produce. This highlights the need for proper guidance and resources to help students confidently use AI in their academic work. Therefore, this study focuses on examining the readiness and intention of students at Universiti Malaya Bachok Campus to adopt AI applications in their academic writing (Khalifa & Albadawy, 2024).

This study adopts the Value-Based Adoption Model (VAM) to examine students’ intentions to use AI applications in academic writing. VAM is a theoretical model that explores how individuals decide to adopt AI technologies by evaluating the perceived benefits, such as usefulness and enjoyment, and the perceived sacrifices, which include technical complexity and cost (Liao et al., 2022). Perceived usefulness refers to the extent to which students believe that AI tools can enhance the quality of their academic writing. When students perceive these tools as helpful in improving their writing, they are more likely to embrace and use them. Similarly, perceived enjoyment plays a significant role in shaping intention; if students find the experience of using AI tools engaging and enjoyable, their willingness to use them increases. Perceived technicality, on the other hand, can act as a barrier. If students find AI applications difficult to understand or operate, they may be discouraged from using them due to the effort and technical skills required. Likewise, perceived cost influences adoption decisions in which students who perceive AI tools as expensive may be reluctant to use them, feeling that the financial cost outweighs the potential benefits.

Therefore, this study aims to identify the key factors that influence students’ readiness to use AI in academic writing by applying the Value-Based Adoption Model (VAM) within the context of AI adoption. Subsequently, the research question for this study is: What is the influence of perceived usefulness, perceived enjoyment, perceived technicality, and perceived cost on students’ intentions to use AI applications in academic writing? Gaining deeper insights into these factors can support the development and implementation of AI-based writing applications that align more closely with students’ needs and preferences, ultimately promoting broader acceptance and integration of these tools into the academic writing process.

The results of this study will be valuable to both students and lecturers by offering meaningful insights into students’ preparedness to adopt AI tools for academic writing. The research emphasizes the significance of incorporating AI applications into the writing process to enrich learning experiences and improve academic outcomes. It demonstrates that AI tools can positively influence students’ academic journeys by boosting understanding, creativity, and productivity. With the ability to enhance the efficiency and quality of writing, AI tools have become increasingly beneficial for students. Applications like ChatGPT, QuillBot, Perplexity, and Grammarly have shown notable improvements in students’ writing abilities. Furthermore, this study can assist educators in identifying gaps in students’ knowledge and in providing targeted feedback to support their academic progress. Through AI-powered chatbots and virtual assistants, lecturers can deliver timely support outside of class, helping to keep students engaged and motivated.

LITERATURE REVIEW

Artificial Intelligence (AI) refers to the simulation of human intelligence by technological systems, particularly computer-based technologies (Soori et al., 2023). Developing and training AI involves specialized software and hardware to create machine learning algorithms. Typically, AI systems operate by processing large amounts of labelled data, identifying patterns and relationships within that data, and using those patterns to predict future outcomes. AI encompasses several branches, such as machine learning (ML) and deep learning, which allow systems to learn from data and adapt over time (Soori et al., 2023). AI technology is now widely used across various sectors, including healthcare, finance, education, and marketing. In the field of education, AI applications have become increasingly prevalent in recent years (Zawacki-Richter et al., 2019). AI supports a range of educational functions, including personalized learning, assistance with research and writing, enhanced productivity, career guidance, and accessibility improvements. It is particularly effective in helping students strengthen their writing abilities and achieve better academic results. Popular AI tools like Grammarly, ChatGPT, QuillBot, and Perplexity assist students in completing academic tasks by offering support in researching, drafting, and editing. Additionally, AI-powered platforms help students locate relevant sources, organize information, and provide suggestions to improve the quality of their writing (Vieriu & Petrea, 2025).

Students’ intentions play a crucial role in determining their behaviour when it comes to using AI tools for academic writing. Numerous prior studies have employed theoretical models to evaluate users’ willingness to adopt and accept new technologies (Shelvia et al., 2020). These models provide researchers with a solid conceptual framework to identify and measure relevant independent variables. One widely used model is the Technology Acceptance Model (TAM) (Davis et al., 1989), which typically relies on quantitative methods to examine users’ acceptance of technology. TAM helps researchers understand the key factors that influence technology adoption, primarily focusing on perceived usefulness and perceived ease of use as the two core determinants (Davis et al., 1989). Other studies have shown that user attitudes play a significant mediating role in shaping how people accept and use technology (Al-Abdullatif, 2023). Previous research has also revealed strong links between attitude and behavioral intention in contexts such as smart homes, social media, online banking, and mobile payments (Mahmud et al., 2024). When users find a technology both beneficial and easy to use, they tend to form positive attitudes toward it, which in turn strengthens their intention to use it. As such, perceived usefulness, perceived ease of use, and attitude are central in shaping users’ intentions to adopt a specific technology application.

Within the framework of the Technology Acceptance Model (TAM), the model’s ability to predict individuals’ choices regarding the acceptance or rejection of new technologies is somewhat limited. This is mainly because TAM-based studies often rely on self-reported data regarding users’ intentions, which may be affected by biases and inaccuracies. In addition, Liao et al. (2022) pointed out limitations in TAM, particularly its lack of consideration for social, personal, and cultural influences on technology acceptance. To address this limitation, Dai et al. (2020) integrated the Value-Based Adoption Model (VAM) with other models to better assess user intentions toward adopting various products and services.

Value-Based Adoption Model (VAM) is particularly useful for analysing the intention to adopt digital technologies, including AI applications. It blends the elements from TAM with the concept of perceived value to provide a more accurate and comprehensive understanding of technology acceptance (Al-Maroof et al., 2021). VAM emphasizes how perceived value is influenced by perceived benefits, such as usefulness and enjoyment, and perceived sacrifices, such as complexity and cost. It also explores how perceived value, in turn, affects the intention to use the technology. A previous study employing Multiple Linear Regression (MLR) analysis found that perceived benefits positively influence users’ continued use of mobile payment services, whereas perceived sacrifices have a negative impact (Shelvia et al., 2020). The findings suggest that perceived benefits enhance hedonic value, referring to the emotional satisfaction users experience when engaging with AI tools. Conversely, perceived sacrifices can result in negative experiences, such as frustration, disappointment, or anxiety, which may reduce users’ willingness to adopt or continue using AI technologies.

In the context of students’ readiness to use AI tools for academic writing, VAM suggests that their willingness is influenced by both perceived benefits and perceived sacrifices. The intention to adopt AI technology is ultimately driven by perceived value, highlighting the importance of understanding how users evaluate the worth of such tools during the adoption process. For students, especially those who highly prioritize their writing abilities, recognizing the value of AI in enhancing their skills can motivate them to embrace these technologies. Research has shown that incorporating AI into academic writing not only boosts students’ academic performance but also increases their engagement and motivation. It further supports the development of essential digital competencies that are increasingly sought after in today’s job market (Ayanwale & Ndlovu, 2024). Beyond improving writing, AI tools assist students in various research-related activities, such as locating credible sources, offering real-time assistance, and fostering technical and critical thinking abilities; skills that are highly regarded by employers.

Factors Associated with Students’ Intention to Use AI

The Value-Based Adoption Model (VAM) highlights perceived benefits as a key factor in predicting users’ acceptance of technology, with perceived usefulness being one of its most influential components. Perceived usefulness refers to an individual’s belief in the technology’s ability to assist them in completing tasks effectively. Research by Mahmud et al. (2024) confirmed that perceived usefulness significantly shapes positive attitudes toward AI technologies. In an academic setting, AI is considered useful when it helps students generate strong ideas, saves time, or enhances the quality of their writing. If students do not perceive these tools as beneficial, they are less likely to adopt them for academic purposes. Empirical findings by Salido (2023) suggested that students are more inclined to use AI tools regularly when they believe those tools can improve their writing quality and efficiency. Similarly, Bolanos et al. (2024) found that students’ intention to use AI-powered writing assistants is strongly influenced by how useful they perceive these tools to be, particularly in enhancing writing skills and delivering personalized feedback. Furthermore, research by Nagy and Hajdú (2021) emphasized that perceived usefulness is a stronger determinant of behavioral intention than perceived ease of use. This suggests that demonstrating the actual benefits of AI tools is critical in shaping students’ attitudes and increasing their willingness to adopt such technologies. Supporting this, a study by Al-Maroof et al. (2021) found a significant positive relationship between perceived usefulness and the intention to use platforms like Google Meet. Overall, a growing body of research confirms that perceived usefulness plays a vital role in influencing students’ intentions to use AI applications in academic writing.

In addition to perceived usefulness, perceived enjoyment is another key factor that contributes to perceived benefit. While perceived usefulness offers functional or utilitarian value, perceived enjoyment delivers emotional or hedonic value (Al-Abdullatif, 2023). Numerous studies have shown that integrating AI-powered technologies can enhance students’ engagement, motivation, and overall satisfaction. For example, Cabero-Almenara et al. (2019) noted that when university students find technologies like augmented reality enjoyable, they are more inclined to adopt them. The enjoyment factor has also been identified as a crucial element that sets the VAM apart from other models. In a foundational study by Davis et al. (1989), enjoyment and fun are found to significantly influence technology acceptance, even beyond the impact of usefulness. Users who feel comfortable with technology often experience immediate gratification, which fosters a sense of personal satisfaction. Over time, this positive emotional response can shift their mindset, leading to greater acceptance and use of the technology. Moreover, a survey by Sohn and Kwon (2020) revealed that among the different VAM factors, enjoyment has the strongest influence on users’ purchase intentions. When AI tools provide an enjoyable and rewarding learning experience, students are more likely to engage with them for academic writing, which can ultimately enhance learning outcomes.

Perceived sacrifice can be categorized into two types: monetary and non-monetary. Monetary sacrifice refers to the actual financial cost of acquiring a product or service, while non-monetary sacrifice involves factors such as time, effort, and dissatisfaction related to the product’s price (Kim et al., 2017). Within the VAM, perceived value is used to assess whether users are likely to accept or adopt a technology. This concept includes the evaluation of technical elements such as user engagement with content, learning effectiveness, audio-visual quality, and physical interaction (Al-Maroof et al., 2021). A recent study defined technicality as the assessment of how well a new technology performs in delivering its intended function, considering factors like ease of use, reliability, connectivity, and efficiency (Jingnan et al., 2023). Several studies have explored users’ perceptions of AI system complexity, stressing the need for intuitive interfaces and clear instructions to improve acceptance. For instance, Deepika et al. (2022) found that when users view chatbots as technically complicated, it negatively affects their attitudes toward using them. If users find AI difficult to understand or operate, it may lead to confusion or frustration, which in turn discourages adoption. Conversely, Mahmud et al. (2024) suggested that when users believe technology is easy to operate and master, their attitude toward adoption becomes more positive. This effect tends to be more pronounced among first-time users than those with prior experience. However, it is important to note that users’ perceptions of technical complexity do not always align with the actual technical features of the system (Shelvia et al., 2020). Overall, higher perceived technicality tends to diminish the perceived value of the technology, which can lower users’ intention to adopt it.

Previous studies have indicated that perceived sacrifice refers to the time, money, or other resources that users feel they give up in exchange for a product or service (Berto & Bursan, 2023). One of the key components of perceived sacrifice is perceived cost, which plays a crucial role in evaluating the overall value of a technology. According to a qualitative study by Andersson and Heinonen (2002), high costs can strongly affect students’ perceptions, making cost an important factor to consider both before and after adopting AI technologies. Further supporting this, Mahmud et al. (2024) found a negative relationship between perceived cost and user attitudes toward technology. On the other hand, Fan and Jiang (2024) discovered that perceived cost can have a significant positive impact on designers’ intention to continue using AI-based drawing tools. However, even if a tool is seen as useful, excessive costs may still discourage continued usage. These findings underline the importance of minimizing perceived costs to enhance technology acceptance and long-term usage. Understanding how cost influences students’ willingness to adopt and continue using AI tools for academic writing is essential for supporting successful integration in educational settings.

Based on the review of the variables that affect users’ intention to adopt AI applications, this study adopts the Value-Based Adoption Model (VAM) by focusing on the selected variables, as depicted in Figure 1 below. The independent variables in this study include perceived usefulness, perceived enjoyment, perceived technicality, and perceived cost. These variables are examined for their impact on the dependent variable which is students’ intention to use Artificial Intelligence (AI) tools in academic writing. Based on the framework, the study intends to investigate whether perceived usefulness, perceived enjoyment, perceived technicality and perceived cost positively influence students’ intentions to use AI applications in academic writing.

Figure 1 Theoretical Framework

METHODOLOGY

The objective of this study is to determine the influence of perceived usefulness, perceived enjoyment, perceived technicality, and perceived cost on students’ intentions to use AI-powered writing tools in academic writing. To determine the relationship between the factors, this study uses a cross-sectional analysis design. The data are collected using a self-administered questionnaire with variable items adapted from previous studies. Altogether, the questionnaire has six sections; Section A consists of a demographic profile with questions on gender, age and the respondents’ frequency of using AI-powered writing tools such as ChatGPT; Section B focuses on the dependent variable (intention to use AI tools), while Section C to Section F consists of independent variables (perceived usefulness, perceived enjoyment, perceived technicality, and perceived cost). The questionnaire items for Section B to Section F are designed using a 7-point Likert scale with the options ranging from strongly disagree to strongly agree. Table 1 below shows the description of variables applied in this study.

Table 1 Questionnaire Development

Variable Number of

Items

Sources
Students’ intention to use AI 7 Mahmud et al. (2024); Sohn & Kwon (2020); Utami et al. (2023); Al-Abdullatif (2023)
Perceived usefulness 7 Mahmud et al. (2024); Sohn & Kwon (2020); Utami et al. (2023)
Perceived enjoyment 7 Mahmud et al. (2024); Sohn & Kwon (2020); Utami et al. (2023)
Perceived technicality 7 Mahmud et al. (2024); Sohn & Kwon (2020)
Perceived cost 7 Mahmud et al. (2024); Sohn & Kwon (2020); Chan & Zhou (2023)

The target population comprises students currently enrolled in the Foundation programme at Universiti Malaya Bachok Campus, which has a total of 264 students. Based on Krejcie and Morgan’s (1970) sample size table, the required sample size for this study is 156 respondents. However, in consideration of potential sampling errors, blank responses and dropouts, this study distributes the questionnaire to the total population of 265 students. The study collects information from the respondents using simple random sampling, in which every respondent in the population has an equal chance of being selected as part of the sample. The questionnaire is distributed as an online Google Forms survey shared through the WhatsApp application, and responses are gathered over approximately one week.
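For readers who wish to verify the sample size, the figure reported above follows from the formula underlying Krejcie and Morgan’s (1970) table; the sketch below is a minimal illustration assuming the conventional parameters (chi-square of 3.841 for 95% confidence, a population proportion of 0.5, and a 5% margin of error).

```python
import math

def krejcie_morgan(population: int,
                   chi_sq: float = 3.841,   # chi-square for 1 df at 95% confidence
                   p: float = 0.5,          # assumed population proportion
                   e: float = 0.05) -> int: # margin of error
    """Minimum sample size based on the Krejcie & Morgan (1970) formula."""
    numerator = chi_sq * population * p * (1 - p)
    denominator = (e ** 2) * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

# For the Foundation programme population of 264 students, the formula yields
# roughly 157, close to the value of 156 read from the published table.
print(krejcie_morgan(264))
```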

Before analysing the relationships between the factors involved, the study conducted preliminary analyses, namely descriptive analysis and reliability analysis. Descriptive analysis describes the sample’s characteristics and ensures that the variables meet the statistical assumptions for answering the research questions. Reliability analysis refers to the extent to which an instrument yields consistent results across different administrations or different sets of items purported to measure the same construct (DeVellis, 2016). According to Lai et al. (2022), a Cronbach’s alpha coefficient above 0.7 is considered to indicate good reliability.
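As an illustration of the reliability criterion described above, Cronbach’s alpha can be computed directly from item responses; the sketch below uses hypothetical 7-point Likert scores, not the study’s actual data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point Likert responses for a 7-item construct (5 respondents).
scores = np.array([
    [6, 7, 6, 5, 6, 6, 7],
    [5, 5, 6, 5, 5, 6, 5],
    [7, 7, 7, 6, 7, 7, 7],
    [4, 4, 5, 4, 4, 5, 4],
    [6, 6, 6, 6, 5, 6, 6],
])
print(round(cronbach_alpha(scores), 3))  # values above 0.7 indicate good reliability
```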

Subsequently, Partial Least Squares Structural Equation Modeling (PLS-SEM) is applied to analyse the data in achieving the research objectives. The data analysis using PLS-SEM includes Common Method Variance (CMV), measurement model and structural model. Common method variance (CMV) is the variance attributed to the measuring method rather than the constructs represented by the measures (Chang et al., 2010), which analyses whether the items provide false internal consistency, resulting in an apparent correlation between variables created by the same source. The measurement model includes the analysis of indicator reliability, internal consistency reliability, convergent validity and discriminant validity (Hair et al., 2021).

To assess indicator reliability, loadings greater than 0.70 are recommended, as they indicate that the construct explains more than half of the indicator’s variance, resulting in satisfactory indicator reliability. For internal consistency reliability, Cronbach’s alpha scores between 0.60 and 0.70 are classified as “acceptable in exploratory research,” while those between 0.70 and 0.90 range from “satisfactory to good”. To assess convergent validity, the average variance extracted (AVE) for each construct is calculated by summing the squared loadings of the construct’s indicators and dividing by the number of indicators. A minimum acceptable AVE of 0.50 indicates that the construct explains 50% or more of the variance in its indicators. Lastly, discriminant validity is examined using the heterotrait-monotrait ratio (HTMT) of correlations, which compares the mean value of indicator correlations across constructs with the mean of the average correlations among indicators measuring the same construct. Henseler et al. (2015) asserted that an HTMT score below 0.85 (or 0.90) suggests good discriminant validity.
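To make the AVE and composite reliability computations concrete, the following sketch re-applies the formulas described above to the perceived cost outer loadings that are later reported in Table 3; it is an illustrative hand calculation, not SmartPLS output.

```python
import numpy as np

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared outer loadings."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability (rho_c) from outer loadings, assuming uncorrelated errors."""
    squared_sum = loadings.sum() ** 2
    error_variance = np.sum(1 - loadings ** 2)
    return float(squared_sum / (squared_sum + error_variance))

# Outer loadings of the perceived cost (PC) items reported in Table 3.
pc_loadings = np.array([0.582, 0.497, 0.487, 0.659, 0.654, 0.806, 0.774])

print(round(ave(pc_loadings), 3))                    # about 0.419, below the 0.50 benchmark
print(round(composite_reliability(pc_loadings), 3))  # about 0.830
```

Running this reproduces the values reported later for perceived cost (AVE ≈ 0.419 and composite reliability ≈ 0.830), which is why that construct falls just below the 0.50 convergent validity benchmark.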

The next step of the analysis is the structural model evaluation, which analyses the relationships between constructs through regression estimates (Edeh et al., 2022). However, collinearity must be tested before examining structural relationships to ensure that the regression outcomes are not biased. Thus, Variance Inflation Factor (VIF) values, which should be below 5, are used to detect and assess multicollinearity among predictor variables. Next, the statistical significance and relevance of the path coefficients are evaluated. The significance assessment uses bootstrapping standard errors to calculate t-values of the path coefficients. A path coefficient estimate represents the change in the dependent construct (measured in standard deviations) when an independent construct is increased by one standard deviation while all other explanatory constructs remain constant (Yahaya et al., 2019). In this study, a path coefficient is considered significant at the 5% level if its 95% confidence interval does not include zero.
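A minimal sketch of the collinearity check described above, assuming standardized latent variable scores are available as columns of a matrix; each VIF is computed as 1/(1 − R²) from regressing one predictor on the remaining predictors. The data below are simulated purely for illustration.

```python
import numpy as np

def vif(predictors: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of an (n x k) predictor matrix."""
    n, k = predictors.shape
    vifs = np.empty(k)
    for j in range(k):
        y = predictors[:, j]
        X = np.delete(predictors, j, axis=1)
        X = np.column_stack([np.ones(n), X])          # add intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS of predictor j on the others
        residuals = y - X @ beta
        r_squared = 1 - residuals.var() / y.var()
        vifs[j] = 1 / (1 - r_squared)
    return vifs

# Hypothetical latent scores for PU, PE, PT, PC (rows = respondents).
rng = np.random.default_rng(0)
scores = rng.normal(size=(219, 4))
scores[:, 1] = 0.8 * scores[:, 0] + 0.2 * rng.normal(size=219)  # induce some collinearity
print(np.round(vif(scores), 3))  # values below 5 indicate acceptable collinearity
```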

In addition, the coefficient of determination (R2) is examined, as it measures how accurately a statistical model predicts an outcome, represented by the model’s dependent variable. Edeh et al. (2022) suggested that R2 values of 0.75, 0.50, and 0.25 are regarded as substantial, moderate, and weak, respectively. Another method to assess the PLS path model’s predictive accuracy is to calculate the Q2 value. According to Yahaya et al. (2019), a Q2 value greater than zero for an endogenous construct indicates the structural model’s predictive accuracy, and Q2 values above 0, 0.25, and 0.50 indicate small, medium, and large predictive relevance, respectively.
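The thresholds above can be expressed as a small helper for readers replicating the assessment; this is only a convenience sketch of the cited rules of thumb, not part of the SmartPLS output.

```python
def interpret_r2(r2: float) -> str:
    """Label explanatory power using the R2 cut-offs cited from Edeh et al. (2022)."""
    if r2 >= 0.75:
        return "substantial"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "below the weak threshold"

def interpret_q2(q2: float) -> str:
    """Label predictive relevance using the Q2 cut-offs cited from Yahaya et al. (2019)."""
    if q2 > 0.50:
        return "large"
    if q2 > 0.25:
        return "medium"
    if q2 > 0:
        return "small"
    return "no predictive relevance"

# Applied to the values later reported in Table 9 (R2 = 0.670, Q2 = 0.654).
print(interpret_r2(0.670), interpret_q2(0.654))  # -> moderate large
```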

FINDINGS

The data analysis for this study contains three major parts, which are preliminary analysis, measurement model, and structural model assessments. The preliminary analysis includes descriptive analysis, reliability analysis and common method bias, which are conducted using Statistical Package for Social Science (SPSS) software. Then, the measurement model is analysed to find the indicator reliability, internal consistency, convergent validity, and discriminant validity using SmartPLS software version 4.1.0.9. The last step is the structural model, which contains the collinearity statistics, assessment of the coefficient of determination, predictive relevance, and path coefficient.

Data collection began by distributing the questionnaire to the whole population of 310 students, which yielded a total of 219 responses. According to Cleave (2020), a response rate of 50% or higher is frequently viewed as excellent. Since the study obtained 219 responses, a response rate of approximately 71%, it is acceptable to proceed with the analysis and findings. Of the 219 participants, the majority were female (60.7%), while males constituted 39.3%. In terms of age, almost all of the respondents (97.3%) were below 20 years, with only a minor proportion aged between 21 and 25 years (2.3%) or older than 25 years (0.5%). All respondents were enrolled in the Foundation in Islamic Studies programme (100%) and were in semester 2 (100%). With regard to the frequency of AI usage, nearly half of the participants (47.7%) used AI two or three times per week, while 33.2% used it four or five times per week. A smaller group (18.2%) reported using AI six or more times per week, and none of the respondents reported never using AI.

The next step is assessing the reliability of the questionnaire items. Analysing construct reliability is important to measure how well a set of items measures a single construct. Referring to Table 2, all the constructs in the study produced very good to excellent reliability scores, with Cronbach’s alpha values ranging from 0.867 to 0.958. This indicates that the instrument is stable and consistent enough to be used in the main analysis.

Table 2 Reliability Analysis of Questionnaire Items

Construct No. of Items Cronbach’s Alpha
Students’ intention to use AI 7 0.883
Perceived usefulness 7 0.934
Perceived enjoyment 7 0.958
Perceived technicality 7 0.921
Perceived cost 7 0.867

The data for this study were collected from Foundation-level students of Universiti Malaya at Bachok Campus using a self-reported questionnaire administered during the same period. Using the same respondents for both the independent and dependent variables can lead to common method variance, a systematic measurement error that can bias estimates of the relationships between constructs (Podsakoff et al., 2003). Applying the marker variable technique, the R2 values obtained with and without the marker variable were both 0.670. This indicates that the difference in R2 is less than the suggested 10 percent threshold, and the collected data are free from the threat of common method bias.
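A minimal sketch of the marker variable comparison described above, using simulated data; it assumes standardized predictor and outcome scores plus a theoretically unrelated marker variable, and simply contrasts R2 with and without the marker.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R2 of an OLS regression of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

# Hypothetical standardized scores: predictors (PU, PE, PT, PC), outcome (SI), marker.
rng = np.random.default_rng(1)
predictors = rng.normal(size=(219, 4))
intention = predictors @ np.array([0.55, 0.25, -0.03, 0.10]) + rng.normal(scale=0.6, size=219)
marker = rng.normal(size=(219, 1))  # variable theoretically unrelated to the model

r2_base = r_squared(predictors, intention)
r2_marker = r_squared(np.hstack([predictors, marker]), intention)
# If adding the marker changes R2 by less than about 10%, common method bias is
# unlikely to be a serious threat (in the study, both values were 0.670).
print(round(r2_base, 3), round(r2_marker, 3))
```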

The next analysis involves the measurement model assessment to evaluate the reliability and validity of the constructs before proceeding to the structural model. Referring to Table 3, the factor loadings of the questionnaire items were greater than 0.7 except for SI4 (0.609), PT2 (0.649), PC1 (0.582), PC2 (0.497), PC3 (0.487), PC4 (0.659) and PC5 (0.654). However, other indicator reliability thresholds have also been proposed: Byrne (2000) explained that loading values equal to or greater than 0.50 and 0.60 are acceptable, and Cheung et al. (2024) noted that factor loadings greater than 0.4 can be acceptable in exploratory contexts, particularly when retaining them enhances overall validity and reliability measures such as average variance extracted (AVE) and composite reliability (CR). Thus, all the items in this study met the verification standard and were retained. Next, the Cronbach’s alpha and CR values of all dimensions were greater than 0.7, indicating good reliability and internal consistency. The AVE values of the dimensions were all greater than 0.5 except for PC (0.419), indicating that this construct explains less than 50% of the variance in its indicators.

Table 3 Measurement Model Assessment

Item Factor Loading Cronbach’s Alpha Composite Reliability Average Variance Extracted
Students’ Intention (SI)   0.888 0.913 0.602
SI1 0.800
SI2 0.847
SI3 0.833
SI4 0.609
SI5 0.719
SI6 0.813
SI7 0.786
Perceived Usefulness (PU)   0.934 0.947 0.719
PU1 0.809
PU2 0.886
PU3 0.878
PU4 0.800
PU5 0.891
PU6 0.832
PU7 0.835
Perceived Enjoyment (PE)   0.958 0.965 0.798
PE1 0.898
PE2 0.881
PE3 0.892
PE4 0.889
PE5 0.911
PE6 0.891
PE7 0.889
Perceived Technicality (PT)   0.922 0.912 0.600
PT1 0.703
PT2 0.649
PT3 0.765
PT4 0.758
PT5 0.801
PT6 0.845
PT7 0.878
Perceived Cost (PC)   0.865 0.830 0.419
PC1 0.582
PC2 0.497
PC3 0.487
PC4 0.659
PC5 0.654
PC6 0.806
PC7 0.774

Table 4 presents the discriminant validity analysis based on the Fornell-Larcker criterion. The square root of each construct’s AVE was greater than the correlation values in its row and column. For example, the square root of the AVE for PC = √0.419 = 0.647, which was higher than its correlations of 0.234, 0.546, 0.233, and 0.276 with the other constructs. As such, the collected data are considered to have achieved good discriminant validity.

Table 4 Discriminant Validity analysis

Dimensions AVE PC PE PT PU SI
PC 0.419 0.647
PE 0.798 0.234 0.893
PT 0.600 0.546 0.043 0.775
PU 0.719 0.233 0.877 0.107 0.848
SI 0.602 0.276 0.767 0.100 0.804 0.776

Note: The diagonal values are the square roots of the AVE, and the remaining values are the correlation coefficients between the dimensions.

However, the heterotrait–monotrait (HTMT) analysis in Table 5 shows that not all values were less than 0.9 (PU↔PE = 0.926), indicating a potential lack of discriminant validity between these constructs. Therefore, bootstrapping was conducted to test the significance of the HTMT values. If the confidence interval does not include 1, discriminant validity may still be established even if the HTMT value exceeds 0.9 (Roemer et al., 2021).

Table 5 Heterotrait-Monotrait Ratio of Correlations

Dimensions PC PE PT PU SI
PC
PE 0.140
PT 0.752 0.068
PU 0.159 0.926 0.082
SI 0.186 0.821 0.104 0.867

Table 6 below displays the confidence intervals for the HTMT values obtained from bootstrapping with 95% confidence intervals; the 2.5% column represents the lower bound, whereas the 97.5% column represents the upper bound. Since none of the confidence intervals includes 1, discriminant validity can still be considered acceptable despite the high HTMT value.

Table 6 Confidence Intervals for HTMT Values

Constructs Original sample (O) Sample mean (M) 2.5% 97.5%
PE ↔ PC 0.140 0.175 0.115 0.265
PT ↔ PC 0.752 0.749 0.620 0.857
PT ↔ PE 0.068 0.110 0.062 0.193
PU ↔ PC 0.159 0.189 0.116 0.302
PU ↔ PE 0.926 0.925 0.890 0.955
PU ↔ PT 0.082 0.125 0.080 0.196
SI ↔ PC 0.186 0.225 0.161 0.321
SI ↔ PE 0.821 0.821 0.744 0.886
SI ↔ PT 0.104 0.142 0.091 0.225
SI ↔ PU 0.867 0.868 0.788 0.930
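To illustrate the decision rule applied to Table 6, the short sketch below checks whether each bootstrap confidence interval contains 1; it merely re-expresses the criterion of Roemer et al. (2021) using the interval bounds reported above.

```python
# HTMT bootstrap confidence intervals copied from Table 6 (2.5% and 97.5% bounds).
htmt_intervals = {
    "PE-PC": (0.115, 0.265), "PT-PC": (0.620, 0.857), "PT-PE": (0.062, 0.193),
    "PU-PC": (0.116, 0.302), "PU-PE": (0.890, 0.955), "PU-PT": (0.080, 0.196),
    "SI-PC": (0.161, 0.321), "SI-PE": (0.744, 0.886), "SI-PT": (0.091, 0.225),
    "SI-PU": (0.788, 0.930),
}

for pair, (lower, upper) in htmt_intervals.items():
    # Discriminant validity is supported when the interval excludes 1,
    # even if the HTMT point estimate exceeds 0.9 (Roemer et al., 2021).
    verdict = "supported" if not (lower <= 1 <= upper) else "questionable"
    print(f"{pair}: [{lower:.3f}, {upper:.3f}] -> {verdict}")
```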

After validating the measurement model, the structural model assessment is performed to analyse whether the hypothesized relationships in the model are statistically significant and meaningful. The first step in evaluating the structural model is to check multicollinearity by analysing the Variance Inflation Factor (VIF) values for all predictor constructs. Based on Table 7, the VIF values for PC → SI (1.521) and PT → SI (1.465) were well below 5, indicating no collinearity concerns for these dimensions. However, the VIF values for PE → SI (4.458) and PU → SI (4.410) were close to the common threshold of 5. In this case, moderate collinearity exists, but it is still acceptable to proceed with the structural model assessment.

Table 7 Collinearity Analysis

Dimensions Variance Inflation Factor
PC → SI 1.521
PE → SI 4.458
PT → SI 1.465
PU → SI 4.410

The path analysis results in Table 8 present the path coefficients, t-values, and p-values for the relationships between the independent constructs (PC, PE, PT, PU) and the dependent construct (SI). To determine statistical significance, the t-values and p-values are compared against standard thresholds: if the t-value is greater than 2.576 and the p-value is less than 0.01, the path is significant at the 1% level (indicated by *). Based on the results, PE and PU have significant effects on SI. Specifically, PU has the strongest impact, as its high path coefficient and highly significant p-value (p < 0.01) suggest a strong positive relationship with SI. Similarly, PE also has a statistically significant influence at the 1% level (p < 0.01), though its effect is moderate compared to PU. On the other hand, PC and PT do not have statistically significant effects on SI, as their p-values exceed 0.05 and their t-values fall below the critical threshold of 1.96. Therefore, these results suggest that while PU and PE are important determinants of SI, PC and PT do not play a significant role in influencing it within this model. The PLS-SEM path analysis model is shown in Figure 2.

Table 8 Path Analysis Results

Path Analysis Path Coefficient T value P value Interpretation
PC → SI 0.099 1.659 0.097 Not significant
PE → SI 0.254 2.825 * 0.005 Significant
PT → SI -0.025 0.571 0.568 Not significant
PU → SI 0.561 6.348 * 0.000 Significant

* Significant at p<0.01

Figure 2 Model of PLS-SEM Path Analysis

Finally, the coefficient of determination (R2) and predictive relevance (Q2) values are measured to assess the explanatory power of the structural model and its ability to predict the endogenous construct. According to Table 9, the R2 value for students’ intention was 0.670, indicating that the model explains 67.0% of the variance in the endogenous construct and has a moderate-to-high degree of explanatory power. Likewise, the Q2 value was 0.654, which is greater than zero, indicating that the model has predictive relevance for the endogenous construct. Thus, the structural model is able to predict the indicators of the endogenous latent variable.

Table 9 Coefficient of Determination and Predictive Relevance

R2 Q2
SI 0.670 0.654

DISCUSSION AND CONCLUSION

This study explores the intention to use artificial intelligence (AI) in academic writing among students of Universiti Malaya Bachok Campus by focusing on the factors derived from the Value-Based Adoption Model (VAM). The reliability and validity tests in the measurement model assessment show that almost all factor loading values are greater than 0.5, and the few lower-loading items were retained following Byrne (2000) and Cheung et al. (2024), indicating that the measurement model has acceptable indicator reliability. The Cronbach’s alpha and composite reliability values of all constructs are greater than 0.7, indicating good reliability and internal consistency. The AVE values of all constructs except perceived cost are greater than 0.5, signifying that the model has largely acceptable convergent validity. Furthermore, the discriminant validity test using the Fornell-Larcker criterion shows that the square root of each construct’s AVE is greater than its correlations with the other constructs, suggesting that the model achieves good discriminant validity. However, since the heterotrait–monotrait analysis shows that not all values are less than 0.9 (suggesting a potential lack of discriminant validity), bootstrapping was performed to test the HTMT significance. The resulting confidence intervals do not include 1, indicating that discriminant validity can still be considered acceptable despite the high HTMT value.

With regard to multicollinearity, all VIF values are less than 5, although some are close to the threshold. This shows that there is moderate collinearity between the predictor constructs, but it is still appropriate to evaluate the structural model. The path relationships of the model show that perceived usefulness and perceived enjoyment significantly influence students’ intention to use AI applications in academic writing. This highlights the importance of tools that not only improve productivity and writing quality but also create engaging and satisfying user experiences. On the other hand, perceived technicality and perceived cost do not significantly influence students’ intentions to use AI applications in academic writing. This shows that, while complexity and expense may be concerns, they do not outweigh the perceived benefits for the majority of students. Finally, the R2 and Q2 values indicate that the model has considerable explanatory power and predictive relevance for the endogenous construct.

The Value-Based Adoption Model (VAM) provides a robust framework to understand why students choose to adopt or reject AI applications in academic writing. According to this model, technology adoption is determined by a value judgment in which users weigh perceived benefits (such as usefulness and enjoyment) against perceived sacrifices (including technicality and cost) to arrive at an overall perceived value, which drives their behavioral intention to adopt (Rruplli et al., 2024). The statistical analyses of this study reveal that both perceived usefulness (e.g., improved writing efficiency) and perceived enjoyment (e.g., satisfaction from using AI) have strong positive effects on students’ adoption intention. Conversely, neither perceived technicality nor perceived cost reached significance in predicting students’ intent to adopt. This pattern has also been observed across multiple technologies and contexts, supporting the view that VAM’s benefit-versus-sacrifice balance is generalizable and robust.

VAM-aligned research demonstrates that usefulness is consistently a top driver of perceived value, motivating the users’ adoption (Rruplli et al., 2024; Bian et al., 2023). In this study, perceived usefulness is found to affect users’ adoption; thus, students are more likely to adopt AI writing tools when they believe these tools will enhance their academic performance. This study also found that perceived enjoyment affects users’ adoption since enjoyment derived from using AI, such as ease, creativity, or satisfaction, strongly boosts perceived value, which further reinforces students’ intention to use these tools in academic work (Rruplli et al., 2024; Bian et al., 2023). In contrast, this study concludes that perceived technicality and perceived cost have no significant relationship with users’ adoption of AI-writing tools. While technical complexity can act as a barrier, multiple VAM-based studies on educational technology (including generative AI) find its impact on adoption intention is usually weak or non-significant. When technical demands are manageable, concerns subside and do not deter motivated students (Rruplli et al., 2024; Bian et al., 2023). Meanwhile, financial and time costs are included on the “sacrifice” side of VAM, but for most students, their effect on adoption intent is limited unless the costs clearly outweigh the perceived benefits. For many AI academic writing tools, especially those that are free or low-cost, cost does not serve as a major deterrent (Rruplli et al., 2024; Bian et al., 2023).

VAM research has consistently demonstrated that when students perceive significant usefulness and enjoyment in AI academic writing applications, their intention to use these tools increases, regardless of potential complexity or cost, as long as benefits prevail. This evidence-based approach offers clear direction for educators and developers: maximizing benefits is the most effective way to encourage widespread adoption. It also suggests that educational institutions should focus on improving knowledge of the practical benefits of AI in academic writing while also addressing students’ concerns about complexity and expense. By providing training programs and promoting ethical principles, institutions can boost students’ confidence and skills in using AI appropriately. As AI advances, future studies should examine its broader implications for students’ learning experiences, especially its effects on creativity, critical thinking, and academic integrity.

In conclusion, the study provides a basis for understanding the factors that influence the use of AI in education and highlights the potential of these technologies to help students achieve academic success. To enhance the study’s impact, future research could expand the sample to include students from diverse academic levels and institutions to improve generalizability. Incorporating qualitative methods, such as interviews or focus groups, could also provide richer insights into students’ attitudes and experiences with AI tools. Finally, exploring the longitudinal effects of AI tool usage on academic performance would add depth to the findings.

ACKNOWLEDGMENT

The authors would like to extend their gratitude to Universiti Malaya Bachok Campus and Universiti Teknologi MARA Cawangan Kelantan for their assistance and guidance in completing this study.

CONFLICT OF INTEREST

The authors confirm that there are no conflicts of interest and that the research was conducted impartially and ethically. The conclusions presented in the manuscript are based solely on the analysis of the data collected during the study.

REFERENCES

  1. Al-Abdullatif, A. M. (2023). Modeling students’ perceptions of chatbots in learning: Integrating Technology Acceptance with the Value-Based Adoption Model. Education Sciences, 13(11). https://doi.org/10.3390/educsci13111151
  2. Al-Maroof, R. S., Alshurideh, M. T., Salloum, S. A., AlHamad, A. Q. M., & Gaber, T. (2021). Acceptance of google meet during the spread of coronavirus by Arab university students. Informatics, 8(2), 1–17. https://doi.org/10.3390/informatics8020024
  3. Andersson, P., & Heinonen, K. (2002). Acceptance of mobile services: Insights from the Swedish market for mobile telephony. Working paper, Stockholm School of Economics, Stockholm, October.
  4. Ayanwale, M. A., & Ndlovu, M. (2024). Investigating factors of students’ behavioral intentions to adopt chatbot technologies in higher education: Perspective from expanded diffusion theory of innovation. Computers in Human Behavior Reports, 14, Article 100396. https://doi.org/10.1016/j.chbr.2024.100396
  5. Berto, A., & Bursan, R. (2023). Value adoption model (VAM) and users’ intentions to use mobile banking: Examining perceived usefulness, perceived sacrifice and perceived risk. Jurnal Ilmiah Manajemen Kesatuan, 11(2), 393–402. https://doi.org/10.37641/jimkes.v11i2.2056
  6. Bian, D., Xiao, Y., Song, K., Dong, M., Li, L., Millar, R., Shi, C., & Li, G. (2023). Determinants influencing the adoption of internet health care technology among Chinese health care professionals: Extension of the Value-Based Adoption Model with Burnout Theory. Journal of Medical Internet Research, 25, e37671. https://doi.org/10.2196/37671
  7. Bolanos, F., Salatino, A., Osborne, F., & Motta, E. (2024). Artificial intelligence for literature reviews: Opportunities and challenges. Artificial Intelligence Review, 57, 259. https://doi.org/10.1007/s10462-024-10902-3
  8. Byrne, B. M. (2000). Structural equation modeling with AMOS: Basic concepts, applications, and programming (1st ed.). Psychology Press.
  9. Cabero-Almenara, J., Fernández-Batanero, J. M., & Barroso-Osuna, J. (2019). Adoption of augmented reality technology by university students. Heliyon, 5(5). https://doi.org/10.1016/j.heliyon.2019.e01597
  10. Chan, C. K. Y., & Zhou, W. (2023). An Expectancy Value Theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learning Environments, 10(1). https://doi.org/10.1186/s40561-023-00284-4
  11. Chang, S. J., Van Witteloostuijn, A., & Eden, L. (2010). From the editors: Common method variance in international business research. Journal of International Business Studies, 41(2), 178–184. https://doi.org/10.1057/jibs.2009.88
  12. Cheung, G. W., Cooper-Thomas, H. D., Lau, R. S., & Wang, L. C. (2024). Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pacific Journal of Management, 41, 745–783. https://doi.org/10.1007/s10490-023-09871-y
  13. Cleave, P. (2020, Dec 3). What is a good survey response rate? SmartSurvey. https://www.smartsurvey.co.uk/blog/what-is-a-good-survey-response-rate
  14. Dai, Y., Chai, C. S., Lin, P. Y., Jong, M. S. Y., Guo, Y., & Qin, J. (2020). Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability (Switzerland), 12(16), 1–15. https://doi.org/10.3390/su12166597
  15. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003. https://doi.org/10.1287/mnsc.35.8.982
  16. Deepika, C., Ruchika, S., & Kanishk, K. (2022). Indian E-commerce consumer and their acceptance towards chatbots. Academy of Marketing Studies Journal, 26(S5), 1-10.
  17. DeVellis, R. F. (2016). Scale development: Theory and applications (4th ed.). Sage Publications
  18. Edeh, E., Lo, W.-J., & Khojasteh, J. (2022). Review of Partial Least Squares Structural Equation Modeling (PLS-SEM) using R: A workbook. Structural Equation Modeling, 30(1), 165–167. https://doi.org/10.1080/10705511.2022.2108813
  19. Erbas, I., & Maksuti, E. (2024). The impact of artificial intelligence on education. International Journal of Innovative Research in Multidisciplinary Education, 03(04). https://doi.org/10.58806/ijirme.2024.v3i4n01
  20. Fan, P., & Jiang, Q. (2024). Exploring the factors influencing continuance intention to use AI drawing tools: Insights from designers. Systems, 12(3), 1–27. https://doi.org/10.3390/systems12030068
  21. Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Evaluation of reflective measurement models. In Partial Least Squares Structural Equation Modeling (PLS-SEM) using R. Classroom Companion: Business. Springer, Cham. https://doi.org/10.1007/978-3-030-80519-7_4
  22. Henseler, J., Ringle, C.M. & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43, 115–135. https://doi.org/10.1007/s11747-014-0403-8
  23. Jingnan, J., Teo, P., Ho, T. C. F., Ling, C. H., Hooi, C., & The, L. (2023). The behavioral intention of young Malaysians towards cashless society: Value-based adoption model. Cogent Business & Management, 10(2). https://doi.org/10.1080/23311975.2023.2244756
  24. Khalifa, M., & Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update, 5(March), 100145. https://doi.org/10.1016/j.cmpbup.2024.100145
  25. Kim, Y., Park, Y., & Choi, J. (2017). A study on the adoption of IoT smart home service: Using Value-based Adoption Model. Total Quality Management, October, 1–17. https://doi.org/10.1080/14783363.2017.1310708
  26. Krejcie, R. V., & Morgan, D. W. (1970) Determining sample size for research activities. Educational and Psychological Measurement, 30, 607-610.
  27. Lai, Y. X., Ting, S. T., & Wong, H. Y. (2022). Spending behavior of UTAR undergraduate students. Final Year Project, Universiti Tun Abdul Razak, Malaysia.
  28. Liao, Y. K., Wu, W. Y., Le, T. Q., & Phung, T. T. T. (2022). The integration of the Technology Acceptance Model and Value-Based Adoption Model to study the adoption of e-learning: The moderating role of e-WOM. Sustainability (Switzerland), 14(2). https://doi.org/10.3390/su14020815
  29. Mahmud, A., Sarower, A. H., Sohel, A., Assaduzzaman, M., & Bhuiyan, T. (2024). Adoption of ChatGPT by university students for academic purposes: Partial least square, artificial neural network, deep neural network and classification algorithms approach. Array, 21(February), 100339. https://doi.org/10.1016/j.array.2024.100339
  30. Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., & Marzuki. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5(September), 100296. https://doi.org/10.1016/j.ijedro.2023.100296
  31. Nagy, S., & Hajdú, N. (2021). Consumer acceptance of the use of artificial intelligence in online shopping: Evidence from Hungary. Amfiteatru Economic, 23(56), 1–1. https://doi.org/10.24818/EA/2021/56/155
  32. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
  33. Roemer, E., Schuberth, F., & Henseler, J. (2021). HTMT2 – An improved criterion for assessing discriminant validity in structural equation modeling. Industrial Management and Data Systems, 121(12), 2637–2650. https://doi.org/10.1108/IMDS-02-2021-0082
  34. Rruplli, E., Frydenberg, M., Patterson, A., & Mentzer, K. (2024). Examining factors of student AI adoption through the value-based adoption model. Issues in Information Systems, 25(3), 218–230. https://doi.org/10.48009/3_iis_2024_117
  35. Salido, V. (2023). Impact of AI-powered learning tools on student understanding and academic performance. BAPS 85: Introduction to Political Analysis and Research, December. https://doi.org/10.13140/RG.2.2.17259.31521
  36. Shelvia, O., Teguh Prayitno, A., Kartono, R., & Sundjaja, A. M. (2020). Analysis of factors affecting consumer’s continuance intention to use mobile payments with a Value-Based Adoption Model (Vam) approach. Psychology and Education, 57(9), 2883–2898.
  37. Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telematics and Informatics, 47, Article 101324. https://doi.org/10.1016/j.tele.2019.101324.
  38. Soori, M., Arezoo, B., & Dastres, R. (2023). Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics, 3, 54-70.
  39. Utami, S. P. T., Andayani, Winarni, R., & Sumarwati. (2023). Utilization of artificial intelligence technology in an academic writing class: How do Indonesian students perceive? Contemporary Educational Technology, 15(4). https://doi.org/10.30935/cedtech/13419
  40. Vieriu, A. M., & Petrea, G. (2025). The impact of Artificial Intelligence (AI) on students’ academic development. Education Sciences, 15(3), 343. https://doi.org/10.3390/educsci15030343
  41. Yahaya, M. L., Murtala, A. A., & Onukwube, H. N. (2019). Partial least squares (PLS-SEM): A note for beginners. International Journal of Environmental Studies and Safety Research, 4(2019), 1–30.
  42. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0
