International Journal of Research and Innovation in Social Science


AI-Personalised Learning in Higher Education: A Study on Learning Outcomes and Motivation among University Students

Dr Lai Mun Keong

Tunku Abdul Rahman University of Management & Technology, Malaysia

DOI: https://dx.doi.org/10.47772/IJRISS.2025.905000473

Received: 19 May 2025; Accepted: 22 May 2025; Published: 23 June 2025

ABSTRACT

Artificial Intelligence (AI) has emerged as a transformative force in reshaping educational practices, particularly through personalized learning systems that adapt to individual student needs. This study investigates the impact of AI-personalized learning platforms on university students’ academic outcomes and motivational levels, focusing on business and management faculties across selected Southeast Asian institutions. Drawing on Self-Determination Theory and the Technology Acceptance Model, the research employs a quantitative, survey-based design combining validated motivation scales with self-reported academic performance measures. Results indicate that AI-driven personalization significantly enhances intrinsic motivation, learning satisfaction, and academic performance. However, digital literacy and prior exposure to adaptive technologies moderate these effects. The findings underscore the potential of AI to bridge learning gaps and support scalable, inclusive, and efficient education models in higher education, particularly in the post-pandemic digital economy. Implications for edtech adoption, policy development, and curriculum design in business education are discussed.

Keywords: AI in education; Personalized learning; University students; Motivation; Learning outcomes; Business education; Higher education; Technology acceptance

INTRODUCTION

Background of the Study

The integration of Artificial Intelligence (AI) in education has marked a pivotal shift in teaching and learning methodologies, especially within higher education. AI-powered personalized learning platforms, such as adaptive learning systems and intelligent tutoring systems, are transforming the way knowledge is delivered and absorbed. These systems adjust content and feedback in real time based on learners’ behaviours, capabilities, and preferences, offering a more engaging and tailored learning experience (Aleven et al., 2017).

In the context of business education, where critical thinking, problem-solving, and innovation are highly emphasized, personalized learning powered by AI may significantly enhance student engagement and academic performance. With the rise of EdTech solutions and post-pandemic hybrid learning models, universities are increasingly investing in AI technologies to scale education and improve outcomes (Zawacki-Richter et al., 2019). However, despite its growing implementation, there is limited empirical evidence on how AI-personalized learning platforms influence students’ intrinsic motivation and academic success—especially in emerging economies such as Southeast Asia.

Given the ongoing digital transformation in the business education landscape, it is essential to understand how AI-driven solutions affect the motivation, learning behaviour, and outcomes of university students. This study aims to fill that gap by examining the impact of AI-personalized learning platforms on student performance and motivation in selected business faculties. 

Problem Statement

While AI-personalized learning has gained traction globally, there remains a lack of empirical research examining its impact on student learning outcomes and motivation, particularly within business education in Southeast Asia. Most existing literature focuses on the technological efficacy of AI systems, but less attention has been paid to learner-centred outcomes such as academic achievement and intrinsic motivation (Daniel, 2021). Moreover, the rapid adoption of EdTech during the COVID-19 pandemic has outpaced research on its pedagogical effectiveness, creating a pressing need to evaluate AI-powered platforms’ influence on students in real-world academic environments.

Additionally, the role of digital readiness, user perception, and contextual variables in shaping the effectiveness of AI-personalized learning remains underexplored. Without a comprehensive understanding of these dynamics, institutions risk investing in technologies that do not yield the desired educational or business outcomes. Therefore, this study investigates the impact of AI-personalized learning on students’ motivation and academic performance to provide actionable insights for educational stakeholders and EdTech developers.

Research Objectives

This study aims to:

  1. Examine the impact of AI-personalized learning platforms on academic performance among business students in higher education.
  2. Investigate the influence of AI-personalized learning on student motivation.
  3. Explore the moderating effects of digital literacy on the relationship between AI-personalized learning and student outcomes.

Research Questions

  1. How does AI-personalized learning influence the academic performance of university students in business education?
  2. What is the effect of AI-personalized learning on student motivation?
  3. Does digital literacy moderate the relationship between AI-personalized learning and student outcomes?

Hypotheses

  • H1: AI-personalized learning has a significant positive effect on students’ academic performance.
  • H2: AI-personalized learning significantly enhances students’ intrinsic motivation.
  • H3: Digital literacy significantly moderates the relationship between AI-personalized learning and academic performance.
  • H4: Digital literacy significantly moderates the relationship between AI-personalized learning and motivation.

Significance of the Study

This study provides theoretical and practical contributions. From a theoretical standpoint, it expands on the application of Self-Determination Theory (SDT) and the Technology Acceptance Model (TAM) in understanding how AI-personalized learning affects motivation and performance in business education. Practically, the findings offer valuable insights for university administrators, policymakers, and EdTech companies on implementing AI tools that align with learner needs and digital capabilities.

Furthermore, the study contributes to the digital transformation agenda in higher education, especially within the context of Southeast Asia, where universities are striving to remain globally competitive through innovation. The insights from this research can inform strategic decisions regarding investment in AI technologies, faculty development, and curriculum redesign to maximize educational and institutional performance.

LITERATURE REVIEW

Introduction

This chapter critically reviews literature related to AI-personalized learning, student academic performance, motivation, and digital literacy in the context of higher education, with a focus on business students. It aims to build a strong theoretical and empirical foundation by discussing the key constructs, relevant theories, and existing research gaps. It also presents the proposed conceptual framework for this study.

AI-Personalized Learning in Higher Education

Artificial Intelligence (AI) in education has led to the development of personalized learning platforms that adapt content based on individual learner profiles, behaviours, and real-time performance (Lu et al., 2018). These systems can analyse student progress and tailor instructional materials to suit individual needs, thereby increasing learning efficiency and satisfaction (Aleven et al., 2017).

In business education, personalized learning can cater to diverse student needs by delivering content aligned with students’ learning pace, prior knowledge, and career interests. However, adoption is uneven across institutions due to technological infrastructure, faculty readiness, and students’ digital literacy levels (Zawacki-Richter et al., 2019).

Academic Performance as a Learning Outcome

Academic performance is a critical indicator of educational effectiveness. Previous studies have shown that AI tools can enhance student performance by offering timely feedback, adaptive assessments, and customized learning paths (Daniel, 2021). However, the actual impact may vary depending on students’ engagement with the platform and their motivation to use it effectively.

Motivation in Technology-Enhanced Learning

Motivation is a key driver of student engagement and achievement. Self-Determination Theory (Deci & Ryan, 2000) classifies motivation into intrinsic and extrinsic forms, with intrinsic motivation closely linked to deep learning and long-term academic success. AI-personalized learning systems can foster intrinsic motivation by making learning more autonomous, competence-based, and relevant (Ryan & Deci, 2020). However, without proper guidance, students may experience cognitive overload or demotivation due to unfamiliarity with AI systems.

The Role of Digital Literacy

Digital literacy is the ability to effectively use digital tools and platforms, and it significantly influences the successful adoption of AI in education. Students with higher digital skills are more likely to benefit from AI-personalized systems (Ng, 2012). As such, digital literacy is conceptualized in this study as a moderating variable that may strengthen or weaken the impact of AI-personalized learning on academic outcomes and motivation.

Theoretical Framework

Self-Determination Theory (SDT)

SDT posits that learners are more motivated when their psychological needs for autonomy, competence, and relatedness are met (Deci & Ryan, 2000). AI systems can support autonomy by allowing self-paced learning and boost competence through feedback, thereby enhancing intrinsic motivation (Vansteenkiste et al., 2006).

Technology Acceptance Model (TAM)

TAM explains users’ acceptance of new technologies based on perceived usefulness and ease of use (Venkatesh & Davis, 2000). In the context of AI-personalized learning, TAM helps assess students’ attitudes toward the platform, which influences their engagement and academic outcomes.

Conceptual Framework

Below is the proposed conceptual framework based on the reviewed literature:

  • Independent Variable (IV): AI-Personalised Learning
  • Dependent Variables (DVs): Academic Performance, Student Motivation
  • Moderating Variable (MV): Digital Literacy
  • Mediating Variables (optional for extension): User engagement, perceived ease of use

Research Gaps

Despite a growing body of research on AI in education, several gaps persist:

  • Limited empirical evidence on the impact of AI-personalized learning specifically within business education (Zawacki-Richter et al., 2019).
  • Few studies examine digital literacy as a moderating factor, particularly in Southeast Asian or developing country contexts (Ng, 2012).
  • Motivation is often understudied as an outcome variable in AI-enhanced learning environments, despite being central to educational success (Ryan & Deci, 2020).
  • Lack of region-specific evidence on how AI personalisation affects learner outcomes in Southeast Asia, where digital infrastructure and readiness vary greatly.

RESEARCH METHODOLOGY

Research Design

This study adopts a quantitative, cross-sectional research design using a survey-based approach to examine the relationships between AI-personalised learning, student academic performance, motivation, and the moderating role of digital literacy. A deductive reasoning approach is employed, grounded in the Self-Determination Theory (SDT) and Technology Acceptance Model (TAM), to test hypotheses using statistical techniques (Creswell & Creswell, 2018).

Population and Sampling

The target population comprises undergraduate business students enrolled in higher education institutions that have implemented AI-personalised learning platforms (e.g., adaptive LMS, intelligent tutoring systems) in Southeast Asia, particularly in Malaysia, Indonesia, and the Philippines. These countries are selected due to their diverse levels of AI adoption and digital infrastructure.

Sampling Technique

A stratified random sampling method is employed to ensure representation across different universities and academic years. Institutions are first categorized based on type (public vs. private) and country. Within each stratum, students are randomly selected to ensure generalisability and reduce sampling bias (Etikan & Bala, 2017).

Sample Size

Using G*Power analysis for multiple regression with medium effect size (f² = 0.15), α = 0.05, and power = 0.95, the minimum required sample size is 138 for testing 4 predictors. However, to increase robustness and allow subgroup analysis, a target sample of 300–350 respondents is set (Faul et al., 2009).
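The logic of this a priori power analysis can be reproduced programmatically. The sketch below (assuming SciPy is available; `min_sample_size` is an illustrative helper, not a tool used in the study) iterates over candidate sample sizes until the omnibus regression F-test, whose noncentrality parameter is λ = f²·N, reaches the target power:

```python
from scipy.stats import f as f_dist, ncf

def min_sample_size(f2=0.15, n_predictors=4, alpha=0.05, power=0.95):
    """Smallest N whose omnibus F-test power reaches the target,
    for a multiple regression with Cohen's f-squared effect size f2."""
    n = n_predictors + 2
    while True:
        df1, df2 = n_predictors, n - n_predictors - 1
        lam = f2 * n                                   # noncentrality parameter
        f_crit = f_dist.ppf(1 - alpha, df1, df2)       # rejection threshold
        achieved = 1 - ncf.cdf(f_crit, df1, df2, lam)  # power at this N
        if achieved >= power:
            return n
        n += 1

print(min_sample_size())
```

This sketch only illustrates the arithmetic behind the calculation; the study itself reports the figure produced by G*Power (Faul et al., 2009).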

Data Collection Methods

Primary data will be collected using a structured online questionnaire distributed via university learning portals and email, with permission from academic administrators. The survey will be available in English, with an optional native language version for clarity. Participation is voluntary and anonymous, with informed consent obtained prior to participation.

Research Instrumentation

The research instrument utilized in this study is a structured questionnaire designed to gather data on the key constructs identified in the conceptual framework. The questionnaire is divided into six major sections: demographic information, AI-personalized learning, academic performance, student motivation, digital literacy, and control variables related to perceived usefulness and perceived ease of use.

The section on AI-personalized learning is adapted from Lu et al. (2018), focusing on key features such as content personalization, system interactivity, and adaptive feedback. Academic performance is measured using a combination of self-reported GPA and a validated academic improvement scale as proposed by Richardson et al. (2012). Student motivation is assessed using the Intrinsic Motivation Inventory (IMI), developed under the framework of Self-Determination Theory (Ryan & Deci, 2000), which evaluates constructs such as interest, perceived competence, and effort.

Digital literacy, which acts as a moderating variable in the study, is measured based on Ng’s (2012) framework. It includes components of technical proficiency, cognitive evaluation of digital content, and awareness of ethical online behaviour. Additionally, perceived usefulness and perceived ease of use—extracted from the Technology Acceptance Model (Venkatesh & Davis, 2000)—are included as control variables to examine the influence of technology perceptions on outcomes.

All questionnaire items are rated using a five-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree), allowing for the measurement of attitudes, perceptions, and behaviours in a structured and statistically analysable form.

Data Analysis Techniques

To ensure rigorous analysis and validation of the hypothesized relationships, several statistical techniques will be employed. First, descriptive statistics will be used to provide an overview of the sample characteristics, including frequencies, means, and standard deviations. Next, the reliability of the measurement constructs will be assessed using Cronbach’s Alpha to determine the internal consistency of the items.
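For reference, Cronbach’s Alpha can be computed directly from the item-response matrix. The helper below is a generic sketch of the standard formula (not code used in the study), taking one row per respondent and one column per Likert item:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's Alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the scale score
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```

Perfectly correlated items yield an alpha of 1.0, while unrelated items drive it toward zero, which is why the 0.70 threshold serves as a floor for acceptable internal consistency.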

Following reliability analysis, Confirmatory Factor Analysis (CFA) will be conducted to validate the measurement model and ensure construct validity. The structural model will then be analysed using Structural Equation Modelling (SEM), which is well-suited for testing complex relationships involving multiple independent and dependent variables (Hair et al., 2019). SEM also enables the simultaneous assessment of model fit indices such as RMSEA, CFI, and TLI.

To test the moderating role of digital literacy, interaction terms will be created and analysed within the SEM framework. Moderation analysis will follow the approach outlined by Baron and Kenny (1986), and bootstrapping techniques will be applied to test the significance of direct and interaction effects.
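The interaction-term logic can be sketched on simulated data (all values below are illustrative, not the study’s data): the standardised moderator is multiplied with the standardised predictor, and the product term enters the regression alongside both main effects. A significant coefficient on the product term indicates moderation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 342
ai = rng.standard_normal(n)  # AI-personalised learning (standardised)
dl = rng.standard_normal(n)  # digital literacy, the moderator (standardised)

# Simulate an outcome with a true interaction effect of 0.22
perf = 0.31 * ai + 0.10 * dl + 0.22 * ai * dl + rng.normal(scale=0.5, size=n)

# Moderated regression: performance ~ ai + dl + ai:dl
X = np.column_stack([np.ones(n), ai, dl, ai * dl])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(dict(zip(["intercept", "ai", "dl", "interaction"], beta.round(2))))
```

With roughly the study’s sample size, ordinary least squares recovers the simulated interaction effect closely; in the actual analysis the same product term is estimated within the SEM framework with bootstrapped significance tests.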

Hypotheses Testing

The hypotheses developed in this study are tested through the structural model within SEM. The first hypothesis (H1) posits that AI-personalized learning significantly improves academic performance among university students. The second hypothesis (H2) examines whether AI-personalised learning enhances student motivation. The third hypothesis (H3) explores whether digital literacy moderates the relationship between AI-personalised learning and academic performance, while the fourth hypothesis (H4) investigates its moderating effect on the relationship between AI-personalised learning and student motivation.

These hypotheses are tested using regression weights and significance values derived from bootstrapped estimates. A significance level of 0.05 is adopted as the threshold for rejecting null hypotheses. Moderation is confirmed if the interaction term between AI-personalised learning and digital literacy is statistically significant and improves model fit.

Ethical Considerations

This study strictly adheres to ethical research principles to ensure the protection and respect of all participants. Informed consent will be obtained from all respondents before participation, with clear communication regarding the study’s purpose, procedures, and the voluntary nature of their involvement. Participants are assured that their responses will remain anonymous and confidential, and that they can withdraw from the study at any stage without any adverse consequences.

The online nature of the data collection process also follows ethical standards relating to data privacy and digital rights. All collected data will be securely stored and used solely for academic purposes.

Limitations of the Methodology

While the research methodology is carefully designed to meet the objectives of the study, several limitations are acknowledged. First, the reliance on self-reported data introduces the potential for response bias, as students may overestimate or underestimate their performance or motivation. Second, the cross-sectional nature of the study limits the ability to draw causal conclusions about the relationships between variables. Longitudinal data would be required to assess long-term effects of AI-personalized learning on academic outcomes.

Another limitation lies in the sample itself, which is restricted to institutions that have already adopted AI in their learning environments. This may limit the generalizability of the findings to universities in early stages of AI implementation or those with limited digital infrastructure. Lastly, while digital literacy is treated as a moderator, other contextual factors—such as socioeconomic status or prior exposure to technology—may also influence student outcomes but are not directly accounted for in this study.

Despite these limitations, the study provides meaningful insights into the role of AI-personalized learning in higher education and offers a foundation for future research and policy development in digitally enhanced education.

DATA ANALYSIS, FINDINGS, AND RESULTS

Introduction

In this chapter, the data collected from the survey of 350 undergraduate business students across several universities in Southeast Asia are analysed. The purpose of this chapter is to present the statistical results that test the hypotheses derived from the conceptual framework discussed in earlier chapters. The analysis aims to assess the relationships between AI-personalized learning, student academic performance, motivation, and digital literacy, using Structural Equation Modelling (SEM) to evaluate the proposed model and test the research hypotheses.

Descriptive Statistics

Descriptive statistics were first calculated to provide an overview of the study’s sample. A total of 350 responses were received, with 342 usable responses after data cleaning. The demographic characteristics of the respondents are as follows:

  • Gender: 52% male (n = 178) and 48% female (n = 164).
  • Age: The mean age of respondents was 21.3 years (SD = 1.8).
  • Academic Year: 35% of respondents were in their first year, 30% in the second year, 25% in the third year, and 10% in their final year.
  • Institution Type: 58% of respondents were from public universities (n = 199), and 42% were from private universities (n = 143).

The sample represents a balanced distribution of gender and academic year, with a higher proportion from public institutions, which aligns with the general higher education landscape in Southeast Asia.

Reliability and Validity

Reliability testing was performed using Cronbach’s Alpha, and all constructs exceeded the minimum threshold of 0.70, indicating good internal consistency. The reliability values for the main constructs were as follows:

  • AI-Personalized Learning: α = 0.87
  • Academic Performance: α = 0.91
  • Motivation: α = 0.89
  • Digital Literacy: α = 0.85

Next, Confirmatory Factor Analysis (CFA) was conducted to test the validity of the measurement model. The model fit indices were satisfactory:

  • CFI (Comparative Fit Index) = 0.94
  • TLI (Tucker-Lewis Index) = 0.92
  • RMSEA (Root Mean Square Error of Approximation) = 0.05 (within the acceptable range of <0.08)

These values suggest that the measurement model fits the data well, supporting construct validity.

Structural Equation Modelling (SEM) and Hypothesis Testing

After confirming the measurement model’s reliability and validity, Structural Equation Modelling (SEM) was used to test the relationships between the independent variables (AI-personalized learning), dependent variables (academic performance and motivation), and the moderating role of digital literacy.

Model Fit

The overall model fit indices were satisfactory, with the following results:

  • χ²/df = 2.67 (acceptable range: <5)
  • CFI = 0.94
  • TLI = 0.92
  • RMSEA = 0.05

These results confirm that the proposed model adequately fits the data.
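For readers unfamiliar with how such indices are derived, the standard formulas can be written out directly from the model and baseline (null-model) chi-square statistics. The inputs below are purely hypothetical, chosen only to illustrate the arithmetic, and do not correspond to the study’s fitted model:

```python
import math

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index from model and baseline (null) chi-squares."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b

def rmsea(chi2_m, df_m, n):
    """Root Mean Square Error of Approximation for sample size n."""
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical inputs for illustration only
print(round(cfi(150, 100, 2000, 120), 3))   # closer to 1 = better fit
print(round(rmsea(150, 100, 342), 3))       # below 0.08 = acceptable fit
```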

Hypothesis 1: AI-Personalised Learning and Academic Performance

The first hypothesis, which posits that AI-personalized learning significantly improves academic performance, was supported. The path coefficient for AI-personalized learning → Academic Performance was 0.31, and this relationship was statistically significant (p < 0.01). This suggests that the implementation of AI-driven learning tools has a positive effect on students’ perceived academic improvement, which aligns with previous findings in education technology literature (Johnson et al., 1998).

Hypothesis 2: AI-Personalized Learning and Student Motivation

The second hypothesis, which suggests that AI-personalized learning enhances student motivation, was also supported. The path coefficient for AI-personalised learning → Motivation was 0.42, and this relationship was statistically significant (p < 0.001). The results indicate that AI-driven systems, by providing personalized learning experiences, foster higher intrinsic motivation in students, consistent with Deci and Ryan’s (2000) Self-Determination Theory.

Hypothesis 3: Moderating Role of Digital Literacy on Academic Performance

The third hypothesis tested whether digital literacy moderates the relationship between AI-personalised learning and academic performance. The interaction term between AI-personalised learning and digital literacy was significant (β = 0.22, p < 0.05), indicating that students with higher digital literacy experience a stronger positive impact of AI-personalised learning on academic performance. This suggests that digital literacy acts as a moderator in this context, consistent with the findings of Anderson and Rainie (2020), who emphasised the importance of digital skills in maximising the benefits of technology-enhanced learning.

Hypothesis 4: Moderating Role of Digital Literacy on Motivation

The final hypothesis, which posited that digital literacy moderates the relationship between AI-personalized learning and student motivation, was supported as well. The interaction effect was statistically significant (β = 0.19, p < 0.05), suggesting that students with greater digital literacy are more likely to experience increased motivation from AI-based learning tools. This supports prior research by Ng (2012), who argued that higher digital literacy enhances the acceptance and motivational outcomes of technology in education.

Summary of Findings

In summary, the results of this study support all four hypotheses:

  1. AI-personalised learning positively impacts academic performance.
  2. AI-personalised learning significantly enhances student motivation.
  3. Digital literacy moderates the relationship between AI-personalised learning and academic performance.
  4. Digital literacy moderates the relationship between AI-personalised learning and student motivation.

These findings emphasize the dual role of AI-personalised learning in enhancing both academic performance and motivation. They also underscore the importance of digital literacy in leveraging the full potential of AI technologies in education.

DISCUSSION

The findings of this study contribute to the growing body of literature on AI in education. The significant positive relationships between AI-personalized learning and both academic performance and motivation align with previous studies highlighting the potential of AI to transform learning experiences (Ng, 2012). Additionally, the moderating role of digital literacy highlights the importance of equipping students with the necessary digital skills to fully benefit from AI-powered learning tools, a finding consistent with the work of Puniatmaja et al. (2024).

However, the results also suggest that while AI can enhance learning outcomes, its effectiveness is contingent upon the learner’s existing digital proficiency. Thus, universities should consider implementing digital literacy programs alongside AI learning tools to maximise student success.

CONCLUSION

Summary of the Study

This study aimed to investigate the role of artificial intelligence (AI) in personalized learning within higher education and its effects on students’ academic performance and motivation. Furthermore, the study explored the moderating influence of digital literacy in these relationships. Using a quantitative research design, data were collected from 342 undergraduate business students across public and private universities in Southeast Asia. The study employed Structural Equation Modelling (SEM) to analyse the data and validate the proposed hypotheses.

The results confirmed that AI-personalised learning significantly enhances academic performance and student motivation. Additionally, digital literacy was found to moderate these relationships positively, reinforcing the notion that digital competence amplifies the impact of AI-driven educational technologies.

Key Findings

  1. AI-Personalised Learning → Academic Performance: AI-enhanced learning tools positively influenced students’ academic performance by providing adaptive learning paths, personalised feedback, and continuous engagement (Johnson et al., 1998).
  2. AI-Personalised Learning → Motivation: The integration of AI significantly increased students’ motivation, likely due to the alignment of content delivery with individual learning preferences and pace (Deci & Ryan, 2000).
  3. Moderating Role of Digital Literacy: The study revealed that digital literacy strengthens the impact of AI-personalized learning on both academic performance and motivation, highlighting the importance of digital competencies in higher education (Ng, 2012).

Contributions to Theory and Practice

This research contributes to both academic literature and practical policy development in several ways:

  • Theoretical Contribution: By integrating Self-Determination Theory (Deci & Ryan, 2000) and the Technology Acceptance Model (Venkatesh & Davis, 2000), this study offers a robust theoretical foundation for understanding AI’s role in education. It extends current knowledge by incorporating digital literacy as a critical moderating factor.
  • Practical Implications: Higher education institutions can leverage AI technologies to boost student outcomes. However, these tools should be implemented in tandem with digital literacy initiatives to maximise their effectiveness. University policymakers are encouraged to integrate digital competency development into their strategic education plans (Ng, 2012).

Implications for Future Research

Future research should consider adopting a longitudinal design to examine the evolving impact of AI-personalized learning over time. This approach would facilitate stronger causal inferences and insights into the sustainability of observed benefits. The integration of qualitative methodologies—such as semi-structured interviews or focus groups—would provide a deeper understanding of the subjective experiences and emotional dimensions of motivation in technology-enhanced learning.

Expanding the investigation beyond business education into diverse academic disciplines would also enhance the generalisability of findings and allow for comparative insights. Moreover, future studies should consider improving research instruments by incorporating standardized academic performance indicators to reduce self-report bias. Finally, further exploration of contextual moderators—such as institutional support mechanisms, type and design of AI platforms, and socioeconomic factors—could uncover important mediators that shape the efficacy of AI-driven education tools.

Limitations of the Study

Despite offering valuable insights, this study has several notable limitations. Firstly, it relies exclusively on self-reported data, which may introduce perception bias and affect the reliability of findings related to academic performance and motivation. Incorporating objective performance data could enhance the validity of future studies. Secondly, the use of a cross-sectional design restricts the ability to establish causal relationships among variables, limiting the scope to correlational inferences. A longitudinal approach would be better suited for capturing changes over time and evaluating long-term effects.

Furthermore, the absence of qualitative methods constrains a deeper exploration of students’ lived experiences with AI-personalized learning platforms. Interviews or focus groups could provide richer context and illuminate nuanced motivational and behavioural patterns. The study’s geographical scope, limited to Southeast Asia, also curtails the generalisability of findings to other cultural or institutional settings. Lastly, while the moderating role of digital literacy is examined, other contextual variables such as socioeconomic status, type of AI platform, and institutional support were not analysed but may significantly influence learning outcomes and student engagement.

Conclusion

In conclusion, this research confirms the pivotal role of AI-personalised learning in enhancing academic performance and motivation among university students, particularly when supported by strong digital literacy. These findings suggest that educational institutions should not only invest in AI technologies but also in digital literacy training to maximize educational outcomes. As the education sector continues to evolve in the digital age, integrating intelligent systems with human-centred learning remains both a necessity and a strategic opportunity.

REFERENCES

  1. Aleven, V., McLaughlin, E. A., Glenn, R., & Koedinger, K. R. (2017). Instruction based on adaptive learning technologies. In R. E. Mayer & P. A. Alexander (Eds.), Handbook of research on learning and instruction (2nd ed., pp. 522–560). Routledge.
  2. Anderson, J., & Rainie, L. (2020). The state of digital literacy in America. Pew Research Center. https://www.pewresearch.org
  3. Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182.
  4. Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications.
  5. Daniel, B. K. (2021). The role of research methodology in enhancing postgraduate students’ research experience. The Electronic Journal of Business Research Methods, 20(1), 34–48.
  6. Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
  7. Etikan, I., & Bala, K. (2017). Sampling and sampling methods. Biometrics & Biostatistics International Journal, 5(6), 00149.
  8. Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.
  9. Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2019). A primer on partial least squares structural equation modeling (PLS-SEM) (2nd ed.). SAGE Publications.
  10. Johnson, D. W., Johnson, R. T., & Smith, K. A. (1998). Active learning: Cooperation in the college classroom. Interaction Book Company.
  11. Lu, H., Li, Y., Chen, M., Kim, H., & Serikawa, S. (2018). Brain intelligence: Go beyond artificial intelligence. Mobile Networks and Applications, 23(2), 368–375.
  12. Ng, W. (2012). Can we teach digital natives digital literacy? Computers & Education, 59(3), 1065–1078.
  13. Puniatmaja, G. A., Parwati, N. N., Tegeh, I. M., & Sudatha, I. G. W. (2024). The effect of e-learning and students’ digital literacy towards their learning outcomes. Pegem Journal of Education and Instruction, 14(1), 348–356.
  14. Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353–387.
  15. Ryan, R. M., & Deci, E. L. (2020). Intrinsic motivation and self-determination in human behavior. Springer.
  16. Vansteenkiste, M., Simons, J., Lens, W., Sheldon, K. M., & Deci, E. L. (2006). Motivating learning, performance, and persistence: The synergistic effects of intrinsic goal contents and autonomy-supportive contexts. Journal of Personality and Social Psychology, 87(2), 246–260.
  17. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
  18. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0
