International Journal of Research and Innovation in Social Science

Understanding Sabah Educators’ Acceptance of ChatGPT: A Pilot Study Using TAM3

Aries Henry Joseph*1, Airil Haimi Mohd Adnan2, Roseline binti Michael3, Lindey Easter Apolonius4, Bernadette Peter Lidadun5, Shirleen Octavia Austin6

1,3,4,5,6Universiti Teknologi MARA, Sabah Branch, Kota Kinabalu, Malaysia

2Universiti Teknologi MARA, Shah Alam, Malaysia

*Corresponding Author

DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000232

Received: 30 August 2025; Accepted: 05 September 2025; Published: 07 October 2025; Pages: 2701-2707

ABSTRACT

This pilot study investigates educators’ acceptance of ChatGPT in Sabah’s higher education institutions using the Technology Acceptance Model 3 (TAM3) framework. A cross-sectional survey of 27 educators assessed six constructs: perceived usefulness, perceived ease of use, perceived credibility, social influence, privacy concerns, and behavioural intention. Results demonstrated excellent overall reliability (Cronbach’s α = 0.891), with construct reliabilities ranging from 0.718 to 0.854. Educators showed positive attitudes towards ChatGPT’s usefulness (M = 3.68) and ease of use (M = 3.75), with moderate privacy concerns (M = 3.42). Strong correlations emerged between perceived credibility and behavioural intention (r = 0.72, p < 0.001), perceived usefulness and intention (r = 0.68, p < 0.001), and ease of use and intention (r = 0.61, p < 0.001). High variability in colleague encouragement (SD = 1.07) indicates inconsistent institutional support. This study validates the TAM3 instrument for ChatGPT research in Malaysian contexts and provides a foundation for larger mixed-methods investigations.

Keywords: ChatGPT, artificial intelligence, technology acceptance model, higher education, educational technology, AI adoption

INTRODUCTION

Artificial intelligence (AI) integration in education represents a paradigm shift in teaching practices worldwide. ChatGPT, OpenAI’s large language model, has emerged as a prominent educational AI tool with capabilities in generating human-like text and supporting diverse pedagogical tasks. Recent studies demonstrate ChatGPT’s potential to enhance metacognitive self-regulated learning [5] and address challenges in English as a Second Language contexts [7]. Despite growing global adoption, factors influencing educators’ ChatGPT acceptance remain inadequately understood, particularly in Southeast Asian contexts. This knowledge gap is especially pronounced in the Malaysian state of Sabah, where empirical research on educators’ AI perceptions is virtually non-existent. Understanding these determinants has become critical as institutions navigate AI integration while addressing privacy, security, and ethical concerns [9]. The Technology Acceptance Model 3 (TAM3), developed by Venkatesh and Bala [10], provides a robust framework for examining technology acceptance factors. Recent research validates TAM applications in ChatGPT adoption contexts, demonstrating their relevance for understanding AI acceptance among educational stakeholders [8], [1].

This pilot study addresses knowledge gaps by investigating ChatGPT acceptance among higher education educators in Sabah, Malaysia through an adapted TAM3 framework:

Objective 1: Assess Sabah educators’ perceptions of ChatGPT’s usefulness and ease of use in teaching contexts

Objective 2: Identify key factors influencing Sabah educators’ ChatGPT acceptance, including credibility, social influence, and privacy concerns

LITERATURE REVIEW

AI in Educational Contexts

AI integration in education has evolved from conceptual frameworks to practical implementation. ChatGPT demonstrates significant potential in content creation, assessment design, personalised learning, and instructional support [5]. Research involving 300 preservice teachers reveals positive academic perceptions of ChatGPT as an educational solution [11]. However, AI integration faces challenges including privacy concerns, algorithmic bias, potential over-reliance on technology, and implementation costs [4]. These challenges underscore the importance of understanding educators’ AI adoption perceptions [6].

Technology Acceptance Model Applications

TAM3 incorporates core constructs of perceived usefulness and ease of use while expanding to include social influence, perceived credibility, and privacy concerns [10]. Recent TAM applications to ChatGPT reveal that perceived ease of use, usefulness, and intelligence serve as mediators influencing awareness-adoption relationships [3]. Cross-cultural validation studies demonstrate TAM’s applicability across educational contexts, with research in Arab countries showing positive relationships between perceived usefulness, social influence, and ChatGPT behavioural intentions [9].

Theoretical Framework

This study adapts TAM3 to address ChatGPT’s unique characteristics in educational contexts. ChatGPT presents distinct challenges requiring framework modification: content generation raises accuracy concerns, educational contexts involve pedagogical and academic integrity issues, and Malaysian environmental factors may influence acceptance patterns differently than Western contexts where TAM3 was originally tested.

Fig. 1 Adapted TAM3 Framework for ChatGPT Acceptance in Educational Contexts

The adapted framework illustrates how core TAM3 constructs (perceived usefulness and ease of use) interact with context-specific factors (credibility and privacy concerns) and social influences to determine educators’ behavioural intention to adopt ChatGPT. Unlike traditional TAM applications, this model emphasises credibility as a critical factor due to ChatGPT’s content generation capabilities, while privacy concerns represent potential barriers specific to educational AI implementation.

Research Gaps

Despite the growing research on ChatGPT, significant gaps remain in understanding the regional factors that influence its adoption. Most studies focus on Western and East Asian contexts, with limited Southeast Asian research. This pilot study addresses these gaps by investigating ChatGPT acceptance in Sabah’s higher education institutions.

METHODOLOGY

Research Design

This quantitative cross-sectional survey examined educators’ ChatGPT acceptance in Sabah’s higher education institutions using an adapted TAM3 framework. Research ethics approval was obtained from the UiTM campus Research Ethics Committee (Reference: REC/12/2024 (ST/MR/249)).

Participants

The study involved 27 educators from Sabah’s higher education institutions, selected using random sampling. For reliability testing with five-item constructs, a minimum of 24 participants is required; adding a 20% non-response provision raises the target to 30 respondents (see the worked adjustment below). The achieved sample of 27 therefore exceeds the base requirement while falling slightly short of the adjusted target, a limitation acknowledged for this pilot study [2]. Participants represented diverse academic backgrounds across public and private universities, public and private colleges, teachers’ training institutes and polytechnics, ensuring representation across Sabah’s educational landscape.
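As a reader’s check, the non-response adjustment above follows the standard sample-size inflation formula (our reconstruction; the base requirement of 24 comes from [2]):

\[
n_{\text{adjusted}} = \frac{n_{\text{required}}}{1 - r_{\text{non-response}}} = \frac{24}{1 - 0.20} = 30
\]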

Data Collection Instrument

A 30-item questionnaire using 5-point Likert scales (1 = Strongly Disagree, 5 = Strongly Agree) measured six TAM3 constructs:

TABLE 1 TAM3 CONSTRUCTS FOR CHATGPT ACCEPTANCE IN EDUCATIONAL CONTEXTS

TAM3 Construct Definition
Perceived Usefulness ChatGPT’s helpfulness for teaching tasks
Perceived Ease of Use ChatGPT’s user-friendliness and learning curve
Perceived Credibility Trust in ChatGPT’s accuracy and reliability
Social Influence Peer and institutional support for ChatGPT use
Privacy Concerns Data security and information protection worries
Behavioural Intention Plans for actual ChatGPT adoption

Data Analysis

IBM SPSS Statistics 29.0 was used for analysis. Cronbach’s alpha assessed internal consistency reliability. Descriptive statistics included means, standard deviations, and frequency distributions. Pearson correlations examined construct relationships. Shapiro-Wilk tests confirmed normality assumptions (all p > 0.05).
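The analyses above were run in SPSS. As a minimal open-source sketch of the same pipeline (reliability, descriptives, normality, correlations), assuming a hypothetical data file tam3_pilot.csv whose item columns (pu_1 … bi_5) are placeholder names:

```python
# Minimal sketch of the reported analysis pipeline (the study itself used IBM SPSS 29.0).
# The file name and column names (pu_1 ... bi_5) are hypothetical placeholders.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct; items has one column per Likert item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("tam3_pilot.csv")  # 27 respondents x 30 Likert items (1-5)

constructs = {
    "perceived_usefulness":  [f"pu_{i}" for i in range(1, 6)],
    "perceived_ease_of_use": [f"peou_{i}" for i in range(1, 6)],
    "perceived_credibility": [f"pc_{i}" for i in range(1, 6)],
    "social_influence":      [f"si_{i}" for i in range(1, 6)],
    "privacy_concerns":      [f"pr_{i}" for i in range(1, 6)],
    "behavioural_intention": [f"bi_{i}" for i in range(1, 6)],
}

scores = {}
for name, cols in constructs.items():
    score = df[cols].mean(axis=1)             # construct score per respondent
    scores[name] = score
    _, p_norm = stats.shapiro(score)          # normality check (paper: all p > 0.05)
    print(f"{name}: M={score.mean():.2f}, SD={score.std(ddof=1):.2f}, "
          f"alpha={cronbach_alpha(df[cols]):.3f}, Shapiro-Wilk p={p_norm:.3f}")

# Pearson correlation between constructs, e.g. credibility vs behavioural intention
r, p = stats.pearsonr(scores["perceived_credibility"], scores["behavioural_intention"])
print(f"credibility x intention: r={r:.2f}, p={p:.4f}")
```

Construct scores are taken as item means here; Pearson correlations are identical whether constructs are scored as sums or means, and alpha is computed from the raw items in either case.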

Ethical Considerations

The study ensured participant confidentiality, voluntary participation, informed consent, and secure data storage following institutional ethics protocols.

RESULTS

Participant Demographics

The sample comprised 27 educators across diverse disciplines: education (33.3%), engineering (22.2%), business (18.5%), humanities (14.8%), and sciences (11.1%). Teaching experience ranged from 1 to 25 years (M = 8.4, SD = 6.2), with 63% holding doctoral qualifications.

Reliability Analysis

TABLE 2 DESCRIPTIVE STATISTICS AND RELIABILITY RESULTS (N=27)

TAM3 Construct Mean SD Cronbach’s α Reliability Level Range
Perceived Usefulness 3.68 0.86 0.770 Good 2.00-5.00
Perceived Ease of Use 3.75 0.71 0.776 Good 2.40-5.00
Perceived Credibility 3.82 0.79 0.849 Very Good 2.20-5.00
Social Influence 3.25 0.81 0.718 Acceptable 1.80-4.60
Privacy Concerns 3.42 0.82 0.731 Acceptable 2.00-5.00
Behavioural Intention 3.91 0.71 0.854 Very Good 2.40-5.00
Overall Scale 3.64 0.78 0.891 Excellent 2.17-4.83

Excellent overall reliability (α = 0.891) validates the adapted TAM3 framework. Individual construct reliabilities exceeded the 0.70 threshold, ranging from 0.718 to 0.854.
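For reference, the reported coefficients follow the standard Cronbach’s alpha formula for a k-item construct (k = 5 per construct here), where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the summed construct score:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]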

Correlation Analysis

Significant positive correlations emerged between core TAM constructs and behavioural intention. Perceived credibility was the strongest predictor (r = 0.72, p < 0.001), followed by perceived usefulness (r = 0.68, p < 0.001) and ease of use (r = 0.61, p < 0.001). Privacy concerns negatively correlated with credibility (r = -0.45, p < 0.05) and behavioural intention (r = -0.38, p < 0.05).

TABLE 3 INTER-CONSTRUCT CORRELATION MATRIX (N=27)

Construct 1 2 3 4 5 6
Perceived Usefulness 1.00
Perceived Ease of Use 0.54** 1.00
Perceived Credibility 0.61** 0.48* 1.00
Social Influence 0.43* 0.39* 0.52** 1.00
Privacy Concerns -0.28 -0.31 -0.45* -0.22 1.00
Behavioural Intention 0.68** 0.61** 0.72** 0.56** -0.38* 1.00

Note: *p < 0.05, **p < 0.01
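As a quick arithmetic check on the strongest coefficient above (our calculation, not reported in the paper): with N = 27, the t statistic for r = 0.72 is

\[
t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}} = \frac{0.72\sqrt{25}}{\sqrt{1-0.72^{2}}} \approx \frac{3.60}{0.694} \approx 5.19, \qquad df = 25,
\]

which exceeds the two-tailed critical value of roughly 3.73 at p = .001, consistent with the reported p < 0.001.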

Key Findings Summary

1. Highest Agreement Items:

  • “I intend to explore more AI tools” (M = 4.12, SD = 0.50)
  • “ChatGPT provides reliable information” (M = 3.97, SD = 0.68)
  • “The interface is user-friendly” (M = 3.89, SD = 0.64)

2. Highest Variability Items:

  • “Colleagues encourage ChatGPT use” (SD = 1.07)
  • “Concerned about data privacy” (SD = 0.99)
  • “ChatGPT improves learning outcomes” (SD = 0.98)

DISCUSSION

Key Findings

This pilot study provides valuable insights into ChatGPT acceptance among Sabah educators. Excellent overall reliability (α = 0.891) validates the adapted TAM3 framework for Malaysian educational contexts. Perceived credibility emerged as the strongest behavioural intention predictor (r = 0.72), suggesting educator trust in AI-generated content accuracy is paramount for acceptance, aligning with research emphasising trust as a cornerstone factor [8]. High behavioural intention reliability (α = 0.854) and mean score (M = 3.91) indicate consistent educator receptiveness to ChatGPT adoption. However, moderate social influence scores (M = 3.25) and high variability in colleague encouragement (SD = 1.07) highlight institutional disparities requiring targeted organisational culture development. Privacy concerns’ negative correlation with behavioural intention (r = -0.38) reflects broader AI implementation challenges, particularly relevant in data-sensitive educational contexts [7].

Theoretical Contributions

This study makes a meaningful contribution to the growing body of literature on AI acceptance by demonstrating the applicability of the Technology Acceptance Model 3 (TAM3) within Southeast Asian educational contexts. In particular, the successful validation of measurement instruments, albeit tailored to ChatGPT, provides a sound empirical foundation for future research across Malaysia and the wider region. These instruments offer a reliable means of capturing nuanced educator perceptions, thereby enabling more contextually grounded investigations into generative AI adoption in diverse institutional settings.

The emergence of credibility as the most significant predictor of acceptance is especially noteworthy, as it reflects the distinctive nature of generative AI tools in producing autonomous and content-rich outputs. This differentiates ChatGPT from more conventional educational technologies, which typically function as static repositories or delivery mechanisms rather than dynamic co-creators of knowledge. As such, the findings underscore the importance of trust, transparency, and perceived reliability in shaping local educators’ willingness to engage with AI-driven platforms. These insights not only advance theoretical understanding but also inform practical strategies for integrating generative AI into pedagogical practice in ways that are both effective and ethically sound.

Practical Implications

The findings of this study offer practical and contextually relevant guidance for the implementation of ChatGPT within higher education institutions. With credibility emerging as the most influential predictor of adoption, it is imperative that institutions prioritise trust-building initiatives. This can be achieved through well-designed training programmes that not only enhance educators’ technical competence but also cultivate critical AI literacy. Such programmes should focus on developing the ability to evaluate AI-generated content rigorously, address concerns around accuracy and reliability, and foster informed engagement with generative technologies in pedagogical settings.

The observed negative correlation between privacy concerns and adoption intention, however, underscores the necessity of establishing comprehensive data protection policies. Institutions must articulate clear, transparent guidelines regarding the handling of student information, ensuring that ethical considerations and regulatory compliance are embedded within AI integration frameworks. Without such safeguards, apprehensions surrounding data misuse and surveillance may continue to impede adoption, particularly in regions where digital infrastructure and policy enforcement remain uneven.

Additionally, the high variability in social influence across institutions points to significant disparities in organisational culture and peer support mechanisms. To address this, institutions should consider implementing peer mentoring schemes and cultivating communities of practice that encourage collaborative learning and shared experiences. These initiatives can play a pivotal role in normalising AI use, reducing resistance, and promoting a consistent culture of adoption across diverse educational environments. Given the complex and varied regional contexts in which these institutions operate, a phased and adaptive implementation strategy is likely to be more effective than rapid, uniform deployment. Gradual integration allows for iterative refinement, responsiveness to local needs, and the opportunity to build institutional capacity over time. Such an approach not only enhances the sustainability of AI adoption but also ensures that implementation is both equitable and pedagogically sound.

Limitations and Future Research

This pilot study has notable limitations. The small sample (N = 27) restricts statistical power and generalisability, while the cross-sectional design provides only snapshot perspectives rather than adoption evolution insights. Quantitative methodology limits the understanding of the underlying reasoning behind educator responses. Future research should include larger samples (~300 participants) enabling structural equation modelling, mixed-methods approaches incorporating qualitative exploration, longitudinal tracking of acceptance changes over time, cross-regional validation studies across Malaysian states, and discipline-specific investigations to understand varying adoption patterns across academic fields.

CONCLUSION

This pilot study offers a robust and insightful foundation for examining the acceptance of ChatGPT among educators within Sabah’s higher education landscape. The strong reliability of the adapted Technology Acceptance Model 3 (TAM3) instrument affirms its suitability for application within Malaysian educational contexts, thereby lending methodological rigour to the study’s findings. Preliminary results indicate a generally positive disposition among educators towards the adoption of generative AI tools, notwithstanding notable disparities in institutional support and ongoing concerns surrounding data privacy and ethical governance. These findings underscore the nuanced interplay between technological receptiveness and contextual readiness, suggesting that acceptance is not merely a function of individual attitudes but is also shaped by broader systemic and infrastructural factors.

The validation of the measurement instrument, coupled with the establishment of baseline data, provides a compelling rationale for advancing towards more extensive mixed-methods research. Such future investigations should aim to capture the complexity of educator experiences and institutional dynamics through both quantitative breadth and qualitative depth. Notably, the emergence of ‘credibility’ as the most significant predictor of acceptance points to the distinctive evaluative criteria educators apply when engaging with generative AI technologies. This particular finding highlights the imperative for course developers and policymakers to foreground transparency, reliability, and pedagogical alignment in the design and deployment of such tools.

Whilst these initial findings reflect a level of openness among Sabah’s educators, the successful integration of ChatGPT into teaching and learning practices will necessitate a concerted, longer-term, multi-pronged approach. This includes the development of coherent institutional policies, the establishment of robust privacy and data protection frameworks, and the provision of targeted professional development programmes that are sensitive to the local region’s socio-cultural and educational particularities. Future research should incorporate larger, more diverse samples and embed qualitative methodologies to illuminate the lived realities of educators, thereby informing the creation of evidence-based strategies for sustainable and equitable implementation across Malaysia’s higher education sector.

ACKNOWLEDGEMENT

The authors would like to thank all of the participating educators and institutions in the state of Sabah, Malaysia for their valuable contributions to this pilot study, with special appreciation to Universiti Teknologi MARA, Sabah Branch for supporting this pilot study.

REFERENCES

  1. Albayati, H. (2024). Investigating undergraduate students’ perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 6, 100203. https://doi.org/10.1016/j.caeai.2024.100203
  2. Bujang, M. A., Omar, E. D., Foo, D. H. P., & Hon, Y. K. (2024). Sample size determination for conducting a pilot study to assess reliability of a questionnaire. Restorative Dentistry & Endodontics, 49(1), e3. https://doi.org/10.5395/rde.2024.49.e3
  3. Chen, L., Chen, P., & Lin, Z. (2024). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2024.3392451
  4. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148
  5. Dahri, N. A., Yahaya, N., Al-Rahmi, W. M., Aldraiweesh, A., & Alturki, U. (2024). Extended TAM based acceptance of AI-powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon, 10(8), e29317. https://doi.org/10.1016/j.heliyon.2024.e29317
  6. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008
  7. Robert, J. D., Joseph, A. H., & Apolonius, L. E. (2025). AI-assisted language learning in education: ESL perceptions and challenges of ChatGPT. Journal of Creative Practices in Language Learning and Teaching, 13(2), 48-65.
  8. Shahzad, M. F., Xu, S., & Javed, I. (2024). ChatGPT awareness, acceptance, and adoption in higher education: The role of trust as a cornerstone. International Journal of Educational Technology in Higher Education, 21, 46. https://doi.org/10.1186/s41239-024-00478-x
  9. Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramaniam, R., & Khan, M. A. I. (2025). The predictors of behavioural intention to use ChatGPT for academic purposes: Evidence from higher education in Somalia. Cogent Education, 12(1), 2460250. https://doi.org/10.1080/2331186X.2025.2460250
  10. Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273-315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
  11. Wang, X., Liu, Q., Cheng, Y., & Li, S. (2024). Understanding university students’ acceptance of ChatGPT: Insights from the UTAUT2 model. Applied Artificial Intelligence, 38(1), 2371168. https://doi.org/10.1080/08839514.2024.2371168
