AI, Culture, and Trust: A Global Look at User Confidence in Virtual Assistants

Authors

Hampo, JohnPaul A.C.

Lecturer, Rolof Computer Academy, Warri; Lecturer, Conarina Maritime Academy, Oria; and Data Scientist, Hamplus Technologies International (Hamplus Hub) (Nigeria)

Onovughe Anthonia Okeme

Lecturer, Southern Delta University, Ozoro and Researcher, Federal University of Technology (Nigeria)

Mega Ohis Grace

Lecturer, Delta State University, Abraka and Researcher, Federal University of Petroleum (Nigeria)

Umukoro Gift

Lecturer, Delta State University, Abraka and Researcher, Delta State University (Nigeria)

Article Information

DOI: 10.51584/IJRIAS.2025.101100116

Subject Category: Artificial Intelligence

Volume/Issue: 10/11 | Page No: 1259-1266

Publication Timeline

Submitted: 2025-11-30

Accepted: 2025-12-07

Published: 2025-12-23

Abstract

AI-powered virtual assistants (VAs) such as Siri, Alexa, and Google Assistant are increasingly embedded in everyday life. Their adoption depends critically on user trust, which is shaped not only by system performance but also by cultural context. This paper investigates the dynamics of trust in VAs by synthesizing empirical findings from recent studies (n ≈ 1,250 participants across healthcare, consumer, and enterprise domains). We examine four principal antecedents (perceived competence, transparency/explainability, privacy and security, and anthropomorphism) and analyze how cultural dimensions moderate their influence. Findings indicate that competence and privacy consistently drive trust across contexts, while the weight of transparency and anthropomorphism varies with cultural orientation: cultures high in uncertainty avoidance demand transparency, whereas collectivist cultures place greater weight on social endorsement. We propose a conceptual model linking culture, trust antecedents, and adoption, and conclude with implications for design and governance.

Keywords

Trust, Virtual Assistants, Artificial Intelligence, Culture
