From Perception to Action: The Role of Trust, Satisfaction, and Perceived Humanness in Ecological Chatbot Adoption
Safa JRIDI
University of Tunis El Manar, Tunisia
Pages 3495-3503 | Artificial Intelligence
DOI: https://dx.doi.org/10.47772/IJRISS.2025.906000262
Received: 31 May 2025; Accepted: 04 June 2025; Published: 10 July 2025
ABSTRACT
In the context of ecological emergency and the growing development of artificial intelligence (AI)-based conversational agents, this research investigates the impact of relational perceptions (perceived humanness, trust, and satisfaction) on the intention to use ecologically oriented conversational agents.
An experiment combined with a questionnaire was conducted with 272 participants. The data collected were analyzed using the Partial Least Squares Structural Equation Modeling (PLS-SEM) method. The results reveal that perceived humanness, trust, and satisfaction toward the AI-based conversational agent positively and significantly influence the intention to use such technologies to promote sustainable behaviors.
These findings provide practical insights for stakeholders involved in the ecological transition—businesses, public institutions, and NGOs—who aim to deploy ethical and accessible digital solutions, especially in contexts where environmental awareness remains limited. This study is distinctive in its integrated approach linking responsible digital marketing, AI perception, and the promotion of sustainable behaviors.
Keywords: AI conversational agent; perceived humanness; trust; satisfaction; usage intention; sustainable behavior
INTRODUCTION
Artificial intelligence (AI) is experiencing widespread adoption, profoundly transforming human interactions across various domains of daily life. Initially focused on optimizing technical tasks, AI is increasingly shifting toward more relational uses, raising new challenges around empathy, attentiveness, and the quality of communication [1]. These advances have led to the emergence of interfaces capable of engaging in fluid, personalized, and emotionally resonant dialogue with users [2].
By simulating key aspects of human communication, such interfaces can act as genuine agents of socialization, disseminating norms, values, and behaviors through digital interactions [3]. This social function becomes particularly meaningful in the context of ecological transition: can such technologies help foster more sustainable behaviors?
Recent research suggests that interactive AI systems can indeed promote pro-environmental attitudes, especially when they incorporate emotional and personalized dimensions (e.g., [4]; [5]). The perceived humanness, reliability, and benevolence of these technologies directly influence users’ satisfaction and their intention to engage in eco-friendly behaviors [6].
This study contributes to this emerging field by investigating the relational factors that drive the acceptance of AI as a mediator of ecological awareness, particularly in contexts where environmental consciousness remains relatively low. It examines how engaging, interactive experiences may stimulate interest in and commitment to sustainable practices (e.g., [7]; [8]).
In Tunisia, where the adoption of advanced digital technologies remains limited, it appears particularly relevant to build upon existing educational approaches while gradually integrating accessible AI tools that support concrete, achievable behavior change. Such integration could help foster an active ecological consciousness, especially among younger populations.
For instance, an AI-based application could provide personalized recommendations to reduce carbon footprints, optimize energy consumption, or adopt sustainable practices in everyday life [9]. These functionalities are especially valuable in contexts where environmental awareness still requires reinforcement.
From a theoretical standpoint, this research examines the acceptance of AI interfaces through their social and educational potential. From a practical perspective, it highlights their possible role as tools for guiding individuals toward sustainable development, by aligning interactive technologies with responsible behavioral transformation.
LITERATURE REVIEW
1) Conversational AI Agents and Sustainable Development: Conversational agents based on artificial intelligence (AI) have become ubiquitous tools in the contemporary digital landscape. Initially designed to automate interactions across a variety of contexts, their role has significantly expanded in recent years—particularly with the emergence of generative AI technologies [10]. These agents are now equipped with sophisticated conversational capabilities, enabling them to recognize, predict, translate, summarize, and even generate content in real time. Such technological evolution enhances the personalization and fluidity of interactions between users and systems (e.g., [11]; [12]).
From a sustainable development perspective, intelligent conversational agents can play a pivotal role in raising awareness and supporting individuals in adopting more responsible behaviors. By embedding ecological, emotional, or educational messages into their dialogues, these agents act as subtle influencers capable of encouraging users to engage in more sustainable practices, such as reducing their carbon footprint, recycling, or embracing responsible consumption. This ecological socialization function—still relatively underexplored—opens up promising avenues for mobilizing AI in support of environmental transition efforts.
Within the broader framework of ecological transition strategies, AI-driven conversational agents address a dual challenge: supporting individuals in adopting pro-environmental behaviors while ensuring a personalized and engaging user experience. Their adoption can be understood through the lens of major technology acceptance models, notably the Technology Acceptance Model (TAM; [13]) and the Unified Theory of Acceptance and Use of Technology (UTAUT; [14]). These conceptual frameworks highlight several key factors influencing acceptance, including perceived usefulness, ease of use, social influence, and facilitating conditions.
While TAM emphasizes perceived usefulness and perceived ease of use, UTAUT offers a more nuanced perspective by integrating moderating variables such as age, gender, and user experience. Applying these models in the context of environmental awareness enables the identification of both psychological and technological levers that can foster user engagement with AI-powered conversational agents designed for sustainable development purposes.
2) Perceived Humanness of Conversational AI Agents: Research on human–machine interaction is grounded in the concept of anthropomorphism, which refers to the attribution of human characteristics to non-human entities in order to reduce uncertainty and foster social connection (e.g., [15]; [16]). This theoretical framework is particularly relevant for understanding the perceived humanness of intelligent conversational agents.
Recent studies indicate that human-like qualities—particularly emotional expressiveness and intentionality—play a critical role in building user trust in these agents [17]. A high level of perceived humanness is associated with more relevant responses, the integration of anthropomorphic cues, and clear, coherent language, all of which enhance the overall user experience (e.g., [18]; [19]; [20]).
When users perceive a conversational agent as human-like, they tend to adopt interpersonal interaction patterns, which, in turn, foster user satisfaction. Based on this reasoning, we propose the following hypothesis:
H1: The perceived humanness of a conversational AI agent positively influences user satisfaction with the agent.
3) Trust as a Determinant of Satisfaction with Conversational AI Agents: Trust is a key psychological variable in technology acceptance, particularly in the context of interactions with artificial intelligence (AI)-based conversational agents. It is defined as the user’s belief in the agent’s reliability, competence, honesty, and benevolence [18].
Like perceived humanness, trust operates as an independent factor that contributes to a positive user experience. When the agent is perceived as trustworthy and competent, users report greater satisfaction with the interaction, regardless of other perceived attributes (e.g., [21]; [22]).
User satisfaction is thus influenced by both the perceived humanness of the agent and the trust it elicits. These two dimensions play complementary but distinct roles in shaping users’ acceptance of the technology [23].
Accordingly, we propose the following hypothesis:
H2: Trust in the conversational AI agent positively influences user satisfaction.
4) Satisfaction with Conversational AI Agents: Satisfaction is a key dimension of the effectiveness of conversational AI agents. It reflects the user’s level of contentment with the interaction experience, particularly regarding the fluidity of the exchange, the relevance of responses, and the quality of communication [24]. High levels of satisfaction foster a positive attitude toward the agent and enhance the user’s willingness to continue engaging with it [25].
More specifically, when users are satisfied with their experience, they are more likely to use these agents in their daily routines, including for activities related to sustainability or the promotion of pro-environmental behaviors [26]. Satisfaction thus serves as a crucial driver in encouraging the adoption of AI applications aimed at supporting environmental transitions.
Previous research has shown that satisfaction significantly predicts the intention to continue using conversational technologies [27]. In this respect, a positive user experience can create a virtuous cycle in which satisfaction fuels engagement, thereby strengthening the effectiveness of messages and actions promoting sustainable behaviors.
Accordingly, we propose the following hypothesis:
H3: Satisfaction with the conversational AI agent positively influences the intention to use it.
Fig.1 Conceptual Model of the Role of Conversational AI in Promoting Sustainable Use Intentions: The Influence of Perceived Humanness, Trust, and Satisfaction
RESEARCH METHODOLOGY
1) Research Context, Target Population, and Sampling: The objective of this research was to explore the factors that foster the acceptance of an artificial intelligence-based conversational agent in the role of an ecological assistant. To enhance the realism and immersiveness of the experience for survey participants, we developed a proprietary AI assistant and deployed it on a dedicated website.
Data were collected from 272 students in Tunisia. Participants were invited to take part in the study by completing a questionnaire, either online or via a self-administered paper version within their university, after having interacted with our AI-based ecological assistant, named Ecobot.
2) Measurement Instruments and Data Collection Procedures: Perceived humanness of AI chatbots, trust in AI chatbots (TS), and customer satisfaction (CS) were measured using scales adapted from Ramadhani et al. [28], comprising 4, 6, and 5 items respectively. The intention to use the ecological conversational assistant was assessed using a five-item scale inspired by the Theory of Planned Behavior (TPB) [29].
Responses were collected using a 5-point Likert scale (1 = strongly disagree; 5 = strongly agree). A final section of the questionnaire gathered participants’ sociodemographic information.
Prior to the official launch, the questionnaire was pretested with 30 participants to verify its relevance, question comprehension, and to eliminate any ambiguity. Based on feedback received, certain formulations were refined to improve clarity and fluidity.
3) Results of the Empirical Study: The sample consisted exclusively of students, representing a young and academic profile. In terms of educational level, nearly all respondents held a university degree, aligning with the targeted population.
Regarding monthly income, the majority of students reported earning less than 500 TND per month, followed by a minority with incomes ranging between 500 and 1,000 TND. This modest economic profile is an important factor to consider when analyzing sustainable consumption behaviors and intentions to use ecological technologies.
Following data collection, a descriptive analysis was conducted, followed by statistical tests to examine the validity of the conceptual model as well as the reliability of variables and measurement instruments.
The analysis was carried out in two main stages: first, an exploratory factor analysis using Principal Component Analysis (PCA) identified the underlying structure of the variables. Subsequently, the internal consistency of the scales was assessed using Cronbach’s alpha coefficient.
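For readers who wish to reproduce the internal-consistency check, Cronbach's alpha can be computed directly from the raw item responses. The sketch below is illustrative Python, not the study's own analysis pipeline (which relied on dedicated statistical software); the `responses` matrix is a hypothetical stand-in for the survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_variances / total_variance)

# Two perfectly consistent items yield the maximum alpha of 1.0
responses = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]])
print(round(cronbach_alpha(responses), 3))  # -> 1.0
```

In practice, values above 0.7 are usually taken to indicate acceptable reliability, which is the benchmark the scales reported here all exceed.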
Finally, to test the hypothesized relationships and validate the quality of the structural model, the Partial Least Squares Structural Equation Modeling (PLS-SEM) method was applied, recognized for its adaptability to various types of research (confirmatory, predictive, exploratory).
4) Evaluation of the Psychometric Quality of the Measurement Scales: The PCA performed on all scales revealed Kaiser-Meyer-Olkin (KMO) indices above the recommended threshold of 0.5, confirming the adequacy of the data for factor analysis. Observed KMO values ranged between 0.65 and 0.96 across items, supporting the relevance of factor aggregation.
The total variance explained by the principal components consistently exceeded 50% for each scale, indicating that the extracted factors are meaningful for explaining the relationships among observed variables.
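The overall KMO index is derived from the zero-order correlations and the partial correlations obtained from the inverse of the correlation matrix. The following sketch illustrates the computation using simulated one-factor data in place of the actual survey responses; the variable names and the data are assumptions for illustration only.

```python
import numpy as np

def kmo_index(X):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    X = np.asarray(X, dtype=float)
    R = np.corrcoef(X, rowvar=False)               # zero-order correlations
    Rinv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    P = -Rinv / scale                              # partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)          # off-diagonal mask
    r2 = (R[off] ** 2).sum()
    p2 = (P[off] ** 2).sum()
    return r2 / (r2 + p2)

# Simulated one-factor data: four items driven by a shared latent score
rng = np.random.default_rng(42)
factor = rng.normal(size=500)
X = np.column_stack([factor + 0.5 * rng.normal(size=500) for _ in range(4)])
print(kmo_index(X) > 0.5)  # adequate for factor analysis
```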
Internal reliability assessment via Cronbach’s alpha showed high values across all scales, ranging from 0.779 (perceived humanness) to 0.959 (satisfaction). These findings were corroborated by composite reliability coefficients (rho_c), all exceeding 0.8, as well as Average Variance Extracted (AVE) values above 0.5, ensuring internal consistency and convergent validity of the constructs.
Moreover, factor loadings derived from the PLS-SEM analysis were all above 0.7, demonstrating the significant contribution of each indicator to its latent construct. Discriminant validity was further confirmed by the Fornell-Larcker criterion and the Heterotrait-Monotrait (HTMT) ratio, both meeting recommended thresholds [30].
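Given standardized outer loadings, composite reliability (rho_c) and AVE follow from closed-form expressions. The sketch below uses hypothetical loadings, not the study's estimates, to show how the 0.8 and 0.5 benchmarks mentioned above are checked.

```python
import numpy as np

def composite_reliability(loadings):
    """rho_c = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    explained = lam.sum() ** 2
    error = (1 - lam ** 2).sum()
    return explained / (explained + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical loadings, all above the 0.7 benchmark for indicators
loadings = [0.8, 0.8, 0.8]
print(round(composite_reliability(loadings), 3))       # -> 0.842  (> 0.8)
print(round(average_variance_extracted(loadings), 3))  # -> 0.64   (> 0.5)
```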
Detailed results of the exploratory factor analysis, reliability, and validity indicators are presented in Table 1.
Table 1 Results of the Psychometric Evaluation of the Measurement Scales
These results confirm the robustness of our structural equation model, demonstrating both the accuracy and reliability of the measures used to assess all constructs within the conceptual framework.
5) Discriminant Validity of Measurement Scales: To assess the discriminant validity of the measurement scales, we compared the outer loadings of the indicators with their cross-loadings. The results indicate that each indicator loads more strongly on its corresponding construct than on any other, suggesting a clear distinction between constructs. Moreover, examination of the correlation matrix reveals that the diagonal elements—representing the shared variance between each construct and its indicators—are higher than any inter-construct correlation. This observation supports the discriminant validity of the measurements.
In addition, the Average Variance Extracted (AVE) exceeds the 0.5 threshold for each construct, in line with the recommendations of Fornell and Larcker (1981), confirming convergent validity. Finally, the Heterotrait-Monotrait ratio (HTMT) values are all below 0.85 across constructs [31], providing further evidence of discriminant validity.
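The HTMT ratio compares the average correlation between items of different constructs with the average correlations among items of the same construct. This is a minimal sketch of that computation, using simulated item blocks for two distinct constructs rather than the study's data:

```python
import numpy as np

def htmt(items_a, items_b):
    """Heterotrait-Monotrait ratio between two constructs' item blocks."""
    A = np.asarray(items_a, dtype=float)
    B = np.asarray(items_b, dtype=float)
    R = np.abs(np.corrcoef(np.hstack([A, B]), rowvar=False))
    ka, kb = A.shape[1], B.shape[1]
    hetero = R[:ka, ka:].mean()                              # across constructs
    mono_a = R[:ka, :ka][np.triu_indices(ka, k=1)].mean()    # within construct A
    mono_b = R[ka:, ka:][np.triu_indices(kb, k=1)].mean()    # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# Simulated items for two clearly distinct constructs
rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
A = np.column_stack([f1 + 0.5 * rng.normal(size=n) for _ in range(3)])
B = np.column_stack([f2 + 0.5 * rng.normal(size=n) for _ in range(3)])
print(htmt(A, B) < 0.85)  # distinct constructs pass the 0.85 criterion
```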
6) Predictive Validity of the Model: The explanatory and predictive capabilities of the structural model were assessed using two key indicators: the coefficient of determination (R²) and the cross-validated redundancy measure (Q²) [32]. All R² values exceeded the minimum acceptable threshold of 0.10 [33], indicating that the model accounts for a meaningful proportion of variance in the endogenous constructs, namely satisfaction with the agent and the intention to use it. According to established benchmarks, the structural model thus demonstrates an acceptable level of explanatory relevance [34].
In addition, predictive validity was examined using the Stone-Geisser Q² coefficient. Positive Q² values were obtained for all endogenous constructs, supporting the model’s predictive relevance and suggesting that the indicators exhibit adequate out-of-sample predictive capability [35].
Table 2 R² and Q² coefficients generated by the PLS-SEM algorithm
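The general form of the Q² statistic is 1 − SSE/SSO. PLS-SEM software obtains SSE through a blindfolding (omission-and-prediction) procedure, which is not reproduced here; the sketch below only illustrates the final formula with hypothetical observed and predicted values.

```python
import numpy as np

def q_squared(observed, predicted):
    """Q^2 = 1 - SSE/SSO; positive values indicate predictive relevance."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    sse = ((obs - pred) ** 2).sum()          # sum of squared prediction errors
    sso = ((obs - obs.mean()) ** 2).sum()    # errors of a mean-only baseline
    return 1 - sse / sso

obs = np.array([3.0, 4.0, 2.0, 5.0, 4.0])
print(q_squared(obs, obs))                       # perfect prediction -> 1.0
print(q_squared(obs, np.full(5, obs.mean())))    # mean-only baseline -> 0.0
```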
7) Confirmatory Analysis: Once the psychometric quality of the measurement instruments was validated and discriminant validity was confirmed, the evaluation of the conceptual model was conducted through a confirmatory analysis. This approach makes it possible to test the causal relationships between the variables defined in the theoretical model.
For this stage, the Partial Least Squares Structural Equation Modeling (PLS-SEM) method was chosen, as it is recognized for its robustness in confirmatory factor analyses and its ability to provide accurate results for validating conceptual models.
The analysis of the structural model’s direct effects provides insights, through the bootstrapping method, into the relationships between constructs by estimating t-values from the bootstrap standard errors and assessing the statistical significance of the path coefficients via their p-values, which must fall below the 5% threshold (p < 0.05). Table 3 presents the direct effects of all inter-variable relationships (and their respective dimensions), as well as the causal paths within the full structural model.
Table 3 Path Analysis
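The bootstrapping logic can be illustrated for a single path: resample respondents with replacement, re-estimate the coefficient on each resample, and divide the original estimate by the standard deviation of the bootstrap draws to obtain a t-value. The sketch below uses a standardized simple-regression slope as a stand-in for a PLS path coefficient; the simulated satisfaction-to-intention data and the effect size are assumptions for illustration, not the study's results.

```python
import numpy as np

def standardized_slope(x, y):
    """Standardized regression slope (equals Pearson r for one predictor)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return (xs * ys).mean()

def bootstrap_t(x, y, n_boot=2000, seed=1):
    """Bootstrap standard error and t-value for one path coefficient."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimate = standardized_slope(x, y)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample respondents
        draws[b] = standardized_slope(x[idx], y[idx])
    se = draws.std(ddof=1)
    return estimate, se, estimate / se

# Simulated data for a positive satisfaction -> usage-intention path (n = 272)
rng = np.random.default_rng(7)
satisfaction = rng.normal(size=272)
intention = 0.6 * satisfaction + rng.normal(scale=0.8, size=272)
est, se, t = bootstrap_t(satisfaction, intention)
print(t > 1.96)  # significant at the 5% level
```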
DISCUSSION
The results of the PLS-SEM analysis empirically confirm the positive and significant influence of relational perceptions (perceived humanness, trust, and satisfaction toward the ecological conversational agent) on the intention to use this technology. The proposed theoretical model demonstrates strong quality, both in terms of the measurement model (convergent and discriminant validity) and the structural model (high R² values and significant path coefficients), thereby validating all the hypotheses formulated.
More specifically, our findings show that perceived humanness (H1) and trust (H2) in the AI-based conversational agent positively and significantly influence user satisfaction, which in turn determines the intention to adopt this ecological technology (H3). This dynamic illustrates that, in a context where ethical and sustainability concerns are salient, users’ perceptions of the chatbot’s human-likeness primarily exert their influence through the satisfaction they generate [23].
Contrary to the commonly held assumption that increased anthropomorphism fosters closeness and engagement with technology [17], our results suggest that perceived humanness does not directly affect adoption intention. Instead, it operates indirectly through trust and satisfaction. This may be explained by the fact that, in ethically charged contexts such as sustainable consumption, users attach greater importance to the quality of the interaction and the reliability of the agent than to its human-like features. Moreover, an excessive degree of humanness in a chatbot can sometimes trigger an “uncanny valley” effect, eliciting discomfort or skepticism and thereby limiting user acceptance (e.g., [36]; [37]).
These findings are consistent with the broader logic of technology acceptance models, particularly the Technology Acceptance Model (TAM) [13], in which favorable user evaluations of a technology drive usage intention, and the Unified Theory of Acceptance and Use of Technology (UTAUT) [14], which emphasizes the role of user perceptions and experience in technology adoption. In this sense, the effectiveness of an ecological agent appears to depend more on the satisfaction it provides and the trust it elicits than on its anthropomorphic attributes per se (e.g., [22]; [24]).
Managerial Implications
The results of this study provide concrete insights for actors aiming to integrate ecological conversational agents based on artificial intelligence into their digital strategies, with a view to promoting ethical engagement and sustainable behaviors.
First, it is essential that managers give particular attention to the quality of the user experience, notably by fostering trust and satisfaction toward ecological chatbots. This involves the development of interfaces that are reliable, transparent, and easy to navigate, which can instill confidence and meet users’ environmental expectations. Enhancing these relational dimensions is likely to facilitate the adoption and continued use of such agents, thereby contributing to the dissemination of sustainable consumption practices.
Second, the findings suggest the importance of avoiding excessive anthropomorphism in chatbot design. An overly human-like appearance or behavior may induce discomfort or skepticism among users, which in turn could hinder acceptance. A more balanced design strategy, focused on clarity, message relevance, and the perceived authenticity of ecological commitment, may prove more effective in promoting positive attitudes and fostering sustainable behavioral intentions.
Taken together, these implications highlight the need for thoughtful integration of relational and ethical dimensions into the design of AI-powered conversational agents, in order to support the broader goals of digital sustainability.
Limitations and Directions for Future Research
This study offers valuable insights into the influence of relational perceptions, namely perceived humanness, trust, and satisfaction, on the intention to use ecological chatbots powered by artificial intelligence. Nevertheless, certain limitations must be acknowledged.
First, the experimental design was conducted in a controlled environment, which may constrain the generalizability of the findings to real-world usage contexts. In order to enhance the external validity of the proposed model, future research should consider deploying these agents within actual digital platforms, thereby allowing the observation of user adoption and usage behaviors under more ecologically valid conditions.
Second, the study focuses exclusively on immediate post-interaction usage intentions, without accounting for the medium- or long-term effects on actual sustainable behaviors. Longitudinal research would be necessary to assess the durability of these intentions over time and to examine the psychological mechanisms underlying the conversion of intention into effective and sustained behavior.
These research directions would advance our understanding of how the relational dimensions of ecological chatbots—particularly trust and satisfaction, which emerged as key antecedents in the proposed model—can contribute to the broader dissemination and sustainable adoption of digital technologies aligned with ecological transition goals.
CONCLUSION
This research falls within the emerging field of conversational technologies, particularly focusing on the use of ecological conversational agents as potential levers to encourage sustainable behaviors on online platforms.
By developing a model that incorporates key relational dimensions—perceived humanness, trust, and satisfaction—our study demonstrates that these three factors significantly influence the intention to adopt environmentally oriented conversational agents, although the effect of perceived humanness operates indirectly, mediated by trust and satisfaction. These latter two factors remain major determinants that reinforce the acceptance of this new technology in support of responsible consumption choices.
These findings underscore the importance for designers and managers to develop conversational agents that are credible, satisfying, and endowed with relevant human-like traits, in order to maximize user engagement and promote the sustainable adoption of ecological technologies.
From a theoretical perspective, this study enhances the understanding of the mechanisms driving the acceptance of conversational agents in ethical and environmental contexts, by drawing on frameworks from social psychology and human-computer interaction research.
Nevertheless, methodological limitations related to the controlled experimental setting and the one-time measurement of usage intentions suggest the need for future research in real-world environments and over extended timeframes.
Ultimately, this research offers valuable insights into the role of ecological conversational agents as catalysts for sustainable consumption behaviors, thereby contributing to the broader digital ecological transition.
REFERENCES
- Magni, D., Del Gaudio, G., Papa, A., & Della Corte, V. (2024). Digital humanism and artificial intelligence: The role of emotions beyond the human–machine interaction in Society 5.0. Journal of Management History, 30(2), 195-218.
- Huang, M. H., & Rust, R. T. (2024). The caring machine: Feeling AI for customer care. Journal of Marketing, 88(5), 1-23.
- Janhonen, J. (2023). Socialisation approach to AI value acquisition: Enabling flexible ethical navigation with built-in receptiveness to social influence. AI and Ethics, 1-27.
- Majid, G. M., Tussyadiah, I., & Kim, Y. R. (2024). Exploring the potential of chatbots in extending tourists’ sustainable travel practices. Journal of Travel Research, 00472875241247316.
- Pham, H. C., Duong, C. D., & Nguyen, G. K. H. (2024). What drives tourists’ continuance intention to use ChatGPT for travel services? A stimulus-organism-response perspective. Journal of Retailing and Consumer Services, 78, 103758.
- Pinxteren, M. M., Pluymaekers, M., & Lemmink, J. G. (2020). Human-like communication in conversational agents: A literature review and research agenda. Journal of Service Management, 31(2), 203-225.
- Guo, X., Li, R., Ren, Z., & Zhu, X. (2024). Examining the effect of nudging on college students’ behavioral engagement and willingness to participate in online courses. Journal of Health Psychology, 13591053241281588.
- Farshbafiyan Hosseininezhad, M., Heidari, M., & Letizia Guerra, M. (2025). The framing effect and sustainable hotel booking behaviour : A nudge marketing study. European Journal of Tourism Research, 39, 1-14.
- Wang, Y., Zhang, R., Yao, K., & Ma, X. (2024). Does artificial intelligence affect the ecological footprint?–Evidence from 30 provinces in China. Journal of Environmental Management, 370, 122458.
- Hsu, C. L., & Lin, J. C. C. (2023). Understanding the user satisfaction and loyalty of customer service chatbots. Journal of Retailing and Consumer Services, 71, 103211.
- Dsouza, R., Sahu, S., Patil, R., & Kalbande, D. R. (2019, December). Chat with bots intelligently: A critical review & analysis. In 2019 International Conference on Advances in Computing, Communication and Control (ICAC3) (pp. 1-6). IEEE.
- Mokoena, N., & Obagbuwa, I. C. (2025). An analysis of artificial intelligence automation in digital music streaming platforms for improving consumer subscription responses : A review. Frontiers in Artificial Intelligence, 7, 1515716.
- Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
- Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
- Haslam, N., & Loughnan, S. (2014). Dehumanization and infrahumanization. Annual review of psychology, 65(1), 399-423
- Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864.
- Liu, I., Liu, F., Xiao, Y., Huang, Y., Wu, S., & Ni, S. (2025). Investigating the key success factors of chatbot-based positive psychology intervention with retrieval-and generative pre-trained transformer (GPT)-based chatbots. International Journal of Human–Computer Interaction, 41(1), 341-352.
- Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems, 37(3), 875-900.
- Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304-316.
- Svenningsson, N., & Faraon, M. (2019, December). Artificial intelligence in conversational agents: A study of factors related to perceived humanness in chatbots. In Proceedings of the 2019 2nd Artificial Intelligence and Cloud Computing Conference (pp. 151-161).
- Madianou, M. (2021). Nonhuman humanitarianism: When ‘AI for good’ can be harmful. Information, Communication & Society, 24(6), 850-868.
- Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, 101473.
- Chattaraman, V., Kwon, W. S., Gilbert, J. E., & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315-330.
- Singh, D., & Kunja, S. R. (2025). Engaging guests for a greener tomorrow : Examining the role of hotel chatbots in encouraging pro-environmental behavior. Tourism and Hospitality Research, 14673584241313339.
- Le, X. C., & Nguyen, T. H. (2024). The effects of chatbot characteristics and customer experience on satisfaction and continuance intention toward banking chatbots: Data from Vietnam. Data in Brief, 52, 110025.
- Gümüş, N., & Çark, Ö. (2021). The effect of customers’ attitudes towards chatbots on their experience and behavioural intention in Turkey. Interdisciplinary Description of Complex Systems: INDECS, 19(3), 420-436.
- Chung, M., Ko, E., Joung, H., & Kim, S. J. (2020). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research, 117, 587-595.
- Ramadhani, A., Handayani, P. W., Pinem, A. A., & Sari, P. K. (2023). The influence of conversation skills on Chatbot on Purchase Behavior in E-Commerce. Jurnal Manajemen Indonesia, 23(3), 287-302.
- Kautonen, T., Van Gelderen, M., & Fink, M. (2015). Robustness of the theory of planned behavior in predicting entrepreneurial intentions and actions. Entrepreneurship theory and practice, 39(3), 655-674.
- Evrard, Y., Pras, B., & Roux, E. (2003). Market: Études et recherches en marketing (3rd ed.). Paris: Dunod.
- Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the academy of marketing science, 43, 115-135.
- Hair, J., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017). An updated and expanded assessment of PLS-SEM in information systems research. Industrial Management & Data Systems.
- Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. University of Akron Press.
- Chin, W. W. (1998). Commentary: Issues and opinion on structural equation modeling. MIS Quarterly, vii-xvi.
- Stone, M. (1974). Cross-validation and multinomial prediction. Biometrika, 509-515.
- Pavlidou, P. (2021). The Uncanny Valley : The Human-Likeness of Chatbots and Its Paradoxical Impact on Consumers’ Purchase Intention in E-Commerce (Doctoral dissertation, Master’s Thesis, Tilburg University).
- Kim, W., Ryoo, Y., & Choi, Y. K. (2024). That uncanny valley of mind : When anthropomorphic AI agents disrupt personalized advertising. International Journal of Advertising, 1-30.