Navigating AI Ethics in Malaysian Universities: Addressing Privacy, Integrity, and Bias
Norazlinda Hj Mohammad*1, Norena Abd Karim Zamri2, Mastura Roni3, Siti Nur Izyandiyana Ab Hadi1, Siti Fairuz Nurr Sadikan4, Sulaiman Mahzan5
1Faculty of Communication and Media Studies, Universiti Teknologi MARA Melaka, Malaysia
2Institute of the Malay World and Civilization, Universiti Kebangsaan Malaysia, Selangor, Malaysia
3Faculty of Business Management, Universiti Teknologi MARA Melaka, Malaysia
4Faculty of Plantation and Agrotechnology, Universiti Teknologi MARA Melaka, Malaysia
5College of Computing, Informatics and Mathematics, Universiti Teknologi MARA Melaka, Malaysia
*Corresponding Author
DOI: https://dx.doi.org/10.47772/IJRISS.2025.9010197
Received: 26 December 2024; Accepted: 30 December 2024; Published: 12 February 2025
ABSTRACT
It is undeniable that Malaysia has undergone a sweeping digital transformation since the advent of the Fourth Industrial Revolution (IR 4.0). Artificial Intelligence (AI) is among its most visible products, and people across many sectors and workplaces have learned to utilize AI tools in their daily work. The education sector is no exception: students and academicians now rely on these tools extensively to produce assignments, reports, dissertations, videos, scripts, background music, designs and more (Nemorin et al., 2022). AI is a powerful innovation, and several types of AI tools are now used in the education sector. Nevertheless, this has consequences, as excessive reliance on AI technologies can diminish critical and analytical thinking and erode the quality, originality, and authenticity of the resulting work. Because of these negative repercussions, as well as the integrity challenges brought on by AI, researchers and society have paid growing attention to the side effects of AI in the education field. Thus, the present study aims to contribute a framework and guidelines for monitoring and mitigating this problem before it gets out of hand. Following a quantitative research approach, data were collected from Malaysian respondents through an online survey. The data were analyzed with the Structural Equation Modeling (SEM) technique using SmartPLS version 3.0 to investigate the factors influencing the overuse of AI software in educational tasks. The findings demonstrate that misuse of Artificial Intelligence software creates substantial challenges for university educators. The importance of this research lies in raising educators' awareness of controlling the misuse of Artificial Intelligence, promoting higher-quality academic work, and informing policy guidelines for ensuring greater integrity in education.
Keywords: Artificial Intelligence, Academic Integrity, Credibility, Quality, UiTM Educators
INTRODUCTION
Artificial Intelligence (AI) has emerged as a transformative force across multiple sectors, with its impact particularly pronounced in the field of education. AI technologies have the potential to significantly enhance learning experiences, streamline administrative operations, and provide tailored educational solutions. These advancements are driven by sophisticated algorithms and data analytics capabilities that can personalise learning, automate grading, predict student performance, and manage administrative tasks efficiently (Bates et al., 2020; Hwang et al., 2020; Kamalov et al., 2023).
In Malaysia, the integration of AI tools in public universities is on the rise, reflecting a broader global trend towards digitisation in education. Malaysian public universities leverage AI to create more interactive and engaging learning environments. AI-driven platforms can adapt to the individual learning pace and style of students, thereby fostering a more inclusive educational experience (Ahmad et al., 2023; Yuskovych-Zhukovska et al., 2022; Kuleto et al., 2021). Furthermore, administrative tasks such as scheduling, enrollment management, and resource allocation are becoming more efficient with the help of AI, allowing university staff to focus on more strategic initiatives (Thongprasit & Wannapiroon, 2022; Gao et al., 2021).
Despite these advancements, the integration of AI in education raises significant ethical concerns, particularly regarding data privacy, academic integrity, and algorithmic bias. Data privacy is a major issue, as AI systems often require access to extensive personal data, including behavioural patterns, personal information, and academic records of students. Without robust data protection measures, this sensitive information could be exposed to unauthorised parties, leading to data breaches and serious repercussions for both students and institutions (Nguyen et al., 2023; Prokopowicz, 2023).
Academic integrity is another critical area of concern. AI tools designed to detect plagiarism, facilitate assessments, and provide learning support can be misused. For instance, AI-driven plagiarism detection systems may produce false positives or negatives, raising questions about their reliability and fairness (Akgun & Greenhow, 2022; Ouyang & Jiao, 2021). Additionally, students might exploit AI tools to complete assignments dishonestly, undermining the educational process’s integrity (Crompton & Burke, 2023). According to The Star (2024), the plagiarism detection platform Turnitin reported that over 22 million, or about 11%, of the more than 200 million articles reviewed, had at least 20% AI writing assistance. This reliance on AI tools can diminish students’ critical thinking skills and responsibility, impacting the integrity of their assignments.
Bias in AI algorithms poses a further challenge. AI systems trained on historical data can inherit and perpetuate existing biases. If not addressed, these biases can exacerbate inequalities within the educational system. AI-driven admission systems, for example, might inadvertently favour certain groups over others, leading to discrimination (Luan et al., 2020; Yang, 2022). Additionally, some suggestions made by AI tools like ChatGPT may not be suitable for the Asian context, potentially creating misleading statements or arguments.
Given these ethical concerns, it is crucial to ensure that the integration of AI in Malaysian public universities is managed responsibly. This involves implementing stringent data protection measures, ensuring the reliability and fairness of AI systems, and maintaining a balanced approach to technology use that supports rather than replaces traditional educational methods. By addressing these issues, stakeholders can harness AI’s potential while safeguarding students’ rights and upholding ethical standards.
Problem Statement
The rapid adoption of Artificial Intelligence (AI) in Malaysian public universities, while promising significant advancements in educational processes, presents several critical ethical challenges that require urgent attention. As AI technologies become more integrated into educational frameworks, issues surrounding data privacy, academic integrity, algorithmic bias, and over-reliance on technology have surfaced, posing substantial risks to the educational ecosystem.
Firstly, the extensive use of AI systems demands access to vast amounts of personal data, including students’ academic records, personal information, and behavioural patterns. According to Nguyen et al. (2023), these data sets are essential for AI algorithms to function effectively, but they also introduce significant privacy risks. Without robust data protection measures in place, there is an elevated risk of data breaches and unauthorized access to sensitive information. Such breaches can have severe consequences, including identity theft and the misuse of personal data, which could damage the reputation of educational institutions and violate students’ rights to privacy (Prokopowicz, 2023). Therefore, it is imperative to implement strict data protection policies and practices to safeguard student information.
Secondly, the integrity of academic processes is at stake with the integration of AI tools designed to detect plagiarism, facilitate assessments, and provide learning support. Ouyang and Jiao (2021) stated that these tools can enhance efficiency, but they can also be misused. For instance, AI-driven plagiarism detection systems might produce false positives or negatives, raising questions about their reliability and fairness (Akgun & Greenhow, 2022). Additionally, students might exploit these tools to complete assignments dishonestly, eroding the integrity of the educational process. Such misuse not only compromises academic standards but also affects the credibility of educational qualifications, potentially devaluing degrees awarded by the institutions (Crompton & Burke, 2023). Ensuring the reliability and fairness of AI tools is crucial to maintaining academic integrity.
Moreover, bias in AI algorithms remains a significant concern. AI systems are typically trained on historical data, which can contain inherent biases reflecting societal inequalities. If these biases are not addressed, AI tools can perpetuate and even amplify existing disparities within the educational system (Luan et al., 2020). For example, AI-driven admission systems might inadvertently favour certain groups over others based on biased historical data, leading to discriminatory practices (Yang, 2022). This potential for discrimination requires the development and implementation of unbiased AI algorithms and regular audits to ensure fairness and equality in educational opportunities.
Additionally, there is a notable risk of over-reliance on AI tools in education. While AI can greatly support learning by providing personalized educational experiences and automating routine tasks, excessive dependence on technology might hinder the development of critical thinking and problem-solving skills among students (Yuskovych-Zhukovska et al., 2022). Over-reliance on AI tools can also diminish the role of educators, whose guidance and mentorship are essential for holistic student development (Ahmad et al., 2023). Educators play a vital role in nurturing students' cognitive and emotional growth, which cannot be wholly replicated by AI.
Addressing these ethical concerns is crucial to ensure that the integration of AI in Malaysian public universities not only enhances educational outcomes but also upholds ethical standards. Robust measures must be implemented to protect data privacy, ensure the reliability and fairness of AI tools, eliminate algorithmic bias, and prevent over-reliance on technology. By proactively managing these challenges, stakeholders can harness the full potential of AI while safeguarding the rights and development of students. Failure to address these issues could undermine the potential benefits of AI, leading to unfavourable effects on the educational system and student development. Therefore, it is essential to ensure that AI tools are used responsibly and effectively, fostering an educational environment that promotes both technological advancement and ethical integrity.
Research Objective
The study aims:
- To analyze the ethical concerns associated with the integration of AI tools in Malaysian public universities focusing on data privacy, academic integrity, and algorithmic bias.
Research Question
This study aims to answer the following research question:
- To what extent does the integration and misuse of AI tools in Malaysian public universities affect data privacy, academic integrity, and algorithmic bias, as measured by faculty and student perceptions and reported incidents?
Significance of the Study
The importance of this research lies in its thorough investigation of the moral dilemmas raised by the improper use of artificial intelligence (AI) in Malaysian public universities and the effects on learning. As AI begins to permeate many facets of education, understanding these difficulties is essential for several reasons.
Firstly, this study highlights the need for robust data protection measures by identifying the potential risks AI poses to data privacy. Safeguarding students’ personal information is essential for maintaining trust and integrity within educational institutions. Nguyen et al. (2023) and Prokopowicz (2023) mentioned that effective data privacy strategies are crucial to prevent data breaches and unauthorized access, thereby protecting the rights and personal data of students.
Secondly, the study underscores the importance of upholding academic integrity. It reveals how AI tools, while beneficial, can also undermine academic standards if misused. AI-driven plagiarism detection systems, for instance, can produce unreliable results, which calls for the development of more accurate and fair AI applications. Ensuring the credibility of educational qualifications is essential for maintaining high academic standards (Crompton & Burke, 2023; Akgun & Greenhow, 2022; Ouyang & Jiao, 2021).
Another significant aspect addressed by the study is algorithmic bias. AI systems trained on historical data may contain inherent biases, potentially leading to discriminatory practices. By examining these biases, the study emphasizes the need for unbiased AI systems in education to ensure equal opportunities for all students in processes such as admissions and assessments. This is vital for fostering a comprehensive and reasonable educational environment (Luan et al., 2020; Yang, 2022).
Moreover, the study highlights the risks associated with over-reliance on AI in education. While AI can support learning, an unnecessary dependence on technology might discourage the development of critical thinking and problem-solving skills among students. Maintaining a balance between technological support and human interaction is crucial for holistic student development, as educators play an irreplaceable role in nurturing these essential skills (Ahmad et al., 2023; Yuskovych-Zhukovska et al., 2022).
Furthermore, the findings of this study are significant for policymakers, educational leaders, and practitioners. The detailed analysis of the ethical issues related to AI in education offers valuable insights that can inform the development of policies and practices aimed at integrating AI responsibly. This will help ensure that AI enhances educational outcomes without compromising ethical standards.
Lastly, the study contributes to the broader literature on the ethical integration of AI in education. By highlighting both the benefits and the potential ethical pitfalls, it encourages a more thoughtful approach to AI adoption in educational settings. This can lead to the development of guidelines and frameworks that support the ethical use of AI, benefiting students, educators, and institutions alike.
In summary, this study provides a comprehensive examination of the ethical challenges posed by AI in Malaysian public universities. By addressing issues of data privacy, academic integrity, algorithmic bias, and the balance between technology and human interaction, the study offers critical insights that can help ensure the responsible and ethical integration of AI in education. This, in turn, will contribute to the overall improvement of educational outcomes and the protection of student rights and well-being.
LITERATURE REVIEW
Privacy Concerns in AI Applications within Higher Education
The implementation of artificial intelligence (AI) technologies in universities has resulted in notable progress in the management and provision of education (Chen et al., 2020; Limna et al., 2022). Nevertheless, it also gives rise to significant privacy issues, particularly around the gathering and administration of personal information. According to Aldahwan and Alsaeed (2020), applications powered by artificial intelligence, such as learning management systems (LMS) (Nithiyanandam et al., 2022), student performance analytics (Hooda et al., 2022), and campus security systems (Alam, 2022), typically necessitate the gathering of substantial amounts of data, including personal identifiers, academic records, and even behavioural patterns. The comprehensive accumulation of such data generates a huge bank of confidential information (Hu et al., 2022), which, if not effectively controlled, may be susceptible to unauthorised access or abuse. Gray et al. (2022) emphasise the importance of strong data governance frameworks that give priority to the privacy of students and staff, while also harnessing the potential of artificial intelligence to improve educational effectiveness.
Slimi and Carballido (2023) stated that the issue of consent becomes particularly complex when considering the use of artificial intelligence in higher education. Frequently, students and staff may lack complete awareness of the degree to which their data is being actively gathered, analysed, and utilised by artificial intelligence systems. This absence of transparency can result in circumstances where persons are not adequately informed or are incapable of giving authentic consent (Bietti, 2019; Laurijssen et al., 2022). The literature underscores the need to obtain informed consent in AI applications, whereby users should be duly informed about the intended use of their data, the possible hazards associated with it, and the safeguards implemented to ensure their privacy. In order to effectively handle privacy problems in educational AI systems, it is essential to establish consent as an ongoing process rather than a singular occurrence.
Furthermore, the difficulties of guaranteeing privacy in educational AI systems are intensified by the rapid development of AI technologies and the changing regulatory environment (Cath, 2018; Díaz-Rodríguez et al., 2023). Conventional privacy measures may not be adequate to tackle the distinct issues presented by AI, such as the capacity of algorithms to deduce novel information from preexisting data, resulting in the generation of profiles or forecasts about individuals without their explicit agreement. Paraman and Anamalah (2023) emphasised that these circumstances give rise to ethical concerns regarding the equilibrium between the advantages of artificial intelligence in education and the possible encroachment on individual privacy. Existing research indicates that adopting a proactive strategy, which involves integrating privacy-by-design principles into AI systems, can effectively address these problems. This method guarantees that privacy is integrated into the technology from its inception.
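To make the privacy-by-design idea concrete, the short Python sketch below shows one minimal form it can take in a learning-analytics pipeline: pseudonymising the direct identifier and dropping every field the analysis does not need. The record fields, salt, and function are illustrative assumptions, not a description of any system deployed at the universities studied.

```python
import hashlib

# Illustrative student record as it might arrive from a campus system
# (hypothetical fields, not taken from any real deployment).
record = {
    "student_id": "2023123456",
    "name": "Aisyah Binti Ahmad",
    "email": "aisyah@student.example.edu.my",
    "gpa": 3.42,
    "login_hours_per_week": 11.5,
}

# Only the fields the analytics task actually needs (data minimisation).
ANALYTICS_FIELDS = {"gpa", "login_hours_per_week"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace the direct identifier with a salted hash and drop
    every field the analytics pipeline does not need."""
    token = hashlib.sha256((salt + record["student_id"]).encode()).hexdigest()[:16]
    minimal = {k: v for k, v in record.items() if k in ANALYTICS_FIELDS}
    minimal["pseudonym"] = token
    return minimal

print(pseudonymise(record, salt="per-deployment-secret"))
# e.g. {'gpa': 3.42, 'login_hours_per_week': 11.5, 'pseudonym': '...'}
```

The point of the sketch is that the analytics still works on the minimal record, while the name, email, and raw student ID never enter the AI pipeline at all.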
Lastly, there is the matter of data retention and the possible enduring consequences of retaining substantial quantities of personal data (Samuelson, 1999; Politou et al., 2022). Academic institutions must deliberate on the appropriate duration for data retention and its intended uses, together with the potential hazards linked to prolonged preservation, such as data breaches or improper use of information. Quach et al. (2022) mentioned that the evidence from the literature suggests an increasing demand for explicit regulations regarding the storage and disposal of data, which are crucial for ensuring long-term privacy protection. Moreover, the possibility of AI-powered systems being adapted for purposes other than their initial goals, without sufficient attention to privacy consequences (Jiang et al., 2023), emphasises the need for continuous supervision and evaluation of AI implementations in educational environments.
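As an illustration of what an explicit retention regulation might look like in operation, the hedged sketch below separates records still within an assumed retention window from those due for secure disposal. The seven-year period and the record layout are hypothetical; actual retention periods are set by institutional policy and applicable law.

```python
from datetime import datetime, timedelta

RETENTION_YEARS = 7  # assumed policy; real periods are set by the institution

# Hypothetical pseudonymised records with collection timestamps.
records = [
    {"pseudonym": "a1b2", "collected_on": datetime(2015, 3, 1)},
    {"pseudonym": "c3d4", "collected_on": datetime(2023, 9, 15)},
]

def sweep(records, now=None):
    """Split records into those still within the retention window
    and those due for secure disposal."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=365 * RETENTION_YEARS)
    keep = [r for r in records if r["collected_on"] >= cutoff]
    dispose = [r for r in records if r["collected_on"] < cutoff]
    return keep, dispose

keep, dispose = sweep(records)
print(f"retain {len(keep)} record(s), securely dispose of {len(dispose)}")
```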
Assurance of Academic Integrity at Malaysian Universities with Artificial Intelligence
Artificial intelligence (AI) has emerged as a crucial instrument in safeguarding academic honesty in Malaysian universities (Singh, 2023), particularly through sophisticated plagiarism detection systems. These systems are specifically developed to detect cases of academic dishonesty by comparing student submissions with extensive databases of academic literature, online material, and previously submitted assignments. Software applications such as Turnitin and Grammarly have become ubiquitous in numerous educational institutions, serving as a first barrier against plagiarism (Ryan, 2020; Mulenga & Shilongo, 2024). Existing research indicates that although these methodologies are successful in identifying obvious types of plagiarism, they may encounter difficulties with more nuanced forms, such as paraphrasing or the use of outsourced ghostwriting services. Alzubaidi et al. (2023) stated that this gives rise to apprehensions over the limitations of artificial intelligence in fully safeguarding academic integrity and the necessity for supplementary human supervision to analyse and act on the conclusions of these systems.
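The core comparison step behind such detection systems can be illustrated with a simple similarity measure. The sketch below uses TF-IDF vectors and cosine similarity, a common textbook approach; commercial tools such as Turnitin use far larger databases and proprietary pipelines, so this is only a conceptual stand-in. It also shows why paraphrasing is hard to catch: the reworded sentence scores well below the verbatim copy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reference corpus of prior submissions and sources.
corpus = [
    "Artificial intelligence can personalise learning at scale.",      # prior submission
    "AI is able to personalise learning for every student at scale.",  # paraphrase
    "Photosynthesis converts light energy into chemical energy.",      # unrelated text
]
submission = "Artificial intelligence can personalise learning at scale."

# Build TF-IDF vectors over the corpus plus the new submission.
vectoriser = TfidfVectorizer(ngram_range=(1, 3))
matrix = vectoriser.fit_transform(corpus + [submission])

# Compare the new submission (last row) against the reference corpus.
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()
for text, score in zip(corpus, scores):
    print(f"{score:.2f}  {text[:50]}")
# The verbatim copy scores ~1.0; the paraphrase scores much lower,
# illustrating why surface matching misses reworded plagiarism.
```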
Besides, Mita (2022) emphasised that automated grading systems are a notable application of artificial intelligence in higher education, capable of both supporting and testing academic honesty. By efficiently grading enormous volumes of work, these systems guarantee uniformity and minimise the possibility of human bias. Nevertheless, they also give rise to ethical concerns, particularly in their capacity to make intricate and subjective evaluations, such as essays or creative endeavours (George & Wooden, 2023). Critics contend that excessive use of AI for grading can compromise the comprehensiveness of feedback given to students and may not fully capture the subtleties of their work. Furthermore, Khosravi et al. (2022) added that students may try to manipulate the system by creating answers that specifically target the AI’s algorithms instead of truly engaging with the content. Fedele et al. (2024) emphasise the need to adopt a well-rounded strategy, in which AI-assisted grading is complemented by human review to uphold the integrity of the assessment procedure.
Moreover, Mahmud (2024) states that employing artificial intelligence (AI) to oversee academic tasks, such as by using proctoring software during examinations, presents additional ethical quandaries. These artificial intelligence systems can monitor the actions of students during online examinations, detecting possible incidences of cheating by analysing their eye movements, keystrokes, and other behavioural cues (Slusky, 2020; Abbas & Hameed, 2022). Although these systems have substantial value in maintaining academic standards, they also give rise to notable concerns over privacy and fairness. Existing research by Osborne (2019) and Barrett (2022) indicates that these systems have the capacity to unjustly single out certain students, such as those with disabilities or those who may display anxious behaviour during examinations. Furthermore, there is the matter of data security and the extended retention of surveillance data, which may be susceptible to unauthorised access (Yigzaw et al., 2022). Consequently, universities must thoroughly evaluate the ethical consequences of implementing AI-driven proctoring tools, guaranteeing that they are utilised in a way that upholds student rights and is transparent about the data being gathered.
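To show why threshold-based proctoring can misfire in the way Osborne (2019) and Barrett (2022) describe, the sketch below flags sessions from hypothetical behavioural signals; the signal names and cut-off values are invented for illustration, not drawn from any real proctoring product.

```python
# Hypothetical per-session signals an AI proctor might log.
sessions = [
    {"student": "S1", "gaze_away_ratio": 0.05, "keystroke_gaps_gt_30s": 1},
    {"student": "S2", "gaze_away_ratio": 0.32, "keystroke_gaps_gt_30s": 6},
]

GAZE_THRESHOLD = 0.25   # assumed cut-offs; real systems tune these values
GAP_THRESHOLD = 5

def flag(session):
    """Flag a session when behavioural signals exceed fixed thresholds.
    Anxious students or students with disabilities can exceed these
    thresholds without cheating, which is the fairness risk noted above."""
    return (session["gaze_away_ratio"] > GAZE_THRESHOLD
            or session["keystroke_gaps_gt_30s"] > GAP_THRESHOLD)

for s in sessions:
    print(s["student"], "flagged for human review" if flag(s) else "ok")
```

A fixed threshold cannot distinguish a wandering gaze caused by anxiety from one caused by cheating, which is why the output here routes flags to human review rather than to automatic penalties.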
Finally, Dawson (2023) mentioned that the wider ethical concerns of AI in academic integrity encompass the possibility for these systems to unintentionally sustain prejudices or strengthen inequalities. For example, artificial intelligence systems that depend on past data to form judgements may unintentionally mirror and sustain preexisting prejudices in the educational system (Schwartz et al., 2022), such as those associated with spoken language skills, socioeconomic status, or availability of resources. Alzubaidi et al. (2023) underscore the need for universities to engage in a rigorous evaluation of the algorithms employed in artificial intelligence (AI) systems, thereby guaranteeing their fair and equitable design and implementation. Furthermore, Nguyen et al. (2023) mentioned that there is a demand for increased openness in the development and implementation of AI technologies, involving stakeholders such as students and teachers in the decision-making process to guarantee that the technology benefits the academic community.
Analysing and Reducing Bias in Artificial Intelligence-Powered Educational Tools
Artificial intelligence (AI)-driven educational tools possess the capacity to transform decision-making procedures at Malaysian universities, including admissions, grading, and peer assessments (Bujang et al., 2022). However, the algorithms that power these tools are not immune to bias, which can heavily undermine impartiality and equality in educational environments. Bias in artificial intelligence (AI) can originate from multiple sources, such as the data used for algorithm training (Varona & Suárez, 2022), the algorithm formulation, and the deployment environment (Kordzadeh & Ghasemaghaei, 2022). If an artificial intelligence system is trained with historical data that mirrors current inequalities or biases, it has the potential to sustain these biases in its predictions or choices. Aldoseri et al. (2023) emphasise the need to carefully analyse the data and approaches employed in AI systems to detect and resolve any sources of bias before their integration into educational processes.
Within the realm of admissions, artificial intelligence (AI) algorithms can be employed to evaluate candidates by considering several factors, including academic achievements, extracurricular involvements, and even predicted personality characteristics derived from application materials (Lira et al., 2023). Nevertheless, there is a danger that these algorithms may unintentionally show preference towards specific groups (Peters, 2022), guided by criteria such as gender, race, or financial status. Empirical research has demonstrated that artificial intelligence (AI) systems can reproduce and magnify the biases inherent in the data they are trained on (Ferrara, 2024); consequently, these systems can make judgements that unjustly disadvantage specific candidates. Nazer et al. (2023) stated that, in order to address this issue, scholarly literature proposes the adoption of bias mitigation efforts, including the use of varied and inclusive datasets, algorithmic fairness methodologies, and continuous monitoring and auditing of AI systems to guarantee that they make judgements in a just and impartial manner.
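One widely used audit of the kind Nazer et al. (2023) recommend is a disparate impact check, which compares selection rates across applicant groups. The Python sketch below computes that ratio for hypothetical admission outcomes; the groups, decisions, and the 0.8 threshold (the "four-fifths rule" heuristic) are illustrative assumptions rather than part of this study's data.

```python
from collections import defaultdict

# Hypothetical admission outcomes: (applicant group, admitted?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = defaultdict(lambda: [0, 0])  # group -> [admitted, total]
for group, admitted in decisions:
    rates[group][0] += int(admitted)
    rates[group][1] += 1

selection = {g: admitted / total for g, (admitted, total) in rates.items()}
ratio = min(selection.values()) / max(selection.values())

print(selection)  # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
# A common audit heuristic (the "four-fifths rule") treats a ratio
# below 0.8 as a signal that the system needs closer review.
```

Such a check does not prove discrimination on its own, but running it regularly is a concrete form of the continuous monitoring and auditing the literature calls for.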
Besides, Baker and Hawn (2022) added that bias in artificial intelligence systems can also affect the grading and evaluation of students, where algorithms are employed to appraise their work or performance. For instance, an artificial intelligence grading system that is trained on a limited range of data may not be able to evaluate precisely the work of students from diverse cultural or linguistic backgrounds. This can result in unjust grading outcomes and reinforce pre-existing educational disparities (Doyle et al., 2023). Shams et al. (2023) emphasise the need to design AI systems with cultural sensitivity and an understanding of various learning styles. Furthermore, these systems must possess transparency and interpretability, enabling educators to comprehend the decision-making process and intervene when required (Simbeck, 2024). In order to mitigate possible biases and guarantee equitable assessment of all students, Fagbohun et al. (2024) added that it is advisable to include human supervision in AI-driven grading and evaluation procedures.
According to Chee and Sanmugam (2023), the heterogeneous population and educational environment of Malaysia present distinct challenges and opportunities for Malaysian institutions in tackling bias in AI-powered educational tools. The efficacy of bias reduction techniques in this particular setting relies on a thorough comprehension of the local socio-cultural dynamics that could impact the results of artificial intelligence. O’Connor and Liu (2023) indicate that universities should embrace a comprehensive strategy to mitigate bias, encompassing not just technological remedies but also policy measures, such as the development of inclusive curricula and the provision of teacher training on the ethical application of artificial intelligence. Moreover, continuous study and cooperation with international specialists can assist Malaysian institutions in maintaining a leading position in the ethical integration of artificial intelligence (Ariffin et al., 2023), thereby guaranteeing the effectiveness and fairness of their instructional tools. Thus, by placing fairness and transparency as top priorities, these institutions may effectively utilise the advantages of AI while reducing the potential for bias and fostering a more inclusive educational setting.
METHODOLOGY
Research Design
This study used a quantitative research design to investigate the ethical concerns related to the integration of AI tools in Malaysian public universities, focusing on data privacy, academic integrity, and algorithmic bias. The research was conducted through an online survey using Google Forms, which allowed for the collection of structured, analyzable data and insights into the ethical challenges faced by lecturers, students, and administrative staff when using AI tools in educational settings.
Sample and Population
The study included lecturers, students, and administrative staff from Universiti Teknologi MARA (UiTM) Melaka. A total of 228 respondents participated in the survey, comprising 124 lecturers (54.39%), 76 students (33.33%), and 28 administrative staff (12.28%). This diverse sample provided a comprehensive perspective on the ethical concerns of AI tools, representing both academic and administrative functions.
Instrumentation
The primary instrument for data collection was a structured online questionnaire divided into five key sections:
- Demographic Information: This section collected basic details about the respondents, including their role within the university, age, gender, and educational level, which was crucial for interpreting the responses to the ethical questions about AI.
- Usage of AI Applications: This section focused on respondents’ familiarity and interaction with AI tools, aiming to establish the level of engagement and awareness participants had with AI technologies in their educational or administrative environments.
- Data Privacy: This section examined concerns related to data privacy when using AI tools, including respondents’ confidence in AI systems’ ability to secure personal data, their views on how their data is collected and used, and their opinions on the need for stricter data privacy regulations in university AI systems.
- Algorithmic Bias: This section assessed perceptions of bias within AI systems, exploring the need for training on recognizing and mitigating bias among both faculty and students.
- Academic Integrity: The final section focused on the impact of AI tools on academic integrity, including whether respondents believed AI tools increased the potential for academic dishonesty, the necessity of guidelines for the ethical use of AI, and whether regulation was required to prevent AI misuse in academic work, such as plagiarism or cheating.
Each question used a Likert scale from 1 (strongly disagree) to 5 (strongly agree), enabling respondents to express varying levels of agreement with the statements. This format provided nuanced insights into the ethical challenges AI tools pose in educational settings and allowed for the examination of trends and patterns in respondents’ perceptions.
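For concreteness, the short sketch below shows how responses to one Likert statement can be tabulated into the percentage-per-scale layout reported in Tables 1.0 to 3.0; the response values are invented for illustration and do not come from the survey data.

```python
from collections import Counter

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for one questionnaire statement.
responses = [3, 4, 4, 5, 2, 3, 4, 3, 5, 4, 1, 3, 4, 4, 3]

counts = Counter(responses)
n = len(responses)
distribution = {scale: round(100 * counts.get(scale, 0) / n, 1)
                for scale in range(1, 6)}

print(distribution)
# e.g. {1: 6.7, 2: 6.7, 3: 33.3, 4: 40.0, 5: 13.3} -- the same
# percentage-per-scale layout used in the findings tables.
```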
FINDINGS
Demographic Information
Table 1.0: Demographic Information of Respondents (n = 228)

| Category | Frequency | Percentage (%) |
| --- | --- | --- |
| **Gender** | | |
| Male | 81 | 35.53 |
| Female | 147 | 64.47 |
| **Age** | | |
| 18 – 25 years old | 60 | 26.32 |
| 26 – 35 years old | 35 | 15.35 |
| 36 – 45 years old | 80 | 35.09 |
| 45 – 55 years old | 42 | 18.42 |
| 56 years old and above | 11 | 4.82 |
| **Educational Level** | | |
| Diploma | 47 | 20.61 |
| Bachelor’s Degree | 45 | 19.74 |
| Master’s Degree | 72 | 31.58 |
| PhD | 58 | 25.44 |
| Others | 6 | 2.63 |
| **Role in University** | | |
| Student | 76 | 33.33 |
| Lecturer | 124 | 54.39 |
| Administrative Staff | 28 | 12.28 |
| **Total** | **228** | **100** |
A total of 228 respondents participated in a survey aiming to examine ethical concerns related to the use of AI tools in Malaysian public universities. Most of the respondents were female (64.47%), while male respondents made up 35.53%. The largest proportion of respondents fell into the 36-45 age group (35.09%), followed by those aged 18-25 (26.32%). Respondents aged 45-55 made up 18.42% of the sample, while those aged 26-35 accounted for 15.35%. Only 4.82% of respondents were 56 or older. This diverse age distribution provides a broad perspective on the ethical concerns surrounding AI across different generational viewpoints.
In terms of educational qualifications, 31.58% of the respondents held a Master’s degree, while 25.44% had a PhD. Additionally, 20.61% of the respondents had a Diploma, and 19.74% held a Bachelor’s degree. A small proportion of respondents (2.63%) reported having other types of qualifications. This varied educational background ensures that the survey captures insights from individuals with diverse levels of academic experience.
Respondents’ roles within the university were also diverse. The majority of participants were lecturers, making up 54.39% of the sample. Students accounted for 33.33% of respondents, while administrative staff represented 12.28%. This distribution reflects the involvement of both academic staff and students, providing a comprehensive understanding of the ethical concerns surrounding AI integration from different university roles.
Usage of AI Application
Table 2.0: Usage of AI Application among Respondents (n = 228)

| Category | Frequency | Percentage (%) |
| --- | --- | --- |
| **How familiar are you with AI tools?** | | |
| Not at all familiar | 5 | 2.19 |
| Slightly familiar | 35 | 15.35 |
| Moderately familiar | 89 | 39.03 |
| Very familiar | 74 | 32.45 |
| Extremely familiar | 25 | 10.96 |
| **Have you used AI tools for educational purposes?** | | |
| Yes | 209 | 91.67 |
| No | 19 | 8.33 |
| **Which specific AI applications have you used or are you aware of being used in your university?** | | |
| AI chatbots (e.g., IBM Watson Assistant, Ada, ChatGPT) | 186 | 29.29 |
| Turnitin (plagiarism detection) | 141 | 22.20 |
| Grammarly (writing assistance) | 146 | 22.29 |
| Coursera or EdX (AI-based learning platforms) | 10 | 1.57 |
| Moodle with AI plugins (learning management system) | 5 | 0.79 |
| Google Classroom with AI features | 68 | 10.71 |
| IBM Watson (AI for research support) | 1 | 0.16 |
| Canvas with AI features (learning management system) | 54 | 8.50 |
| ProctorU (AI-powered exam proctoring) | 3 | 0.47 |
| Perplexity | 5 | 0.79 |
| Quillbot | 3 | 0.47 |
| Gemini | 2 | 0.31 |
| Gamma | 1 | 0.16 |
| Leonardo AI | 1 | 0.16 |
| Microsoft Image Creator | 1 | 0.16 |
| Design Process | 1 | 0.16 |
| Mid Journey | 1 | 0.16 |
| Not applicable | 6 | 0.94 |
The data regarding familiarity and usage of AI tools in Malaysian public universities indicates a high level of awareness and adoption among the respondents. A significant portion of participants (39.03%) reported being “moderately familiar” with AI tools, while 32.45% described themselves as “very familiar.” This suggests that most respondents have a solid understanding of AI technologies, with only a small percentage (2.19%) indicating that they were not familiar with AI at all. This general familiarity is an important factor in the widespread use of AI tools across universities, indicating that staff and students are well-equipped to navigate these technologies.
In terms of using AI for educational purposes, an impressive 91.67% of respondents confirmed that they have utilized AI tools in their academic or administrative tasks. This high percentage highlights the deep integration of AI technologies into the educational framework of Malaysian public universities. Only a small proportion (8.33%) indicated that they had not used any AI tools, further emphasizing the ubiquity of AI applications in these institutions.
Several AI tools were particularly notable in terms of their usage or awareness within the universities. The most prominent among them were AI chatbots, such as IBM Watson Assistant, Ada, and ChatGPT, with 29.29% of respondents mentioning these platforms. AI chatbots have proven useful in facilitating administrative support and student services, providing immediate responses to queries and streamlining communication within the university system.
Grammarly, an AI-powered writing assistant, was also widely used, with 22.29% of respondents acknowledging its presence. Grammarly helps students and staff enhance their writing by providing real-time feedback on grammar, spelling, and style, contributing to the overall quality of academic work. Similarly, Turnitin, a popular plagiarism detection tool, was recognized by 22.20% of respondents. Turnitin’s role in maintaining academic integrity is crucial, as it helps prevent plagiarism by ensuring that students’ work is original and properly cited.
Additionally, AI-enhanced learning management systems, such as Google Classroom (10.71%) and Canvas (8.50%), were also frequently mentioned. These platforms integrate AI features to streamline the learning experience, allowing educators to automate administrative tasks, manage coursework efficiently, and provide personalized learning opportunities for students. The frequent use of these tools reflects how AI is shaping the way education is delivered and managed.
In contrast, less commonly used AI applications, such as ProctorU (AI exam proctoring) and Quillbot (paraphrasing tool), were only mentioned by a handful of respondents. These findings suggest that while certain AI tools have become staples in the academic environment, others are more specialized or less frequently utilized.
In summary, the findings highlight the widespread adoption of AI tools in Malaysian public universities, with notable applications such as AI chatbots, Grammarly, and Turnitin playing a central role in both educational and administrative processes. The data demonstrates a growing reliance on AI to enhance learning, streamline operations, and maintain academic integrity, underscoring its transformative impact on higher education in Malaysia.
Table 3.0: Ethical concerns associated with the integration of AI tools in Malaysian public universities focusing on data privacy, academic integrity, and algorithmic bias (scale: 1 = strongly disagree to 5 = strongly agree)

| Statement | 1 (%) | 2 (%) | 3 (%) | 4 (%) | 5 (%) |
| --- | --- | --- | --- | --- | --- |
| **Data Privacy** | | | | | |
| AI tools that I used at my university adequately protect personal data privacy. | 3.1 | 5.3 | 54.8 | 34.2 | 2.6 |
| I am confident that my personal data is secure when using AI tools. | 5.3 | 13.6 | 50.0 | 28.1 | 3.1 |
| Universities should implement stricter regulations on data privacy when using AI tools. | 2.2 | 2.2 | 17.1 | 43.9 | 34.6 |
| I am concerned about how my personal data is collected and used by AI systems in educational settings. | 1.8 | 3.1 | 22.4 | 46.9 | 25.9 |
| I feel that AI tools can help maintain fairness in grading and assessments. | 2.6 | 7.0 | 31.6 | 44.3 | 14.5 |
| **Algorithmic Bias** | | | | | |
| AI tools used in my university are free from algorithmic bias. | 1.8 | 10.5 | 65.4 | 20.2 | 2.2 |
| The integration of AI tools can perpetuate existing biases in educational systems. | 1.3 | 9.6 | 39.0 | 45.2 | 4.8 |
| Universities should actively monitor and address biases in AI algorithms. | 1.3 | 2.2 | 19.7 | 54.4 | 22.4 |
| Training on recognizing and mitigating algorithmic bias should be provided to faculty and students. | 0.9 | 0 | 15.8 | 53.1 | 30.3 |
| My university is making sufficient efforts to address algorithmic bias in AI tools. | 1.8 | 11.4 | 55.3 | 28.1 | 3.5 |
| **Academic Integrity** | | | | | |
| AI tools have increased the potential for academic dishonesty (e.g., plagiarism, cheating) at our university. | 1.3 | 12.7 | 21.9 | 41.2 | 22.8 |
| There should be clear guidelines on the ethical use of AI in academic settings. | 0.9 | 1.3 | 10.1 | 46.5 | 41.2 |
| I believe that AI tools can compromise the authenticity of academic work. | 2.2 | 10.1 | 23.7 | 45.2 | 18.9 |
| AI tools should be regulated to ensure they do not facilitate academic dishonesty. | 2.2 | 3.1 | 16.7 | 45.2 | 32.9 |
| I feel that AI tools can help maintain fairness in grading and assessments. | 0.9 | 8.8 | 29.8 | 48.7 | 11.8 |
In the field of data privacy, 34.6% of respondents strongly agreed that universities should implement stricter regulations on data privacy when using AI tools. This suggests a widespread belief that current privacy protections are inadequate, especially given the sensitive nature of personal data handled by AI systems. Additionally, 25.9% of respondents strongly agreed that they are concerned about how their personal data is collected and used by AI in educational settings, indicating deep unease about data transparency and security. These findings imply that institutions must not only enhance their data protection measures but also clearly communicate how data is managed to build trust among users.
Moreover, 54.8% of respondents took a neutral stance on whether AI tools adequately protect personal data privacy, indicating a significant level of uncertainty or lack of knowledge about the measures in place. Similarly, 50% of respondents were neutral when asked if they are confident in the security of their personal data when using AI tools. These high levels of neutrality suggest that universities are not sufficiently communicating or demonstrating the robustness of their data privacy policies, leaving a gap in user confidence. This ambivalence points to an urgent need for universities to clarify their data protection measures to build trust and ensure users feel secure when using AI tools.
Regarding algorithmic bias, 30.3% of respondents strongly agreed that training on recognizing and mitigating bias should be provided to both faculty and students, highlighting the importance of education in combating AI bias. Respondents also strongly believed that universities should take responsibility for monitoring bias in AI systems, with 22.4% strongly agreeing that institutions need to actively address these biases. This reflects a recognition that while AI holds great potential, unchecked biases in these systems could reinforce existing inequalities, and it is up to universities to ensure fair and equitable outcomes through regular monitoring and education.
In addressing algorithmic bias, 65.4% of respondents were neutral when asked if AI tools at their university are free from bias, again reflecting uncertainty or a lack of awareness about how AI systems operate. This neutrality suggests that many users are unaware of the potential for bias or have not encountered any clear evidence of bias in AI tools. Therefore, it is critical for institutions to take a more active role in assessing the fairness of AI tools and making these evaluations visible to users.
In terms of academic integrity, the highest response (46.5%) was agreement with the statement that there should be clear guidelines on the ethical use of AI in academic settings. This overwhelming consensus reflects a strong demand for formalized policies that outline acceptable uses of AI, particularly as AI tools become more integrated into education. Without these guidelines, there is a risk that students and staff may misuse AI, either intentionally or unintentionally, leading to breaches in academic integrity. The high percentage of agreement on this statement signals that universities must prioritize the creation and enforcement of clear, comprehensive ethical standards to guide AI use in education.
CONCLUSION
In conclusion, the incorporation of Artificial Intelligence (AI) into Malaysian public institutions has great potential to improve educational results and operational effectiveness. However, it also brings into play a multifaceted set of ethical dilemmas that need to be appropriately addressed. The utilisation of AI-driven technologies can customise learning, optimise administrative procedures, and enhance overall educational experiences. However, it is crucial to carefully consider the drawbacks linked to data privacy, academic integrity, and algorithmic bias accompanying these advantages. The highly confidential character of student data requires strong data protection measures to avoid unauthorised access and possible breaches. Furthermore, the dependability and impartiality of AI mechanisms, notably in the domains of plagiarism identification and automated evaluations, need continuous examination to guarantee they do not unintentionally weaken the educational process or sustain prejudices.
The adoption of a balanced and responsible strategy by Malaysian public institutions is crucial as artificial intelligence (AI) continues to advance and become increasingly integrated into the educational environment. This encompasses the implementation of thorough ethical principles, the promotion of openness in AI software, and the cultivation of a culture that appreciates both technical advancement and the safeguarding of academic honesty. By aggressively confronting these issues, stakeholders can guarantee that AI functions as a tool to augment, rather than diminish, the quality and fairness of education in Malaysia.
ACKNOWLEDGEMENT
This research was supported by Entiti Kecemerlangan (EK) Media and Visual Communication (EK Tier 5) Universiti Teknologi MARA Malacca branch. The supporters had no role in the study design, data collection and analysis, the decision to publish, or the preparation of the manuscript.
Funding
This study is funded by the TEJA Grant 2024 from the Universiti Teknologi MARA Malacca branch (File no. PJI GDT2024/1-11).
REFERENCES
- Abbas, M. A. E., & Hameed, S. (2022). A Systematic Review of Deep Learning Based Online Exam Proctoring Systems for Abnormal Student Behaviour Detection. International Journal of Scientific Research in Science, Engineering and Technology, 9(4), 192-209.
- Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1), 1-14.
- Alam, A. (2022). Employing adaptive learning and intelligent tutoring robots for virtual classrooms and smart campuses: reforming education in the age of artificial intelligence. In Advanced computing and intelligent technologies: Proceedings of ICACIT 2022 (pp. 395-406). Singapore: Springer Nature Singapore.
- Aldahwan, N., & Alsaeed, N. (2020). Use of artificial intelligent in Learning Management Systems (LMS): a systematic literature review. International Journal of Computer Applications, 175(13), 16-26.
- Aldoseri, A., Al-Khalifa, K. N., & Hamouda, A. M. (2023). Re-thinking data strategy and integration for artificial intelligence: concepts, opportunities, and challenges. Applied Sciences, 13(12), 7082.
- Alzubaidi, L., Al-Sabaawi, A., Bai, J., Dukhan, A., Alkenani, A. H., Al-Asadi, A., … & Gu, Y. (2023). Towards Risk‐Free Trustworthy Artificial Intelligence: Significance and Requirements. International Journal of Intelligent Systems, 2023(1), 4459198.
- Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431-440.
- Ariffin, A. S., Maavak, M., Dolah, R., & Muhtazaruddin, M. N. (2023). Formulation of AI Governance and ethics framework to support the implementation of responsible AI for Malaysia. Res Militaris, 13(3), 2491-2516.
- Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 1-41.
- Barrett, L. (2022). Rejecting test surveillance in higher education. Mich. St. L. Rev., 675.
- Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education?. International Journal of Educational Technology in Higher Education, 17, 1-12.
- Bietti, E. (2019). Consent as a free pass: Platform power and the limits of the informational turn. Pace L. Rev., 40, 310.
- Bujang, S. D. A., Selamat, A., Krejcar, O., Mohamed, F., Cheng, L. K., Chiu, P. C., & Fujita, H. (2022). Imbalanced classification methods for student grade prediction: a systematic literature review. IEEE Access, 11, 1970-1989.
- Cath, C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080.
- Chee, K. N., & Sanmugam, M. (Eds.). (2023). Embracing Cutting-edge Technology in Modern Educational Settings. IGI Global.
- Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278.
- Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: the state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22.
- Dawson, A. G. (2023). Artificial intelligence and academic integrity. Aspen Publishing.
- Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896.
- Doyle, L., Easterbrook, M. J., & Harris, P. R. (2023). Roles of socioeconomic status, ethnicity and teacher beliefs in academic grading. British Journal of Educational Psychology, 93(1), 91-112.
- Fagbohun, O., Iduwe, N. P., Abdullahi, M., Ifaturoti, A., & Nwanna, O. M. (2024). Beyond traditional assessment: Exploring the impact of large language models on grading practices. Journal of Artificial Intelligence and Machine Learning & Data Science, 2(1), 1-8.
- Fedele, A., Punzi, C., & Tramacere, S. (2024). The ALTAI checklist as a tool to assess ethical and legal implications for a trustworthy AI development in education. Computer Law & Security Review, 53, 105986.
- Ferrara, E. (2024). The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness. Machine Learning with Applications, 15, 100525.
- Gao, P., Li, J., & Liu, S. (2021). An introduction to key technology in artificial intelligence and big data driven e-learning and e-education. Mobile Networks and Applications, 26(5), 2123-2126.
- George, B., & Wooden, O. (2023). Managing the strategic transformation of higher education through artificial intelligence. Administrative Sciences, 13(9), 196.
- Gray, G., Schalk, A. E., Cooke, G., Murnion, P., Rooney, P., & O’Rourke, K. C. (2022). Stakeholders’ insights on learning analytics: Perspectives of students and staff. Computers & Education, 187, 104550.
- Hu, C., Li, Y., & Zheng, X. (2022). Data assets, information uses, and operational efficiency. Applied Economics, 54(60), 6887-6900.
- Hwang, G. J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100001.
- Jiang, N., Liu, X., Liu, H., Lim, E. T. K., Tan, C. W., & Gu, J. (2023). Beyond AI-powered context-aware services: the role of human-AI collaboration. Industrial Management & Data Systems, 123(11), 2771-2802.
- Kamalov, F., Santandreu Calonge, D., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451.
- Khosravi, H., Shum, S. B., Chen, G., Conati, C., Tsai, Y. S., Kay, J., … & Gašević, D. (2022). Explainable artificial intelligence in education. Computers and Education: Artificial Intelligence, 3, 100074.
- Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388-409.
- Kuleto, V., Ilić, M., Dumangiu, M., Ranković, M., Martins, O. M., Păun, D., & Mihoreanu, L. (2021). Exploring opportunities and challenges of artificial intelligence and machine learning in higher education institutions. Sustainability, 13(18), 10424.
- Laurijssen, S. J., van der Graaf, R., van Dijk, W. B., Schuit, E., Groenwold, R. H., Grobbee, D. E., & de Vries, M. C. (2022). When is it impractical to ask informed consent? A systematic review. Clinical Trials, 19(5), 545-560.
- Limna, P., Jakwatanatham, S., Siripipattanakul, S., Kaewpuang, P., & Sriboonruang, P. (2022). A review of artificial intelligence (AI) in education during the digital era. Advance Knowledge for Executives, 1(1), 1-9.
- Lira, B., Gardner, M., Quirk, A., Stone, C., Rao, A., Ungar, L., … & Duckworth, A. L. (2023). Using artificial intelligence to assess personal qualities in college admissions. Science Advances, 9(41), 1-10.
- Luan, H., Geczy, P., Lai, H., Gobert, J., Yang, S. J., Ogata, H., … & Tsai, C. C. (2020). Challenges and future directions of big data and artificial intelligence in education. Frontiers in Psychology, 11, 580820.
- Mahmud, S. (Ed.). (2024). Academic Integrity in the Age of Artificial Intelligence. IGI Global.
- Mita, S. (2022). AI proctoring: Academic integrity vs. student rights. Hastings LJ, 74, 1513.
- Mulenga, R., & Shilongo, H. (2024). Academic integrity in higher education: Understanding and addressing plagiarism. Acta Pedagogia Asiana, 3(1), 30-43.
- Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X. C., Moukheiber, M., Khanna, A. K., … & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278.
- Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241.
- Nithiyanandam, N., Dhanasekaran, S., Kumar, A. S., Gobinath, D., Vijayakarthik, P., Rajkumar, G. V., & Muthuraman, U. (2022, August). Artificial intelligence assisted student learning and performance analysis using instructor evaluation model. In 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC) (pp. 1555-1561). IEEE.
- O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities. AI & SOCIETY, 1-13.
- Osborne, T. (2019). Not lazy, not faking: teaching and learning experiences of university students with disabilities. Disability & Society, 34(2), 228-252.
- Ouyang, F., & Jiao, P. (2021). Artificial intelligence in education: The three paradigms. Computers and Education: Artificial Intelligence, 2, 100020.
- Paraman, P., & Anamalah, S. (2023). Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils. AI & SOCIETY, 38(2), 595-611.
- Peters, U. (2022). Algorithmic political bias in artificial intelligence systems. Philosophy & Technology, 35(2), 25.
- Politou, E., Alepis, E., Virvou, M., & Patsakis, C. (2022). Privacy and data protection challenges in the distributed era (Vol. 26, pp. 1-185). Heidelberg, Germany: Springer.
- Prokopowicz, D. (2023). Opportunities and Threats to the Development of Artificial Intelligence Applications and the Need for Normative Regulation of this Development. International Journal of Legal Studies (IJOLS), 14(2), 95-129.
- Quach, S., Thaichon, P., Martin, K. D., Weaven, S., & Palmatier, R. W. (2022). Digital technologies: tensions in privacy and data. Journal of the Academy of Marketing Science, 50(6), 1299-1323.
- Ryan, K. (2020). An Overview of Digital Writing: Learning and Teaching. Gakuen, 4(954), 15-23.
- Samuelson, P. (1999). Privacy as intellectual property. Stan. L. Rev., 52, 1125.
- Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270). US Department of Commerce, National Institute of Standards and Technology.
- Shams, R. A., Zowghi, D., & Bano, M. (2023). AI and the quest for diversity and inclusion: A systematic literature review. AI and Ethics, 1-28.
- Simbeck, K. (2024). They shall be fair, transparent, and robust: auditing learning analytics systems. AI and Ethics, 4(2), 555-571.
- Singh, J. K. S. (2023). The values of an AI ethical framework for a developing nation: considerations for Malaysia. In Elgar Companion to Regulating AI and Big Data in Emerging Economies (pp. 115-134). Edward Elgar Publishing.
- Slimi, Z., & Carballido, B. V. (2023). Navigating the Ethical Challenges of Artificial Intelligence in Higher Education: An Analysis of Seven Global AI Ethics Policies. TEM Journal, 12(2), 590-602.
- Slusky, L. (2020). Cybersecurity of online proctoring systems. Journal of International Technology and Information Management, 29(1), 56-83.
- Thongprasit, J., & Wannapiroon, P. (2022). Framework of Artificial Intelligence Learning Platform for Education. International Education Studies, 15(1), 76-86.
- Varona, D., & Suárez, J. L. (2022). Discrimination, bias, fairness, and trustworthy AI. Applied Sciences, 12(12), 5826.
- Yang, W. (2022). Artificial Intelligence education for young children: Why, what, and how in curriculum design and implementation. Computers and Education: Artificial Intelligence, 3, 100061.
- Yigzaw, K. Y., Olabarriaga, S. D., Michalas, A., Marco-Ruiz, L., Hillen, C., Verginadis, Y., … & Chomutare, T. (2022). Health data security and privacy: Challenges and solutions for the future. Roadmap to Successful Digital Health Ecosystems, 335-362.
- Yuskovych-Zhukovska, V., Poplavska, T., Diachenko, O., Mishenina, T., Topolnyk, Y., & Gurevych, R. (2022). Application of artificial intelligence in education. Problems and opportunities for sustainable development. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 13(1Sup1), 339-356.