Integrating the Principles of Federal Constitution and Rukun Negara in AI Laws of Malaysia
Nor Ashikin Mohamed Yusof, Intan Sazrina Saimy, Siti Hasliah Salleh, Shadiya Baqutayyan
Department of Business Intelligence, Humanities, and Governance, Universiti Teknologi Malaysia
DOI: https://dx.doi.org/10.47772/IJRISS.2024.814MG0026
Received: 06 November 2024; Accepted: 14 November 2024; Published: 18 December 2024
ABSTRACT
Malaysia’s rapid adoption of Artificial Intelligence (AI) necessitates a framework that aligns with the nation’s core values enshrined in the Federal Constitution and Rukun Negara, which represent the foundation of its national identity. This article proposes a novel approach for regulating and governing AI in Malaysia by integrating these principles. The authors examine how specific applications of the Federal Constitution and Rukun Negara pillars can guide AI development. This includes designing AI systems that respect religious beliefs and ethical considerations (Belief in God), prioritize national security and well-being (Loyalty to King and Country), operate within the bounds of the Constitution (Supremacy of the Federal Constitution), and uphold transparency and accountability (Rule of Law). Additionally, it analyses how Courtesy and Morality can be translated into design principles that promote social harmony and responsible AI development. By integrating these principles, this framework safeguards responsible AI development within Malaysia. Furthermore, it serves as a valuable case study for other nations navigating the ethical and legal complexities of AI in the 21st century.
Keywords: Artificial Intelligence, Federal Constitution, Rukun Negara, AI Laws, ethics
INTRODUCTION
The rapid development of Artificial Intelligence (AI) presents a double-edged sword for nations: immense opportunities and unprecedented challenges. Malaysia, a vibrant country with a rich tapestry of cultures, is uniquely positioned to navigate this frontier. However, AI raises profound questions about ethics, governance, and how to ensure this powerful technology aligns with the nation’s core values.
This article proposes a novel approach to AI law formulation and development in Malaysia by integrating the principles enshrined in the Rukun Negara, the nation’s foundational philosophy established in 1970 [1]. The five principles of the Rukun Negara, namely Belief in God, Loyalty to King and Country, Supremacy of the Constitution, Rule of Law, and Courtesy and Morality, have fostered national unity. Their incorporation into AI law formulation and development would ensure Malaysia’s AI framework is not only technologically sophisticated but also ethically sound and reflective of the nation’s core values.
This article will delve into the specific ways the Federal Constitution and Rukun Negara principles can be applied to AI law development. We will explore how AI systems can be designed to respect religious beliefs and ethical considerations (Belief in God), prioritize national security and well-being (Loyalty), and operate within the bounds of the Malaysian Constitution (Supremacy of the Constitution). Additionally, we will discuss the importance of establishing transparent and accountable legal frameworks to govern AI development and deployment, upholding the rule of law in the digital age [2]. Finally, we will analyse how AI can be harnessed to promote social harmony and ethical conduct within Malaysian society.
BACKGROUND FACTS
AI encompasses a broad range of technologies that enable machines to simulate human intelligence. Machine learning, a subfield of AI, allows algorithms to learn from data and improve their performance on a specific task without explicit programming [3]. AI applications are already prevalent in various sectors, from facial recognition software to autonomous vehicles and financial trading algorithms [3, 4].
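To make the phrase “learn from data without explicit programming” concrete, the following is a minimal, self-contained sketch in Python: instead of hand-writing a rule, a simple model is fitted to a handful of examples and then used to make predictions. The dataset, threshold, and function name are hypothetical illustrations, not part of the cited literature.

```python
# Minimal sketch: a model "learns" a rule from examples instead of being
# explicitly programmed with one. Data, threshold, and names are hypothetical.
import numpy as np

# Hypothetical training data: hours of machine usage vs. observed failure (1) or not (0)
hours = np.array([100, 250, 400, 550, 700, 850], dtype=float)
failed = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# Fit a simple linear score by least squares: score = a * hours + b
a, b = np.polyfit(hours, failed, deg=1)

def predict_failure(h: float) -> bool:
    # Predict failure when the learned score crosses 0.5; the rule comes from the
    # fitted coefficients, not from logic written by the programmer.
    return a * h + b >= 0.5

for h in (300, 600):
    print(h, predict_failure(h))
```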
The potential benefits of AI for Malaysia are significant. AI can revolutionize industries, increase efficiency in public services, and drive innovation across various sectors [5]. However, the rapid development of AI also necessitates the creation of robust legal frameworks to ensure responsible development and deployment [5, 6].
Currently, Malaysia has no specific law or regulation dedicated to AI that directly addresses its development, deployment, or use. Instead, Malaysia attempts to address some aspects of AI through a combination of existing laws and regulations.
Until a dedicated AI law and accompanying regulations are enacted, this situation will persist and Malaysia will continue to apply the existing legal framework.
However, these existing laws were all designed in the pre-AI era. Notably, they were designed to address human activities and behaviours, not machine or digital processes. As such, they struggle to comprehensively grapple with the digital challenges thrown at them. Likewise, it is not easy to fit the complex and evolving nature of AI systems, processes, and activities into the rigid confines of the various provisions of existing laws and regulations [7-10].
The lack of a specific AI law could inadvertently raise ethical concerns [11, 12]. AI systems can perpetuate biases and discrimination if not developed and deployed responsibly. Without clear regulations, ensuring ethical AI development becomes a challenge. Additionally, the potential misuse of AI for malicious purposes, such as cyberattacks or autonomous weapons, necessitates robust safeguards [13].
This legislative gap leaves several crucial questions unanswered. Issues like data privacy, algorithmic bias, and liability in the case of AI-related accidents remain unaddressed by existing legal frameworks. For example, the vast amount of data collected for AI development raises concerns about data privacy and security. Existing laws may not provide adequate safeguards against potential breaches or misuse of personal data. In the event of an accident or malfunction caused by an AI system, determining liability becomes complex. Without clear guidelines, holding developers or users accountable can be difficult.
A. Application of Existing Laws to AI
Several existing laws are closely related to AI and are often used in dealing with AI systems or processes; they are discussed below.
A(1) Personal Data Protection Act (PDPA) 2010
The PDPA serves as the cornerstone of data privacy protection in Malaysia. Enacted to regulate the collection, use, and disclosure of personal data, the PDPA sets out strong core principles for data handling. For example, the PDPA 2010 requires an individual’s consent before personal data and information about them can be collected, disclosed, used, or shared. This principle is crucial for ensuring transparency and user control over their information [14].
The PDPA promotes the collection of only the personal data necessary for a specific purpose. This principle aims to minimise the amount of data collected and to prevent it from being used or re-used for purposes other than those originally intended, for example market profiling, training, and analysis [2], thereby reducing or mitigating privacy risks [15].
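As a minimal sketch of how the consent and purpose-limitation principles just described might be translated into an AI data pipeline, the following Python fragment gates data use on recorded consent and strips records down to the fields needed for a declared purpose. The record structure, purpose names, and function names are hypothetical and are not drawn from the PDPA itself.

```python
# Hypothetical sketch: gate data use on recorded consent and a declared purpose,
# reflecting the consent and purpose-limitation principles described above.
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    subject_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)  # purposes the individual agreed to

def collect_for_purpose(records, purpose, allowed_fields):
    """Return only records whose subjects consented to `purpose`,
    stripped down to the fields needed for that purpose (data minimisation)."""
    out = []
    for r in records:
        if purpose not in r.consented_purposes:
            continue  # no consent for this purpose -> do not use the record
        out.append({k: v for k, v in r.data.items() if k in allowed_fields})
    return out

records = [
    PersonalRecord("u1", {"name": "A", "income": 5000, "religion": "X"}, {"credit_scoring"}),
    PersonalRecord("u2", {"name": "B", "income": 7000, "religion": "Y"}, set()),
]
# Only u1 is used, and only the income field is retained for the model.
print(collect_for_purpose(records, "credit_scoring", allowed_fields={"income"}))
```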
The PDPA mandates data users to implement appropriate security safeguards to protect personal data from unauthorized access, disclosure, loss, or alteration [16]. This is crucial for ensuring the security of sensitive data from being used illegally.
Despite these strengths, the PDPA struggles to fully address the complexities of AI due to the unique technological capabilities of AI systems. The Act’s limitations become apparent when considering the specific challenges posed by AI-driven data collection and analysis.
1. Anonymized Data
The PDPA primarily focuses on protecting identifiable personal data, that is, data specifically associated with the individual concerned, for example name, age, religion, academic qualification, and gender. However, AI systems often utilize anonymized data sets that can still be used to re-identify individuals, a loophole not adequately addressed by the current legislation [18].
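A minimal sketch of the re-identification risk described above: two datasets that each look harmless on their own can be joined on quasi-identifiers (here postcode, birth year, and gender) to recover identities. All records, names, and field choices below are invented for illustration.

```python
# Hypothetical sketch of a linkage attack: an "anonymised" health dataset is
# re-identified by joining it with a public register on quasi-identifiers.
anonymised_health = [
    {"postcode": "50480", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "40150", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]
public_register = [
    {"name": "Aisyah", "postcode": "50480", "birth_year": 1985, "gender": "F"},
    {"name": "Farid",  "postcode": "40150", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def key(row):
    # Combination of quasi-identifiers; if unique, it acts like an identifier.
    return tuple(row[q] for q in QUASI_IDENTIFIERS)

by_key = {key(p): p["name"] for p in public_register}
for record in anonymised_health:
    name = by_key.get(key(record))
    if name:  # a unique quasi-identifier combination re-identifies the individual
        print(f"{name} -> {record['diagnosis']}")
```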
2. Bias in AI Systems
Bias can enter an AI system at several points: through the data it is trained on, through the design of the algorithms themselves, and through the way humans use the system’s outputs.
a. Data Bias
Occurs when the data used to train AI systems reflects existing societal biases. For example, if a loan approval AI is trained on historical data that disproportionately rejected loan applications from a particular demographic group, it might perpetuate this bias in future decisions [19].
b. Algorithmic Bias
Happens when bias is embedded within the algorithms themselves. This occurs when programmers have, consciously or unconsciously, made particular choices or assumptions during algorithm design. For instance, an algorithm designed to identify potential criminals might be biased against certain racial groups if the training data contained a higher representation of criminals from those groups [20].
c. Human Bias
Takes place through the way AI systems are used. If the developers or users of an AI system make decisions based on biased interpretations of the system’s outputs, it can perpetuate discriminatory outcomes [21].
These biases can manifest in various ways, impacting areas like loan approvals, job recommendations, criminal justice predictions, and even facial recognition software. If AI systems are perceived as biased, it can lead to a lack of trust in the technology and its outputs. This can hinder the adoption and societal benefits of AI. Moreover, biased algorithms raise significant ethical concerns. They can perpetuate societal inequalities and undermine the fundamental principles of fairness and justice [22].
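The following is a minimal sketch of the kind of audit that can surface the loan-approval bias described above: compare approval rates across demographic groups and flag a large gap. The decision log is invented, and the “four-fifths” threshold used here is only one common convention, not a legal standard drawn from this article.

```python
# Hypothetical sketch: audit an AI loan-approval log for disparate impact by
# comparing approval rates across groups (four-fifths rule used as a rough flag).
from collections import defaultdict

decisions = [  # invented audit log: (demographic group, approved?)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    flag = "POSSIBLE BIAS" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} -> {flag}")
```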
3. Data Sharing and Cross-border Transfers
The PDPA regulations on data sharing and cross-border transfers may not be sufficiently robust in the context of global AI development. As AI systems often involve collaboration across borders, clear guidelines are needed to ensure data protection during international transfers. For example, a Malaysian company might partner with a foreign AI developer, sharing anonymized customer data for training purposes. The PDPA might not have clear provisions to ensure the foreign company adheres to the same data protection standards as those mandated in Malaysia.
A(2) Communications and Multimedia Act (CMA) 1998 (amended 2002)
The CMA can be applied to regulate certain aspects of AI, particularly those related to online content and data protection. While the CMA is not specifically designed for AI and may not adequately address the complexities of the technology, it possesses some strengths that can be leveraged for a degree of protection against potential harms associated with AI, particularly in the online environment.
1. Content Moderation and Algorithmic Bias
The CMA empowers authorities to remove online content deemed harmful, offensive, or threatening public order [23]. This can be cautiously applied to address situations where AI-powered platforms generate or promote harmful content, such as hate speech or extremist ideologies.
It is crucial to recognize the limitations of the above approach. Content moderation can be subjective and raise concerns about freedom of expression. Additionally, the CMA’s focus on removing content does not directly address the root cause of algorithmic bias that might be generating such content in the first place.
AI goes far beyond online content. It encompasses a wide range of technologies, including machine learning algorithms that power applications like facial recognition, autonomous vehicles, and even financial trading systems. In this broader context, the CMA lacks specific provisions addressing AI, and its scope might not adequately cover the diverse ways AI interacts with the online environment.
2. User Protection and Data Privacy (to a limited extent)
The CMA includes provisions for consumer protection against online fraud and misleading advertising [24]. While not directly addressing AI, these provisions can be interpreted to offer some level of protection against malicious AI applications designed to defraud users online.
AI development often involves vast amounts of data collection. The CMA’s data protection measures might not be comprehensive enough to ensure user privacy in the context of AI. Many AI systems operate as “black boxes,” making it difficult to understand how they reach their decisions. A black box refers to an AI system whose internal workings and decision-making processes are opaque and difficult to understand. Although these systems often produce accurate results, the reasoning behind those results remains a mystery. This lack of transparency poses significant challenges in several ways.
a. Explainability
The “black box” of AI systems does not provide explanations for their outputs. It is unclear how the system arrived at a specific decision, making it difficult to assess its validity, fairness, or potential biases. For example, an AI system used for loan approvals might deny a loan application without providing any reason or explanation for the rejection. This lack of transparency can be frustrating for users and hinder trust in the technology [25].
b. Debugging and Improvement
If a “black box” AI system produces incorrect or biased results, it is difficult to pinpoint the source of the problem within the complex algorithms. This makes debugging and improving the system a challenging task. For example, an AI system used for facial recognition might misidentify individuals from certain ethnicities. Without understanding how the system arrived at these misidentifications, it’s difficult to address the underlying bias and improve its accuracy [26].
c. Ethical Concerns
The opacity of “black box” systems raises concerns about potential biases embedded within the algorithms. These biases might not be readily apparent but can lead to discriminatory outcomes. For example, an AI system used for hiring decisions might favour candidates with certain characteristics based on the historical data used for training. This could lead to biased hiring practices even if the developers were not aware of the bias within the system [26].
d. Accountability
If a “black box” AI system causes harm or makes an incorrect decision, it is very challenging to determine who is accountable. The developers might not be able to explain the system’s reasoning, making it difficult to assign responsibility for its actions. For example, an autonomous vehicle using a “black box” AI system might cause an accident; without understanding how the AI system made a particular decision, assigning blame between the developers, the manufacturer, and the driver becomes complex [25, 26].
The limitations of “black box” systems highlight the need for increased transparency in AI development. The CMA needs a holistic approach and clear regulations on ensuring transparency and explainability in AI systems.
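As a minimal sketch of one way to open a “black box” in the sense called for above, the fragment below perturbs each input of an opaque scoring function one at a time and reports how much the output moves, giving a rough per-decision explanation. The scoring function, features, and nudge sizes are invented stand-ins, not a real credit model or a technique prescribed by the CMA.

```python
# Hypothetical sketch: a simple per-decision explanation for an opaque model,
# obtained by perturbing each input feature and measuring the change in score.
def opaque_loan_score(applicant: dict) -> float:
    # Stand-in for a black-box model; its internals are assumed unknown to the user.
    return 0.002 * applicant["income"] - 0.03 * applicant["missed_payments"] + 0.01 * applicant["years_employed"]

def explain(model, applicant, deltas):
    """Report how the score moves when each feature is nudged by a small amount."""
    base = model(applicant)
    contributions = {}
    for feature, delta in deltas.items():
        perturbed = dict(applicant, **{feature: applicant[feature] + delta})
        contributions[feature] = model(perturbed) - base
    return base, contributions

applicant = {"income": 3000, "missed_payments": 4, "years_employed": 2}
base, contributions = explain(opaque_loan_score, applicant,
                              deltas={"income": 500, "missed_payments": 1, "years_employed": 1})
print(f"score = {base:.2f}")
for feature, change in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {change:+.2f} for one nudge")
```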
A(3) Sale of Goods Act 1957 (SOGA)
SOGA 1957 governs the sale of goods and consumer protection in Malaysia. The SOGA generally establishes a framework for holding sellers and manufacturers liable for defective goods that cause harm to consumers [27]. The SOGA promotes consumer protection by ensuring a certain level of quality and safety in the goods sold. It offers some potential strengths in addressing product liability concerns [28].
This principle can be extended to AI-powered products, particularly when such a product malfunctions and causes injury, harm, or property damage to a consumer. In such cases, the SOGA might be used to hold the manufacturer or seller accountable.
Bearing in mind that the SOGA was enacted in 1957, it was not designed for, and is hardly adequate to address, the complexities of AI. The SOGA might not be able to determine liability and accountability where the line between AI software, in the form of AI algorithms, and the goods or physical devices containing those algorithms is blurred. The SOGA is effective in situations where defects in goods have caused injury, harm, or damage to a person or their property [29]. It is far less applicable to situations where AI systems cause harm due to unforeseen consequences or biases in the algorithms [2, 30].
A(4) Law of Torts
Generally, tort law addresses civil wrongs and provides compensation for damage caused by negligence or breach of duty. Positively, the legal principles of negligence and duty of care offer a foundation for building legal arguments related to AI. The law of torts might be used to seek compensation from developers or operators when a faulty autonomous vehicle or faulty programming causes harm [30]. Existing case law on negligence can be referenced when establishing liability in those situations. As such, tort law can potentially adapt to new technologies, allowing some level of application to emerging AI-related issues and to harm caused by AI systems in Malaysia.
Like the SOGA, the law of torts has limitations in the context of AI: establishing a “duty of care” between developers, operators, and users of AI systems can be challenging, and demonstrating a clear causal link between an AI system’s actions and the harm caused can be complex [31].
While these existing laws offer some potential avenues for addressing AI-related issues, their limitations highlight the need for a comprehensive legal framework specifically designed for AI. Currently, the existing laws are either too broad or too narrow to address the complexities of AI. In either case, they could lead to an overly restrictive or ineffective approach.
Recognizing this gap, policy initiatives like the National Data Governance Framework 2020 and the National Policy on Industry 4.0 2017 have hinted at future regulatory developments. It is important to remember that existing laws are not a one-size-fits-all answer to the challenges of AI. The complex and evolving nature of AI systems may not easily fit into their provisions and requirements. Developing a comprehensive legal framework for AI is crucial for fostering trust in the technology and ensuring its responsible advancement. AI laws should address issues of transparency, accountability, liability, and human oversight. They should also establish ethical guidelines for data collection, use, and bias mitigation [25].
As a rapidly evolving field, AI needs flexible laws and regulations designed to adapt to new technologies and applications without constant revision. Although Malaysia can use the existing legal frameworks and laws as a foundation, it is important that the new AI regulatory framework stands independently of them. It is vital to ensure the framework is based on the principles of transparency and accountability, focusing on the unique challenges posed by AI development and deployment in Malaysia [11].
FEDERAL CONSTITUTION, PRINCIPLES OF RUKUN NEGARA AND AI
The proposed AI legislation should be deeply rooted in the fundamental principles enshrined in the Federal Constitution and Rukun Negara. These two pillars, acting as the bedrock of Malaysia’s national identity, shape its aspirations and core values. The AI legal framework must acknowledge, respect, and reflect the essence of these documents, including their objectives, intentions, and core principles.
The Federal Constitution and Rukun Negara demonstrate remarkable adaptability. They can evolve to accommodate emerging technologies like AI, while still serving the nation’s needs and upholding the seven principles of responsible AI. Therefore, ensuring harmony and synergy between AI deployment, regulations, and the principles outlined in these foundational documents is imperative.
A. Federal Constitution
The Federal Constitution of Malaysia, the supreme law of the land, outlines the fundamental principles that govern the nation. It stands above all other laws, nullifying any legislation that contradicts its spirit or written provisions [32].
As Malaysia ventures into the transformative world of AI, the Federal Constitution becomes even more critical. Its embedded values and principles directly influence the legal framework for AI development. However, the fundamental and legally guaranteed rights and freedoms enshrined in the Constitution could be potentially compromised by the deployment and use of AI systems.
A(1) Fundamental Rights and Freedom
The Constitution enshrines fundamental rights and freedoms for Malaysian citizens, including freedom of expression, speech, religion, and assembly. These rights are at risk of being infringed on a continuous basis in the era of AI.
AI systems used for facial recognition or social media monitoring could raise concerns about privacy [33] and freedom of expression [34], while AI systems used for content moderation or social media surveillance could enable censorship and stifle free expression.
Although privacy is not explicitly mentioned in the Federal Constitution, judicial precedent has interpreted provisions such as the protection against unlawful arrest and search as safeguarding individual privacy [35, 36]. The enactment of the PDPA in 2010 reinforced this position.
AI development and deployment often rely on the collection of vast amounts of data for training and operational purposes. This raises further concerns about privacy violations where data is collected, stored, or used without proper consent or safeguards.
A(2) Equality and Non-discrimination
The Constitution guarantees equality before the law and prohibits discrimination based on factors like race, religion, or gender [37]. However, AI systems can become biased because they learn from the data that is collected, keyed in, and used for training. If the training data reflects societal biases, the AI system can inherit those biases and perpetuate discriminatory outcomes.
The algorithms used to develop AI systems can also be biased. Programmers might encode unconscious biases into the algorithms or choose metrics that favor certain demographics unintentionally [38].
A(3) Due Process and the Rule of Law
The Constitution upholds the principles of due process and the rule of law, and guarantees the right to a fair trial, including the right to be heard and the presumption of innocence [39].
If AI systems are used in decision-making processes that impact individuals’ rights, such as loan approvals or criminal justice decisions, fairness and transparency are crucial. These principles should be reflected in AI laws, ensuring transparency and fairness in decision-making processes involving AI systems [40].
In that context, it is paramount that AI systems and the law deploy and promote the development of “explainable AI”, which allows individuals to understand how AI decisions about them are made. This fosters accountability and ensures AI systems are used fairly and justly [41].
A(4) Federalism and the Distribution of Powers
The Federal Constitution establishes a federal system of government, dividing legislative powers between the federal and state governments. AI development and regulation might involve collaboration between federal and state authorities [42].
The AI law should be formulated in a way that respects the distribution of powers outlined in the Constitution. This ensures a coordinated approach to AI regulation across different levels of government.
Overall, the Federal Constitution provides a strong foundation for developing a robust and ethically sound AI legal framework in Malaysia. By aligning AI regulations with the core principles enshrined in the Constitution, Malaysia can ensure that AI development and deployment benefit all Malaysians while safeguarding fundamental rights and promoting responsible innovation.
For example, the AI law would need to ensure these technologies are used responsibly and in accordance with constitutional rights. The AI law should build upon existing data protection laws and emphasize strong safeguards for user privacy. This could include robust consent mechanisms for data collection, clear data retention policies, and the right to access and delete personal data used in AI systems.
Regulations must be established to prevent and mitigate algorithmic bias. This could involve promoting the development of “fairness-aware” AI algorithms and requiring regular audits of AI systems to identify and address potential biases. The AI law should establish safeguards to prevent algorithmic bias and ensure fair treatment for all Malaysians.
B. Rukun Negara
The Rukun Negara was launched in 1970 following a period of social and racial unrest among the three major races of Malaysia in 1969 [43-45]. Unlike the Federal Constitution, the Rukun Negara is not a legal document but a set of principles that forms the cornerstone of national identity.
The Rukun Negara outlines five core objectives for the nation to strive for, as stated in the document’s preamble.
“……Whereas our country Malaysia supports the ideals of: (i) achieving greater unity among the whole community; (ii) preserving a democratic way of life; (iii) creating a just society in which the prosperity of the country is shared fairly and equitably; (iv) guaranteeing a liberal approach towards its rich and diverse cultural traditions; and (v) building a progressive society that will make use of modern science and technology”.
The above objectives are achievable through the five core principles of the Rukun Negara: (i) Belief in God, (ii) Loyalty to King and Country, (iii) Supremacy of the Constitution, (iv) Rule of Law, and (v) Courtesy and Morality.
While not legally binding, the Rukun Negara acts as a complementary text to the Federal Constitution, strengthening and illuminating its spirit. Its principles not only guide the interpretation of various constitutional provisions, particularly those concerning fundamental liberties, but also act as a moral compass in various aspects of Malaysian life, including political, economic, religious, and social spheres.
Moving with the times and the advancement of technology, these principles can also be applied to guide responsible AI development in Malaysia.
B(1) Rukun Negara for Responsible AI Laws
The five objectives of the Rukun Negara may appear to be traditional values, yet they hold profound relevance for the development and application of AI in Malaysia. On reflection, there are many shared and overlapping values between the Federal Constitution, the Rukun Negara, and responsible AI.
Malaysia’s proposed AI laws can achieve the goal of responsible AI by aligning with both the Federal Constitution and the Rukun Negara principles as shown in Table 1 below.
TABLE I MAPPING OF PRINCIPLES AND PROPOSED AI LAW
Principles | Details
1. Responsible AI | Fair and Inclusive AI
Rukun Negara | Rule of Law; Supremacy of the Constitution; Courtesy and Morality
Federal Constitution | Equal rights / non-discriminatory treatment
Proposed AI Laws | The Federal Constitution guarantees equal rights and non-discriminatory treatment of citizens, where everyone has the right to due process of the law. Only through this can the objectives of “Creating a Just Society” and “Securing a Liberal Way of Life” in the preamble of the Rukun Negara be achieved. AI laws should ensure that AI applications do not perpetuate discrimination, emphasize inclusivity and equal opportunity, and promote access to AI benefits for all Malaysians.
2. Responsible AI | Transparent and Accountable AI
Rukun Negara | Rule of Law
Federal Constitution | Supremacy of the Constitution; equal rights, including the right to due process and the right to a fair trial
Proposed AI Laws | The legal system demands transparency and accountability in AI systems and their deployment. This is particularly crucial for AI decision-making processes, especially when used in legal contexts, for example in analysing evidence. Transparency ensures individuals understand how AI-based decisions are made, while clear lines of accountability guarantee recourse if necessary.
3. Responsible AI | Socially Responsible AI
Rukun Negara | Belief in God; Courtesy and Morality
Federal Constitution | Freedom of speech and expression
Proposed AI Laws | Content moderation by AI can potentially conflict with the above principles of the Federal Constitution and the Rukun Negara. Thus, the proposed AI law needs to regulate AI applications to prevent censorship or limitations on these fundamental rights. To encourage respect for diverse beliefs and values amongst Malaysians, the AI law should also encourage the development of AI systems that consider societal and cultural impacts, fostering a harmonious future for all Malaysians.
CONCLUSION
As Malaysia embraces AI, navigating the ethical and legal landscape is critical. Unchecked AI development could lead to societal harms such as discrimination or privacy violations. To prevent this, Malaysia must prioritize the development of a robust AI Bill that upholds ethical principles and safeguards individual rights. Fortunately, Malaysia has a strong foundation for responsible AI in its Federal Constitution and Rukun Negara. These documents, while not explicitly addressing AI, enshrine core values like equality and justice. By anchoring AI laws in these principles, AI development can strengthen national unity, promote social justice, and respect cultural diversity. The Rukun Negara’s principles act as a compass, guiding AI development towards a more just, inclusive, and equitable society for all Malaysians. This framework balances the need for technological progress with the protection of individual liberties, ensuring a future where AI serves the collective good.
ACKNOWLEDGMENT
The authors would like to extend their gratitude to the Jawatankuasa Nasional Blockchain dan Kecerdasan Buatan, Ministry of Science, Technology and Innovation Malaysia (MOSTI), for assistance in obtaining the primary data for this article.
REFERENCES
- Abdul Rahman M.A (2020). Rukun Negara: Pillars of Malaysia’s peace and development. New Straits Times.
- Mittelstadt B.D., Allo P, Taddeo M., Wachter S, Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society:3:2 https://doi.org/10.1177/2053951716679679
- Mohsen S, Arezoo B, Dastres R, (2023). Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics,Volume 3.; 54-70, ISSN 2667-413, https://doi.org/10.1016/j.cogr.2023.04.001.
- Can M. (2022). Atlas of AI: power, politics, and the planetary costs of artificial intelligence. International Affairs, Volume 98, Issue 2, March; 783–784, https://doi.org/10.1093/ia/iiac034
- Lee CS, Tajudeen FP, (2020). Usage and Impact of Artificial Intelligence on Accounting. Asian Journal of Business and Accounting 13(1). pg 213-239. https://doi.org/10.22452/ajba.vol13no1.8
- Boukherouaa, E. B., Shabsigh, M. G., AlAjmi, K., Deodoro, J., Farias, A., Iskender, E. S., & Ravikumar, R. (2021). Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance. International Monetary Fund
- Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Pand C, Somani BK: (2022). Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front. Surg. 9:862322. DOI: 10.3389/fsurg.2022.862322
- Birruntha S. (2023). Finance field plagued by data breach. New Strait Times, June 6. https://www.nst.com.my/business/2023/06/916970/finance-field-plagued-data-breach.
- Buiten, M.C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation. p. 43-44.
- Abdul Manap, N. & Abdullah, A. (2020). Regulating artificial intelligence in Malaysia: The two-tier approach. UUM Journal of Legal Studies.; 11(2). p. 183-201.
- (2022). Artificial Intelligence in Malaysian Legal System. INSAF. Vol. 39, No. 1, p. 2-24
- Reiling AD, (2020). Courts and Artificial Intelligence. International Journal for Court Administration; 11; no.2, p. 1-10. DOI:10.36745/ijca.345
- Scherer MU (2016) Regulating artificial intelligence systems: risks, challenges, competencies and strategies. Harv J Law Technology 29(2):353–400. Cambridge: Harvard Law School
- Matthew U. Scherer, (2015). Regulating Artificial Intelligence Systems. Risks, Challenges, Competencies, and Strategies’ 29 SSRN Electronic Journal http://jolt.law.harvard.edu/articles/pdf/v29.pdf
- Section 13 of PDPA (2010). https://www.pdp.gov.my/jpdpv2/laws-of-malaysia-pdpa/personal-data-protection-act-2010/?lang=en
- Section 30 of PDPA (2010). https://www.pdp.gov.my/jpdpv2/laws-of-malaysia-pdpa/personal-data-protection-act-2010/?lang=en
- Section 12 of PDPA (2010). https://www.pdp.gov.my/jpdpv2/laws-of-malaysia-pdpa/personal-data-protection-act-2010/?lang=en
- Ohm, P, (2010). Broken promises of privacy: Disclosing secrets in a data-driven world. University of Chicago Law Review: 77:4. p. 1701-1757.
- Schmorrow D.D. Fidopiastis, C.M. editors (2020). Explainable Artificial Intelligence: What Do You Need to Know? In: Augmented Cognition. Theoretical and Technological Approaches. HCII. Lecture Notes in Computer Science: Springer, Cham. Vol 12196. DOI:10.1007/978-3-030-50353-6_20
- Lipton, Z.C. (2016). The flipside of fairness: A case for the need for explainable artificial intelligence. ICML Workshop on Explainable AI. DOI: 10.1007/978-3-031-04083-2_18
- Ali S, Abuhmed T, El-Sappagh S, Muhammad K, Alonso-Moral J, Confalonieri R, Guidotti R, Del Ser J, Díaz-Rodríguez N, Herrera F, (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence, Information Fusion. Volume 99: 101805, DOI:10.1016/j.inffus.2023.101805.
- Brundage M., Avin S., Clark J., Toner H., Eckersley P., Garfinkel B., Dafoe A., Scharre P., Zeitzoff T., Filar B., Anderson H., Roff H., Gregory C., Steinhardt A, Steinhardt J, Flynn C, Ó hÉigeartaigh S, Beard S, Belfield H, Farquhar S, Lyle C, Crootof R, Evans O, Page M, Bryson J, Yampolskiy R, Amodei D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. 1802.07228, arXiv:1802.07228, DOI:10.48550/arXiv.1802.07228
- Section 211 of Communications and Multimedia Act 1998 [CMA/ Act 588] https://www.mcmc.gov.my/skmmgovmy/media/General/pdf/Act588bi_3.pdf).
- Section 233(1) of Communications and Multimedia Act 1998 [CMA/ Act 588] https://www.mcmc.gov.my/skmmgovmy/media/General/pdf/Act588bi_3.pdf).
- Jobin, Anna I, Vayena, Farid, (2011). The ethics of artificial intelligence. Nature.: 501:7468: p.189-191.
- Edenberg E and Wood A. (2023). Disambiguating Algorithmic Bias: From Neutrality to Justice, Association for Computing Machinery, New York, NY, USA. 2023, DOI:10.1145/3600211.3604695
- Section 16, Section 20, section 31 of Sale of Goods Act 1957 [Act 178] http://www.commonlii.org/my/legis/consol_act/soga19571989203
- Section 20, Section 31 of Sale of Goods Act 1957 [Act 178] http://www.commonlii.org/my/legis/consol_act/soga19571989203
- Mittelstadt, B.D. (2019). Principles alone cannot guarantee ethical AI, Nature Machine Intelligence, 2019: volume 1: p. 501 – 507, https://api.semanticscholar.org/CorpusID:207888555
- State v Loomis 881 N.W.2d 749 (Wis. 2016) 754 (US).
- Čerka P, Grigienė J, Sirbikytė G. (2015). Liability for damages caused by artificial intelligence, Computer Law & Security Review: Volume 31: Issue 3, p. 376-389, DOI:10.1016/j.clsr.2015.03.008.
- Article 4 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Article 5 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Article 10 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Islam MT, Munir AB, Karim ME, Revisiting the Right To Privacy In The Digital Age: A Quest To Strengthen The Malaysian Data Protection Regime Journal of MCL 48 (1) p. 50-79
- Sivarasa Rasiah v Badan Peguam Malaysia & Anor, (2010). 2 Malaysian Law Journal 377.
- Article 8 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Goncharova A., Murach D. (2020). Artificial Intelligence as A Subject Of Civil Law. Knowledge, Education, Law, Management: 3:31, vol. 1, p. 153-159. ISSN 2353-8406. DOI:10.51647/kelm.2020.3.1.26
- Dai Z. (2023). The Subjective Status of Artificial Intelligence in Civil Law. Science of Law Journal, Vol. 2 Num. 9: p. 18-27 DOI:10.23977/law.2023.020903
- Article 4 and Article 7 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Verheij B. (2020). Artificial intelligence as law.; 28: p.181–206, DOI:10.1007/s10506-020-09266-0
- Article 77 of Federal Constitution. https://www.bing.com/search?q=federal+constitution+of+malaysia&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=federal
- Abdul Rahman M.A. (2020). Rukun Negara: Pillars of Malaysia’s peace and development. New Straits Times.
- Hai J.C. Nawi NF (2012). Principles of Public Administration: Malaysian Perspectives. Kuala Lumpur: Pearson.: ISBN 978-967-349-233-6
- Hai J.C. (2007). Fundamental of Development Administration. Selangor: Scholar Press. ISBN 978-967-5-04508-0