International Journal of Research and Innovation in Social Science



 Artificial Intelligence: Its Legal Accountability on Online Transactions in Malaysia

Azlina Mohd Hussain1, Hazlina Mohd Padil1, 3, Mohd Syahril Ibrahim1, Dr. Nor Syamaliah Ngah2

1Faculty of Law, Universiti Teknologi MARA, Cawangan Negeri Sembilan Kampus Seremban, 70300 Seremban, Negeri Sembilan, Malaysia

2Faculty of Administrative Science & Policy Studies, Universiti Teknologi MARA, Cawangan Negeri Sembilan Kampus Seremban, 70300 Seremban, Negeri Sembilan, Malaysia

3Accounting Research Institute, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia

DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000630

Received: 12 September 2025; Accepted: 20 September 2025; Published: 23 October 2025

ABSTRACT

In recent years, there has been extensive reliance on Artificial Intelligence (AI) in promoting e-commerce. Businesses and commercial ventures find it easier, more profitable, and more cost-effective to install and instruct AI to carry out their business and commercial undertakings. However, the law on the accountability of AI is still relatively new. To date, most legislation relates to conventional online transactions, and liability is still mostly absorbed by the party that chooses to employ AI in its transactions. The very definition of AI connotes that, to a certain extent, AI is equipped to function like a human. This paper therefore aims to examine whether businesses and commercial ventures in e-commerce can shift some of their liabilities to AI. The research adopted a doctrinal legal research methodology, supplemented by comparative legal analysis and a qualitative review of secondary sources through a literature review of the relevant laws and cases, to answer the question of the extent to which current Malaysian law provides for AI liability and accountability in online transactions. The findings show that businesses and commercial ventures in e-commerce cannot place blame and liability on AI: the legal framework on online transactions involving AI raises unresolved questions of accountability, particularly in contract law, consumer law, and data governance, and fault ultimately falls on the operator of the AI. This study can aid legislators, policy-makers, businesses, and the courts in making laws and regulations that address the continuously evolving challenges of AI and its role in developing e-commerce.

Keywords: Artificial Intelligence (AI), online transactions, accountability

INTRODUCTION

The fast development of Artificial Intelligence (AI) has significantly impacted the world economy. Malaysia, as one of the countries that has kept abreast of technological advancement, has joined the race towards global digital wealth through AI. Malaysia has experienced vast commercial improvement through the implementation of AI in its online transactions market. Online transaction platforms include Shopee, Lazada, GrabPay, and Zalora, as well as online banking services that promote safe and efficient online transactions. The reason for the extensive reliance on AI is that AI has managed to infiltrate commercial transactions formerly monopolized by humans (Dodda, 2023; Al-saeedi et al., 2024). AI's winning ticket is its speed and its near-zero error rate in transactions. AI's strength in providing customer service far surpasses that of humans through automated chatbots offering 24/7 service. It also has algorithmic recommendation systems that suggest products tailored to consumer preferences, fraud detection systems that monitor suspicious payments, and algorithmic engines that dynamically price goods and services (Lin, 2024).

The Malaysian Communications and Multimedia Commission (MCMC) has reported that Malaysia's e-commerce market surged to RM1.09 trillion in 2023, growth believed to be fuelled largely by AI-driven personalization and automation (The Edge Market, 2024). The Malaysian government's policy towards AI is enshrined in the Malaysia Digital Economy Blueprint (MyDIGITAL) and the National Artificial Intelligence Roadmap 2021–2025, both of which explicitly recognize AI as a critical enabler of Malaysia's ambition to become a regional leader in the digital economy by 2030 (Ministry of Science, Technology & Innovation, 2021). AI is thus not merely a technological innovation but a structural driver of Malaysia's commercial future.

The existence of AI comes with its own set of challenges. Conventional laws, especially those concerning contracts, consumer protection, commercial transactions, intellectual property, and data governance, are intended to bind only humans or natural persons. The current laws do not reach AI. Thus, the question increasingly being asked is: when AI makes an error or causes harm, who is legally responsible? AI systems can operate independently, without direct human intervention in their decisions. Unlike a human, AI cannot be held accountable for misrepresentation under the Consumer Protection Act 1999 (CPA 1999), for questions of a contract's enforceability under the Contracts Act 1950 (CA 1950), or for the misuse of personal data under the Personal Data Protection Act 2010 (PDPA 2010).

To date, the laws in existence hold the humans or natural persons involved in programming and instructing AI accountable for AI's errors or misuse. This liability rests on the understanding that AI is incapable of making its own decisions without human intervention. However, a question arises as to how judgment could be executed against AI if AI were to be held accountable. A further question is whether AI can be misused to abuse the law. Currently, there is no specific legislation in Malaysia addressing the liability and accountability of AI or intervention in AI's decisions. The existing legislation is designed to bind natural persons or corporate entities in legal transactions. The CA 1950, for example, governs the formation of contracts based on an offer from the proposer, acceptance by the acceptor, and other elements essential to the validity of contracts, such as consideration and intention. However, the CA 1950 is silent on contracts concluded by non-human actors such as AI.

Malaysian courts have not yet ruled directly on AI contracting, but jurisprudence suggests a strict interpretation of intention and consensus. In the Singapore case of Chwee Kin Keong v Digilandmall.com Pte Ltd (2005), the court held that contracts concluded through an online automated system were binding, subject to exceptions such as mistake and misrepresentation. While this is a Singaporean case, it is persuasive in Malaysia due to the similarity in contract law traditions. The ruling implies that Malaysian courts may uphold AI-assisted contracts, but uncertainty remains where the AI itself initiates or concludes the agreement. For instance, if a chatbot on an e-commerce platform, fully controlled and operated by AI, mistakenly confirms a promotional price far lower than intended, and bona fide buyers complete their transactions based on this error, the mistake cannot be recanted: the retailer or supplier that instructed the AI programming for the commercial advertisement would be held liable for misrepresentation under the CA 1950. Likewise, if a point-scoring system is driven by AI, and the AI system illegally favours one purchaser over another by denying access to certain financial privileges and/or services, the business owner who programmed the AI point-scoring system can be sued for discrimination and unfair financial practices under banking and financial laws (Bailey, 2019).

The CPA 1999, the PDPA 2010, the Communications and Multimedia Act 1998 (CMA 1998), and the Computer Crimes Act 1997 (CCA 1997) do not give full protection against AI error and abuse, nor do they provide comprehensive solutions to AI's complicated issues of accountability. For example, the CPA 1999 aims to protect consumers from misleading or unfair practices by a "supplier" or "trader." AI will not be liable if it inaccurately advertises the price of a product or makes a false representation; the business using the AI will be held liable and accountable for the "misconduct". In respect of personal data management, the PDPA 2010 provides extensive protection for the personal information of individuals, but it has no provisions governing AI's involvement in profiling, automated decision-making, or algorithmic discrimination. This is a dangerous proposition, as online transactions almost always deploy AI systems to process massive amounts of users' data. The omission presents a legal gap in AI's accountability for the abuse of customers' personal data. The CMA 1998 and the CCA 1997 protect against cyber risks. While these laws address hacking, illegal intrusions, and misuse of computer systems, they do not provide sufficient protection for consumers where AI is the perpetrator.

The above instances all point to a regulatory void, i.e., the absence of specific laws addressing AI's liability and accountability in legal transactions. Most of the laws now applied to AI were enacted before its advent. Thus, these laws fail to provide sufficient protection against AI's self-directed misdemeanours, wrongful intrusions, and abuse. Malaysian jurisprudence clearly lacks guidance as to whether AI liability exists and, if so, how to make AI accountable for independent decisions that extend beyond or operate separately from human control. If this legal omission is left unresolved, it could corrode faith in Malaysia's digital economy, expose consumers to financial ruin, and hamper AI progress and innovation. Thus, this study aims to investigate whether businesses and commercial ventures in e-commerce can shift some of their liabilities to AI. It also seeks to answer the question of the extent to which current Malaysian law provides for AI liability and accountability in online transactions.

LITERATURE REVIEW

Electronic commerce (e-commerce) is directly related to purchase and sale operations through remote data transmission, advertising through the Internet, and conducting business in electronic form (Kolodin et al., 2020). The growth of online businesses is also synonymous with the growth of digital trading platforms such as Shopee, Lazada, and TikTok. E-commerce differs from the traditional market (Grover & Teng, 2001; Gupta, 2014; Kıngır & Gezer, 2020). Online shoppers do not have the luxury of physically inspecting products or verifying their quality. They rely on the information available on the online platform, and this reliance rests mostly on the good reputation of the vendor and the vendor's reassurance to consumers of the quality of the products or services (Gao, 2015). The situation worsens if the online shopper receives the wrong goods, or if there is no delivery at all despite payment having been made to the vendor; it may also be that the goods are delivered but are defective in some way, and the online shopper is unable to take further action because the cooling-off period has elapsed (Amin & Mohd Nor, 2013).

Past studies have shown that the term 'goods', as provided in the United Kingdom (UK) Sale of Goods Act 1979, does not include digital content such as software, online games, music files, or film files that are not kept or disseminated in the form of a compact disc (CD) or digital versatile disc (DVD). The same study found that the UK Consumer Rights Act 2015 needed to be revised in the context of consumer law regarding digital content (Shukri et al., 2024). The current law in Malaysia relating to the sale of goods is the Sale of Goods Act 1957 (SOGA 1957). Amendments have to be made to this law, since the old rules and norms governing the sale of goods are becoming obsolete and incapable of addressing legal issues emerging from e-commerce transactions (Nuruddeen & Yusof, 2021). Trade digitalisation combines digital trade with the physical delivery of goods and services to consumers. Although digital trade makes it easy for businesses to launch goods and services in large quantities to digitally connected consumers, the system remains unregulated, unmanaged, and uncontrolled (Leal-Arcas et al., 2023).

Malaysia has passed laws to enable the deployment of innovative technology in running the economy. The laws relevant to commercial transactions are the Digital Signature Act 1997 (DSA 1997), CCA 1997, Copyright (Amendment) Act 1997 (read together with the Copyright Act 1987) (CA 1987), Telemedicine Act 1997 (TA 1997), CMA 1998, E-Commerce Act 2006 (ECA 2006), and PDPA 2010 (Oseni & Omoola, 2019). Past studies contend that although the ECA 2006 was meant to ensure fair trading principles and avoid legal disputes, the complicated progress and hidden dangers of the internet make the ECA 2006 insufficient to combat the problems arising from digital trading (Wang, 2025).

Malaysian contract law recognises the agency theory, under which an agent may bind a principal to contracts. Scholars like Kerr et al. (2020) argue that AI systems, when acting autonomously, resemble "agents" but without consciousness or legal personality. Section 182 of the CA 1950 defines an "agent" as a person employed to do any act for another. This human-centric definition excludes AI, raising doubt as to whether AI can be treated as an "agent" in law. E-commerce is also related to products. Past studies suggested that AI should be treated as a product, since this provides a workable model, though critics argued that it ignores the element of autonomous decision-making (Malgieri & Pasquale, 2022).

The European Parliament proposed granting AI systems a limited form of "electronic personhood" in 2017; the proposal triggered significant academic debate and was ultimately rejected. Past studies argued that AI should be recognised as a legal subject, while critics maintain that accountability must remain with human actors (Solum, 2020). The European Union Artificial Intelligence Act 2024 represents the most comprehensive attempt to regulate AI. The Act predominantly focuses on regulating high-risk AI systems, with a smaller segment addressing limited-risk AI systems (PS, 2024). Unlike the European Union, the United States has no single AI law, but agencies like the Federal Trade Commission (FTC) regulate unfair or deceptive AI practices under consumer law. The FTC has issued guidance warning businesses against using AI algorithms in ways that discriminate or mislead consumers (Selbst & Barocas, 2022).

Past studies revealed that trust positively and significantly influences personalised customer experiences on digital trading platforms. Trust in AI's transparency and reliability encourages customers to interact confidently with digital trading platforms (Kanapathipillai et al., 2024), contributing to reliable and engaging e-commerce transactions. A lack of knowledge, and of knowledgeable workers, in e-commerce is one of the major problems in the development and implementation of e-commerce (Salleh et al., 2020). AI-enhanced digital platforms need control mechanisms enforced by the authorities (Tahir & Othman, 2024).

RESEARCH METHODOLOGY

This study employed a doctrinal legal research methodology, supplemented by comparative legal analysis and qualitative review of secondary sources. Doctrinal legal research forms the backbone of this study, involving a black-letter law analysis of statutes, case law, and regulations relevant to online transactions and AI in Malaysia. This is appropriate given that the research objective is to assess the adequacy of Malaysia’s existing legal framework in addressing issues of AI accountability in online transactions.

The methodology also draws upon comparative perspectives from the European Union (EU), United States (US), and Singapore to contextualise Malaysia’s position and extract lessons for reform. Finally, it integrates analysis of policy documents, tribunal practice, and academic commentary, reflecting the multi-dimensional nature of AI governance.

FINDINGS AND DISCUSSION

Legal Literature on AI in Online Transactions

There is a need to identify and advise on laws governing the liability and accountability of AI-driven online transactions. A lacuna clearly exists in Malaysian legislation on the liability and accountability of AI directly involved in online transactions, and this research has the potential to address that lacuna and propose solutions. Although Malaysian courts have not directly addressed the issue of AI liability, significant references have been made to, and recognition given of, the position of AI in AI-driven online transactions. This study analysed the Malaysian statutes and regulatory framework as depicted in Table 4.1.

Table 4.1: List of Relevant Legislation Applicable to Online Transactions

1. Contracts Act 1950 (CA 1950). Objective: governs contracts in general, including those formed online. Research purpose: to examine contract formation through AI-driven systems, especially provisions on consent, mistake, and agency.

2. Sale of Goods Act 1957 (SOGA 1957). Objective: applies to the sale of goods, whether transacted online or offline. Research purpose: to examine the definition, legal implications, and liability of AI-driven transactions in respect of goods, implied conditions/warranties, digital goods, and AI-generated products.

3. Digital Signature Act 1997 (DSA 1997). Objective: addresses the legal validity of digital signatures used in online transactions. Research purpose: to evaluate the authenticity and sufficiency of AI-generated or AI-verified digital signatures and the challenges of AI exploitation in forging, manipulating, or misusing digital signatures in online transactions.

4. Consumer Protection Act 1999 (CPA 1999). Objective: the main legislation for consumer protection, including e-commerce transactions; it ensures that consumers are protected when engaging in online purchases. Research purpose: to analyse liability for misrepresentation, unfair contract terms, and defective digital products or services.

5. Electronic Commerce Act 2006 (ECA 2006). Objective: provides the legal framework for e-commerce in Malaysia, recognizing the validity of contracts formed through electronic messages; section 7(1) specifically addresses the formation of contracts via electronic means, stating that a contract formed electronically is valid, binding, and enforceable. Research purpose: to assess the recognition of online contracts concluded via automated systems.

6. Consumer Protection (Electronic Trade Transactions) Regulations 2012. Objective: issued under the CPA 1999, these regulations impose specific requirements on e-commerce traders, such as providing accurate and sufficient information to consumers. Research purpose: to evaluate consumer protection mechanisms regarding informed consent, disclosure obligations, dispute resolution, and liability in AI-mediated transactions.

7. Personal Data Protection Act 2010 (PDPA 2010). Objective: protects personal data against unauthorised or fraudulent use, disclosure, or transfer, and promotes confidence and trust in e-commerce and digital transactions. Research purpose: to evaluate adequacy in governing AI profiling, automated decision-making, and consent.

8. Communications and Multimedia Act 1998 (CMA 1998). Objective: safeguards the public interest against harmful AI-generated content, viz. misinformation or deepfakes, cybersecurity breaches, and malicious attacks. Research purpose: to analyse enforcement challenges in regulating AI-driven platforms, including misinformation, cybersecurity, privacy, digital consumer protection, algorithmic content moderation, automated decision-making, liability for AI-generated communications, and data governance.

9. Computer Crimes Act 1997 (CCA 1997). Objective: provides legal mechanisms to safeguard information integrity, security, and trust in digital communications and transactions. Research purpose: to examine enforcement challenges to computer crimes facilitated by AI technologies, including issues of attribution, liability, and evidentiary standards.

The CPA 1999 ensures protection against unfair practices, defective goods, and misleading representations. While it does not mention AI, consumer tribunals have consistently prioritised consumer rights over business technicalities. Digital intermediaries in consumer markets pose a risk of misrepresentation, which is both ethically and commercially unacceptable; such misleading conduct violates human dignity and diminishes individual autonomy. Growing concerns about AI-related harms have led to calls for more targeted regulation, including new laws and ethical principles emphasizing transparency. While transparency is crucial to prevent misleading AI, Paterson (2022) argues that consumer protection insights are essential to determine who benefits and how: merely offering more information is not enough to improve market-wide outcomes. In the absence of specific regulations, strong demands from regulators for transparency are necessary to ensure accountability from companies using AI in consumer decision-making (Paterson, 2022).

There is a need to strengthen consumer protection: firstly, by expanding the scope of the CPA 1999 to include AI-driven misrepresentation, algorithmic unfairness, and automated refusals of services; secondly, by mandating transparency in AI commerce, where businesses that deploy AI in consumer transactions would be required to disclose AI involvement or, at a minimum, comply with AI disclosure rules.

The PDPA 2010 protects personal data in commercial transactions. Section 6 mandates consent for data processing, but the Act lacks explicit provisions for automated profiling or AI-driven decision-making. Literature highlights this as a critical gap, since e-commerce platforms in Malaysia routinely use AI for targeted advertising and dynamic pricing. Without reform, consumers may face algorithmic discrimination without adequate remedies. Both CCA 1997 and CMA 1998 criminalise misuse of computer systems, including hacking, fraud, and unauthorized access. While applicable to AI misuse (e.g., AI-driven fraud), neither statute addresses liability for autonomous AI actions.

The laws in other jurisdictions

Singapore adopts a pragmatic, industry-oriented framework. Its Model AI Governance Framework 2020 emphasizes explainability, fairness, and accountability in AI deployment. Rather than imposing rigid rules, it provides voluntary principles and industry best practices. For online transactions, this promotes responsible AI without stifling innovation, offering a useful model for Malaysia as a similarly trade-dependent economy. Unlike Malaysia, which stretches the CPA 1999 provisions, the European Union Digital Services Act 2022 imposes duties on platforms to ensure fairness in algorithmic operations.

Cases related to AI

The Federal Court in the case of Dream Property Sdn Bhd v Atlas Housing Sdn Bhd (2015) stressed the binding nature of contractual obligations, underscoring that parties are bound by agreements unless vitiated by mistake or misrepresentation. Similar issues may arise where AI miscommunication occurs.

In another case of Yong Yoke Lin v Fung Yit Leng [2018] MLJU 1216, the High Court reaffirmed the importance of intention and consensus in contract formation, principles that could be challenged when AI initiates transactions without direct human volition.

Civil and Criminal Liability for AI Misconduct

AI errors (e.g., recommending hazardous products, failing fraud detection) raise tort issues. Under negligence principles, claimants must prove duty, breach, causation, and damage. A challenge arises in attributing breach when harm results from an AI’s autonomous decision. Malaysian courts have not yet confronted this, but likely approaches include imputing liability to the deploying business under vicarious liability and treating AI errors as a failure in duty of care in system design or oversight.

Statutes such as the CCA 1997 and CMA 1998 address misuse of digital systems. If AI is hijacked (e.g., via malware), liability attaches to human actors, not the AI. However, where AI autonomously commits acts (e.g., spamming, denial of service), statutes lack clarity on whether intent/knowledge can be imputed to system operators.

LIMITATION OF STUDY

Current research on Artificial Intelligence (AI) systems in online transactions focuses mainly on operational and technical faults such as data mismatches and algorithmic anomalies. If a better solution to AI's legal accountability in e-commerce is to be found, research should be conducted in deeper legal, theoretical, and ethical contexts, addressing the difficulties arising from AI personhood, algorithmic accountability, and liability frameworks, which are important for creating more consistent and reliable laws in the e-commerce environment.

Firstly, the evolving nature of e-commerce demands a vigorous discussion around liability and accountability in cases of AI-mediated errors. As transactions become increasingly reliant on sophisticated algorithms, establishing clear liability frameworks for developers and operators of these AI systems becomes crucial. For instance, the notion of “duty of care” for AI developers is necessary to ensure that systems designed to engage in automated decision-making are held to a standard that reflects the potential impacts of their inaccuracies on end-users (Chawla & Kumar, 2021). Regulatory discussions must also consider whether certain AI systems possess legal personhood capable of being held liable for errors. Engaging with legal theory in this way could lead to frameworks that do not currently exist, thus aligning with technological advancements and ensuring accountability.

Secondly, emerging discussions around AI personhood highlight the potential for algorithms to possess a degree of agency that raises unique liability questions. Do these systems deserve a form of legal recognition or protection? Numerous publications discuss the implications of algorithmic behaviours in e-commerce settings, specifically the importance of protecting consumer rights against unintended biases or decision-making errors that algorithms might propagate (Tagi et al., 2022; Cebeci et al., 2022). By framing these issues within the increasing prevalence of digital marketplaces and AI usage, scholars can contribute to an ongoing dialogue aimed at developing comprehensive regulatory structures that take technological evolution into consideration (Costa & Castro, 2021).

Thirdly, as digital commerce continues to expand, the necessity for interdisciplinary approaches that amalgamate insights from economics, technology, and law becomes relevant. The rise of dynamic digital marketplaces challenges traditional business practices and necessitates a rethinking of existing regulatory measures (Jain & Arya, 2021). For instance, a study has highlighted the need to investigate how automated systems can effectively facilitate fair pricing practices in e-commerce, yet this requires regulatory frameworks that account for algorithm-driven decisions without compromising consumer protections (Ayob et al., 2021). The synthesis of these diverse perspectives could lead to substantive changes in how laws adapt alongside digital commerce innovations, offering a more resilient structure for governance.

Lastly, addressing the intersection of AI and digital commerce also opens avenues for deeper inquiries into ethical considerations. Issues surrounding privacy, data usage, and informed consent in AI-driven transactions need careful contemplation. The urgency for clear legal mechanisms protecting users in the event of algorithmic failures is further magnified in contexts where personal data is an integral component of decision-making (Song & Liu, 2020). Fostering a culture of transparency and ethical standards in AI deployment can mitigate risks and cultivate trust in these digital platforms.

CONCLUSION

The research finds that Malaysia's existing framework recognises AI in online contracting via the ECA 2006 and protects consumers and data subjects indirectly via the CPA 1999 and the PDPA 2010. Nonetheless, the current legal framework fails to address accountability for autonomous AI actions. The legal literature and the research findings so far indicate a reluctance to recognise the independence of AI, with the result that businesses and commercial ventures must take responsibility for their AI's actions. It would be desirable for the law to give some form of protection to businesses and commercial ventures against independent decisions made by AI; however, laws recognizing the independence of AI could also be abused by business and commercial enterprises to escape liability. AI will continue to grow, and laws need to be in place to control and regulate its use and dominance in e-commerce. The findings show that businesses and commercial ventures in e-commerce cannot shift their liabilities to AI: the law is still loose, and the operator of the AI remains accountable for any mistakes the AI makes.

The study found that AI in online transactions raises unresolved questions of accountability, particularly in contract law, consumer law, and data governance, with fault ultimately falling on the operator of the AI. Global developments provide models Malaysia can draw from, but local statutes remain fragmented and outdated. Malaysian scholars have highlighted the gaps but have yet to propose a comprehensive accountability framework. Contract, consumer, and data protection statutes only partially address the risks. Without reform, Malaysia risks uncertainty in e-commerce trust and consumer protection. Malaysia can adopt a hybrid approach combining legal reform with sectoral soft-law guidance to ensure accountability, fairness, and consumer trust in AI-driven commerce.

RECOMMENDATION

This study recommends that future research compare legislation in countries where AI laws have evolved significantly to address the issues of AI legal personhood, accountability, and liability in the digital commerce industry.

One such piece of legislation is the European Union's AI Act (EU AI Act), which officially took effect in 2024 and aims to address challenges posed by AI across various sectors, including e-commerce. A critical comparison between the EU AI Act and other jurisdictions, alongside actionable policy recommendations, can help identify potential reforms to close existing accountability gaps in AI-driven commercial environments.

The EU AI Act establishes a comprehensive framework for the governance of AI technology by categorizing AI applications based on risk levels: unacceptable, high, and minimal (Guadamuz, 2024). This risk-based approach requires organizations deploying high-risk AI to adhere to stringent requirements, including transparency and accountability measures (Zhong, 2024). In contrast, regulations in other jurisdictions, such as the U.S. Algorithmic Accountability Act (US AAA), adopt a more flexible strategy that emphasizes innovation while aiming to mitigate risks associated with AI (Mökander et al., 2022). This divergent approach raises essential questions about the efficacy of regulatory structures in achieving responsible AI utilization without stifling technological progression.

One of the critical areas needing reform, as highlighted by the EU AI Act, is the assignment of accountability for errors arising from AI systems in e-commerce (Marchenko & Энтин, 2022). As AI technologies increasingly facilitate significant decisions in online transactions, establishing clear legal frameworks regarding liability becomes paramount. The Act includes provisions that impact risk management strategies, yet businesses may require further guidance on operationalizing these requirements (Schuett, 2023). This ambiguity could be addressed through uniform guidelines that delineate the duties and obligations of AI developers, as well as regulatory bodies, ensuring that firms can navigate compliance effectively.

Moreover, a significant discussion in current AI governance revolves around AI personhood and whether certain AI systems should possess legal standing as accountable entities (Arora et al., 2025). While the EU AI Act focuses on human-centric regulation aimed at protecting fundamental rights, it may benefit from explicitly addressing the ethical implications of AI decision-making and the notion of personhood. Examining other frameworks emerging in jurisdictions like Canada and the UK could yield useful insights into how these concepts are integrated into their respective AI regulations (Reusken, 2024).

To close the accountability gap effectively, several actionable steps are recommended for legislators, businesses, and regulators. First, ongoing training and workshops to familiarise stakeholders with the provisions of the AI Act can enhance compliance and promote a culture of ethical AI deployment (Zhong, 2024). Second, collaboration with international regulatory bodies could foster a more uniform approach to AI governance by sharing best practices and harmonising standards across jurisdictions, thus mitigating regulatory fragmentation (Łabuz, 2024). Third, regulatory sandboxes could allow businesses to experiment with AI technologies in a controlled environment, enabling compliance with regulatory requirements while fostering innovation (Kera, 2023).

In conclusion, adopting suitable provisions from the AI legislation of other countries would greatly assist Malaysia in developing its own AI regulations, providing a foundational structure for addressing accountability in Malaysia’s AI-driven e-commerce. However, continuous learning from other jurisdictions, proactive engagement with multiple stakeholders, and clear, detailed guidelines will also be needed to refine and implement effective reforms. Establishing collaborative frameworks that prioritise both ethical AI practice and responsibility can position Malaysia at the forefront of AI-driven e-commerce and its regulation globally.

REFERENCES

  1. Al-Kemawee, J. (2024). Artificial Intelligence and the Law: Legal Frameworks for Regulating AI and Ensuring Accountability. Utu Journal of Legal Studies (UJLS), 1(1), 18-29.
  2. Al-saeedi, L. A. E., Shakir, F. J., Hasan, F. K., Shayea, G. G., Khaleel, Y. L., & Habeeb, M. A. (2024). Artificial Intelligence and Cybersecurity in Face Sale Contracts: Legal Issues and Frameworks. Mesopotamian journal of Cybersecurity, 4(2), 129-142.
  3. Amin, N., & Mohd Nor, R. (2013). Online shopping in Malaysia: Legal Protection for E-consumers. European Journal of Business and Management, 5(24), 79-86.
  4. Arora, A., Saboia, L., Arora, A., & McIntyre, J. (2025). Human-centric versus state-driven. International Journal of Intelligent Information Technologies, 21(1), 1–13. https://doi.org/10.4018/ijiit.367471
  5. Ayob, A., Yakob, N., Ja’, R., & Afar, N. (2021). E-commerce adoption in ASEAN: Testing on individual and country-level drivers. International Journal of Business Environment, 12(1), 18. https://doi.org/10.1504/ijbe.2021.112108
  6. Bailey, K. (2019). The Ethics of AI in Finance: How to Detect and Prevent Bias. Corporate Finance Institute. https://corporatefinanceinstitute.com/resources/data-science/ai-ethics-in-finance-detect-prevent-bias/
  7. Cebeci, S., Nari, K., & Ozdemir, E. (2022). Secure e-commerce scheme. IEEE Access, 10, 10359–10370. https://doi.org/10.1109/access.2022.3145030
  8. Chawla, N., & Kumar, B. (2021). E-commerce and consumer protection in India: The emerging trend. Journal of Business Ethics, 180(2), 581–604. https://doi.org/10.1007/s10551-021-04884-3
  9. Costa, J., & Castro, R. (2021). SMEs must go online—E-commerce as an escape hatch for resilience and survivability. Journal of Theoretical and Applied Electronic Commerce Research, 16(7), 3043–3062. https://doi.org/10.3390/jtaer16070166
  10. ‌Dodda, A. (2023). NextGen Payment Ecosystems: A Study on the Role of Generative AI in Automating Payment Processing and Enhancing Consumer Trust. International Journal of Finance (IJFIN)-ABDC Journal Quality List, 36(6), 430-463.
  11. Gao, L. (2015). Understanding consumer online shopping behaviour from the perspective of transaction costs (Doctoral dissertation, University of Tasmania).
  12. Grover, V., & Teng, J. T. (2001). E-commerce and the information market. Communications of the ACM, 44(4), 79-86.
  13. Guadamuz, A. (2024). The EU’s artificial intelligence act and copyright. The Journal of World Intellectual Property, 28(1), 213–219. https://doi.org/10.1111/jwip.12330
  14. Gupta, A. (2014). E-Commerce: Role of E-Commerce in today’s business. International Journal of Computing and Corporate Research, 4(1), 1-8.
  15. Jain, V., & Arya, S. (2021). An overview of electronic commerce (e-commerce). Journal of Contemporary Issues in Business and Government, 27(3). https://doi.org/10.47750/cibg.2021.27.03.090
  16. Kanapathipillai, K., Singkaravalah, L. M., Balam, M. S., & Nararajan, S. (2024). The future of personalised customer experience in e-commerce: Decoding the power of AI in building trust, enhancing convenience, and elevating service quality for Malaysian consumers. European Journal of Social Sciences Studies, 10(5).
  17. Kera, D. (2023). Sandboxes as “trading zones” for engaging with AI regulation, ethics, and the EU AI act: How to reclaim agency over the future? SocArXiv. https://doi.org/10.31235/osf.io/59qna
  18. Kerr, A., Barry, M., & Kelleher, J. D. (2020). Expectations of artificial intelligence and the performativity of ethics: Implications for communication governance. Big Data & Society, 7(1), 2053951720915939.
  19. Kıngır, S., & Gezer, Y. (2020). Revolution in e-commerce and marketing a contrast between traditional and current marketing practices. Güncel Pazarlama Yaklaşımları ve Araştırmaları Dergisi, 1(1), 20-30.
  20. Kolodin, D., Telychko, O., Rekun, V., Tkalych, M., & Yamkovyi, V. (2020, March). Artificial intelligence in E-commerce: Legal aspects. In III International Scientific Congress Society of Ambient Intelligence 2020 (ISC-SAI 2020) (pp. 96-102). Atlantis Press.
  21. Łabuz, M. (2024). Deep fakes and the Artificial Intelligence Act—An important signal or a missed opportunity? Policy & Internet, 16(4), 783–800. https://doi.org/10.1002/poi3.406
  22. Leal-Arcas, R., Al Damer, L., Al Hokail, H., Al Saud, S. A., Alshaikh, S., Al Saud, S. F., Al Muhanna, S., Al Saud, L. F., Alaiban, N., & Alsaud, M. (2023). The Digitalization of Trade and Artificial Intelligence: A Pandora’s box. Penn St. JL & Int’l Aff., 12, 1.
  23. Lin, A. K. (2024). The AI Revolution in Financial Services: Emerging Methods for Fraud Detection and Prevention. Jurnal Galaksi, 1(1), 43-51.
  24. Malgieri, G., & Pasquale, F. (2022). From transparency to justification: toward ex ante accountability for AI. Brooklyn Law School, Legal Studies Paper, (712).
  25. Marchenko, A., & Энтин, М. (2022). Artificial intelligence and human rights: What is the EU’s approach? Digital Law Journal, 3(3), 43–57. https://doi.org/10.38044/2686-9136-2022-3-3-43-57
  26. Martin-Bariteau, F., & Pavlovic, M. (2021). AI and contract law. Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021).
  27. Ministry of Science, Technology & Innovation (MOSTI). (2021). National Artificial Intelligence Roadmap 2021–2025 [Review of National Artificial Intelligence Roadmap 2021–2025]. https://aipalync.org/storage/documents/main/air-map-playbook-overall-19102021-rtg_1713947002.pdf
  28. Mökander, J., Juneja, P., Watson, D., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. the EU Artificial Intelligence Act: What can they learn from each other? Minds and Machines, 32(4), 751–758. https://doi.org/10.1007/s11023-022-09612-y
  29. Nuruddeen, M., & Yusof, Y. (2021). A comparative analysis of the legal norms for e-commerce and consumer protection. Malaysian Journal of Consumer and Family Economics, 26, 22-41.
  30. Oseni, U. A., & Omoola, S. O. (2019). Banking on ICT. Fintech in Islamic Finance: Theory and Practice.
  31. Paterson, J. M. (2022). Misleading AI: Regulatory strategies for algorithmic transparency in technologies augmenting consumer decision-making. Loy. Consumer L. Rev., 34, 558.
  32. PS, D. A. (2024). The Need for a Global Regulatory Framework for Artificial Intelligence: Implications of the European Union’s Artificial Intelligence Act 2024.
  33. Reusken, G. (2024). United Kingdom ∙ Striking a balance: UK’s pro-innovation approach to AI governance in light of EU adequacy and the Brussels effect. Journal of AI Law and Regulation, 1(1), 155–159. https://doi.org/10.21552/aire/2024/1/19
  34. Salleh, F., Yatin, S. F. M., Radzi, R. M., Kamis, M. S., Zakaria, S., Husaini, Zaini, M. K.  & Rambli, Y. R. (2020). Malaysian’s new digital initiative to boost e-commerce–where we are. Journal of Academic Research in Business and Social Sciences, 10(11), 1138-1154.
  35. Schuett, J. (2023). Risk management in the Artificial Intelligence Act. European Journal of Risk Regulation, 15(2), 367–385. https://doi.org/10.1017/err.2023.1
  36. Selbst, A. D., & Barocas, S. (2022). Unfair artificial intelligence: how FTC intervention can overcome the limitations of discrimination law. U. Pa. L. Rev., 171, 1023.
  37. Shukri, M. H. M., Ismail, R., & Markom, R. (2024). Exploring the Relationship between Consumer Protection and Product Liability: Civil and Islamic Perspectives. Malaysian Journal of Consumer and Family Economics, 32, 177-95.
  38. Solum, L. B. (2020). Legal personhood for artificial intelligences. In Machine ethics and robot ethics (pp. 415-471). Routledge.
  39. Song, P., & Liu, Y. (2020). An XGBoost algorithm for predicting purchasing behaviour on e-commerce platforms. Tehnički Vjesnik – Technical Gazette, 27(5). https://doi.org/10.17559/tv-20200808113807
  40. Tagi, M., Tajiri, M., Hamada, Y., Wakata, Y., Xiao, S., Ozaki, K., … & Hirose, J. (2022). Accuracy of an artificial intelligence–based model for estimating leftover liquid food in hospitals: Validation study. JMIR Formative Research, 6(5), e35991. https://doi.org/10.2196/35991
  41. Tahir, M. Z. A. M., & Othman, R. (2024, November). A Future Study on Social Media Use With the Emergence of Ai in Malaysia and Indonesia. In 2024 16th International Conference on Knowledge and System Engineering (KSE) (pp. 268-273). IEEE.
  42. The Edge Market. (2024). Malaysia’s E-Commerce Revenue To Reach RM1.65 Tril By 2025. Komunikasi.gov.my. https://www.komunikasi.gov.my/en/public/news/21743-malaysia-s-e-commerce-revenue-to-reach-rm1-65-tril-by-2025
  43. ‌Wang, Z. (2025). Malaysia’s E-Commerce Legal Landscape: Status, Problems and Future.
  44. Wuling, Y., Saripan, H., & Mahmood, A. (2024). Civil Liability of E-commerce Platforms as Online Intermediaries: A Comparative Study Between China and Malaysia. Asian Journal of Law and Governance, 6(4), 40-54.
  45. Zhong, H. (2024). Implementation of the EU AI act calls for interdisciplinary governance. AI Magazine, 45(3), 333–337. https://doi.org/10.1002/aaai.12183

Statutes and Cases

Chwee Kin Keong v Digilandmall.com Pte Ltd [2005] 1 SLR(R) 502 (Singapore)

Dream Property Sdn Bhd v Atlas Housing Sdn Bhd [2015] 2 MLJ 441

Communications and Multimedia Act 1998

Computer Crimes Act 1997

Consumer Protection Act 1999

Contracts Act 1950

Personal Data Protection Act 2010

Yong Yoke Lin v Fung Yit Leng [2018] MLJU 1216
