Integrating Explainable Artificial Intelligence into Malaysia’s Medical Device Regulatory Framework: A Preliminary Analysis
Nurus Sakinatul Fikriah Mohd Shith Putera, Hartini Saripan, Rafizah Abu Hassan, Noraiza Abd Rahman
Faculty of Law, Universiti Teknologi MARA, 40450 Shah Alam, Malaysia
DOI: https://dx.doi.org/10.47772/IJRISS.2025.909000582
Received: 16 September 2025; Accepted: 20 September 2025; Published: 19 October 2025
ABSTRACT
Artificial Intelligence (AI) is transforming the landscape of healthcare by automating mundane processes, enhancing efficiency, refining diagnoses, and expediting the development of more effective medicines. However, a review of the literature on AI in healthcare signals recurring concerns about data quality, from collection and analysis to interpretation and deployment, along with their ethical implications. These data practices, alongside challenges to the traditional patient–doctor relationship, privacy, autonomy, and institutional trust, raise serious concerns over accountability gaps. In this context, traditional product liability laws and the professional liability framework remain the baseline for assigning liability in AI-related harms but prove ill-equipped to address the black-box nature of AI. The inability to provide reasons behind AI outputs complicates the legal determination of causation, fault, and evidentiary standards. Without clear mechanisms to verify decisions or assign responsibility, these concerns undermine public confidence in AI technologies and healthcare governance. Therefore, this research investigates the feasibility of translating the principles of Explainable AI (XAI) into a legally operative framework within Malaysia’s medical device regulatory regime. XAI is a set of principles and techniques developed to facilitate the interpretation of AI-generated outputs for human users. In healthcare, XAI is particularly significant as it enhances transparency, informed consent, duty of care, and accountability by interpreting AI reasoning, allowing patients to weigh risks and options, and enabling more informed clinical decision-making. This research adopts a doctrinal research approach, synthesising statutory provisions, regulatory documents, and scholarly literature to analyse the integration of explainability requirements within the Malaysian framework. International best practices found in the European Union Artificial Intelligence Act and the International Medical Device Regulators Forum (IMDRF) Software as a Medical Device guidance are drawn upon to reinforce the analysis and, ultimately, to devise a contextually relevant framework for Malaysia. The findings indicate the absence of explicit explainability requirements under Malaysia’s existing medical device regulations, notwithstanding their solid foundation for ensuring AI safety and performance. Nevertheless, strengthening requirements for technical documentation, post-market surveillance, human oversight, and transparency obligations supports the integration of XAI principles into Malaysia’s regulatory structure. Operationalising these explainability requirements not only strengthens accountability but also promotes trust in AI-enabled healthcare.
Keywords: Artificial Intelligence and Law, Explainable AI, AI in healthcare
INTRODUCTION
The two sides of the Artificial Intelligence (AI) coin are far less visible than those of other technologies, and neither is well understood. On the upside, the socioeconomic dynamics of AI in healthcare, especially where reinforced by proven profitable endeavors, create an appetite for its adoption within the sector (Challen et al., 2019). AI’s adeptness at identifying meaningful correlations in data, and thereby producing invaluable and groundbreaking health-related insights, positions it strategically at the heart of the data-intensive healthcare sector. It is progressively driving innovative solutions in clinical applications, healthcare management and administration, research and development, and public and global health (Bélisle-Pipon et al., 2021).
In the context of this paper, AI is making strides in clinical decision-making: from supporting medical decisions through real-time assistance (Secinaro et al., 2021), automating diagnostic processes, assessing risk profiles, and optimising therapeutic decisions (Xu et al., 2023), to advancing clinical research (Lekadir et al., 2022), AI’s potential in clinical practice is monumental. Applications such as IBM Watson for cancer diagnosis (Jie et al., 2021; Taulli, 2021), IDx-DR for autonomous detection of diabetic retinopathy, Google’s DeepMind Health for advanced eye screening and treatment, and automated interpretation of cardiac imaging and risk assessment in cardiology, to name a few, are making us rethink how healthcare is delivered.
Yet, even with this promising impact, AI-guided clinical solutions carry a series of risks that could result in safety concerns for the end-users of healthcare services. All of these stem from the intricate labyrinth of data that underpins AI algorithms. The clinical data risks of AI algorithms span their entire life cycle, from data acquisition and collection to data quality, development, and use. Over time, more data-related risks are coming to light, and this is reflected in the commitment of health systems that are actively seeking to reap the benefits of AI while beginning to define principles of appropriate conduct (Macrae, 2019).
Problems such as contextual differences between training datasets and real-life applications (McKee & Wouters, 2023), the representativeness and completeness of data (Hamid, 2016), the opacity and inscrutability of AI’s inner workings (Nordlinger et al., 2020), and the practical implementation of AI into clinical workflows and the healthcare system in general are challenging the law as the gatekeeper of patient safety and security with regard to medical technologies. If AI (possibly) causes harm to a patient, a question arises as to who should be responsible for determining how much weight to give to the system’s recommendations, especially when they conflict with other evidence visible to clinicians but not captured by the algorithm: the programmer, the person feeding the system with data (such as the echocardiograph operator), or the clinician deciding on the treatment?
The legal fraternity faces a bottleneck as numerous theories compete to link AI mishaps to traditional liability approaches (Mohd Shith Putera & Saripan, 2019), simply because the nature of AI defies conventional understandings of causal attribution and accountability. Applying traditional regulatory models to AI with autonomous learning abilities proves ineffective. The authors, in this context, are more interested in recent translational research on importing engineering-community approaches into the law’s workings, an approach that is, arguably, more compatible with a learning system such as AI. More specifically, this research aims to examine the implementation of engineering-based methodologies, referred to as the principles of Explainable AI (XAI), to establish a framework for legal accountability in cases involving AI in healthcare.
The research examines the application of engineering principles, including failure mode analysis, system reliability, and traceability, to assess the actions and decisions made by AI systems in healthcare settings. A notable research gap in the current landscape of AI-related studies lies in the limited exploration of the translation of XAI from its technical origins within the engineering community into the domain of law and legal practice. While XAI has gained substantial recognition for its potential to address the transparency and opacity issues inherent to AI systems, it remains predominantly within the purview of technical mechanisms, largely disconnected from the legal framework. Specifically, there is a lack of comprehensive research examining the types of questions and forms of XAI that are most relevant and effective within legal contexts, and how XAI can be adapted to meet the needs of key stakeholders remains largely underexplored. The identified research gap therefore highlights the importance of bridging the technical foundations of XAI and its potential application in the legal landscape.
METHODOLOGY
This research adopts a multidisciplinary and conceptual approach to evaluate the role of XAI in healthcare from a legal perspective. The study analyses key constructs of explanation drawn from engineering literature and reframes them in light of legal principles, particularly those relating to duty of care, informed consent, and liability attribution. This conceptual analysis is further contextualised within Malaysia’s regulatory landscape, drawing on statutes, guidelines, and academic commentary to propose parameters for integrating XAI into healthcare governance.
LITERATURE REVIEW
- Explainability and Artificial Intelligence in Healthcare
Learning algorithms and access to vast amounts of data have enabled AI to supplement or, to a certain extent, replace some of the functions of physicians. Notwithstanding the revolutionary nature of AI, its widespread adoption is hindered by the lack of transparency in the way it produces its output recommendations. In this sense, the black-box property of AI contradicts clinical medicine’s reliance on transparency in decision-making. For this reason, the principle of XAI has surfaced as a solution to the opacity and inscrutability of AI in healthcare. XAI is originally a technical measure that reveals the interpretation of an AI system’s decision-making process, which in turn facilitates clinicians’ evaluation of model inputs and parameters (Mohapatra et al., 2025). It contributes to aligning model performance with clinical guidelines and objectives through interpretable models, visualisations, or simplified representations of how the outcome is influenced by input data.
These explanatory approaches also mitigate the risks surrounding AI adoption by identifying potential biases or errors in the model, which can then be addressed and corrected for better patient outcomes (Adewale Abayomi Adeniran et al., 2024; Amodei et al., 2021). A survey of medical imaging applications highlights several XAI techniques and categories in healthcare, including feature map interpretation, varying interpretation methods, textual and example-based explanation forms, post-hoc and intrinsic explanation approaches, model specificity (model-agnostic and model-specific), and explanation scope (local and global) (Houssein et al., 2025). Salih et al., on the other hand, discuss several XAI methods used in cardiac imaging studies, such as SmoothGrad, class activation mapping, Grad-CAM, and saliency maps (A. M. Salih et al., 2024).
These methods are used to interpret model outputs in image-based models, which take raw or minimally pre-processed images as direct input, and in feature-based models, where preprocessing is required to extract key features of interest from the images. They provide explanations for model outputs, such as the probability of classification and the contribution of each predictor to each class, thus making model results more transparent and understandable to end users. Additionally, XAI methods such as Grad-CAM and LIME promote trust, explainability, interpretability, and transparency in the diagnosis and prediction of diseases by developing logical reasoning for disease prediction (Alkhanbouli et al., 2025).
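To make the notion of per-predictor contributions concrete, the short sketch below trains an intrinsically interpretable logistic regression on a synthetic, feature-based dataset and reports the signed contribution of each predictor to a single prediction. It illustrates local attribution in the spirit of the feature-based explanations described above rather than the imaging-specific methods (Grad-CAM, LIME) surveyed in the cited studies; the feature names, data, and model are hypothetical.

```python
# Minimal sketch: local, per-predictor attribution for a feature-based
# clinical risk model. Uses an intrinsically interpretable logistic
# regression on synthetic data; feature names and data are hypothetical
# stand-ins for the image-derived features discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["age", "ejection_fraction", "wall_thickness", "bp_systolic"]

# Synthetic cohort: 500 patients, binary outcome loosely tied to two features.
X = rng.normal(size=(500, 4))
y = (X[:, 1] * -1.2 + X[:, 2] * 0.8 + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one prediction: the signed contribution of each standardised
# predictor to the log-odds (coefficient x standardised value), plus the
# predicted class probability.
patient = X[0]
z = scaler.transform(patient.reshape(1, -1))[0]
contributions = model.coef_[0] * z
probability = model.predict_proba(z.reshape(1, -1))[0, 1]

print(f"Predicted probability of adverse outcome: {probability:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>18}: {c:+.3f} log-odds")
```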
The implementation of XAI is undermined by significant challenges, primarily related to striking a delicate balance between ensuring model interpretability and managing the inherent complexity of advanced machine learning algorithms. For instance, van der Velden et al. suggest that the implementation of XAI in medical imaging is fraught with the complexity of deep learning models, the requirement for interpretability of high-dimensional data, the integration of AI into clinical workflows, and the regulatory and legal challenges related to patient data privacy and security (van der Velden et al., 2022).
The literature also stresses that designing AI systems inherently capable of providing explanations for their outputs is imperative (Combi et al., 2022). However, ensuring that the explanations provided by AI models are understandable and actionable for healthcare professionals is extremely difficult, especially in determining the types of explanations needed by different stakeholders. Beyond the implementation phase, evaluating the effectiveness of these XAI systems has proven crucial, requiring the development of metrics and methods to assess the quality of the explanations they generate. In this context, interdisciplinary collaboration is also emphasised as essential, as experts from diverse fields need to work together to develop practical explainability features.
- Motivations for Explainable AI in Law
The motivations for and utility of XAI are diverse and depend on the user’s context and goals. In the eyes of the law, effective safeguards should mean that anyone wronged by an AI-influenced decision has the means to challenge that decision (Amodei et al., 2021). Opaque “black box” decisions that obscure an individual’s understanding of the reasoning behind AI decisions not only undermine trust but also pose significant obstacles to proper contestability (Phillips et al., 2021). Explainability in this context is imperative to ensuring that AI-generated evidence and recommendations can be systematically examined, questioned, and challenged by all parties involved (Panigutti et al., 2023).
This capability is crucial for preserving the integrity of legal processes and ensuring accountability in healthcare decision-making. When clinicians are unable to assess the inputs, parameters, and assumptions underlying AI outputs, they are impeded in exercising their professional judgment effectively. This limitation undermines informed decision-making and may expose patients to unjust consequences. The lack of explicit rationales, irrespective of an AI system’s predictive performance, highlights the necessity for independent mechanisms to assess reasoning procedures and ensure the responsible deployment of AI (Steging et al., 2021). In healthcare, explainability has significant implications for patients’ rights to informed consent, clinicians’ duty of care, and liability among stakeholders (Hacker et al., 2020a). Patients need to be able to understand not only the recommendations they receive but also the reasoning behind those recommendations, while clinicians must retain the ability to critically evaluate and, if necessary, override algorithmic outputs (Sovrano et al., 2024).
Translating the principles of XAI into legally enforceable mechanisms facilitates alignment between technical design and legal obligations, thereby ensuring that clinicians can communicate and justify decisions in a manner that upholds patient rights and autonomy. Furthermore, scholars such as Hacker and colleagues emphasize that explainability is essential in establishing the legitimacy of machine learning models within contractual and tort frameworks, as it enables courts and regulators to assess whether professional standards of care have been satisfied (Hacker et al., 2020). In the absence of well-defined standards for explanation, ambiguity persists regarding liability and accountability, especially as the deployment of AI-based systems continues to expand (Amann et al., 2020). Embedding explainability into regulation can thereby clarify responsibilities among clinicians, developers, and institutions, reduce risks of malpractice, and foster trust without stifling innovation (Waller & Yeung, 2024). By conceptualizing explainability as a legal safeguard rather than a purely technical attribute, XAI emerges as an indispensable instrument for guaranteeing fairness, transparency, and accountability within AI-powered healthcare systems. For this reason, it is proposed that there are three core fields of explainability in law: (1) informed consent, (2) certification and approval of AI as medical devices, and (3) liability (Waller & Yeung, 2024). This research focuses solely on the certification and approval of AI as medical devices in order to provide more directed outcomes and findings.
FINDINGS AND DISCUSSIONS
- Integrating XAI into the Medical Device Regulatory Framework in Malaysia
Malaysia regulates medical devices through the Medical Device Act 2012 (Act 737), the Medical Device Authority Act 2012, and the Medical Device Regulations 2012. The Medical Device Authority (MDA) acts as the main regulatory body in charge of registration, conformity assessment, and post-market supervision. That said, any device placed on the Malaysian market must be registered under Act 737, through submission via the Authority’s online system (MeDC@St). Software intended for medical purposes, on the other hand, is subject to a suite of MDA guidance documents that lay out the classification rules, essential safety and performance principles, dossier formats, and sectoral guidance (Medical Device Authority, 2014).
While Malaysian law does not explicitly refer to the term “AI” as a separate regulatory category, the MDA imposes on software that meets the statutory definition of a medical device (i.e., intended for diagnosis, prevention, monitoring or treatment) the same pre-market requirements as other devices (Fraser et al., 2023). Manufacturers are required to classify the device following the MDA classification rules and to submit a Common Submission Dossier Template (CSDT) containing technical documentation that demonstrates compliance with the Essential Principles of Safety and Performance. Generally, Software as a Medical Device (SaMD) and AI-enabled software require, for the pre-market dossier, a clear description of the intended use, software architecture and design, clinical assessment or proof of performance and safety, risk management files, and evidence of a quality management system (i.e., ISO 13485), where required based on the device’s classification.
Based on the MDA’s training materials and guidance on SaMD, technical documentation and risk assessment must also take into account software-specific considerations such as algorithmic behaviour and updates (Ebad et al., 2025). There are two prominent reference points for regulators and industry in Malaysia. The first is the IMDRF SaMD guidance: its risk-based framework categorises software and lays out expectations for documentation, quality management, and clinical evidence in a technology-neutral manner, making it readily applicable to AI-enabled solutions. Secondly, related international standards (e.g., IEC 62304 for medical device software lifecycle processes, ISO 14971 for risk management, and ISO 13485 for quality management systems) are frequently cited in MDA guidance and industry practice, providing practical pathways for demonstrating pre-market compliance for software, inclusive of AI components. Taking the IMDRF guidance and related standards into consideration helps MDA applicants structure their technical documentation and clinical assessment in alignment with global best practice.
Meanwhile, post-market obligations under the Malaysian framework require manufacturers and authorised representatives to maintain post-market surveillance (PMS) systems and to report adverse incidents and field safety corrective actions to the MDA. The MDA has released guidance covering Mandatory Problem Reporting and the harmonised exchange of post-market data, including initiatives under ASEAN cooperation. These emphasise root-cause analysis, corrective and preventive measures, and prompt reporting of any incidents linked to medical devices, including software. For AI-based products, post-market surveillance also extends to tracking real-world performance, managing algorithm updates or retraining, and ensuring that any safety- or performance-related changes are controlled through established procedures and, where necessary, communicated to the regulator. The MDA’s recent guidance on harmonised post-market data sharing reflects a rising regulatory focus on cross-border information exchange and the role of data in identifying new risks in medical software.
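As one illustration of how such real-world performance tracking might be operationalised within a PMS plan, the sketch below compares the distribution of a model’s output scores in deployment against the scores observed during pre-market validation using the Population Stability Index (PSI), a common drift statistic. The data, thresholds, and escalation steps are illustrative assumptions and are not prescribed by the MDA.

```python
# Minimal sketch of one post-market surveillance check a manufacturer might
# run: comparing the distribution of a model output score in deployment
# against the pre-market reference data, using the Population Stability
# Index (PSI). Thresholds and synthetic data are illustrative assumptions.
import numpy as np

def population_stability_index(reference, live, bins=10):
    """PSI between a reference sample and a live (deployment) sample."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover out-of-range live values
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(1)
reference_scores = rng.beta(2, 5, size=5_000)      # scores seen during validation
live_scores = rng.beta(2.6, 4, size=1_000)         # scores seen after deployment

psi = population_stability_index(reference_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.25:                                      # common rule-of-thumb threshold
    print("Significant drift: trigger investigation / change-control procedure.")
elif psi > 0.10:
    print("Moderate drift: increase monitoring frequency.")
else:
    print("Distribution stable.")
```

In practice, a check of this kind would feed the manufacturer’s existing change-control and reporting procedures rather than trigger regulatory action on its own.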
At present, Malaysian medical device legislation and MDA guidance do not impose a standalone requirement for “explainability” (often referred to as XAI) in AI-enabled medical devices. Instead, elements of explainability are embedded within existing obligations, such as the need to demonstrate safety and effectiveness, provide clear instructions for use, supply meaningful clinical evidence, and apply sound risk management. Meeting these obligations requires developers to explain the working mechanisms of the device, its limitations, and the safeguards in place to ensure dependable performance. In practice, this can include information on intended use, algorithmic constraints, input data characteristics, and how humans are expected to interact with the system, all of which may be addressed in technical files, labelling, or clinical evaluation documents, in compliance with MDA requirements.
Internationally, the IMDRF SaMD guidance provides a globally accepted framework for risk-based classification and for handling software-specific risks, which can be applied in Malaysian pre-market submissions to cover AI-enabled devices. In Europe, the Medical Device Regulation (MDR), alongside the EU Artificial Intelligence Act, introduces more explicit obligations for high-risk AI systems, including those in healthcare. These obligations range across technical documentation, data governance, human oversight, record-keeping, and transparency to support accountability and post-market monitoring (Vardas et al., 2025).
While Malaysia has yet to adopt the EU AI Act, both regulators and manufacturers can map its requirements onto the local framework by: (1) documenting explainability-related aspects in technical files and clinical evidence, such as model scope, training and validation datasets, subgroup performance, limitations, and required human oversight; (2) strengthening post-market plans to cover model drift, retraining activities, and real-world performance monitoring; (3) updating labelling and instructions for use to include algorithmic outputs, confidence levels, and guidance on clinician responses; and (4) maintaining strong traceability and logging mechanisms to facilitate investigations and reporting. Taken together, these approaches are in line with the MDA’s current requirements while operationalising explainability in a way that is reviewable, auditable, and consistent with international norms (van Kolfschooten & van Oirschot, 2024). The explainability expectations under the EU AI Act are illustrated in the table below:
Table I: Explainability Requirements Under the EU Artificial Intelligence Act
Obligation | EU Provision | Key Requirements |
Technical documentation (pre-market) | Article 11; Annex IV (technical documentation requirements) | Providers of high-risk AI systems are required to develop and maintain an up-to-date, detailed technical dossier prior to market placement. The dossier must describe: intended purpose and versions; system architecture and algorithms; training, validation and verification datasets; performance metrics; risk management strategies; human oversight measures; and mechanisms for logging and traceability. |
Data governance | Article 10 (data and data governance obligations) | High-quality training, validation, and testing datasets must be used in the development of AI systems. Providers are required to establish data governance and management practices aligned with the intended purpose (covering design choices and the origin and purposes of data collection), to mitigate bias and bridge data gaps, to set procedures for data labelling and annotation, and to document data provenance and pre-processing. These obligations are incorporated into the technical documentation and conformity assessment. |
Human oversight | Article 14 (human oversight requirements) | Providers are required to develop high-risk AI systems that permit active human oversight while in use, to prevent or mitigate risks to health, safety and fundamental rights. Instructions for use must include technical and organisational measures (describing user-interface features, override functions, persons responsible for verification, and training for users). Human oversight must be consistent with the system’s risk profile and foreseeable misuse. |
Log-keeping / automatic recording of events | Article 12 (record-keeping) and Article 19 (automatically generated logs and related provisions) | High-risk AI systems must be equipped with automatic recording (logging) of events throughout their use. The logs must be maintained by providers for an appropriate period and made accessible to competent authorities upon request. Mechanisms for deployers to collect, store and interpret logs must be reflected in the technical documentation. |
Transparency & information to deployers / affected persons (contestability) | Article 13 (information to deployers); Article 50 (additional transparency obligations) and Article 11 / Annex IV (instructions for use) | Deployers (and affected natural persons) must be furnished with adequate information on the AI system’s abilities, limitations, intended purpose, risks, and use instructions. Individuals must be fully informed of AI-generated/assisted decisions by high-risk solutions. Additional transparency obligations are also imposed on identified categories of AI stipulated under Article 50 of the AIA. |
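As a concrete illustration of the record-keeping and traceability expectations summarised in the table above, the sketch below appends one structured record per inference event to an audit log that could later support investigation and reporting. The field names, hashing choice, and file format are hypothetical rather than requirements of the AI Act or the MDA.

```python
# Minimal sketch of an append-only event log of the kind that could support
# the record-keeping/traceability row of Table I. Field names, hashing
# choices, and the JSON-lines format are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_device_events.jsonl"   # hypothetical log file

def log_inference_event(model_id, model_version, input_payload,
                        output_label, confidence, overridden_by=None):
    """Append one inference event; inputs are hashed rather than stored raw
    to limit retention of patient data in the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output_label,
        "confidence": round(float(confidence), 4),
        "human_override": overridden_by,  # clinician ID if the output was overridden
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: one logged recommendation, later overridden by a clinician.
log_inference_event("retina-screen", "2.3.1",
                    {"patient_ref": "anon-001", "image_id": "IMG-7731"},
                    output_label="refer", confidence=0.87,
                    overridden_by="clinician-042")
```

In a real deployment such a log would be retained and access-controlled in line with the retention and disclosure expectations noted in Table I.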
Importantly, the EU’s explainability provisions align with themes that have been consistently highlighted in academic research. Based on an analysis of selected healthcare-focused XAI publications, these criteria can be mapped across the entire AI life cycle, from data collection and model development to validation and improvement, and ultimately to clinical deployment. This mapping demonstrates the convergence of the legal, ethical, and technical requirements of explainability, highlighting its crucial role in both regulatory design and real-world implementation. The table below summarises the main findings.
Table II: Focus of Explainability
Author(s) | Focus of Explainability |
(Combi et al., 2022) | AI Model and Output |
(Reddy, 2022) | Data Collection and Preprocessing; Model Training; Model Validation |
(A. Salih et al., 2023) | Model Interpretation and Deployment |
(Ali et al., 2023) | AI Model and Output |
(Han & Liu, 2022) | AI Output |
(Doshi-Velez et al., 2017) | AI Model and Output |
(Steging et al., 2021) | Model Interpretation |
(Matulionyte & Hanif, 2021) | Model Interpretation; Model Validation |
(Hacker et al., 2020b) | Design and Development |
CONCLUSION
This research demonstrates that Malaysia’s medical device regulations already provide a solid foundation for embedding Explainable AI as a means of bolstering accountability and public trust in AI-driven healthcare. Although explainability is not yet explicitly codified in the current framework, there is ample regulatory room to incorporate transparency, robust documentation, post-market monitoring, and human oversight, drawing guidance from international benchmarks such as the EU AI Act and IMDRF documents. Looking ahead, further work should prioritise translating these explainability goals into workable compliance strategies, developed jointly with the Medical Device Authority, healthcare providers, and industry stakeholders. Such collaboration will be key to ensuring that future implementation is both effective and context-sensitive.
REFERENCES
- Adewale Abayomi Adeniran, Amaka Peace Onebunne, & Paul William. (2024). Explainable AI (XAI) in healthcare: Enhancing trust and transparency in critical decision-making. World Journal of Advanced Research and Reviews, 23(3), 2447–2658.
- Ali, S., Abdullah, Armand, T. P. T., Athar, A., Hussain, A., Ali, M., Yaseen, M., Joo, M. Il, & Kim, H. C. (2023). Metaverse in Healthcare Integrated with Explainable AI and Blockchain: Enabling Immersiveness, Ensuring Trust, and Providing Patient Data Security. Sensors, 23(2), 1–17.
- Alkhanbouli, R., Matar Abdulla Almadhaani, H., Alhosani, F., & Simsekler, M. C. E. (2025). The role of explainable artificial intelligence in disease prediction: a systematic literature review and future research directions. BMC Medical Informatics and Decision Making, 25(1), 1–7.
- Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20(1), 2.
- Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2021). Accountability of AI Under the Law: The Role of Explanation (Berkman Klein Center Working Group on AI Interpretability).
- Bélisle-Pipon, J. C., Couture, V., Roy, M. C., Ganache, I., Goetghebeur, M., & Cohen, I. G. (2021). What Makes Artificial Intelligence Exceptional in Health Technology Assessment? Frontiers in Artificial Intelligence, 4(November), 1–16.
- Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality and Safety, 28(3), 231–237.
- Combi, C., Amico, B., Bellazzi, R., Holzinger, A., Moore, J. H., Zitnik, M., & Holmes, J. H. (2022). A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine, 133(October), 1–11.
- Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S. J., O’Brien, D., Shieber, S., Waldo, J., Weinberger, D., & Wood, A. (2017). Accountability of AI Under the Law: The Role of Explanation. SSRN Electronic Journal, 3(4), 34–41. https://doi.org/10.2139/ssrn.3064761
- Ebad, S. A., Alhashmi, A., Amara, M., Miled, A. Ben, & Saqib, M. (2025). Artificial Intelligence-Based Software as a Medical Device (AI-SaMD): A Systematic Review. Healthcare (Switzerland), 13(7), 1–20.
- Fraser, A. G., Biasin, E., Bijnens, B., Bruining, N., Caiani, E. G., Cobbaert, K., Davies, R. H., Gilbert, S. H., Hovestadt, L., Kamenjasevic, E., Kwade, Z., McGauran, G., O’Connor, G., Vasey, B., & Rademakers, F. E. (2023). Artificial intelligence in medical device software and high-risk medical devices–a review of definitions, expert recommendations and regulatory initiatives. Expert Review of Medical Devices, 20(6), 467–491.
- Hacker, P., Krestel, R., Grundmann, S., & Naumann, F. (2020a). Explainable AI under contract and tort law: legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 415–439. https://doi.org/10.1007/s10506-020-09260-6
- Hacker, P., Krestel, R., Grundmann, S., & Naumann, F. (2020b). Explainable AI under contract and tort law: legal incentives and technical challenges. Artificial Intelligence and Law, 28(4), 415–439.
- Hamid, S. (2016). The Opportunities and Risks of Artificial Intelligence in Medicine and Healthcare. The Babraham Institute, University of Cambridge, Summer 2016, 1–4.
- Han, H., & Liu, X. (2022). The Challenges of Explainable AI in Biomedical Data Science. BMC Bioinformatics, 22(12), 1–3.
- Houssein, E. H., Gamal, A. M., Younis, E. M. G., & Mohamed, E. (2025). Explainable artificial intelligence for medical imaging systems using deep learning: a comprehensive review. Cluster Computing, 28(7), 445–467.
- Jie, Z., Zhiying, Z., & Li, L. (2021). A meta-analysis of Watson for Oncology in clinical application. Scientific Reports, 11(1), 1–13.
- Lekadir, K., Quaglio, G., Garmendia, A. T., & Gallin, C. (2022). Artificial Intelligence in Healthcare. In European Parliament (Vol. 31, Issue 8).
- Macrae, C. (2019). Governing the safety of artificial intelligence in healthcare. BMJ Quality and Safety, 28(6), 495–498.
- Matulionyte, R., & Hanif, A. (2021). A call for more explainable AI in law enforcement. 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW), 32–44.
- McKee, M., & Wouters, O. J. (2023). The Challenges of Regulating Artificial Intelligence in Healthcare Comment on “Clinical Decision Support and New Regulatory Frameworks for Medical Devices: Are We Ready for It?-A Viewpoint Paper”. International Journal of Health Policy and Management, 12(1), 7261.
- Medical Device Authority. (2014). Medical Device Guidance Document (The Essential Principles Of Safety And Performance Of Medical Devices). http://www.mdb.gov.my
- Mohapatra, R. K., Jolly, L., & Dakua, S. P. (2025). Advancing explainable AI in healthcare: Necessity, progress, and future directions. Computational Biology and Chemistry, 119, 2–21.
- Mohd Shith Putera, N. S. F., & Saripan, H. (2019). Liability Rules for Artificial Intelligence: Sitting on Either Side of the Fence. Malayan Law Journal, 6(1), 2–6.
- Nordlinger, B., Villani, C., & Rus, D. (2020). Healthcare and artificial intelligence. In Healthcare and Artificial Intelligence.
- Panigutti, C., Hamon, R., Hupont, I., Fernandez Llorca, D., Fano Yela, D., Junklewitz, H., Scalzo, S., Mazzini, G., Sanchez, I., Soler Garrido, J., & Gomez, E. (2023). The role of explainable AI in the context of the AI Act. ACM International Conference Proceeding Series, 1139–1150.
- Phillips, P. J., Hahn, C. A., Fontana, P. C., Yates, A. N., Greene, K., Broniatowski, D. A., & Przybocki, M. A. (2021). Four Principles of Explainable Artificial Intelligence.
- Reddy, S. (2022). Explainability and artificial intelligence in medicine. The Lancet Digital Health, 4(4), e214–e215.
- Salih, A., Boscolo Galazzo, I., Gkontra, P., Lee, A. M., Lekadir, K., Raisi-Estabragh, Z., & Petersen, S. E. (2023). Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models. Circulation: Cardiovascular Imaging, 16(4), E014519. https://doi.org/10.1161/CIRCIMAGING.122.014519
- Salih, A. M., Galazzo, I. B., Gkontra, P., Rauseo, E., Lee, A. M., Lekadir, K., Radeva, P., Petersen, S. E., & Menegaz, G. (2024). A Review of Evaluation Approaches for Explainable AI With Applications in Cardiology. Artificial Intelligence Review, 57(240), 13–44.
- Secinaro, S., Calandra, D., Secinaro, A., Muthurangu, V., & Biancone, P. (2021). The role of artificial intelligence in healthcare: a structured literature review. BMC Medical Informatics and Decision Making, 21(1), 1–23.
- Sovrano, F., Lognoul, M., & Vilone, G. (2024, August 27). Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis. ECAI.
- Steging, C., Renooij, S., & Verheij, B. (2021). Rationale Discovery and Explainable AI. Frontiers in Artificial Intelligence and Applications, 346, 225–234.
- Taulli, T. (2021). IBM Watson: Why Is Healthcare AI So Tough? Forbes. https://www.forbes.com
- van der Velden, B. H. M., Kuijf, H. J., Gilhuijs, K. G. A., & Viergever, M. A. (2022). Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 79, 102470.
- van Kolfschooten, H., & van Oirschot, J. (2024). The EU Artificial Intelligence Act (2024): Implications for healthcare. Health Policy, 149, 1–4.
- Vardas, E. P., Marketou, M., & Vardas, P. E. (2025). Medicine, healthcare and the AI act: gaps, challenges and future implications. European Heart Journal – Digital Health. https://doi.org/10.1093/ehjdh/ztaf041
- Waller, P., & Yeung, K. (2024). Can XAI methods satisfy legal obligations of transparency, reason-giving and legal justification?
- Xu, N., Yang, D., Arikawa, K., & Bai, C. (2023). Application of Artificial Intelligence in Modern Medicine. Clinical EHealth.