International Journal of Research and Innovation in Applied Science (IJRIAS)



A Pilot Study on Handling Transactional BigDataStream for Improved Electronic Medical Health Records Management

Nwozor, Blessing U.; Asheshemi, Nelson Oghenekevwe

Dept of Computer Science, Federal University of Petroleum Resources Effurun, Nigeria

DOI: https://doi.org/10.51584/IJRIAS.2024.910034

Received: 09 October 2024; Accepted: 14 October 2024; Published: 15 November 2024

ABSTRACT

One main objective of electronic medical health records is to provide quality decision support for patient care by expert personnel and consultants. A major means of achieving the highest level of quality in the healthcare delivery system and its infrastructure is discovering meaningful data to aid the classification and prediction of patients' activities and conditions, vis-à-vis the plethora of other events that may (or may not) occur within a healthcare system. Identifying and detecting abnormal health states of patients, and forecasting such events from underlying health conditions hidden within medical records, has become a critical performance index in healthcare systems. Results showed that system throughput, measured as the interval between a user's request and the application's response, yields an average convergence time of 18 seconds at support 0.1 and confidence 0.1. We also assessed availability, reachability, scalability and resilience via response times for both queries and page retrieval, with cases of 250 and 1,000 users respectively. Results showed response times of 59 seconds (queries) and 63 seconds (page retrieval) for 250 users, and longer response times of 78 seconds (queries) and 85 seconds (page retrieval) for 1,000 users.

Keywords: Big Data, Data Mining, Medical Health Records, Association Rule Mining, Apriori Model, Frequent Pattern Growth Algorithm

INTRODUCTION

With the ground truth that health is wealth, a key foundation of human existence is the state of an individual's health; this suggests that the socioeconomic wellbeing of a society is directly proportional to the health of the citizens that constitute it, and in turn correlates with the financial wellbeing of those citizens. Healthcare infrastructure reflects the quality of life and socioeconomic welfare of a modern society, and is broadly recognized as a performance driver for economic growth. The digital revolution has contributed, and still contributes, immensely to the healthcare sector. Patients today are no longer limited to receiving care at specific healthcare facilities, especially during emergencies, where the exchange of patient records becomes not only a prerequisite to studying patient history but also a panacea for delivering improved, decision-supported healthcare. Tracking a patient's medical history thus becomes critical and imperative.

Patient historical recordsets are often not readily available to expert medical personnel at the healthcare facility treating a patient, since these records are accessible only within the infrastructure in which they were created; this advances the need to adopt electronic medical records (EMRs). Related issues include: (a) lack of care coordination, (b) inadequate provision of telemedicine for patients' access to their medical records, (c) records corrupted through tampering, mishandling or theft, and (d) records exchanged with unauthorized medical experts with or without patients' consent. For EMRs to serve as a critical platform, interoperability issues for data exchange, confidentiality, privacy, and security must be addressed urgently.

Traditional collection and storage of electronic health records use centralized techniques that pose several risks and expose systems to data breaches and attacks that compromise data availability. Blockchain resolves these challenges through its immutability, preventing records alteration. Its adoption here will improve user trust in data security and aid the effective dissemination of private healthcare records. Blockchain offers transparency, improved authentication, consensus verification, data validation, and other unique record-sharing capabilities. Its adoption continues to address many challenges in healthcare while leveraging new, emerging technologies. Beyond interoperability, the lack of standards and practices for developing blockchain healthcare applications must be addressed, to provision software engineers with the tools that will ensure its adoption as a transformative paradigm for practitioners and researchers alike.

Big-Data in Healthcare Delivery

Data has remained the primary driver of business efficiency, growth and development. The more data a business has access to, the better its decision support, via evidence-based results. Data collection is thus crucial and critical to every business, as it yields the ground truth that helps managers anticipate current trends, using metrics tailored to the task domain, and define and forecast future events. As businesses become more aware of this and make concerted efforts toward it, society provides them with ever larger volumes of data to collect. With this rise in the amount of available data, in both structured and unstructured forms across a variety of domains, comes an increased need for analytics tools and mining procedures to make meaning of the data. Efficient, proactive data management supports businesses in staying ahead of the curve and renders them important, insightful knowledge to support their daily decisions.

Technology adoption has resulted in significant health gains via accessible health information and the extension of health services. The Electronic Health Records (EHR) system has emerged as a key component of national health policy. With the digital revolution, the healthcare sector is witnessing an exponential increase in data collection sources through its adoption of electronic health records, medical imaging, wearable devices, and genetic sequencing. This vast amount of data is known as big data, with the potential to transform healthcare delivery via complex analytics, targeted personalized therapy, and improved patient outcomes. This integration of digital workflows and reliance on electronic patient records, however, raises issues of records security, integrity and privacy within healthcare data. Big data becomes imminent with advances in and the growing sophistication of clinical recordsets, which have made them diverse and timely. This paradigm shift is motivated both by regulatory obligations around the confidentiality, security and privacy of patient records, and by the need for improved patient care and access to healthcare without boundaries, as a means to save more lives with low-cost optimal solutions. Thus, big data provides a variety of novel prospects and frontier applications as a clinical decision support system to aid health insurance, disease surveillance, population health management, adverse event monitoring, and therapy optimization for multiorgan disorders.

Big data enables predictive analysis based on electronic information, including clinical lab results, MRI, and disease-specific variables. As the healthcare sector migrates from a volume-based to a value-based paradigm, data is of critical importance for patient care and its associated costs. The value-based model aims to enhance care quality and predictive analysis for better treatment decisions; its deployment and use have paved the way for the evolution of electronic health records (EHR), electronic patient records (EPR), and healthcare information technology (HIT). Yoro and Ojugo assert that the provision of quality patient care requires a large volume of patient records drawn from social, clinical, environmental, genomic and psychological recordsets, among others. These directly impact the health status of a patient and must be analyzed during further treatment, opening new opportunities for EHR, EPR and HIT.

Big-Data Analytics Methodologies

Data mining, or knowledge discovery in databases (KDD), refers to the mining of knowledge from large amounts of data. Data mining techniques operate on vast volumes of data to uncover hidden patterns and relationships helpful in decision making, as in Figure 1. Schemes for mining knowledge include regression, classification, clustering, association rules, decision trees, etc., explained thus:

Figure 1. The Knowledge Discovery in Database Process

  1. Regression (or multivariate analysis), as adapted in predictive tasks, helps model the interplay, relation and correlational influence between one or more independent variables and a dependent variable. The independent variables of interest are known attributes, while the dependent variable is the response to be predicted from the independent variables. Many real-world challenges, however, are not simply predictive, so more complex techniques such as logistic regression, decision trees, or neural networks are used to forecast future values. An equivalent model is the classification and regression tree (CART) algorithm, built both to classify categorical response variables and to forecast continuous response variables, effectively handling classification and regression tasks.
  2. Classification is the most widely applied mining technique. It uses pre-classified examples to construct a model that classifies the population of records at large, often using decision trees or biologically inspired classification algorithms. The procedure involves learning the underlying features of interest so that the model/ensemble can identify the features that group data points into class distributions. During learning, the algorithm analyzes the training dataset; during classification, it explores test data to estimate the accuracy of its classification rules. If the accuracy is suitable, the rules are applied to new data points. The classifier training algorithm uses the pre-classified examples to determine the parameters required for correct discrimination, encoding these features into a model. Decision trees generate rules for a dataset in both classification and regression tasks.
  3. Clustering identifies comparable classes of objects via features such as dense and sparse regions, to discover the overall distribution patterns and correlations among data attributes. Though computationally expensive, this paradigm classifies items into groups effectively, and is used as a preprocessing mode for attribute selection/classification via association rules and correlations that fan out frequent itemsets within large datasets. It is applied in decisions such as behavioural analysis, catalog design, cross-marketing, etc. With association rules, it generates rules whose confidence yields values. Related techniques include neural networks: sets of connected input/output units in which each connection carries a weight. During training, the network learns by adjusting these weights to predict the class labels of the input data; meaning is thus derived from complicated, imprecise data, extracting patterns and trends too complex to be noticed by conventional processing techniques. Neural networks are compatible with continuous-valued inputs and outputs, are best at identifying patterns or trends in data, and suit prediction and forecasting needs. Another technique classifies records based on a mixture of the classes of the k records most similar to them in a historical dataset (where k is greater than or equal to 1), sometimes called the k-nearest neighbour technique.
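As a toy illustration of the k-nearest neighbour idea described above, a minimal sketch in pure Python (the data points and class labels here are hypothetical, not drawn from the study's dataset):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    # Sort training records by distance to the query point.
    ranked = sorted(train, key=lambda rec: math.dist(rec[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy records: (feature vector, class label) -- illustrative only.
records = [((1.0, 1.0), "normal"), ((1.2, 0.9), "normal"),
           ((5.0, 5.1), "abnormal"), ((4.8, 5.3), "abnormal")]
print(knn_predict(records, (1.1, 1.0)))  # two of the three nearest are "normal"
```

In practice the distance metric and the choice of k are tuned to the dataset; this sketch only shows the voting mechanism.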

Data mining is often used to reinforce our view of the learning process, specializing in identifying, extracting, and evaluating underlying parameters of interest associated with a task within a domain. In the healthcare scenario it is known as healthcare record mining, and it is fast gaining popularity due to its potential for the healthcare sector. Mining transactions permits users to analyze data from different dimensions, categorize it, and summarize the relations identified via the mining process. Shiokawa et al. selected 600 participants to investigate their performance via Bayes network classification on category, language, and background qualification. Ojugo et al. investigated student performance using 300 students from a Federal College of Education (Technical), Asaba, in Nigeria. They investigated students' attitudes in light of: (a) attendance in school, (b) hours spent studying, (c) family income, (d) mother's age, and (e) mother's education. Using rectilinear regression, the study reported that family income and mother's education were significantly correlated with a student's academic performance.

Brindlmayer conducted a performance study on 400 patients in India to determine the prognostic value of various measures of cognition, personality, and demographics on patient recovery. Selection was supported by the cluster sampling technique, in which the whole population of interest was divided into groups or clusters and a random sample of those clusters was selected for further analysis. It was found that girls with high socioeconomic status had relatively higher academic achievement within the science stream, and boys with low socioeconomic status generally had relatively higher recovery-time achievement. Navhandhi et al. explored patient behavior on wearable, wireless sensor-based health devices vis-à-vis its relation to quick patient recovery via medical support-system monitoring. They predicted the ultimate recovery time of patients using three classification methods fused with the sensor-acquired dataset, namely decision tree, Random Forest and Gradient Boosting. Their results indicated that the tree-based ensemble outperformed the single classifiers and yielded better, enhanced predictions of patient recovery time.

Blockchain and the Electronic Medical Health Records

A blockchain is an incorruptible, distributed database that is maintained and validated over a network of interconnected nodes globally. It records a timestamp at each node to prevent data tampering. There are various blockchains, namely private/permissioned, consortium, hybrid and public/permissionless; each has its ideal uses, numerous benefits, and drawbacks. Their adoption can yield: (a) restricted access to patient records against unauthorized users, (b) customization and identity verification that grant stakeholders access on the network, as opposed to having users approve each other, (c) network support of known, validated users with a fault-tolerant platform, (d) higher transaction throughput with preselected participants, and (e) low energy consumption in mining and business transaction logic. In all, the permissioned mode has less complicated algorithms and a simpler, easier-to-secure model, as users (i.e., patients, medical experts, medicare officers, and medical care facilities) are the only stakeholders who can access patients' medical records and be involved in their exchange.

The public blockchain is open, decentralized, and accessible to all. Validated users help validate transactions via proof of work or proof of stake. Public blockchains are nonrestrictive and use distributed ledgers that require no permission, as any user is authorized to access any data they wish. The consortium blockchain is semi-decentralized, for organizations wishing to manage their own network effectively; such a blockchain can exchange data and mine as well. The hybrid blockchain is a merger of the public and private blockchains, used where better control is required to achieve higher goals: centralized, with decentralized open nodes. It yields better security than a public network (though not better than the private blockchain), greater integrity and transparency, and various other benefits.

With peer-to-peer consensus, the permissionless mode allows nodes to participate in the consensus of a transaction, as in Ethereum. With Fabric/Corda, however, if a user is not selected, they are given restricted access to the chain. Corda allows better access control to records and enhances privacy; it achieves greater performance only when all transaction participants have reached a consensus. Consensus for the Fabric ledger runs from proposing a transaction to committing it on the ledger. Each node (with roles such as patient, expert personnel, and others) assumes different tasks in reaching a consensus. A patient invokes a transaction, interacts with other peers, and maintains the distributed ledger, while the expert personnel provide a channel to patients and others over which messages are broadcast. The channel then ensures that all connected peers deliver the same messages in the same logical order. To ensure each record is a complete keyset, with its state initialized as a record in the world state, we use a Hyperledger Fabric ledger. The record thus supports several states with attributes that allow the same key in the world state to hold various records of the same patient, ensuring the system evolves and updates its states and structure as more records are added.
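The world-state idea described above, one patient key holding several record states, can be sketched as a simple key-value store (a simplified illustration in Python, not the actual Fabric API; the patient identifier and record fields are hypothetical):

```python
# Simplified world state: each patient key maps to a list of
# versioned records, so one ledger key can hold many states.
world_state = {}

def put_record(patient_id, record):
    """Append a new record version under the patient's key."""
    versions = world_state.setdefault(patient_id, [])
    versions.append({"version": len(versions) + 1, **record})

put_record("patient_nelson", {"type": "lab_result", "value": "malaria: negative"})
put_record("patient_nelson", {"type": "diagnosis", "value": "cleared"})
print(len(world_state["patient_nelson"]))  # 2 record versions under one key
```

In Fabric itself, state updates are validated by the endorsing peers before commit; the sketch only shows the data shape of a multi-state key.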

Study Motivation

The study is motivated thus:

  1. Costs: The initial costs of setting up an EHR are greater because of the extensive and complex IT network required, but drop drastically with time. Manually storing paper records requires additional workers to manage, access, and organize papers, leading to significant cost increases over time. EHR saves labour, time, and storage space, resulting in cost savings over time.
  2. Access: Electronic health records (EHR) have the benefit over paper records in terms of accessibility. Healthcare professionals can now access information almost instantly, whenever and wherever they need it, thanks to digital health records, which increases their efficiency. Sharing paper medical records with other healthcare professionals, by contrast, requires them to be physically delivered, or scanned and sent via email, which takes time and money.
  3. Storage: While paper medical records have to be kept in enormous warehouses, electronic health records may be kept in a secure cloud, making them easy to access for those who need them. Besides taking up room, paper records are not ecologically friendly and tend to deteriorate over time when handled by numerous people, which raises storage costs.
  4. Security is a major concern for both paper and electronic storage systems, as both are vulnerable to security attacks. Electronic records stored without sufficient security systems are vulnerable to unauthorized access and misuse. Paper records are vulnerable to human mistake, including loss, destruction, and theft. Natural disasters like fires and floods raise concerns about the security of health records.
  5. Readability and Accuracy: Handwritten paper medical records may be difficult to read, which can lead to medical errors. Electronic health records, by contrast, are frequently written using standardized abbreviations, which makes them more accurate and readable worldwide and reduces the chance of confusion. Healthcare practitioners also cannot write all the information they need in paper medical records.

The study implements an electronic medical information system to help effectively manage the big data recordsets produced by the medical healthcare infrastructure, for improved service delivery. It also features transaction authentication and validation to ensure confidentiality and interoperability, and is poised to comply with the regulatory standards of the Health Insurance Portability and Accountability Act (HIPAA) as applied in Nigeria. HIPAA compliance helps ensure that healthcare providers in Nigeria adhere strictly to standards (i.e., a policy framework) targeted at the protection of patient health records, to ensure privacy and data security.

MATERIAL AND METHOD

The dataset used was obtained as updated medical patient records from the Federal Medical Centre Asaba, covering March 2021 to March 2024; it consists of 992,364 patient records ranging from patient registration and laboratory results to checkout and discharge records.

From the Existing to the Proposed Electronic Medical Health Records

Pandey et al. implemented an Electronic Health Record (EHR) Management System using sensor-based Radio Frequency Identification (RFID) to improve efficiency and accuracy in managing patient records. The study overcomes the risks in paper-based record systems via the fusion of RFID. Their EHR simplified patient identification and data collection, improved data accuracy, and improved overall healthcare delivery by leveraging the many benefits of RFID. It offered real-time synchronisation to guarantee up-to-date, traceable and easy-to-access patient records, and supported healthcare experts' decisions to deliver timely medical interventions to patients. The study highlighted RFID's influence in revolutionising healthcare record management and increasing patient outcomes. The system, a web-based application for managing health records, was designed to allow access to doctors, patients and administrators. This threefold access control mechanism allows monitoring of activities within the health record, to ascertain who performed what activity on the application, and when.

Associative Rule Mining

Associative rule mining (ARM) is used in patient record management to consciously and strategically ease the retrieval of patient recordsets from large transactional big data streams. It helps ensure that all transaction records (i.e. medical examinations, laboratory results, etc.) of a patient, whether conducted during emergencies at another healthcare facility or at the original facility where the patient record was created, are bundled together as the patient's historical recordset. This improves expert diagnosis, the administration of medical opinions, and the traceability and management of the recordsets as transactions; it in turn improves patient confidence and trust as well as revenue generation for the healthcare facility. Manipulating records as item transactions effectively and efficiently aids ease of patient record search and identification, enhances inventory management, and optimizes database layout.

For this study, we adapt associative rule mining (ARM), which is best suited for analyzing big data involving a large volume of transactions consisting of many patient recordsets with a timestamped chain of expert demand and supply records. A recordset is the collection of patient data grouped as a record, alongside other records gathered and generated in the course of the patient's treatment and transactions with a particular healthcare facility. These are stored as pair-sets (i.e. recordsets), providing the database with a strategic means of modeling and data mining that helps stakeholders (doctors, nurses, healthcare personnel, patients, etc.) discover insightful interactions and relations among patient recordsets via generated rules, drawing inferences as to the possibilities of these recordsets, as items, occurring together in groups of specific types. Thus, as a mining technique, with the proposed architecture in Figure 2, we seek to extract all groups of items (and related records) for any transaction at time (t) that involves the patient (p). The outcome is a collection of association rules used to predict the recordset (i.e. of all items) in which a patient (p) exists within any single timestamped transaction.
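Support and confidence, the two thresholds used throughout this study, can be computed directly over a list of transactions. A minimal sketch in Python (the transactions and item names below are illustrative toys, not the FMC dataset):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimate P(consequent | antecedent) from the transactions."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Toy timestamped transactions for one patient (illustrative items).
tx = [{"registration", "lab_result"},
      {"lab_result", "diagnosis"},
      {"registration", "lab_result", "diagnosis"},
      {"checkout"}]
print(support(tx, {"lab_result"}))                    # 0.75
print(confidence(tx, {"lab_result"}, {"diagnosis"}))  # 2/3: rule lab_result -> diagnosis
```

A rule such as lab_result → diagnosis is kept only if both values clear the chosen thresholds (support 0.1 and confidence 0.1 in this study).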

Figure 2. The Proposed System Architecture

The architecture of the proposed model, with its components as in Figure 3, is described thus:

  1. Business Management Visualization studies all relevant patient records and enables the system to use data analytics methods to assist the collection, management and eased retrieval of patient records.
  2. Interoperability helps improve transactional stream data mining and analytics processing in handling large patient medical recordsets via the use of Luigi v3.7. This software helps the proposed system monitor changes and amendments in patient records as they are accessed and updated from a variety of platforms at different medical healthcare facilities, during regular checkups and in cases of emergency (away from the healthcare facility where the patient's recordset is domiciled and was originally created).
  3. The Big Data Analytics platform is designed specifically for use by and at the Federal Medical Centre Asaba, to allow it to trace and manage patient records. It creates a recordset management support system that helps the healthcare facility support decision-making activities and effectively manage, electronically, the medical health records of its various patients.
  4. Patient Management Strategy improves expert-patient interactions and relations, helping the system adequately study patient record access patterns in each patient record datastream transaction. It also helps researchers aggregate the extent to which a patient's lifestyle and other conditions impact their health. This in turn impacts patients' access to their records, by assisting healthcare experts to enforce policies and business decisions as means of monitoring and effectively controlling patients' health.

Figure 3. Big Data Analytical Model for Management of Patients as Customers

To ensure that only accurate data is processed, we calibrated the association rule mining for each patient recordset using the Hadoop Tableau visualizer for Calibev and the Horvitz-Thompson estimator. This ensures that only the appropriate rules for the transactions of a particular patient are generated and processed via the frequent pattern growth algorithm. The generated rules are analyzed using RapidMiner v8.1, which calibrates the patients' profile dataset via simple random sampling without replacement (srswor), as in the algorithm of Listing 1. This will yield improved service delivery with authenticated and validated transactions, and will also ensure confidentiality and interoperability between the platforms used by various healthcare facilities, in compliance with the standards of the Health Insurance Portability and Accountability Act (HIPAA) in Nigeria, targeted at the protection of patient health records to ensure privacy and data security.
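The simple random sampling without replacement (srswor) step can be sketched with the standard library alone (the record identifiers below are hypothetical placeholders, not actual FMC record IDs):

```python
import random

def srswor(records, n, seed=None):
    """Draw n records uniformly at random, without replacement."""
    rng = random.Random(seed)          # seeded for reproducible calibration runs
    return rng.sample(records, n)

record_ids = [f"rec_{i:04d}" for i in range(1000)]
sample = srswor(record_ids, 250, seed=42)
print(len(sample), len(set(sample)))   # 250 250 -- every sampled record is distinct
```

Sampling without replacement guarantees no record is counted twice, which keeps the support and confidence estimates over the sampled profiles unbiased.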

Thus, our Federal Medical Centre (FMC) patient recordset transaction data contains the items (and related records, as transactions) of a particular patient, single and combined, from any healthcare facility. An itemset, as used here, describes the datastream's measures and dimensions. An example description of the dimensions for a patient is given in Listing 1.

Listing 1. ItemSet Description for Patient_Nelson
Nelson.Diagnosis = FMC_DataCollect ⋂ FMC_LabResult ⋂ FMC_CheckIn ⋂ FMC_CheckOut
For Each SelectedDiagnosis.Result do
    FMC_Nelson.DataCollect = Demographics ⋂ Address ⋂ MedicalHistory
    FMC_Nelson.LabResult = Malaria ⋂ Typhoid ⋂ UrinaryTractInfect
    FMC_Nelson.CheckIn = FMC_DataCollect ⋂ FMC_LabResult
    FMC_Nelson.CheckOut = FMC_LabResult 'cleared' ⋂ FMC_Diagnosis 'cleared'
End For Each
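As a toy illustration, the bundling in Listing 1 can be mirrored with Python set operations. The ⋂ in the listing reads as a join of recordsets; a union of record identifiers captures that bundling effect in this sketch, and all field names and values are illustrative:

```python
# Each recordset is modelled as a set of record identifiers.
data_collect = {"demographics", "address", "medical_history"}
lab_result   = {"malaria", "typhoid", "urinary_tract_infection"}
check_in     = data_collect | lab_result               # records present at check-in
check_out    = {"lab_result_cleared", "diagnosis_cleared"}

# The diagnosis itemset bundles every transaction of the patient,
# mirroring Nelson.Diagnosis in Listing 1.
diagnosis = data_collect | lab_result | check_in | check_out
print(sorted(diagnosis))
```

Representing recordsets as sets makes membership tests and overlap checks (e.g. whether a lab result already appears in a patient's bundle) constant-time operations.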

Ethical and Legal Challenges of Using Blockchain and Big Data in Healthcare

Advancements in blockchain technology, combined with the rapid growth of network technologies, are transforming traditional business and service models, and healthcare is one sector ripe for this transformation. Still, given its sensitive and complex nature, the adoption of blockchain in e-healthcare must not only ensure utility but also address the significant ethical challenges that come with it. With the integration of blockchain and big data in healthcare, a host of ethical and legal challenges emerge:

  1. Data Privacy and Confidentiality: Protecting patient privacy is paramount, but blockchain’s immutable nature complicates the handling of personal health information (PHI). Once recorded, data cannot be changed or deleted, making privacy safeguards a critical concern, especially as healthcare data grows exponentially in the big data era.
  2. Consent and Data Ownership: Blockchain’s decentralized nature and big data analytics may leave patients uncertain about how their health data is used. Ethical concerns about patient autonomy, consent, and data ownership surface, as patients may have limited control over their data once it’s stored on a blockchain.
  3. Security and Data Breaches: Blockchain is often hailed for its security, but no system is completely immune to hacking or unauthorized access. The aggregation of vast healthcare datasets in big data increases the risk of large-scale breaches, threatening patient trust and safety.
  4. Regulatory Compliance: Navigating the legal landscape is a challenge for healthcare organizations. Compliance with regulations such as HIPAA (U.S.) and GDPR (EU) becomes complex when introducing new technologies like blockchain. Adhering to these frameworks while ensuring the integrity of blockchain systems is a delicate balance.
  5. Bias and Fairness in Data Use: Big data’s power lies in its ability to generate insights, but if the data used is biased or unrepresentative, it can lead to unfair outcomes in patient care. This raises significant ethical concerns about equity in treatment and healthcare services.

Addressing these challenges demands a careful balance between technological innovation, patient rights, and strict regulatory compliance. As blockchain and big data continue to evolve in healthcare, a holistic approach to ethical considerations will be essential for the responsible and effective implementation of these technologies.

Blockchain Security System

Blockchain secures data through a combination of cryptographic techniques, decentralized architecture, and consensus mechanisms.

i. Encryption Techniques in Blockchain

Blockchain uses several cryptographic methods to ensure data integrity, confidentiality, and authenticity:

  • Hashing: Blockchain employs cryptographic hash functions (e.g., SHA-256) to ensure data integrity. When a block of data is hashed, it produces a unique fixed-length output (hash). If the data is tampered with, even slightly, the resulting hash will change, signaling that the data has been compromised.
  • Public-Key Cryptography (PKC): Blockchain relies on public-key cryptography for secure communication and identity verification. Each participant in a blockchain has a pair of cryptographic keys:

        Public Key: Shared openly to encrypt messages or verify digital signatures.

        Private Key: Kept secret and used to decrypt messages or sign transactions. Only the holder of the private key can access or modify the data associated with that key, ensuring confidentiality.

  • Digital Signatures: Every transaction in the blockchain is signed using the sender’s private key, providing authentication and non-repudiation. The recipient (and anyone else) can verify the signature with the public key to confirm the transaction’s validity without needing access to the private key.
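For illustration, the tamper-evidence property of hashing described above can be sketched as a minimal hash chain. This is a toy example, not the implementation used in this study; the block contents and field names are hypothetical.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 digest of a block's contents; any change to the data changes the digest."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Two toy "blocks"; each block stores the hash of its predecessor,
# so tampering with block 1 invalidates block 2's stored link.
block1 = {"index": 1, "record": "patient vitals", "prev": "0" * 64}
block2 = {"index": 2, "record": "lab result", "prev": block_hash(block1)}

assert block2["prev"] == block_hash(block1)   # chain intact

block1["record"] = "tampered vitals"          # simulate tampering
assert block2["prev"] != block_hash(block1)   # tampering detected
```

Because every block embeds its predecessor's hash, altering any historical record breaks the chain from that point forward, which is exactly the integrity signal described above.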

ii. Private vs. Public Blockchains in Securing Patient Data

The choice between public and private blockchains is critical when it comes to handling sensitive data, such as patient records in healthcare.

  • Public Blockchains: These are open, decentralized networks where anyone can participate, verify transactions, and view the ledger. While public blockchains (e.g., Bitcoin or Ethereum) are highly secure due to their decentralized nature and consensus mechanisms (e.g., Proof of Work or Proof of Stake), they are not ideal for patient data, as this data must remain private to protect individuals’ confidentiality.
  • Private Blockchains: These are permissioned networks where only authorized participants can access the blockchain and make transactions. Private blockchains are more suitable for securing sensitive patient data as they allow strict access control. In healthcare, private blockchains can be used by hospitals, insurance companies, and patients, with different levels of access granted based on their roles. For example, doctors may have access to a patient’s full medical history, while insurers may only access billing-related data.

Data Confidentiality in Private Blockchains

Private blockchains allow organizations to control who can join the network and what data they can access. They can implement stronger privacy protocols, such as:

  • Zero-Knowledge Proofs (ZKP): A technique that allows one party to prove to another that they know a value (e.g., a patient’s record) without revealing the actual value. This enhances data privacy by ensuring that sensitive details remain undisclosed while still being verifiable.
  • Advanced Encryption Standard (AES): Data within a blockchain can be encrypted using symmetric encryption methods like AES to ensure that even if someone gains access to the blockchain, they cannot read the data unless they have the encryption key.
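A full zero-knowledge proof is beyond a short example, but the underlying idea of proving knowledge of a value without disclosing it can be approximated with a much simpler salted hash commitment. The following is only an illustrative commit-and-reveal sketch, not a true ZKP and not AES; record identifiers are hypothetical.

```python
import hashlib
import secrets

def commit(value: str):
    """Commit to a value without revealing it: only the salted digest is published."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt   # digest is public; salt + value stay private until reveal

def verify(digest: str, salt: str, value: str) -> bool:
    """Later, the prover reveals (salt, value) and anyone can recheck the commitment."""
    return hashlib.sha256((salt + value).encode()).hexdigest() == digest

digest, salt = commit("patient-record-42")    # hypothetical record identifier
assert verify(digest, salt, "patient-record-42")
assert not verify(digest, salt, "patient-record-43")
```

The random salt prevents an observer from guessing the committed value by brute-forcing digests of likely record identifiers.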

iii. Permissions Management in Blockchain

Blockchain networks, particularly private and consortium blockchains, employ fine-grained permission management systems. These determine who can view, write, or validate data.

Granting Permissions

In private blockchains, permissions are typically managed using:

  • Access Control Lists (ACLs): Specific rules are established to define which roles have access to which types of data. For instance, in a healthcare network:

        Doctors could be granted access to view and update patient records.

        Patients might only have read access to their own records.

        Researchers may have permission to view anonymized datasets.

  • Role-Based Access Control (RBAC): Users are assigned roles, and permissions are granted based on these roles. In the healthcare context, a doctor, nurse, or administrator could have different access levels. This method simplifies the management of permissions, especially as the number of users grows.
  • Smart Contracts: In more advanced blockchains, smart contracts can automate permission management. For example, a smart contract might automatically grant a hospital access to a patient’s records upon their consent or revoke access after a predefined period or event, such as the patient’s discharge.
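The ACL/RBAC scheme above can be sketched in a few lines. The roles, action names, and mapping below are hypothetical examples chosen to mirror the healthcare scenario, not a prescribed schema.

```python
# Hypothetical role-to-permission mapping for a healthcare network.
ROLE_PERMISSIONS = {
    "doctor": {"read_record", "update_record"},
    "patient": {"read_own_record"},
    "researcher": {"read_anonymized"},
}

def is_allowed(role: str, action: str) -> bool:
    """RBAC check: a user may perform an action only if their role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("doctor", "update_record")
assert not is_allowed("patient", "update_record")
assert not is_allowed("insurer", "read_record")   # unknown role gets no access
```

Because permissions attach to roles rather than individual users, onboarding a new nurse or doctor only requires assigning a role, which is why RBAC scales well as the user base grows.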

Revoking Permissions

In private blockchains, permission revocation is crucial to maintaining control over sensitive data:

  • Dynamic Permissioning: Blockchain networks can revoke or modify permissions in real time. For example, if a patient changes hospitals, the permissions associated with their previous healthcare provider can be automatically revoked.
  • Revocation via Smart Contracts: Smart contracts can also handle permission revocation. For example, if an insurance claim is settled, a smart contract can revoke the insurer’s access to the patient’s medical data, ensuring no further access.
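The smart-contract-style revocation described above, where access lapses on an event (e.g., claim settlement) or after a predefined period, can be sketched as a small consent object. Class and field names are hypothetical; a real deployment would encode this logic on-chain.

```python
import time

class ConsentGrant:
    """Sketch of event- or time-limited access that can be revoked."""

    def __init__(self, grantee: str, expires_at: float):
        self.grantee = grantee
        self.expires_at = expires_at   # automatic expiry, e.g. patient discharge
        self.revoked = False

    def revoke(self):
        # Event-driven revocation, e.g. triggered when an insurance claim settles.
        self.revoked = True

    def has_access(self, now=None) -> bool:
        now = time.time() if now is None else now
        return (not self.revoked) and now < self.expires_at

grant = ConsentGrant("insurer-A", expires_at=time.time() + 3600)
assert grant.has_access()
grant.revoke()                         # claim settled: access ends immediately
assert not grant.has_access()
```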

iv. Consensus Mechanisms and Security

Blockchain networks use consensus algorithms to ensure that all participants agree on the state of the blockchain. In private blockchains, consensus mechanisms such as Proof of Authority (PoA) or Practical Byzantine Fault Tolerance (PBFT) are often used, which are faster and more efficient than public blockchain algorithms. These mechanisms help secure the network by ensuring that only trusted nodes can validate transactions.

Using the Vertical Scaling Strategy

Vertical scaling, also known as “scaling up,” involves increasing the capacity of a single server or node to handle more workloads by adding more resources like CPU, memory, and storage. In the context of handling big data streams for improving Electronic Medical Health Records (EMHR) management, vertical scaling strategies can provide greater flexibility and robustness by ensuring that the system can efficiently manage increasing amounts of data and transactional demands without major architectural changes. Here’s a breakdown of effective vertical scaling strategies that can be applied to the study:

  1. Optimizing Database Performance

Transactional big data streams, such as those used for EMHR, often involve managing large volumes of realtime data, querying, and updating records efficiently. A robust database is essential to ensure scalability. Several approaches for vertical scaling can optimize database performance:

Upgrading to Faster Storage: Use NVMe (Non-Volatile Memory Express) drives or SSDs (solid-state drives) instead of traditional HDDs to speed up I/O operations, allowing the system to read and write large volumes of medical records more quickly. This reduces latency in accessing transactional data, which is critical in real-time environments.

In-Memory Databases: For systems that need to handle high-frequency transactional data, using in-memory stores like Redis or Memcached can significantly speed up data retrieval and processing. In-memory databases keep critical data in RAM, which is orders of magnitude faster than traditional disk-based storage systems.

Database Indexing and Partitioning: Properly indexing frequently accessed columns and partitioning large tables can optimize query execution time. In a healthcare setting, medical records may be partitioned by patient ID or geographic location, improving the system’s ability to retrieve and update records quickly as the data grows.
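The effect of indexing a frequently queried column can be demonstrated with SQLite, used here purely for illustration; the table layout and patient IDs are invented, and a production EMHR database would be a full RDBMS or NoSQL store.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (patient_id TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [(f"P{i:05d}", f"visit-{i}") for i in range(10_000)],
)

# Index the frequently queried column so lookups avoid a full table scan.
conn.execute("CREATE INDEX idx_patient ON records (patient_id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM records WHERE patient_id = 'P00042'"
).fetchone()
assert "idx_patient" in plan[-1]   # SQLite confirms the index is used
```

`EXPLAIN QUERY PLAN` makes the optimization visible: with the index, the lookup is a B-tree search rather than a scan of all 10,000 rows, which is the latency difference that matters as record volumes grow.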

  2. Efficient Resource Allocation with Dynamic CPU and Memory Scaling

As the data stream and number of transactions increase, the system needs more computational power to handle queries, process data, and manage EMHR efficiently. Several vertical scaling strategies for CPU and memory include:

Multicore Processors: Upgrading to servers with multicore or hyper-threaded CPUs enables parallel processing of multiple tasks, such as handling concurrent requests for medical records or processing complex queries on patient data. More cores allow the system to sustain high transaction throughput without bottlenecks.

RAM Upgrades: By increasing the amount of available RAM, the system can hold more data in memory, reducing the need for disk access and speeding up data processing. This is especially important for handling transactional streams where rapid read/write operations are necessary, such as updating patient records in real time.

Dynamic Resource Scaling (Autoscaling): Use dynamic scaling techniques to adjust CPU, memory, and storage based on the current system load. Autoscaling can automatically allocate more resources during peak periods (e.g., when there’s a surge in medical record transactions) and scale down during lower demand. This ensures that resources are used efficiently without unnecessary costs.
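A threshold-based autoscaling rule like the one described can be sketched as follows. The utilization thresholds are illustrative assumptions, not values from this study; real autoscalers (cloud or on-premise) apply the same idea with cooldown periods and multiple metrics.

```python
def scale_decision(cpu_util: float,
                   scale_up_at: float = 0.80,
                   scale_down_at: float = 0.30) -> str:
    """Threshold rule: add resources when busy, release them when idle.
    Thresholds are hypothetical and would be tuned per workload."""
    if cpu_util >= scale_up_at:
        return "scale_up"
    if cpu_util <= scale_down_at:
        return "scale_down"
    return "hold"

assert scale_decision(0.95) == "scale_up"     # surge in record transactions
assert scale_decision(0.10) == "scale_down"   # overnight lull
assert scale_decision(0.50) == "hold"
```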

  3. Using Virtualization and Containers

Virtualization and containerization provide a flexible approach to vertical scaling by optimizing how resources are used:

Virtual Machines (VMs): Using virtualization, additional VMs can be added on the same physical machine to isolate different workloads, such as separating the big data processing layer from the EMHR management system. This improves resource utilization and ensures that high workloads in one area (e.g., data analytics) do not slow down the entire system.

Containers: Implementing containerization with technologies like Docker or Kubernetes allows for more efficient use of system resources. Containers use fewer resources than full VMs and can be spun up quickly to handle new tasks as needed. For example, containers can be used to process specific tasks within the big data stream, such as patient data validation, and shut down once the task is complete, freeing up resources.

Container Scaling: Containers also allow for fine-tuned scaling within the same physical machine. If the load on the system increases, more containers can be created to handle the additional transactional data stream, ensuring that the system remains responsive.

  4. Implementing Caching and Data Compression Techniques

To improve system robustness and handle large volumes of medical data efficiently, caching and data compression strategies can be utilized:

Caching Frequently Accessed Data: Caching commonly accessed data, such as patient profiles, using a system like Redis or Memcached, can significantly reduce the load on the database. By storing this data in memory, the system can respond more quickly to frequent queries, improving overall performance and reducing latency.

Data Compression for Storage Efficiency: Medical records and transactional logs can consume a large amount of storage space. Implementing data compression techniques, such as zlib or gzip, can reduce the storage footprint of these records, allowing the system to handle larger volumes of data on the same infrastructure. Compressed data can be decompressed when needed for analysis or retrieval.
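Both techniques in this section can be sketched briefly: zlib compression of a repetitive transactional log, and an in-process cache standing in for Redis/Memcached. The record content and function names are hypothetical.

```python
import zlib
from functools import lru_cache

record = ("patient_id=P00042;" * 500).encode()   # repetitive transactional log

compressed = zlib.compress(record)
assert len(compressed) < len(record)             # smaller storage footprint
assert zlib.decompress(compressed) == record     # lossless round trip

@lru_cache(maxsize=1024)
def fetch_profile(patient_id: str) -> str:
    # Stand-in for an expensive database lookup; real code would query the DB.
    return f"profile-for-{patient_id}"

fetch_profile("P00042")
fetch_profile("P00042")                          # second call served from cache
assert fetch_profile.cache_info().hits == 1
```

Repetitive structured data such as logs compresses especially well, and the cache turns repeated queries for the same patient profile into memory lookups instead of database round trips.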

  5. Utilizing Load Balancing for High Availability

Even in vertically scaled systems, bottlenecks can occur if resources are not allocated efficiently. Load balancing helps distribute the workload evenly across available resources, improving system robustness and avoiding single points of failure:

Hardware Load Balancers: Deploying hardware-based load balancers (such as those from F5 Networks) ensures that incoming transactional data streams are distributed evenly across the available CPU, memory, and storage resources. This minimizes downtime and improves the system’s ability to handle surges in demand.

Software Load Balancers: In conjunction with containerization and virtualization, software-based load balancers like NGINX or HAProxy can be used to direct incoming traffic to the most efficient container or VM instance, optimizing resource utilization and ensuring high availability.
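The simplest dispatch policy such balancers offer, round-robin, can be sketched in a few lines; the backend names are hypothetical stand-ins for container or VM instances.

```python
from itertools import cycle

backends = ["vm-1", "vm-2", "vm-3"]       # hypothetical container/VM instances
rotation = cycle(backends)

def route_request() -> str:
    """Round-robin dispatch: each request goes to the next backend in turn."""
    return next(rotation)

served = [route_request() for _ in range(6)]
assert served == ["vm-1", "vm-2", "vm-3", "vm-1", "vm-2", "vm-3"]
```

Production balancers layer health checks and weighting on top of this rotation, but the even spreading of requests shown here is what prevents any single instance from becoming a bottleneck.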

  6. Fault Tolerance and Redundancy

Building a system that can recover from failures and prevent data loss is essential for medical health record management:

Redundant Storage Systems: Implementing RAID (Redundant Array of Independent Disks) for storage can provide data redundancy and fault tolerance. In case of a disk failure, the system can continue operating without data loss or significant downtime, ensuring that patient records remain accessible.

Backup and Recovery: For additional robustness, regular backups of the database and transaction logs should be made to ensure that in the event of a failure or data corruption, patient records can be restored quickly.

  7. Advanced Vertical Scaling Strategies Using Hybrid Approaches

For even greater flexibility and robustness, a hybrid approach combining vertical scaling with horizontal scaling techniques (scaling out by adding more servers) can be implemented. This approach includes:

Vertical Scaling with Microservices: Instead of having a monolithic application, breaking the system into microservices allows for more granular vertical scaling. For example, the patient record management system, data analytics, and billing system can be separated into microservices, each scaled independently based on its requirements.

Hybrid Vertical-Horizontal Scaling: In cases where vertical scaling reaches its limit (due to hardware or cost constraints), adding more servers (horizontal scaling) can complement the system. Combining both strategies ensures that the infrastructure scales both vertically (by adding more resources to existing servers) and horizontally (by adding new servers) to handle growing data streams.

Vertical scaling strategies such as database optimization, dynamic resource allocation, containerization, caching, load balancing, and fault tolerance can significantly improve the flexibility and robustness of a system handling transactional big data streams for electronic medical records. By focusing on these key areas, the system will be better equipped to manage growing data volumes, handle high transaction rates, and ensure the availability and security of sensitive medical information.

Challenges of Implementation

When implementing a pilot study for handling transactional big-data streams to improve Electronic Medical Health Records (EMHR) management, several challenges may arise, particularly in terms of cost implications, technical support, and integration with legacy systems. Below is a brief overview of these potential challenges.

1. Cost Implications

Scaling and managing a system for handling big data streams, especially in a healthcare context, can lead to significant cost challenges:

Hardware and Infrastructure Upgrades: Handling large data streams for EMHR systems requires robust servers with substantial processing power, memory, and storage capacity. This may necessitate upgrading existing hardware to high-performance systems with NVMe storage, multicore processors, or in-memory databases. These upgrades, including costs associated with purchasing and maintaining the infrastructure, can be expensive, especially for healthcare organizations with limited budgets.

Software Licensing and Maintenance: The adoption of enterprise-level software solutions for managing big data streams, such as specialized streaming platforms (e.g., Apache Kafka) or analytics tools (e.g., Hadoop or Spark), often comes with high licensing fees. In addition, continuous software maintenance and updates are necessary to ensure compatibility with evolving technologies, adding to the ongoing cost.

Cloud Infrastructure Costs: Many organizations opt for cloud-based solutions (e.g., AWS, Azure, or Google Cloud) to handle big data streams due to their scalability. However, the cost of cloud services, especially for real-time data streaming and storage, can escalate quickly. The more data that needs to be processed and stored, the higher the cloud service bills. Costs for bandwidth, data ingress/egress, storage tiers, and the use of specific cloud services (e.g., serverless computing) can become prohibitive if not carefully managed.

Data Security and Compliance: Given the sensitive nature of healthcare data, organizations must invest heavily in data security measures to comply with regulations like HIPAA (Health Insurance Portability and Accountability Act). This involves encryption tools, secure data storage, audit logging, and compliance certifications for cloud services—all of which add additional costs.

2. Technical Support and Expertise

Running a system that processes big data streams requires specialized technical support, which poses several challenges:

Limited In-House Expertise: Many healthcare organizations may lack the technical expertise required to manage the complexities of big-data infrastructure, such as real-time data processing, high availability, or the integration of new technologies. This expertise is necessary for troubleshooting, optimizing data streams, and ensuring system reliability. Hiring or contracting professionals with experience in data engineering, cloud infrastructure, and streaming platforms can be costly, and finding qualified personnel might be difficult.

Ongoing Maintenance and Monitoring: Handling transactional big data streams for medical records involves continuous monitoring to ensure system performance and security. Without adequate technical support, organizations could face issues like server downtime, data loss, or breaches. This might require setting up dedicated teams for DevOps (Development and Operations) or managed service providers, adding to operational expenses.

Integration with Emerging Technologies: The healthcare industry is constantly evolving with new technologies like AI, machine learning, and predictive analytics. Integrating these technologies into a big-data management system requires continuous technical support and resources. For example, using AI for real-time data analysis or predictive healthcare solutions requires advanced infrastructure, specialized frameworks, and ongoing support to ensure system compatibility and reliability.

3. Integration with Legacy Systems

A major challenge when implementing a new system for handling transactional big data streams in healthcare is integrating it with existing legacy systems. Many healthcare organizations still rely on outdated or custom-built EMHR systems, which may not be easily compatible with modern big-data technologies:

Data Migration Issues: Migrating data from legacy EMHR systems into a new big-data infrastructure can be challenging. Legacy systems often store data in incompatible formats, and the migration process may result in data inconsistencies or loss. The process of data cleaning, standardization, and conversion to ensure interoperability with modern systems is time-consuming and resource-intensive.

Compatibility with Older Systems: Legacy systems are often built using older technologies that may not be compatible with new databases or data-streaming platforms. For example, some older EMHR systems may still use SQL-based databases with limited scalability, while newer big-data platforms rely on NoSQL databases (e.g., MongoDB, Cassandra) that offer greater flexibility and performance. Integrating these systems may require middleware solutions or custom APIs, which increases both complexity and cost.

System Downtime and Disruptions: Integrating a new system into an existing infrastructure without interrupting ongoing healthcare services is challenging. Downtime during the transition could affect patient care, cause delays in processing medical records, or result in financial losses. Managing such transitions carefully, with appropriate testing and redundancy, is essential but may incur additional costs.

4. Data Governance and Compliance

Data governance and regulatory compliance play a critical role in the management of healthcare records, and integrating new big-data systems can introduce several challenges:

Compliance with Regulations: Handling healthcare data requires strict adherence to regulations such as HIPAA (in the U.S.) or GDPR (in the EU). Ensuring that new big-data platforms meet these compliance requirements may require additional investments in data encryption, access control, audit trails, and regular security assessments. Non-compliance can lead to severe penalties, lawsuits, and reputational damage.

Data Ownership and Access Control: In a healthcare setting, determining who owns the data and controlling who has access to it is crucial. Integrating big-data platforms with existing EMHR systems may raise concerns about data privacy, especially if the data is processed in the cloud or across multiple systems. Implementing robust role-based access controls (RBAC), auditing mechanisms, and data governance policies to manage access across multiple systems is necessary but challenging to configure and maintain.

5. Scalability and Future-Proofing

A key challenge when integrating a big-data platform for EMHR management is ensuring that it is scalable and able to handle future data growth without significant reconfiguration or additional costs:

Future Data Growth: As the volume of healthcare data continues to grow, especially with the rise of IoT devices, wearables, and genomic data, the system must scale efficiently to handle larger volumes of data. Vertical scaling (adding more resources to existing machines) might reach its limits, and organizations may need to invest in horizontal scaling solutions, such as distributed databases and cloud-based storage. However, scaling horizontally requires a shift in system architecture, which can complicate integration with existing legacy systems.

Technology Obsolescence: One concern with any new technology investment is ensuring that it remains relevant and supported over time. Rapid technological advancements can render current systems outdated within a few years, requiring continuous upgrades or migrations. Planning for long-term flexibility in system design and ensuring compatibility with future technologies is critical but difficult to guarantee.

Comparison of the New System with Cloud-Based Solutions or Hybrid Approaches

| Aspect | Pilot Study System (Transactional Big-Data Stream for Improved EMHR Management) | Cloud-Based Solutions or Hybrid Approaches |
|---|---|---|
| Infrastructure | Typically on-premise or private infrastructure designed to handle big data streams. | Leverages scalable cloud infrastructure or a mix of on-premise and cloud services. |
| Scalability | Limited by physical hardware; requires vertical scaling for increased capacity. | High scalability, with the ability to scale horizontally or vertically as needed. |
| Cost Implications | High upfront costs for hardware and ongoing maintenance; requires large CAPEX. | Pay-as-you-go model for cloud services, reducing CAPEX but increasing OPEX. |
| Technical Support | Requires specialized in-house support for big data and real-time processing systems. | Cloud providers offer managed services and 24/7 technical support, reducing in-house needs. |
| Data Security and Compliance | On-premise infrastructure provides full control over data, but security is an in-house responsibility. | Cloud providers offer built-in security and compliance measures (e.g., HIPAA, GDPR), though trust in third-party security is required. |
| Latency and Performance | Performance is subject to the capacity of the physical hardware and its optimization. | High performance with low latency, especially with hybrid models that combine local storage and cloud services. |
| Maintenance | Requires continuous in-house technical expertise for system upkeep and troubleshooting. | Minimal in-house maintenance; cloud providers handle much of the system’s upkeep. |

RESULTS AND DISCUSSION

Performance Evaluation

For the model architecture, we used three types of testing, namely:

  1. Alpha testing helps a programmer identify errors in the product before its release for public use. It focuses on finding weaknesses before beta testing and employs both black-box and white-box testing modes.
  2. Beta testing takes place before the release of software for commercial use. It is usually the final test and often includes distributing the program to experts seeking ways to improve the product. We sent the product to the store for the beta test.
  3. Unit testing requires individual units or components of the software to be tested. This phase of software development seeks to confirm that each part of the software performs according to its design specification. A unit is the smallest testable part of any software; it has few inputs and a single output.

We adopted both alpha and unit testing to summarize the execution times taken to provide feedback to user requests from the big-data system. With both support and confidence levels of 0.1, Table 1 yields the performance of the frequency-pattern growth (FP-Growth) algorithm with 0.9182 (91.82%) confidence, which supports our finding that expert personnel and patients (as users) preferred the adoption of the proposed system for the analyzed transactions.

Table 1. Performance of generated rule mining with support and confidence values of 0.1

| Hybrid Association Rules Mined (Frequency-Pattern Growth) | Support | Confidence | Execute |
|---|---|---|---|
| FMC.Nelson ∩ ASH.Nelson ∩ Lily.Nelson (∩ others.Nelson) → Nelson.Recordset (All_Healthcare_Facilities) | 0.026 | 0.194 | 18 secs |
| FMC.Nelson.LabResult ∩ ASH.Nelson.LabResult ∩ Lily.Nelson.LabResult (∩ others.Nelson) → Nelson.Recordset (All_Healthcare_Facilities) | 0.006 | 0.214 | 18 secs |
The system throughput, as the main (adapted) performance metric, seeks to determine the time interval between a user’s request and the app’s response in provisioning feedback to the user. Thus, the average convergence time it took for the algorithm to compile these requests as sent by the user was 18 seconds, using a support of 0.1 and a confidence of 0.1.
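For readers unfamiliar with the support and confidence metrics reported in Table 1, the standard definitions for an association rule A → B can be sketched as follows. The toy transactions below reuse the facility/record names from Table 1 purely for illustration; they are not the study's dataset.

```python
# Support and confidence for an association rule A -> B over a set of transactions.
transactions = [
    {"FMC.Nelson", "ASH.Nelson", "Nelson.Recordset"},
    {"FMC.Nelson", "Nelson.Recordset"},
    {"ASH.Nelson"},
    {"FMC.Nelson", "ASH.Nelson"},
]

def support(itemset: set, txns: list) -> float:
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in txns) / len(txns)

def confidence(antecedent: set, consequent: set, txns: list) -> float:
    """P(consequent | antecedent), estimated from the transactions."""
    return support(antecedent | consequent, txns) / support(antecedent, txns)

a, b = {"FMC.Nelson"}, {"Nelson.Recordset"}
print(support(a | b, transactions))     # 0.5  (2 of 4 transactions)
print(confidence(a, b, transactions))   # 0.666...  (2 of the 3 containing A)
```

FP-Growth mines all rules whose support and confidence exceed the chosen thresholds (0.1 and 0.1 here) without generating candidate itemsets explicitly.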

Discussion of Findings

Since it is a web-based application, its integration, availability, accessibility, and reachability also had to be measured. To achieve this, we measured the response time for both queries and HTTP page retrieval, as shown in Table 2, to ascertain the system's scalability and resilience via two cases, namely: (a) 250 users, and (b) 1000 users, drawn from the various categories/groups of doctors, nurses, healthcare personnel, and patients at the different healthcare facilities.

Table 2. Performance metrics for Scalability and Resilience

| Transactions | Case 1 Population | Case 1 Time | Case 2 Population | Case 2 Time |
|---|---|---|---|---|
| Queries | 250 | 59 secs | 1000 | 78 secs |
| HTTP pages | 250 | 63 secs | 1000 | 85 secs |

With a population of 250 users, a response time of 59 secs was achieved for queries and 63 secs for HTTP page retrieval. When the population was scaled up to 1000 users to check the scalability and reachability of the proposed system, there was naturally a longer response time: 78 secs for queries and 85 secs for HTTP page retrieval.

CONCLUSIONS

With the current surge in technological development and the widespread adoption of new technology-driven business strategies, businesses can now operate more efficiently, productively, and profitably. Despite the enormous amount of data generated daily, we have observed that the healthcare industry has always kept up to date with technology; however, the adoption of data analytics and data science will further bolster the field of medicine. Thus, for the future, this study is a positive step and should be improved upon. Furthermore, this research work signifies a paradigm shift in the application of artificial intelligence to health diagnostics. The application yields a high-performance, open-source, and user-friendly support model with transaction privacy and confidentiality.


  31. F. U. Emordi et al., “TiSPHiMME: Time Series Profile Hidden Markov Ensemble in Resolving Item Location on Shelf Placement in Basket Analysis,” Digit. Innov. Contemp. Res. Sci., vol. 12, no. 1, pp. 3348, 2024, doi: 10.22624/AIMS/DIGITAL/v11N4P3.
  32. A. Ibor, E. Edim, and A. Ojugo, “Secure Health Information System with Blockchain Technology,” J. Niger. Soc. Phys. Sci., vol. 5, no. 992, p. 992, Apr. 2023, doi: 10.46481/jnsps.2023.992.
  33. J. K. Oladele et al., “BEHeDaS: A Blockchain Electronic Health Data System for Secure Medical Records Exchange,” J. Comput. Theor. Appl., vol. 2, no. 1, pp. 112, 2024, doi: 10.33633/jcta.v2i19509.
  34. F. O. Aghware et al., “Enhancing the Random Forest Model via Synthetic Minority Oversampling Technique for CreditCard Fraud Detection,” J. Comput. Theor. Appl., vol. 2, no. 2, pp. 190203, 2024, doi: 10.62411/jcta.10323.
  35. P. Manickam et al., “Artificial Intelligence (AI) and Internet of Medical Things (IoMT) Assisted Biomedical Systems for Intelligent Healthcare,” Biosensors, vol. 12, no. 8, 2022, doi: 10.3390/bios12080562.
  36. F. Mustofa, A. N. Safriandono, A. R. Muslikh, and D. R. I. M. Setiadi, “Dataset and Feature Analysis for Diabetes Mellitus Classification using Random Forest,” J. Comput. Theor. Appl., vol. 1, no. 1, pp. 4148, 2023, doi: 10.33633/jcta.v1i1.9190.
  37. I. P. Okobah and A. A. Ojugo, “Evolutionary Memetic Models for Malware Intrusion Detection: A Comparative Quest for Computational Solution and Convergence,” Int. J. Comput. Appl., vol. 179, no. 39, pp. 3443, 2018, doi: 10.5120/ijca2018916586.
  38. A. A. Ojugo and A. O. Eboka, “Assessing Users Satisfaction and Experience on Academic Websites: A Case of Selected Nigerian Universities Websites,” Int. J. Inf. Technol. Comput. Sci., vol. 10, no. 10, pp. 5361, 2018, doi: 10.5815/ijitcs.2018.10.07.
  39. A. Onyan, D. . Onyishi, T. . Ebirebi, and R. Onawharaye, “Development of an IoTbased wireless remote health monitoring device,” FUPRE J. Sci. Ind. Res., vol. 8, no. 2, pp. 111, 2024.
  40. J. T. Braslow and S. R. Marder, “History of Psychopharmacology,” Annu. Rev. Clin. Psychol., vol. 15, pp. 2550, 2019, doi: 10.1146/annurevclinpsy050718095514.
  41. S. Og and L. Ying, “The Internet of Medical Things,” ICMLCA 2021 2nd Int. Conf. Mach. Learn. Comput. Appl., pp. 273276, 2021.
  42. R. E. Yoro and A. A. Ojugo, “Quest for Prevalence Rate of HepatitisB Virus Infection in the Nigeria: Comparative Study of Supervised Versus Unsupervised Models,” Am. J. Model. Optim., vol. 7, no. 2, pp. 4248, 2019, doi: 10.12691/ajmo722.
  43. A. Maureen, O. Oghenefego, A. E. Edje, and C. O. Ogeh, “An Enhanced Model for the Prediction of Cataract Using Bagging Techniques,” vol. 8, no. 2, 2023.
  44. M. I. Akazue et al., “Handling Transactional Data Features via Associative Rule Mining for Mobile Online Shopping Platforms,” Int. J. Adv. Comput. Sci. Appl., vol. 15, no. 3, pp. 530538, 2024, doi: 10.14569/IJACSA.2024.0150354.
  45. U. R. Wemembu, E. O. Okonta, A. A. Ojugo, and I. L. Okonta, “A Framework for Effective Software Monitoring in Project Management,” West African J. Ind. Acad. Res., vol. 10, no. 1, pp. 102115, 2014.
  46. F. O. Aghware, R. E. Yoro, P. O. Ejeh, C. C. Odiakaose, F. U. Emordi, and A. A. Ojugo, “DeLClustE: Protecting Users from CreditCard Fraud Transaction via the DeepLearning Cluster Ensemble,” Int. J. Adv. Comput. Sci. Appl., vol. 14, no. 6, pp. 94100, 2023, doi: 10.14569/IJACSA.2023.0140610.
  47. E. A. Otorokpo et al., “DaBOBoostE: Enhanced Data Balancing via Oversampling Technique for a Boosting Ensemble in CardFraud Detection,” Adv. Multidiscip. Sci. Res. J., vol. 12, no. 2, pp. 4566, 2024, doi: 10.22624/AIMS/MATHS/V12N2P4.
  48. N. C. Ashioba et al., “Empirical Evidence for Rainfall Runoff in Southern Nigeria Using a Hybrid Ensemble Machine Learning Approach,” J. Adv. Math. Comput. Sci., vol. 12, no. 1, pp. 7386, 2024, doi: 10.22624/AIMS/MATHS/V12N1P6.
  49. A. A. Ojugo and A. O. Eboka, “Modeling the Computational Solution of Market Basket Associative Rule Mining Approaches Using Deep Neural Network,” Digit. Technol., vol. 3, no. 1, pp. 18, 2018, doi: 10.12691/dt311.
  50. Z. Karimi, M. Mansour Riahi Kashani, and A. Harounabadi, “Feature Ranking in Intrusion Detection Dataset using Combination of Filtering Methods,” Int. J. Comput. Appl., vol. 78, no. 4, pp. 2127, Sep. 2013, doi: 10.5120/134781164.
  51. D. H. Zala and M. B. Chaudhari, “Review on use of ‘BAGGING’ technique in agriculture crop yield prediction,” IJSRD Int. J. Sci. Res. Dev., vol. 6, no. 8, pp. 675676, 2018.
  52. A. A. Ojugo, A. O. Eboka, R. E. Yoro, M. O. Yerokun, and F. N. Efozia, “Framework design for statistical fraud detection,” Math. Comput. Sci. Eng. Ser., vol. 50, pp. 176182, 2015.
  53. A. A. Ojugo et al., “Dependable CommunityCloud Framework for Smartphones,” Am. J. Networks Commun., vol. 4, no. 4, p. 95, 2015, doi: 10.11648/j.ajnc.20150404.13.
  54. A. A. Ojugo and O. D. Otakore, “Computational solution of networks versus cluster grouping for social network contact recommender system,” Int. J. Informatics Commun. Technol., vol. 9, no. 3, p. 185, 2020, doi: 10.11591/ijict.v9i3.pp185194.
  55. A. A. Ojugo et al., “Forging a learnercentric blendedlearning framework via an adaptive contentbased architecture,” Sci. Inf. Technol. Lett., vol. 4, no. 1, pp. 4053, May 2023, doi: 10.31763/sitech.v4i1.1186.
  56. F. U. Emordi, C. C. Odiakaose, P. O. Ejeh, O. Attoh, and N. C. Ashioba, “Student’s Perception and Assessment of the Dennis Osadebay University Asaba Website for Academic Information Retrieval, Improved Web Presence, Footprints and Usability,” FUPRE J. Sci. Ind. Res., vol. 7, no. 3, pp. 4960, 2023.
  57. A. A. Ojugo et al., “Evidence of Students’ Academic Performance at the Federal College of Education Asaba Nigeria: Mining Education Data,” Knowl. Eng. Data Sci., vol. 6, no. 2, pp. 145156, 2023, doi: 10.17977/um018v6i22023p145156.
  58. A. A. Ojugo and E. O. Ekurume, “Deep Learning Network AnomalyBased Intrusion Detection Ensemble For Predictive Intelligence To Curb Malicious Connections: An Empirical Evidence,” Int. J. Adv. Trends Comput. Sci. Eng., vol. 10, no. 3, pp. 20902102, Jun. 2021, doi: 10.30534/ijatcse/2021/851032021.
  59. O. Plonsky et al., “Predicting human decisions with behavioral theories and machine learning,” ICT Express, vol. 12, no. 2, pp. 3041, Apr. 2019.
  60. H. Patrinos, E. Vegas, and R. CarterRau, “An Analysis of COVID19 Student Learning Loss,” Educ. Glob. Pract. Policy Res. Work. Pap. 10033, vol. 10033, no. May, pp. 131, 2022, doi: 10.1596/1813945010033.
  61. A. JolicoeurMartineau, J. J. Li, and C. M. T. Greenwood, “Statistical modeling of GxE,” Prenat. Stress Child Dev., vol. 58, no. 11, pp. 433466, 2021, doi: 10.1007/9783030601591_15.
  62. C. C. Odiakaose, F. U. Emordi, P. O. Ejeh, O. Attoh, and N. C. Ashioba, “A pilot study to enhance semiurban telepenetration and services provision for undergraduates via the effective design and extension of campus telephony,” FUPRE J. Sci. Ind. Res., vol. 7, no. 3, pp. 3548, 2023.
  63. Y. Bouchlaghem, Y. Akhiat, and S. Amjad, “Feature Selection: A Review and Comparative Study,” E3S Web Conf., vol. 351, pp. 16, 2022, doi: 10.1051/e3sconf/202235101046.
  64. S. Verma, A. Bhatia, A. Chug, and A. P. Singh, “Recent Advancements in Multimedia Big Data Computing for IoT Applications in Precision Agriculture: Opportunities, Issues, and Challenges,” 2020, pp. 391416. doi: 10.1007/9789811387593_15.
  65. S. Zhang, W. Huang, and H. Wang, “Crop disease monitoring and recognizing system by soft computing and image processing models,” Multimed. Tools Appl., vol. 79, no. 4142, pp. 3090530916, Nov. 2020, doi: 10.1007/s1104202009577z.
  66. S. B. Imanulloh, A. R. Muslikh, and D. R. I. M. Setiadi, “Plant Diseases Classification based Leaves Image using Convolutional Neural Network,” J. Comput. Theor. Appl., vol. 1, no. 1, pp. 110, Aug. 2023, doi: 10.33633/jcta.v1i1.8877.
  67. E. U. Omede, A. Edje, M. I. Akazue, H. Utomwen, and A. A. Ojugo, “IMANoBAS: An Improved MultiMode Alert Notification IoTbased AntiBurglar Defense System,” J. Comput. Theor. Appl., vol. 2, no. 1, pp. 4353, 2024, doi: 10.33633/jcta.v2i1.9541.
  68. B. O. Malasowe, A. E. Okpako, M. D. Okpor, P. O. Ejeh, A. A. Ojugo, and R. E. Ako, “FePARM: The FrequencyPatterned Associative Rule Mining Framework on Consumer PurchasingPattern for Online Shops,” Adv. Multidiscip. Sci. Res. J., vol. 15, no. 2, pp. 1528, 2024, doi: 10.22624/AIMS/CISDI/V15N2P21.
  69. D. A. Obasuyi et al., “NiCuSBlockIoT: Sensorbased Cargo Assets Management and Traceability Blockchain Support for Nigerian Custom Services,” Comput. Inf. Syst. Dev. Informatics Allied Res. J., vol. 15, no. 2, pp. 4564, 2024, doi: 10.22624/AIMS/CISDI/V15N2P4.
  70. F. Omoruwou, A. A. Ojugo, and S. E. Ilodigwe, “Strategic Feature Selection for Enhanced Scorch Prediction in Flexible Polyurethane Form Manufacturing,” J. Comput. Theor. Appl., vol. 2, no. 1, pp. 126137, 2024, doi: 10.62411/jcta.9539.
  71. B. O. Malasowe, M. I. Akazue, E. A. Okpako, F. O. Aghware, D. V. Ojie, and A. A. Ojugo, “Adaptive LearnerCBT with Secured FaultTolerant and Resumption Capability for Nigerian Universities,” Int. J. Adv. Comput. Sci. Appl., vol. 14, no. 8, pp. 135142, 2023, doi: 10.14569/IJACSA.2023.0140816.
  72. A. N. Safriandono et al., “Analyzing Quantum Feature Engineering and Balancing Strategies Effect on Liver Disease Classification,” J. Futur. Artif. Intell. Technol., vol. 1, no. 1, pp. 5063, 2024, doi: 10.62411/faith.202412.
  73. D. R. I. M. Setiadi, K. Nugroho, A. R. Muslikh, S. Wahyu, and A. A. Ojugo, “Integrating SMOTETomek and Fusion Learning with XGBoost MetaLearner for Robust Diabetes Recognition,” J. Futur. Artif. Intell. Technol., vol. 1, no. 1, pp. 2338, 2024, doi: 10.62411/faith.202411.
  74. S. E. Brizimor et al., “WiSeCart: Sensorbased SmartCart with SelfPayment Mode to Improve Shopping Experience and Inventory Management,” Soc. Informatics, Business, Polit. Law, Environ. Sci. Technol., vol. 10, no. 1, pp. 5374, 2024, doi: 10.22624/AIMS/SIJ/V10N1P7.
  75. R. R. Ataduhor et al., “StreamBoostE: A Hybrid BoostingCollaborative Filter Scheme for Adaptive UserItem Recommender for Streaming Services,” Adv. Multidiscip. Sci. Res. J., vol. 10, no. 2, pp. 89106, 2024, doi: 10.22624/AIMS/V10N2P8.
  76. A. M. Ifioko et al., “CoDuBoTeSS: A Pilot Study to Eradicate Counterfeit Drugs via a Blockchain Tracer Support System on the Nigerian Frontier,” J. Behav. Informatics, Digit. Humanit. Dev. Res., vol. 10, no. 2, pp. 5374, 2024, doi: 10.22624/AIMS/BHI/V10N2P6.
  77. S. Saponara, A. Elhanashi, and A. Gagliardi, “Realtime video fire/smoke detection based on CNN in antifire surveillance systems,” J. RealTime Image Process., vol. 18, no. 3, pp. 889900, Jun. 2021, doi: 10.1007/s11554020010440.
  78. S. Do, K. D. Song, and J. W. Chung, “Basics of Deep Learning : A Radiologist ’ s Guide to Understanding Published Radiology Articles on Deep Learning,” Korean J. Radiol., vol. 21, no. 1, pp. 3341, 2020, doi: 10.3348/kjr.2019.0312.
  79. A. A. Ojugo and A. O. Eboka, “An intelligent hunting profile for evolvable metamorphic malware,” African J. Comput. ICT, vol. 8, no. 1, pp. 181190, 2015.
  80. A. A. Ojugo, D. A. Oyemade, R. E. Yoro, A. O. Eboka, M. O. Yerokun, and E. Ugboh, “A Comparative Evolutionary Models for Solving Sudoku,” Autom. Control Intell. Syst., vol. 1, no. 5, p. 113, 2013, doi: 10.11648/j.acis.20130105.13.
  81. A. A. Ojugo and O. D. Otakore, “Seeking Intelligent Convergence for Asymptotic Stability Features of the Prey / Predator Retarded Equation Model Using Supervised Models,” Comput. Inf. Syst. Dev. Informatics Allied Res. J., vol. 9, no. 2, pp. 1326, 2018.
  82. A. A. Ojugo and I. P. Okobah, “Quest for an intelligent convergence solution for the wellknown David, Fletcher and Powell quadratic function using supervised models,” Open Access J. Sci., vol. 2, no. 1, pp. 5359, 2018, doi: 10.15406/oajs.2018.02.00044.
  83. M. K. Elmezughi, O. Salih, T. J. Afullo, and K. J. Duffy, “Comparative Analysis of Major MachineLearningBased Path Loss Models for Enclosed Indoor Channels,” Sensors, vol. 22, no. 13, p. 4967, Jun. 2022, doi: 10.3390/s22134967.
  84. E. B. Wijayanti, D. R. I. M. Setiadi, and B. H. Setyoko, “Dataset Analysis and Feature Characteristics to Predict Rice Production based on eXtreme Gradient Boosting,” J. Comput. Theor. Appl., vol. 2, no. 1, pp. 7990, 2024, doi: 10.62411/jcta.10057.
  85. M. W. Macy, “Food defense and beyond: identifying and responding to insider threats,” J. Lang. Relatsh., vol. 12, no. 1, pp. viiviii, 2020, doi: 10.31826/jlr2015120101.
  86. D. M. Dooley et al., “FoodOn: a harmonized food ontology to increase global food traceability, quality control and data integration,” npj Sci. Food, vol. 2, no. 1, p. 23, Dec. 2018, doi: 10.1038/s4153801800326.
  87. S.A. Pedro, “COVID19 Pandemic: Shifting DigitalTransformation to a HighSpeed Gear,” Inf. Syst. Manag., vol. 37, no. 4, pp. 260266, 2020, doi: https://doi.org/10.1080/10580530.2020.1814461© 2020 Taylor & Francis.
  88. Y. Shiokawa, T. Misawa, Y. Date, and J. Kikuchi, “Application of Market Basket Analysis for the Visualization of Transaction Data Based on Human Lifestyle and Spectroscopic Measurements,” Anal. Chem., vol. 88, no. 5, pp. 27142719, 2016, doi: 10.1021/acs.analchem.5b04182.
  89. M. Brindlmayer, R. Khadduri, A. Osborne, A. Briansó, and E. Cupito, “Prioritizing learning during covid19: The Most Effective Ways to Keep Children Learning During and PostPandemic,” Glob. Educ. Evid. Advis. Panel, no. January, pp. 121, 2022.
  90. A. H. Allam, I. Gomaa, H. H. Zayed, and M. Taha, “IoTbased eHealth using blockchain technology: a survey,” Cluster Comput., vol. 0123456789, 2024, doi: 10.1007/s1058602404357y.
  91. D. Nahavandi, R. Alizadehsani, A. Khosravi, and U. R. Acharya, “Application of artificial intelligence in wearable devices: Opportunities and challenges,” Comput. Methods Programs Biomed., vol. 213, no. December, 2022, doi: 10.1016/j.cmpb.2021.106541.
  92. G. Habib, S. Sharma, S. Ibrahim, I. Ahmad, S. Qureshi, and M. Ishfaq, “Blockchain Technology: Benefits, Challenges, Applications, and Integration of Blockchain Technology with Cloud Computing,” Futur. Internet, vol. 14, no. 11, p. 341, Nov. 2022, doi: 10.3390/fi14110341.
  93. A. A. Ojugo, D. A. Oyemade, D. Allenotor, O. B. Longe, and C. N. Anujeonye, “Comparative Stochastic Study for CreditCard Fraud Detection Models,” African J. Comput. ICT, vol. 8, no. 1, pp. 1524, 2015.
  94. E. Dourado and J. Brito, “Cryptocurrency,” in The New Palgrave Dictionary of Economics, London: Palgrave Macmillan UK, 2014, pp. 19. doi: 10.1057/9781349951215_28951.
  95. H. Tingfei, C. Guangquan, and H. Kuihua, “Using Variational Auto Encoding in Credit Card Fraud Detection,” IEEE Access, vol. 8, pp. 149841149853, 2020, doi: 10.1109/ACCESS.2020.3015600.
  96. E. O. Okonta, A. A. Ojugo, U. R. Wemembu, and D. Ajani, “Embedding Quality Function Deployment In Software Development: A Novel Approach,” West African J. Ind. Acad. Res., vol. 6, no. 1, pp. 5064, 2013.
  97. E. O. Okonta, U. R. Wemembu, A. A. Ojugo, and D. Ajani, “Deploying Java Platform to Design A Framework of Protective Shield for Anti Reversing Engineering,” West African J. Ind. Acad. Res., vol. 10, no. 1, pp. 5064, 2014.
  98. R. De’, N. Pandey, and A. Pal, “Impact of digital surge during Covid19 pandemic: A viewpoint on research and practice,” Int. J. Inf. Manage., vol. 55, no. June, p. 102171, 2020, doi: 10.1016/j.ijinfomgt.2020.102171.
  99. P. K. Paul, “Blockchain Technology and its Types—A Short Review,” Int. J. Appl. Sci. Eng., vol. 9, no. 2, 2021, doi: 10.30954/23220465.2.2021.7.
  100. S. Linoy, N. Stakhanova, and A. Matyukhina, “Exploring Ethereum’s Blockchain Anonymity Using Smart Contract Code Attribution,” in 2019 15th Conference on Network and Service Management, IEEE, Oct. 2019, pp. 19. doi: 10.23919/CNSM46954.2019.9012681.
  101. S. Linoy, N. Stakhanova, and S. Ray, “De‐anonymizing Ethereum blockchain smart contracts through code attribution,” Int. J. Netw. Manag., vol. 31, no. 1, Jan. 2021, doi: 10.1002/nem.2130.
  102. P. O. Ejeh et al., “Counterfeit Drugs Detection in the Nigeria PharmaChain via Enhanced Blockchainbased Mobile Authentication Service,” Adv. Multidiscip. Sci. Res. J., vol. 12, no. 2, pp. 2544, 2024, doi: 10.22624/AIMS/MATHS/V12N2P3.
  103. M. Uddin, K. Salah, R. Jayaraman, S. Pesic, and S. Ellahham, “Blockchain for drug traceability: Architectures and open challenges,” Health Informatics J., vol. 27, no. 2, p. 146045822110112, Apr. 2021, doi: 10.1177/14604582211011228.
  104. M. I. Akazue, R. E. Yoro, B. O. Malasowe, O. Nwankwo, and A. A. Ojugo, “Improved services traceability and management of a food value chain using blockchain network : a case of Nigeria,” Indones. J. Electr. Eng. Comput. Sci., vol. 29, no. 3, pp. 16231633, 2023, doi: 10.11591/ijeecs.v29.i3.pp16231633.
  105. A. A. Ojugo, P. O. Ejeh, C. C. Odiakaose, A. O. Eboka, and F. U. Emordi, “Improved distribution and food safety for beef processing and management using a blockchaintracer support framework,” Int. J. Informatics Commun. Technol., vol. 12, no. 3, p. 205, Dec. 2023, doi: 10.11591/ijict.v12i3.pp205213.
  106. P. Malik, A. Pandey, R. Swarnkar, A. Bavarva, and R. Marmat, “BlockchainEnabled Edge Computing for IoT Networks BlockchainEnabled Edge Computing for IoT Networks,” no. March, 2024, doi: 10.55041/IJSREM28718.
  107. B. O. Malasowe, D. V. Ojie, A. A. Ojugo, and M. D. Okpor, “CoInfection Prevalence of Covid19 Underlying Tuberculosis Disease Using a Susceptible Infect Clustering Bayes Network,” DUTSE J. Pure Appl. Sci., vol. 10, no. 2, pp. 8094, 2024, doi: 10.4314/dujopas.v10i2a.8.
  108. M. S. Sunarjo, H.S. Gan, and D. R. I. M. Setiadi, “HighPerformance Convolutional Neural Network Model to Identify COVID19 in Medical Images,” J. Comput. Theor. Appl., vol. 1, no. 1, pp. 1930, 2023, doi: 10.33633/jcta.v1i1.8936.
  109. A. N. Safriandono, D. R. I. M. Setiadi, A. Dahlan, F. Zakiyah, I. S. Wibisono, and A. A. Ojugo, “Analizing Quantum Features Egineering and Balancing Strategy Effect for Liver Disease Classification,” J. Futur. Artif. Intell. Technol., vol. 1, no. 1, pp. 5062, 2024.
  110. A. A. Ojugo and O. Nwankwo, “SpectralCluster Solution For CreditCard Fraud Detection Using A Genetic Algorithm Trained Modular Deep Learning Neural Network,” JINAV J. Inf. Vis., vol. 2, no. 1, pp. 1524, Jan. 2021, doi: 10.35877/454RI.jinav274.
  111. A. A. Ojugo and R. E. Yoro, “Computational Intelligence in Stochastic Solution for Toroidal NQueen,” Prog. Intell. Comput. Appl., vol. 1, no. 2, pp. 4656, 2013, doi: 10.4156/pica.vol2.issue1.4.
  112. A. A. Ojugo, P. O. Ejeh, C. C. Odiakaose, A. O. Eboka, and F. U. Emordi, “Predicting rainfall runoff in Southern Nigeria using a fused hybrid deep learning ensemble,” Int. J. Informatics Commun. Technol., vol. 13, no. 1, pp. 108115, Apr. 2024, doi: 10.11591/ijict.v13i1.pp108115.
  113. F. Angeletti, I. Chatzigiannakis, and A. Vitaletti, “The role of blockchain and IoT in recruiting participants for digital clinical trials,” in 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), IEEE, Sep. 2017, pp. 15. doi: 10.23919/SOFTCOM.2017.8115590.
  114. J. Polge, J. Robert, and Y. Le Traon, “Permissioned blockchain frameworks in the industry: A comparison,” ICT Express, vol. 7, no. 2, pp. 229233, Jun. 2021, doi: 10.1016/j.icte.2020.09.002.
  115. N. R. Madarasz and D. P. Santos, “The concept of human nature in Noam Chomsky,” Verit. (Porto Alegre), vol. 63, no. 3, pp. 10921126, Dec. 2018, doi: 10.15448/19846746.2018.3.32564.
  116. I. A. Omar, R. Jayaraman, K. Salah, M. C. E. Simsekler, I. Yaqoob, and S. Ellahham, “Ensuring protocol compliance and data transparency in clinical trials using Blockchain smart contracts,” BMC Med. Res. Methodol., vol. 20, no. 1, p. 224, Dec. 2020, doi: 10.1186/s12874020011095.
