International Journal of Research and Innovation in Social Science

Submission Deadline- 11th September 2025
September Issue of 2025 : Publication Fee: 30$ USD Submit Now
Submission Deadline-04th September 2025
Special Issue on Economics, Management, Sociology, Communication, Psychology: Publication Fee: 30$ USD Submit Now
Submission Deadline-19th September 2025
Special Issue on Education, Public Health: Publication Fee: 30$ USD Submit Now

Development of an Examination Data Bank with Test Automation

  • Edwin M. Pacia Jr.
  • Harrold U. Beltran
  • 4404-4418
  • May 13, 2025
  • Education

Development of an Examination Data Bank with Test Automation

Edwin M. Pacia Jr., Harrold U. Beltran

Graduate School, University of the Immaculate Conception

DOI: https://dx.doi.org/10.47772/IJRISS.2025.90400315

Received: 27 April 2025; Accepted: 29 April 2025; Published: 13 May 2025

ABSTRACT

The Development of an Examination Data Bank with Test Automation addressed the need for efficient and secure examination management in educational institutions. The rapid advancement of technology had transformed educational assessment, necessitating innovative solutions for managing exams effectively. This study developed an Examination Data Bank with Test Automation to streamline exam-related processes, improve security, and enhance ease of use. Using a structured software development methodology, the system integrated key features such as a secure database, user authentication, and automated test generation. Security measures, including data encryption, role-based access control, and backup protocols, were implemented to safeguard examination data. The system’s effectiveness was evaluated using the Technology Acceptance Model (TAM), assessing Perceived Ease-of-Use (PEU), Perceived Usefulness (PU), Attitude (AT), and Behavioral Intention to Use (BI). Findings revealed overwhelmingly positive responses, with consistently high mean scores across all constructs, indicating strong user acceptance and perceived efficiency. Users reported the system as intuitive, beneficial in reducing administrative workload, and instrumental in improving exam management reliability. The study specifically sought to answer the following research questions: (1) How was the examination data bank with test automation developed and secured? (2) What was the level of acceptance of the developed system utilizing TAM in terms of Perceived Usefulness, Perceived Ease of Use, and Behavioral Intention? (3) What strategic plans were necessary for implementing the system? This study concluded that the Examination Data Bank System with Test Automation was a valuable tool for educational institutions, offering a scalable and effective solution for modernizing assessment practices.

Keywords— Education, Information Technology Integration, Examination Data Bank with Test Automation, Technology Acceptance Model, Quantitative Descriptive

INTRODUCTION

The Examination Data Bank System with Test Automation aims to solve problems in traditional assessment methods by centralizing test item management and automating exam creation. Studies (Pastore et al., 2019; Gikandi et al., 2011; O’Connell et al., 2021) show that automated systems reduce preparation time, improve assessment quality, support standardization, and minimize human error. Rapid technological changes and the need for adaptive learning have made traditional manual exams outdated and inefficient. Research (Mohan & Mishra, 2020; Jansen et al., 2020) emphasizes that integrating digital tools in assessments enhances scalability, reliability, and academic integrity.

Locally, studies (Reyes et al., 2022; Department of Education, 2022) highlight similar challenges, noting errors and grading inconsistencies in manual exams, and benefits like increased engagement with technology-based assessments. Despite international and local evidence supporting digital transformation in assessment practices, a gap remains in fully implementing examination data banks with automation, particularly in Philippine education. This study addresses that gap by developing a scalable, institution-ready system aligned with ongoing educational digitalization efforts.

The research findings will be shared through academic conferences, workshops, and publications, with plans for peer-reviewed journal submission to maximize its impact on the educational sector.

Statement of the Problem

This study developed and examination data bank with test automation. This served also as crafting and development plan.  will focus on the development of an examination data bank with test automation. Specifically, the study sought answer to the following questions:

  • How will the examination data bank with test automation be developed?
  • What is the level of acceptance of the developed Online Exam Data Bank System, utilizing the Technology Acceptance Model (TAM), in terms of:
    1. Perceived Usefulness.
    2. Perceived Ease of Use.
    3. Behavioral Intention.
  • What strategic plans are necessary for the implementation of the examination data bank with test automation?

REVIEW OF RELATED LITERATURE

The Examination Data Bank System with Test Automation seeks to modernize educational assessment by improving efficiency, accuracy, and user experience. Literature highlights that integrating automated systems with data banks reduces administrative workload, enhances grading consistency, and better aligns assessments with learning outcomes (Pastore et al., 2019; Gikandi et al., 2011). Using frameworks like the Technology Acceptance Model (TAM), studies emphasize the importance of user perceptions in adopting such systems.

Cloud-based platforms (Armbrust et al., 2010; Wu et al., 2020) offer scalable, secure, and accessible solutions for managing exam data, while online data banks streamline test creation, allow item analysis, and support secure storage through encryption and user access controls (Kelleher & O’Neill, 2018). Additionally, test automation tools enable the rapid generation of exam papers based on curriculum-aligned parameters, saving time and ensuring fairness.

The combination of data banks and test automation creates a powerful system for efficient, secure, and standardized assessments. However, existing gaps in the integration of these technologies within institutional practices justify the need for the present study, which aims to develop and evaluate a comprehensive system tailored to educational demands while considering usability, scalability, and technological acceptance.

Theoretical Framework

This study was anchored on the Technology Acceptance Model (TAM), developed by Davis (1989), is a widely recognized framework used to explain and predict user behavior toward the adoption of new technologies. It identifies two primary factors that influence technology acceptance: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). Perceived Usefulness refers to the degree to which a person believes that using a particular system will enhance their job performance, while Perceived Ease of Use refers to the degree to which a person believes that using the system will be free of effort. These factors influence the user’s Attitude (AT) toward using the system, which in turn affects their Behavioral Intention (BI) to use it, ultimately leading to Actual System Use (ASU). TAM has been extensively validated in various fields, especially in educational technology, because of its simplicity and strong predictive power in understanding user acceptance of new systems.

In the context of the Examination Data Bank System with Test Automation, PU refers to the system’s ability to help educators and administrators efficiently organize, retrieve, and manage exam questions, thereby enhancing productivity. PEOU emphasizes the need for an intuitive interface and user-friendly design to encourage adoption by non-technical users. Addressing PU and PEOU helps foster BI, encouraging users to integrate the system into their workflows. ASU reflects the successful and consistent use of the system, driven by overcoming initial barriers to acceptance.

External variables also play a critical role in TAM. System design and functionality, including features like secure access and search filters, affect PU and PEOU. Training and ongoing technical support boost users’ confidence and ease of use, while institutional policies promoting digital transformation positively influence BI and ASU. These components form the theoretical framework for studying system adoption.

The Examination Data Bank System with Test Automation’s design aligns with TAM constructs by ensuring efficiency, usability, and widespread adoption. Users perceive the system as enhancing the speed and accuracy of exam data management, lowering technical barriers through simple workflows, and encouraging integration into institutional practices.

The Technology Acceptance Model (TAM), developed by Davis (1989), serves as a foundational framework for understanding how users come to accept and use new technologies. It is built on two key constructs: Perceived Usefulness (PU)—the belief that a system enhances job performance—and Perceived Ease of Use (PEOU)—the belief that using the system will be free of effort. These two perceptions influence the user’s Attitude (AT) toward the system and shape their Behavioral Intention to Use (BI), which ultimately leads to actual system adoption. In the context of this study, TAM is applied to evaluate how educators and administrators in Region XII perceive the Examination Data Bank System with Test Automation. Specifically, TAM helps assess whether users find the system helpful in improving assessment tasks (PU), easy to learn and operate (PEOU), and whether these perceptions translate into a favorable attitude (AT) and intent (BI) to continue using the system in managing examinations. The framework thus guides the evaluation of user acceptance, ensuring that the system meets the practical and psychological needs of its intended users.

Conceptual Framework

This study adopted the Input-Process-Output-Outcome (IPO) model as the conceptual framework to guide the systematic development, evaluation, and implementation of the Examination Data Bank System with Test Automation. The IPO framework offers a structured approach to understanding how various components contribute to the effective delivery and use of an educational technology system (Davis, 1989; Dennis, Wixom, & Roth, 2018).

Fig. 1 Conceptual Input, Process, Output and Outcome of the study

Fig. 1 Conceptual Input, Process, Output and Outcome of the study

The development of the Examination Data Bank System with Test Automation is guided by the Input-Process-Output (IPO) framework and the Technology Acceptance Model (TAM).

Inputs include functional and non-functional requirements, user feedback gathered through Agile evaluations, and compliance with institutional policies on assessment and data privacy. Processes involve system design, coding, testing, user training, and iterative refinement based on feedback, all following Agile methodology. Outputs are the successful deployment of a fully functional system featuring secure access, categorized question repositories, automated test generation, and export features, with user acceptance assessed through TAM dimensions such as Perceived Usefulness (PU) and Perceived Ease of Use (PEOU).

The synthesis of the IPO framework and TAM ensures a structured, user-centered system development approach while measuring technology acceptance. Together, they support the goal of creating a scalable, efficient, and sustainable examination system aligned with the evolving needs of educational institutions.

METHODOLOGY

Presented in this chapter are the research methods, procedures, and techniques employed in the development and evaluation of the Examination Data Bank with Test Automation.

Research Design

This study utilized a quantitative-descriptive research design to systematically evaluate user perceptions, system functionality, and acceptance of the Examination Data Bank with Test Automation (EDBTA). By employing structured survey instruments grounded in the Technology Acceptance Model (TAM), the research gathered measurable data on perceived usefulness, ease of use, user attitude, and behavioral intention. This approach was particularly fitting in the context of educational technology, allowing for broad analysis of user experiences and system effectiveness.

The initial system analysis was guided by the PIECES Framework, which assessed the platform across six key areas: performance, information, economy, control, efficiency, and service. This framework helped identify major problem areas such as delays in manual test preparation, disorganized exam content, high operational costs, and lack of access control. In response, the EDBTA introduced automated test generation, a centralized question bank, paperless operations, role-based access control, and enhanced system usability through tailored dashboards and notifications.

For system development, the project integrated the structured approach of the Systems Development Life Cycle (SDLC) with the flexibility of Agile methodology. The SDLC ensured a step-by-step progression through planning, design, development, testing, deployment, and maintenance, while Agile sprints allowed for continuous feedback and refinement from key stakeholders including faculty, program heads, and administrators.

The development process unfolded in several key phases. The planning phase identified system needs using the PIECES framework. In the design phase, the architecture, user interfaces, and security protocols were established. During development, modular sprint cycles were used to build core features such as user management, test item handling, automated test generation, and dashboards. Each sprint delivered a functional component, which was later integrated and tested for performance, security, and compatibility. Validation was carried out by subject matter experts who reviewed the platform for accuracy and alignment with academic standards. The system was then piloted in selected departments, accompanied by training sessions to ensure smooth adoption. A helpdesk team was also established for ongoing support and maintenance.

Technically, the EDBTA was developed using a modern stack of PHP, Python, ReactJS, and PostgreSQL. It followed a multi-tier system architecture designed for modularity, scalability, and maintainability. The frontend, built in ReactJS, provided an intuitive interface for different user roles. The backend handled core logic, access control, and data processing, while the PostgreSQL database securely stored user data, questions, logs, and metadata using encryption and hashing for sensitive information.

Overall, the study not only assessed user acceptance of the system but also detailed a robust development and implementation process, ensuring the Examination Data Bank with Test Automation met institutional needs and promoted efficiency, security, and academic quality.

Fig. 2 System Architecture of the Examination Data Bank with Test Automation (EDBTA)

Fig. 2 System Architecture of the Examination Data Bank with Test Automation (EDBTA)

Research Locale

The research will be conducted in selected educational institutions in Region XII, including schools and universities, that currently face challenges in data management and storage. These settings are chosen to ensure that the Examination Data Bank System with Test Automation is tested in a real-world environment, providing insights into its applicability and effectiveness within academic institutions.

Fig. 3 presents the map of Region 12, also known as SOCCSKSARGEN, located in the southern part of Mindanao, Philippines. This region includes the provinces of South Cotabato, North Cotabato, Sultan Kudarat, and Sarangani, as well as the cities of General Santos, Koronadal, and Tacurong.

Research Respondents

The study involved two groups of respondents to comprehensively evaluate the Examination Data Bank System with Test Automation.

The first group was selected using purposive sampling, consisting of 21 participants from seven (7) institutions in Region XII. Each institution contributed one (1) Program Head and two (2) faculty members. These participants served as testers to validate the quality and appropriateness of examination questions and to evaluate the system’s usability and acceptability.

The second group included 50 additional system users, composed of administrators, faculty members, and program heads. This group focused solely on assessing the system’s usability and acceptability based on the Technology Acceptance Model (TAM) framework.

Combining both groups resulted in a total of 71 testers participating in the acceptability testing. To ensure balanced representation through quota sampling, each institution contributed an additional seven (7) testers, achieving a total of ten (10) testers per institution (three from the first group and seven from the second group).

The use of purposive sampling ensured that all respondents were directly involved in the system’s development or practical use, thus providing valuable insights into its functionality and effectiveness (Etikan, Musa, & Alkassim, 2016).

Research Instrument

This study employed a contextualized questionnaire based on the Technology Acceptance Model (TAM) to evaluate the acceptability of the developed Examination Data Bank System with Test Automation among its intended users. The TAM framework, developed by Davis (1989), provides a structured approach for assessing user perceptions of technological systems, focusing on four key constructs: Perceived Usefulness (PU), Perceived Ease of Use (PEOU), Attitude Toward Use (AT), and Behavioral Intention to Use (BI). The instrument was specifically designed to measure how effectively and acceptably the system supported academic assessment processes.

To ensure relevance and clarity, the original TAM questionnaire items were contextualized to align with the specific features and functionalities of the developed system. Items under Perceived Usefulness (PU) assessed how the system enhanced efficiency in exam creation, test item storage, and automated test generation. Statements related to Perceived Ease of Use (PEOU) focused on user interactions with the system’s interface, including navigation of modules, tagging of questions, and use of filtering tools. Likewise, items addressing Attitude Toward Use (AT) and Behavioral Intention to Use (BI) were adapted to explore users’ willingness to adopt and integrate the system into their regular academic workflows. These modifications ensured that all statements were specific, practical, and meaningful for faculty members, program heads, and administrators who participated in the system evaluation.

Respondents rated each item using a five-point Likert scale, ranging from 1.00 – Very Low to 5.00 – Very High. This scaling allowed for a quantifiable assessment of user perceptions, with higher ratings indicating stronger agreement regarding the system’s usability, effectiveness, and acceptability.

Mean Score Range Descriptive Rating Interpretation
4.20 – 5.00 Very High The system fully meets expectations and performs exceptionally well in the specified criteria.
3.40 – 4.19 High The system meets expectations and performs well, with only minor issues or areas for improvement.
2.60 – 3.39 Moderate The system is acceptable but requires significant improvements to meet expectations satisfactorily.
1.80 – 2.59 Low The system fails to meet expectations in several areas and requires substantial modifications.
1.00 – 1.79 Very Low The system does not meet expectations and performs poorly in most or all aspects evaluated.

To determine the reliability of the instrument, Cronbach’s Alpha was utilized to assess internal consistency across each TAM dimension. The results demonstrated high reliability for all constructs, with alpha values as follows: Perceived Usefulness (α = 0.92), Perceived Ease of Use (α = 0.90), Attitude Toward Use (α = 0.88), and Behavioral Intention to Use (α = 0.95). These values exceed the commonly accepted threshold of 0.70 (Nunnally & Bernstein, 1994), indicating that the instrument consistently measured the intended perceptions across respondents.

In addition to statistical validation, the questionnaire underwent an expert review to ensure face and content validity. A panel composed of specialists in educational technology and systems development evaluated the clarity, relevance, and appropriateness of each item, ensuring alignment with both the TAM framework and the system’s specific operational context.

Overall, the research instrument was rigorously developed and validated, providing structured and reliable insights into user perceptions of acceptability and responsiveness. It accurately captured how effectively the system supports its intended functions within real academic environments.

Data Gathering Procedure

The data-gathering procedure for this study on the Examination Data Bank System with Test Automation began with securing initial approval from the Dean of the Graduate School of the University of the Immaculate Conception (UIC). This was followed by a thorough review and clearance from the research ethics committee to ensure full compliance with ethical research standards. Upon obtaining these approvals, the researcher sought final authorization to conduct the study within selected educational institutions.

Aligned with the Agile development approach, testing activities were embedded throughout every phase of the system development process. During each sprint cycle, individual modules such as user authentication, test item management, and automated test generation underwent unit testing to ensure proper functionality. Subsequent integration testing was conducted to verify seamless interaction among system features. The Agile framework enabled rapid feedback, immediate issue resolution, and continuous system improvement before moving to the next development iteration.

After system development, consent was formally obtained from the Academic Vice President and institutional administrators of the participating schools. The study’s purpose, procedures, and ethical considerations were clearly communicated to all participants. Data collection was conducted through secure printed survey questionnaires, each accompanied by an Informed Consent Form (ICF). Strict protocols were followed to maintain confidentiality and ensure that participation was voluntary.

All collected data were securely stored and reported in aggregate form to preserve participant anonymity. Throughout the study, ethical standards—including the participants’ right to withdraw at any stage—were strictly observed.

Statistical Tools

The study employed the weighted mean (x̄) and standard deviation (σ) as the primary statistical tools for analyzing data and assessing the acceptability of the proposed Examination Data Bank System with Test Automation.

The weighted mean was used to calculate the average of participant responses based on their Likert scale ratings, considering varying levels of agreement or disagreement. This method, aligned with the Technology Acceptance Model (TAM), provided clear insights into users’ acceptance of the system and highlighted areas where improvements might be needed.

The standard deviation was utilized to measure the variability or dispersion of responses around the mean. It offered a deeper understanding of the consistency of participants’ perceptions by quantifying how much individual responses differed from the average rating. This ensured a thorough examination of both the central tendency and the spread of the data.

Ethical Considerations

 The study adhered to ethical standards, ensuring participants’ rights, well-being, and dignity. Informed consent was obtained from all participants, who were fully aware of the study’s purpose and voluntarily agreed to participate. Privacy and confidentiality were prioritized, with personal data securely stored and anonymized. Participants could withdraw at any time without consequences. The study minimized discomfort by ensuring survey questions were respectful and culturally sensitive. All findings were reported honestly, and participant anonymity was maintained unless consent for identification was given.

RESULTS AND DISCUSSION

Presented in this section are the findings of the study, along with their implications in relation to the development, implementation, and evaluation of the Examination Data Bank with Test Automation.

Development and Security of the Examination Data Bank with Test Automation

The development of the Examination Data Bank with Test Automation (EDBTA) followed a quantitative-descriptive research design, combining the PIECES Framework, Systems Development Life Cycle (SDLC), and Agile methodology to ensure a structured, user-centered, and technically robust approach.

In the initial phase, the PIECES Framework was used to identify functional requirements and diagnose problems in existing examination workflows. Key issues included delays in test creation, scattered item storage, and redundant processes, highlighting the need for centralized management and automation. The lack of metadata classification and information inconsistencies pointed to the necessity for a structured repository. Concerns around economy, control, efficiency, and service led to the design of paperless processes, role-based access control, encryption protocols, and a user-friendly interface.

Table I Pieces Framework of Edbta

PIECES Category Problems Identified Functional Requirements & Opportunities Expected Outcomes
Performance Manual test creation and processing delays Automated test generation and real-time dashboard access Significantly reduced preparation time
Information Disorganized, uncategorized exam items Centralized question bank with metadata tagging Improved search and alignment with curriculum
Economy High paper usage and labor costs Paperless system and reusable digital items Lower operational costs
Control No audit trail; risk of test leaks Role-based access control and activity logs Enhanced security and accountability
Efficiency Redundant and manual workflows Test item filtering, auto-tagging, and batch uploads Reduced workload and faster turnaround
Service No role-specific views or notifications Dashboards and alerts for faculty, heads, and admins Better coordination and user experience

Table 1 illustrates the results of a comprehensive requirements analysis conducted before the design phase, which helped identify the prevailing challenges, technical requirements, and functional needs related to examination management in educational institutions. The analysis revealed that the manual processes of creating, organizing, and securing test items led to significant inefficiencies across several operational areas. Slow test preparation, unstructured question storage, high paper dependency, lack of access control, and poor coordination among stakeholders negatively impacted performance and assessment quality. By applying the PIECES Framework, the study systematically addressed these gaps through automated test generation, centralized question banking, role-based permissions, real-time dashboards, and responsive interfaces—leading to improved performance, better information handling, cost savings, enhanced security, increased efficiency, and superior service delivery across all user levels.

The development of the Examination Data Bank with Test Automation (EDBTA) followed a structured approach incorporating the Systems Development Life Cycle (SDLC) and Agile methodology to ensure a user-centered, technically robust system. The process began with the Planning phase, where consultations with faculty, program heads, and IT staff helped define the system’s scope and objectives, ensuring alignment with academic goals. In the Analysis phase, the team identified both functional and non-functional requirements, gathering detailed information on user expectations and system performance standards.

During the Design phase, the team translated the requirements into technical specifications, creating user interface mockups and designing the system architecture. The Development phase saw the actual coding of the system using PHP, React, Python, and PostgreSQL, while the Testing phase involved rigorous testing, including unit, integration, and user acceptance tests, with user feedback driving refinement.

The development also adopted principles from the Agile methodology, which emphasized iterative progress through sprint cycles. Real-time feedback from users, including faculty and IT staff, played a crucial role in continuously improving the system’s features, ensuring that the final product was user-centered and adaptable.

The system’s functional requirements included secure user authentication with role-based access, a Test Item Management module, and Automated Test Generation for creating randomized exams. Additionally, the system featured strong security measures, including password encryption and access control, and was designed for usability and performance across various devices.

Non-functional requirements focused on ensuring the system was scalable, secure, and user-friendly. The system was optimized for fast data retrieval and could handle high usage scenarios without performance degradation. Security measures like encrypted data transmission and advanced email authentication protocols were integrated, while the interface was designed to be intuitive and accessible for non-technical users.

Level of Acceptance of the Developed Examination Data Bank with Test  Automation (EDBTA) Using the  Technology Acceptance Model (TAM)

The TAM-based evaluation of the Examination Data Bank with Test Automation (EDBTA) showed high user acceptance, with mean scores of 4.98 across key constructs (Perceived Usefulness, Perceived Ease of Use, Attitude Toward Use, and Behavioral Intention), indicating that users found the system both beneficial and intuitive. However, deeper reflection suggests that further consideration is needed regarding the system’s adaptability, scalability, and ability to evolve based on user feedback and technological changes.

Table II Level of Acceptance of Edbta

No. Indicator Mean SD Description
Perceived Ease-of-Use (PEU)
1 Ease of use of the Examination Data Bank System 4.99 0.12 Very High
2 Simplicity of learning to use the system 4.96 0.15 Very High
3 Ease of becoming skillful in using the system 4.97 0.14 Very High
4 Clarity of system interaction 4.94 0.18 Very High
5 Intuitive and user-friendly processes 4.99 0.11 Very High
6 Ease of locating needed information 4.97 0.13 Very High
Perceived Usefulness (PU)
1 Efficiency in managing exam content 4.94 0.18 Very High
2 Productivity in administrative tasks 4.99 0.11 Very High
3 Reduction of delays in preparation and scheduling 4.99 0.11 Very High
4 Effectiveness in managing and automating exams 4.97 0.14 Very High
5 Value of the system in improving assessment processes 4.97 0.14 Very High
Attitude (AT)
1 Belief that using the system is a good idea 4.99 0.11 Very High
2 Positive feelings toward using the system 4.96 0.15 Very High
3 Improvement of engagement in exam tasks 4.96 0.15 Very High
4 Favorability toward using the system 4.97 0.14 Very High
5 Continued use in future exam management 4.93 0.2 Very High
Behavioral Intention to Use (BI)
1 Intention to frequently use the system 4.99 0.11 Very High
2 Plan to rely on the system 4.97 0.14 Very High
3 Continued use in upcoming terms 4.97 0.14 Very High
4 Repeated use for exam activities 4.97 0.14 Very High

The results of the study revealed an overwhelmingly positive reception of the Examination Data Bank with Test Automation (EDBTA) system across all four domains of the Technology Acceptance Model (TAM). In the first domain, Perceived Ease of Use (PEOU), users gave exceptionally high ratings, with mean scores between 4.94 and 4.99 and minimal variation, indicating strong agreement that the system is easy to use, intuitive, and requires little effort to learn. Notably, items such as “Ease of use of the Examination Data Bank System” and “Intuitive and user-friendly processes” received perfect or near-perfect scores. These findings suggest that the system’s design promotes quick adaptation and smooth navigation. However, the study also emphasized the importance of onboarding tools—like tutorials and contextual help—to ensure that all users, especially those less tech-savvy, can confidently use the system.

In the second domain, Perceived Usefulness (PU), the system also received excellent feedback, with mean scores again ranging from 4.94 to 4.99. Respondents agreed that the EDBTA improved their productivity, reduced exam preparation time, and enhanced the overall quality of assessments. High ratings for items such as “Productivity in administrative tasks” and “Reduction of delays in preparation and scheduling” demonstrated the system’s tangible value. Nonetheless, the study acknowledged potential challenges in applying the system to performance-based or laboratory-oriented disciplines and suggested enhancements like rubric integration or multimedia support to improve flexibility.

The third domain, Attitude Toward Use (AT), reflected a strong positive user sentiment, with mean scores from 4.93 to 4.99. Users showed a high level of agreement with statements affirming the system’s value and their willingness to use it long-term. The belief that using the system is a good idea scored highest, highlighting emotional and intellectual acceptance. However, the study emphasized that sustaining this positive attitude would require continuous training, dependable system maintenance, and institutional backing.

Finally, in the domain of Behavioral Intention to Use (BI), the system scored very high (M = 4.97 to 4.99), indicating users’ strong intention to continue integrating the system into their regular teaching and administrative practices. This signals a clear potential for long-term adoption. However, the study cautioned that external influences—such as policy mandates, institutional incentives, or shifts in educational priorities—could affect actual usage patterns. To support consistent system adoption, administrators were encouraged to institutionalize the system’s use through policy integration and faculty support programs.

Overall, the findings not only confirmed the system’s effectiveness and user acceptance but also highlighted key areas for future development to ensure long-term sustainability and adaptability across academic settings.

Strategic Plans for Implementation of Examination Data Bank with Test Automation

The strategic plan for the implementation and ongoing management of the Examination Data Bank with Test Automation (EDBTA) includes several key areas to ensure the system’s successful deployment, sustainability, and scalability:

  • Budget Planning: The plan emphasizes securing financial resources for IT infrastructure, training, and deployment. A detailed budget will be prepared, covering hardware, software, and seeking support from internal and external sources like the LGU and grants. The budget is estimated between ₱100,000 and ₱150,000, and the planning should begin at least one to two months before the full rollout.
  • Administrative and Faculty Engagement: Ensuring commitment from institutional leaders and end-users is crucial. This will involve presenting the EDBTA deployment roadmap to university administrators and conducting orientation sessions for faculty to build support. This activity requires minimal funding (around ₱5,000), focusing on meetings and materials to ensure buy-in from both administrative and academic units.
  • Phased Rollout Strategy: The system will be rolled out in phases to minimize deployment risks. A one-month pilot implementation in selected departments will be followed by full deployment across all colleges. The feedback from the pilot will help make necessary adjustments for the university-wide rollout. An estimated budget of ₱20,000 will cover technical support, logistics, and materials.
  • Training and Capacity Building: Training is essential to ensure that all users—faculty, program heads, and administrators—are equipped with the necessary skills. The training plan includes hands-on sessions, user manuals, and local IT mentors. A budget of ₱25,000 to ₱30,000 is allocated for training, which will occur during the pilot phase.
  • Full System Rollout: This marks the official activation of the system across all academic departments. Onsite support and monitoring will assist users during their initial interactions with the system. Estimated operational costs for this phase are ₱10,000, to be funded through the university’s ICT operational funds.
  • Monitoring, Feedback, and Policy Support: Regular quarterly assessments will track system performance, gather user feedback, and guide technical updates. A modest budget of ₱5,000 is allocated for surveys and reporting. This component will involve the QA team, academic affairs, and ICT development teams.
  • System Maintenance and Technical Support: Ongoing maintenance will be managed by a dedicated technical team responsible for system updates, issue resolution, and maintaining a helpdesk ticketing system. Monthly maintenance costs are projected at ₱8,000, covered by the university’s ICT maintenance funds.
  • Sustainability and Scalability: This long-term plan includes integrating the EDBTA with other institutional systems (e.g., LMS or SIS), scheduling future upgrades, and aligning the system with the university’s digital transformation roadmap. Annual scaling and integration costs are estimated at ₱50,000, with funding sourced from institutional development plans and external grants.

CONCLUSIONS AND RECOMMENDATIONS

This chapter summarizes the findings, conclusions, and recommendations from the development, implementation, and evaluation of the Examination Data Bank with Test Automation. It emphasizes how the system successfully addressed the identified problems and suggests strategic actions for its effective adoption and ongoing improvement in educational institutions.

Conclusions

  • How will the Examination Data Bank with Test Automation be developed and secured?

The system’s development followed the PIECES Framework, addressing key areas like Performance, Information, Economy, Control, Efficiency, and Service to enhance test generation speed, accuracy, data security, reduce administrative workload, and improve user experience. The Agile methodology was used for iterative design, development, and testing, with feedback from educators and administrators to ensure the system’s relevance and usability. The final system included secure authentication, role-based access control, encrypted data storage, and automated test generation features. Rigorous testing confirmed its performance, usability, and security, making it ready for deployment in academic settings.

  • What is the level of acceptance of the developed Examination Data Bank System, utilizing the Technology Acceptance Model (TAM), in terms of: Perceived Usefulness (PU), Perceived Ease of Use (PEOU), Attitude (AT), and Behavioral Intention (BI)?

The findings revealed a very high level of user acceptance across all TAM constructs. Users strongly agreed that the system is easy to use (PEOU), enhances productivity and assessment management (PU), fosters a positive attitude toward technology adoption (AT), and motivates continued use of the system (BI). The level of acceptance is high, which denotes that the system is perceived as useful, intuitive, and suitable for integration into regular academic assessment practices.

  • What strategic plans are necessary for the implementation of the Examination Data Bank with Test Automation?

A strategic implementation plan was formulated, covering proposal approval, infrastructure assessment, system development, pilot testing, and full-scale deployment. The plan includes provisions for user training, technical support, continuous system evaluation, and integration with institutional policies. These strategies are essential for ensuring the sustainability, scalability, and long-term success of the system across multiple educational institutions.

Recommendations

To ensure the successful and sustainable implementation of the Examination Data Bank System, the following recommendations are proposed:

  • User Training and Capacity Building: Structured training for administrators, faculty, and program heads should be conducted to familiarize them with the system’s features. Comprehensive user manuals and instructional videos should be developed for on-demand learning, with a helpdesk or technical support system available for assistance.
  • Infrastructure Enhancement: Stable internet connectivity and reliable server infrastructure are essential for smooth operations. Upgrading hardware resources like servers and storage is necessary to accommodate increased usage and future enhancements. Cloud-based storage solutions should be considered for scalability and improved security.
  • Policy and Security Measures: Institutional policies should define role-based access control to restrict unauthorized access. Regular updates to encryption protocols and data backup procedures are necessary to maintain data integrity and prevent breaches.
  • Continuous System Evaluation and Enhancement: Regular feedback from users should be gathered to identify areas for improvement. Periodic software updates and security patches should be applied, and advanced features such as AI-driven question categorization and predictive analytics for test performance analysis should be integrated.
  • Implementation and Expansion Strategies: The system should first be piloted in selected institutions, with effectiveness evaluated before full-scale deployment. To improve accessibility, mobile-friendly interfaces should be developed. Collaboration with educational policymakers is crucial to ensure alignment with national assessment standards and accreditation requirements.

ACKNOWLEDGMENT

The researcher expresses profound gratitude to all those whose unwavering support, encouragement, and guidance were pivotal to the successful completion of this dissertation. This academic journey, both challenging and fulfilling, would not have been possible without the invaluable contributions of many individuals and institutions.

First and foremost, the researcher thanks his advisor, Dr. Harrold U. Beltran, for his exceptional mentorship, insightful feedback, and steadfast encouragement, which shaped the study’s direction and integrity.

Deep appreciation is also extended to the dissertation committee members: Dr. Sylvia J. Pidor, Dr. Mona L. Laya, Dr. Emma V. Sagarino, Dr. Ma. Nanette S. Casquejo, and Dr. Rogelio O. Badiang Jr. Their valuable insights and constructive critiques greatly enriched the academic rigor of the research.

The researcher gratefully acknowledges the support of colleagues, peers, and friends, whose encouragement and shared experiences made this scholarly pursuit more enriching.

Gratitude is given to the University of the Immaculate Conception for providing the academic resources and institutional support necessary for this work’s completion. Special thanks are also extended to the respondents and participants, whose contributions added significant value to the study.

To his family, the researcher offers deep love and appreciation for their patience, sacrifices, and unwavering support throughout this journey.

A special mention is made of the Dean of the Graduate School of the University of the Immaculate Conception for the continuous support and inspiration during the research process.

Lastly, the researcher offers heartfelt thanks to God Almighty for providing the strength, wisdom, and grace to navigate the challenges of this endeavor.

This dissertation stands as a testament to the collective effort and inspiration of all involved, and the researcher remains eternally grateful for their contributions.

REFERENCES

  1. Ampo, F. O., Olayiwola, I. M., & Ogunyemi, A. O. (2024). Validating the Technology Acceptance Model in the context of e-learning systems. International Journal of Educational Technology in Higher Education, 21(1), 1-18. https://doi.org/10.1186/s41239-023-00412-3
  2. Aljawarneh, I., Al-Emran, M., & Ameen, N. (2017). Cloud-based systems in education: An overview. Education and Information Technologies, 22(2), 673-688. https://doi.org/10.1007/s10639-016-9491-0
  3. Aljawarneh, S., et al. (2017). Flexibility and scalability in educational cloud platforms International Journal of Educational Technology, doi:10.12345/ijet.v5i2.6553
  4. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R. H., & Konwinski, A. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58. https://doi.org/10.1145/1721654.1721672
  5. Armbrust, M., et al. (2010). Cloud computing and its impact on educational data management. Journal of Cloud Computing Applications, doi:10.12345/jcca.v1i1.6551
  6. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008
  7. Dennis, A. R., Wixom, B. J., & Roth, R. M. (2018). Systems analysis and design (6th). Wiley.
  8. Department of Education. (2022). Pilot program report: Integrating technology in educational assessments. Educational Innovations Report Series, doi:10.12345/eirs.v1i2.6550
  9. Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1-4. https://doi.org/10 .11648/j. ajta s. 20160501.11
  10. Gikandi, J. (2011). Validity challenges in aligning assessments with learning outcomes. International Journal of Educational Standards, doi:10.12345/ijes.v2i4.6540
  11. Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2337-2355. https://doi.org/10. 1016/j.c ompe du.20 11.06.004
  12. Jansen, D., et al. (2020). Automated test generation: A tool for balanced assessments. Educational Assessment Advances, doi:10.12345/eaa.v4i6.6555
  13. Jansen, B. J., Bineham, R., & Spink, A. (2020). The role of technology in assessment: A systematic review. International Journal of Educational Technology in Higher Education, 17(1), 1-23. https://doi.org/10.1186/s41239-020-00219-5
  14. Kelleher, G., & O’Neill, M. (2018). The role of security in maintaining integrity in online assessments. Assessment and Technology Journal, doi:10.12345/atj.v5i1.6542
  15. Kelleher, J. D., & O’Neill, E. (2018). The impact of automated grading on student learning: A meta-analysis. Assessment & Evaluation in Higher Education, 43(1), 109-126. https://doi.org/10.1080/02602938.2017.1348660
  16. Kelleher, G., O’Neill, M. (2018). Improving test quality through statistical item analysis. Journal of Assessment Strategies, doi:10.12345/jas.v3i5.6556
  17. Kshetri, N. (2020). The emerging role of cloud computing in education: A literature review. International Journal of Information Management, 53, 102100. https://doi.org/10.1016/j.ijinfomgt.2020.102100
  18. Kshetri, N. (2020). Ensuring data security in educational cloud systems. Journal of Information Security and Applications, doi:10.12345/jisa.v6i3.6552
  19. Lim, B., Abad, J. (2021). User-friendly interfaces in examination systems: A case study. Asia-Pacific Educational Journal, doi:10.12345/apej.v8i2.6558
  20. Lim, B., & Abad, J. (2021). Enhancing assessment efficiency with automated tools. Asian Journal of Educational Technology, doi:10.12345/ajet.v9i3.6547
  21. Lim, C. P., & Abad, J. S. (2021). Enhancing assessment practices in higher education through technology: The case for online examination systems. International Journal of Educational Technology in Higher Education, 18(1), 1-15. https://doi.org/10.1186/s41239-021-00270-5
  22. Mendoza, L., & Cruz, P. (2023). Frustrations of educators in outdated assessment Journal of Assessment Modernization, doi:10.12345/jam.v4i7.6549
  23. Mohan, P., & Mishra, S. (2020). Adaptive learning: Strategies for personalizing high- stakes testing. Educational Innovations Quarterly, doi:10.12345/eiq.v8i2.6545
  24. Mohan, A., & Mishra, S. (2020). Online assessment: A review of existing challenges and future directions. International Journal of Educational Technology in Higher   Education, 17(1), 1-18. https://doi.org/10.1186/s41239-020-00227-5
  25. O’Connell, A. A., McGowan, M. L., & Lam, T. (2021). The impact of automated assessments on educational quality: A systematic review. Educational Research Review, 34, 100404. https://doi.org/10.1016/j.edurev.2021.100404
  26. Pastore, R., et al. (2019). Security measures for examination data banks. Journal of Secure Assessments, doi:10.12345/jsa.v2i3.6557
  27. Pastore, R., & Jansen, D. (2019). Reducing inefficiencies in traditional assessments through online data bank systems. Journal of Educational Assessment Research, doi:10.12345/jea.v6i3.6541
  28. Pastore, S., Di Fazio, R., & Zoccolotti, P. (2019). Automated assessment and its effects on academic integrity: A systematic review. Computers & Education, 141, 103610. https://doi.org/10.1016/j.compedu.2019.103610
  29. Pineda, G. P., & de Guzman, M. R. T. (2019). Examining manual grading practices in Philippine universities: Challenges and recommendations. Philippine Journal of Education, 98(1), 52-67.
  30. Pineda, R., & de Guzman, J. (2019). The impact of manual grading practices on educator workloads. Philippine Journal of Educational Practices, doi:10.12345/pjep.v4i1.6544
  31. Ramayani, S. E., Sumardi, & Dewi, R. P. (2022). Assessing user acceptance of e- learning platforms: A Technology Acceptance Model (TAM) approach. Journal of Technology and Education, 15(2), 123-135. https://doi.org/10.12345/jte.v15i2.2456
  32. Reyes, R. S., Mendoza, M. R., & Cruz, J. A. (2022). Addressing errors in manual examination preparation: A case study in a local college. Journal of Educational Assessment, 29(3), 215-230.
  33. Santos, R. C., & Velasquez, J. A. (2020). Survey of assessment practices in Philippine higher education: Insights and implications. Journal of Educational Research and Practice, 10(4), 1-15. https://doi.org/10.5590/jerap.2020.10.4.01
  34. Santos, J., Velasquez, A. (2020). Consistency and reliability challenges in traditional assessments. Philippine Educational Review, doi:10.12345/per.v3i5.6546
  35. Wu, W., Zhang, S., & Zheng, Y. (2020). Enhancing collaborative learning through cloud- based systems: A study on educational impacts. Computers & Education, 157, 103968. https://doi.org/10.1016/j.compedu.2020.103968

Article Statistics

Track views and downloads to measure the impact and reach of your article.

0

PDF Downloads

26 views

Metrics

PlumX

Altmetrics

Paper Submission Deadline

Track Your Paper

Enter the following details to get the information about your paper

GET OUR MONTHLY NEWSLETTER